<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jul 24, 2013 at 12:24 PM, Russell Bryant <span dir="ltr"><<a href="mailto:rbryant@redhat.com" target="_blank">rbryant@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 07/23/2013 06:00 PM, Clint Byrum wrote:<br>
>> This is really interesting work, thanks for sharing it with us. The
>> discussion that has followed has brought up some thoughts I've had for
>> a while about this choke point in what is supposed to be an extremely
>> scalable cloud platform (OpenStack).
>>
>> I feel like the discussions have all been centered around making "the"
>> scheduler(s) intelligent. There seems to be a commonly held belief that
>> scheduling is a single step, and should be done with as much knowledge
>> of the system as possible by a well informed entity.
>>
>> Can you name for me one large scale system that has a single entity,
>> human or computer, that knows everything about the system and can make
>> good decisions quickly?
>>
>> This problem is screaming to be broken up, de-coupled, and distributed.
>>
>> I keep asking myself these questions:
>>
>> Why are all of the compute nodes informing all of the schedulers?
<div class="HOEnZb"><div class="h5">
><br>
> Why are all of the schedulers expecting to know about all of the compute nodes?<br></div></div></blockquote><div><br></div><div>So the scheduler can try to find the globally optimum solution, see below.</div><div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">
><br>
> Can we break this problem up into simpler problems and distribute the load to<br>
> the entire system?<br>
><br>
> This has been bouncing around in my head for a while now, but as a<br>
> shallow observer of nova dev, I feel like there are some well known<br>
> scaling techniques which have not been brought up. Here is my idea,<br>
> forgive me if I have glossed over something or missed a huge hole:<br>
><br>
> * Schedulers break up compute nodes by hash table, only caring about<br>
> those in their hash table.<br>
> * Schedulers, upon claiming a compute node by hash table, poll compute<br>
> node directly for its information.<br></div></div></blockquote><div><br></div><div>For people who want to schedule on information that is constantly changing (such as CPU load, memory usage etc). How often would you poll?</div>
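
To make sure I'm reading the partitioning idea right, here is a rough Python sketch of what I understand is being proposed: each scheduler hashes compute node hostnames and only claims and polls the nodes that land in its slot. All of the names here (owns_node, get_node_stats, the fixed scheduler count) are made up for illustration; this is not existing nova code.

    # Rough sketch only -- hypothetical names, not real nova code.
    import hashlib

    NUM_SCHEDULERS = 4        # assume a fixed pool of schedulers
    MY_SCHEDULER_INDEX = 2    # this scheduler's slot in that pool

    def owns_node(hostname):
        """Return True if this compute node falls in our hash partition."""
        h = int(hashlib.md5(hostname.encode()).hexdigest(), 16)
        return h % NUM_SCHEDULERS == MY_SCHEDULER_INDEX

    def poll_owned_nodes(all_hostnames, get_node_stats):
        """Poll only the nodes we claimed, directly, instead of having
        every compute node broadcast its state to every scheduler."""
        return {host: get_node_stats(host)
                for host in all_hostnames
                if owns_node(host)}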

>> * Requests to boot go into fanout.
>> * Schedulers get request and try to satisfy using only their own compute
>> nodes.
>> * Failure to boot results in re-insertion in the fanout.

With this model we lose the ability to find the globally optimum host to schedule on, and can only find a locally optimal solution, which sounds like a reasonable scalability trade-off. Going forward I can imagine nova having several different schedulers for different requirements. Someone who is deploying at massive scale will probably accept a locally optimal solution (and a scheduler that scales better), but someone with a smaller cloud will want the globally optimum solution.
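
For concreteness, the fanout/retry part as I understand it would look roughly like the sketch below. Again, pick_host() and boot_on() are hypothetical stand-ins for the filter/weigh step and the actual boot call, not anything that exists in nova today.

    # Rough sketch only -- pick_host() and boot_on() are hypothetical.
    def handle_boot_requests(fanout, my_nodes, pick_host, boot_on):
        """Consume boot requests from the fanout queue; anything we cannot
        satisfy from our own partition goes back on the queue for another
        scheduler (or a later retry) to pick up.

        fanout   -- any object with get()/put(), e.g. a queue.Queue
        my_nodes -- {hostname: stats} for the nodes this scheduler owns
        """
        while True:
            request = fanout.get()
            host = pick_host(request, my_nodes)
            if host is None:
                fanout.put(request)   # re-insert: no local node fits
            else:
                boot_on(host, request)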

>>
>> This gives up the certainty that the scheduler will find a compute node
>> for a boot request on the first try. It is also possible that a request
>> gets unlucky and takes a long time to find the one scheduler that has
>> the one last "X" resource that it is looking for. There are some further
>> optimization strategies that can be employed (like queues based on hashes
>> already tried.. etc).
>>
>> Anyway, I don't see any point in trying to hot-rod the intelligent
>> scheduler to go super fast, when we can just optimize for having many
>> many schedulers doing the same body of work without blocking and without
>> pounding a database.
>
> These are some *very* good observations. I'd like all of the nova folks
> interested in this area to give some deep consideration of this type of
> approach.

I agree an approach like this is very interesting and is something worth exploring, especially at the summit. There are some clear pros and cons to an approach like this: for example, it will scale better, but it cannot find the optimum node to schedule on. My question is, at what scale does it make sense to adopt an approach like this? And how can we improve our current scheduler to scale better, not that it will ever scale better than the idea proposed here.

While we are talking about scale, there are some other big issues, such as RPC, that need to be sorted out as well.
<span class="HOEnZb"><font color="#888888">
--<br>
Russell Bryant<br>
</font></span><div class="HOEnZb"><div class="h5"><br>