[openstack-dev] Scheduler proposal

Ian Wells ijw.ubuntu at cack.org.uk
Fri Oct 9 21:36:00 UTC 2015


On 9 October 2015 at 12:50, Chris Friesen <chris.friesen at windriver.com>
wrote:

>> Has anybody looked at why 1 instance is too slow and what it would take
>> to make 1 scheduler instance work fast enough? This does not preclude
>> the use of concurrency for finer grain tasks in the background.
>
> Currently we pull data on all (!) of the compute nodes out of the database
> via a series of RPC calls, then evaluate the various filters in Python code.
>
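(For anyone not familiar with the code path, here is a minimal, hypothetical sketch of the pattern being described: all host state is held in memory and each candidate is run through a chain of Python filter predicates. The filter names and fields are illustrative, not Nova's actual classes.)

```python
# Sketch of the in-memory filter pass described above.  HostState and the
# two filters are hypothetical stand-ins for Nova's real filter classes.
from dataclasses import dataclass

@dataclass
class HostState:
    name: str
    free_ram_mb: int
    free_vcpus: int

def ram_filter(host, req):
    # Host passes only if it has enough free RAM for the request.
    return host.free_ram_mb >= req["ram_mb"]

def vcpu_filter(host, req):
    # Host passes only if it has enough free vCPUs for the request.
    return host.free_vcpus >= req["vcpus"]

FILTERS = [ram_filter, vcpu_filter]

def filter_hosts(hosts, req):
    # A host remains a candidate only if every filter accepts it.
    return [h for h in hosts if all(f(h, req) for f in FILTERS)]
```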

I'll say again: the database seems to me to be the problem here.  Not to
mention, you've just explained that the schedulers are in practice holding
all the data in memory in order to do the work, so what we really have is
an N-to-1-to-M pattern with a DB in the middle (the store-to-DB is rather
secondary, in fact), and one without incremental updates to the receivers.

> I suspect it'd be a lot quicker if each filter was a DB query.

That's certainly one solution, but again, unless you can tell me *why* this
information will not all fit in memory per process (when it does right
now), I'm still not clear why a database is required at all, let alone a
central one.  Even if it doesn't fit, a local DB might be reasonable
compared to a centralised one.  The schedulers don't need to work off
precisely the same state; they just need to make different choices from
each other, which doesn't require a that's-mine-hands-off approach.  And
they aren't going to have a perfect view of the state of a distributed
system anyway, so retries are inevitable.
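(One way to get schedulers to diverge without shared locking, as a rough sketch with made-up names: rather than every scheduler deterministically taking the single best-weighted host, each picks randomly among its top-k candidates, so schedulers working from similar, possibly stale, state tend to choose different hosts.)

```python
import random

def pick_host(weighted_hosts, k=3, rng=random):
    # weighted_hosts: list of (host_name, weight) pairs.
    # Choosing randomly among the k best candidates means independent
    # schedulers with near-identical state usually land on different
    # hosts, reducing collisions (and hence retries).
    top = sorted(weighted_hosts, key=lambda hw: hw[1], reverse=True)[:k]
    return rng.choice(top)[0]
```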

On a different topic, on the weighted choice: it's not 'optimal', given
that this is a packing problem and so there isn't a perfect solution.  In
fact, given that we're trying to balance the choice of a preferable host
against the chance that multiple schedulers make different choices, it's
likely worse than even (uniform) weighting.  (Technically, I suspect we'd
want to rethink whether the weighting mechanism is actually getting us a
benefit.)
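(To put a toy number on the collision argument, with figures of my own rather than from the thread: if n schedulers all run the same deterministic weigher they all pick the same top host and are guaranteed to collide, whereas if each picks uniformly among m equally acceptable hosts the collision odds follow the birthday problem.)

```python
def p_no_collision(n_schedulers, m_hosts):
    # Probability that n schedulers, each choosing uniformly at random
    # from m hosts, all pick distinct hosts (birthday-problem style).
    p = 1.0
    for i in range(n_schedulers):
        p *= (m_hosts - i) / m_hosts
    return p

# e.g. 3 schedulers over 10 hosts: 72% chance of no collision, versus a
# 100% collision rate if all three deterministically pick the top host.
```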
-- 
Ian.
