[openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo scheduler/filters for nova and cinder

Alex Glikson GLIKSON at il.ibm.com
Mon Nov 18 08:28:50 UTC 2013


Boris Pavlovic <bpavlovic at mirantis.com> wrote on 18/11/2013 08:31:20 AM:

> Actually schedulers in nova and cinder are almost the same. 

Well, this is kind of expected, since the Cinder scheduler started as a 
copy-paste of the Nova scheduler :-) But they have already started diverging 
(not sure whether that is necessarily a bad thing or not).
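
Just to make the overlap concrete -- a host/backend filter in either project 
boils down to the same couple of methods, so a common library version could 
look roughly like the sketch below (the BaseHostFilter interface is modeled 
on Nova's filter scheduler, and the RAM example is just for illustration, 
not actual oslo code):

# Sketch of a filter that could live in a shared (oslo) scheduler library.
# Interface modeled on nova.scheduler.filters.BaseHostFilter; the names
# here are illustrative, not an existing oslo API.


class BaseHostFilter(object):
    """Skeleton of the filter contract both schedulers rely on."""

    def host_passes(self, host_state, filter_properties):
        raise NotImplementedError()

    def filter_all(self, hosts, filter_properties):
        # Keep only the hosts/backends that pass this filter.
        return [h for h in hosts
                if self.host_passes(h, filter_properties)]


class EnoughRamFilter(BaseHostFilter):
    """Accept hosts with enough free RAM for the request."""

    def host_passes(self, host_state, filter_properties):
        requested_mb = filter_properties.get('memory_mb', 0)
        return host_state.free_ram_mb >= requested_mb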

> >> So, Cinder (as well as Neutron, and potentially others) would 
> need to be hooked to Nova rpc? 
> 
> As a first step, to prove the approach, yes. But I hope that we won't 
> have a "nova" or "cinder" scheduler at all -- we will have just one 
> scheduler that works well. 

So, do you envision this code being merged in Nova first and then moved 
out, or starting as a new "thing" from the beginning? 
Also, once it is separate (probably not in Icehouse?), will the 
communication continue to be over RPC, or would we need to switch to REST? 
This could be conceptually similar to the communication between cells today, 
via a separate RPC.
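
For the RPC option, the hook from Nova/Cinder into a standalone scheduler 
service could stay fairly thin -- something like the following sketch with 
oslo.messaging (the 'scheduler' topic and the select_destinations method 
name are my assumptions, not an agreed interface):

# Rough sketch of calling an external scheduler service over RPC with
# oslo.messaging; the 'scheduler' topic and the 'select_destinations'
# method are placeholders.
from oslo_config import cfg
import oslo_messaging as messaging

transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='scheduler', version='1.0')
client = messaging.RPCClient(transport, target)


def pick_hosts(context, request_spec):
    # Blocking call into the (hypothetical) standalone scheduler; the same
    # request could equally be served over REST once the split happens.
    return client.call(context, 'select_destinations',
                       request_spec=request_spec)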

By the way, since the relationships between resources are likely to reside 
in the Heat DB, it could make sense to have this "thing" as a new Engine under 
the Heat umbrella (as discussed in a couple of other threads, you are also 
likely to need orchestration when dealing with groups of resources).

> >> Instances of memcached, in an environment with multiple 
> schedulers: I think you mentioned that if we have, say, 10 
> schedulers, we will also have 10 instances of memcached. 
> 
> Actually, we are going to make an implementation based on SQLAlchemy as 
> well. In the case of memcached, I was just describing one possible 
> architecture, where you could run a memcached instance on each server 
> with a scheduler service. But it is not required; you can have even 
> just one memcached instance for all schedulers (but that is not HA).
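
If I understand the memcached variant correctly, it is essentially a shared 
cache of host/backend state that every scheduler reads and refreshes, 
roughly along these lines (python-memcached; the key layout and TTL are my 
own assumptions):

# Sketch of schedulers sharing host/backend state through memcached
# (python-memcached client; key naming and TTL are illustrative).
import json

import memcache

# One memcached per scheduler node, or a single shared one -- same code path.
mc = memcache.Client(['127.0.0.1:11211'])


def publish_host_state(host, state, ttl=60):
    # Each node (or the scheduler on its behalf) refreshes its record
    # periodically; stale records simply expire.
    mc.set('host_state/%s' % host, json.dumps(state), time=ttl)


def get_host_state(host):
    raw = mc.get('host_state/%s' % host)
    return json.loads(raw) if raw else None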

I am not saying that having multiple instances of memcached is wrong -- 
just that it would require some work. It seems that one possible approach 
could be partitioning: each scheduler would take care of a subset of the 
environment (an availability zone?). This way the data would be naturally 
partitioned too, and the memcached instances would not need to be 
synchronized. Of course, making this HA would also require some effort 
(something like ZooKeeper could be really useful to manage all of this -- 
configuration of each scheduler, ownership of the underlying 'zones', leader 
election, etc.).
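
To make that last point a bit more concrete, the ZooKeeper side could look 
roughly like this with kazoo (the znode paths and the zone-claiming scheme 
are purely illustrative, not a proposal):

# Sketch of per-zone ownership and leader election with ZooKeeper (kazoo).
# The znode paths and the idea of "claiming" a zone are illustrative only.
from kazoo.client import KazooClient
from kazoo.exceptions import NodeExistsError

zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()


def claim_zone(zone, scheduler_id):
    """Try to become the owner of a zone via an ephemeral znode."""
    try:
        zk.create('/schedulers/zones/%s' % zone,
                  scheduler_id.encode('utf-8'),
                  ephemeral=True, makepath=True)
        return True   # we own the zone until our ZooKeeper session dies
    except NodeExistsError:
        return False  # some other scheduler already owns it


def run_as_zone_leader(zone, scheduler_id, work):
    # Leader election among the schedulers interested in this zone;
    # 'work' runs only while we hold leadership.
    election = zk.Election('/schedulers/election/%s' % zone, scheduler_id)
    election.run(work)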

Regards,
Alex

> Best regards,
> Boris Pavlovic 
> ---
> Mirantis Inc. 

