[openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo scheduler/filters for nova and cinder
khanh-toan.tran at cloudwatt.com
Mon Nov 18 09:51:24 UTC 2013
>>> Is it really OK to drop these tables? Could Nova still work without
them (e.g. rollback)? And if Ceilometer is about to ask Nova for host state
>>> Yes, it is OK, because now Ceilometer and other projects could ask the
scheduler about host state. (I don't see any problems.)
IMO, since the scheduler does not communicate directly with hypervisors
(that is the role of the computes), we should not rely on it for collecting
host state data. I think it should be the other way around: the scheduler
should rely on others, such as Ceilometer, for that matter. But this means
we have to deal with data synchronization.
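To illustrate the synchronization concern: if the scheduler consumes host state pushed by an external collector, it has to decide when that data is too stale to trust. A minimal sketch (all names hypothetical; this is not an actual Nova or Ceilometer API):

```python
import time

class HostStateCache:
    """Caches host metrics pushed by an external collector (e.g. a
    Ceilometer-like service); entries older than max_age are ignored."""

    def __init__(self, max_age=60.0):
        self.max_age = max_age
        self._states = {}  # host -> (timestamp, metrics dict)

    def update(self, host, metrics, timestamp=None):
        # Called by the collector's notification consumer.
        self._states[host] = (timestamp or time.time(), metrics)

    def get_fresh(self, now=None):
        # The scheduler only considers hosts with recent enough data.
        now = now or time.time()
        return {host: metrics
                for host, (ts, metrics) in self._states.items()
                if now - ts <= self.max_age}

cache = HostStateCache(max_age=60.0)
cache.update('compute-1', {'free_ram_mb': 4096}, timestamp=100.0)
cache.update('compute-2', {'free_ram_mb': 2048}, timestamp=10.0)
fresh = cache.get_fresh(now=120.0)  # compute-2's data is 110s old: stale
```

The staleness cutoff is the simplest possible policy; a real deployment would also need to handle collector outages and clock skew.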
> By the way, since the relationships between resources are likely to
reside in Heat DB, it could make sense to have this "thing" as a new
Engine under Heat umbrella (as discussed in a couple of other threads, you
are also likely to need orchestration when dealing with groups of
resources).
I'm not so sure that this scheduler should fall under Heat. Heat does not
know *every* compute; it communicates with the Nova API and that is all it
knows. The scheduler has complete knowledge of the infrastructure and
answers the question of which compute hosts which VM. Thus whoever the
scheduler responds to should be able to communicate with *every* compute.
For instance, the scheduler can initiate VMs directly like in the old days,
or have some conductor for this task, or some orchestration like you said.
Of course, a scenario in which Heat calls this scheduler, which then
initiates the VMs, is sound. But otherwise, integrating this scheduler
into Heat is too intrusive into the infrastructure.
From: Alex Glikson [mailto:GLIKSON at il.ibm.com]
Sent: Monday, 18 November 2013 09:29
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to
leverage oslo scheduler/filters for nova and cinder
Boris Pavlovic <bpavlovic at mirantis.com> wrote on 18/11/2013 08:31:20 AM:
> Actually schedulers in nova and cinder are almost the same.
Well, this is kind of expected, since the Cinder scheduler started as a
copy-paste of the Nova scheduler :-) But they have already started
diverging (not sure whether this is necessarily a bad thing or not).
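Since both schedulers follow the same filter/weigher pattern, the common core that could be shared is quite small. A minimal sketch of that pattern (all names here are hypothetical, not the actual oslo or Nova interfaces):

```python
class BaseFilter:
    """A filter passes or rejects a host for a given request."""
    def host_passes(self, host_state, request):
        raise NotImplementedError

class RamFilter(BaseFilter):
    # The same filter logic serves Nova (RAM for a VM) or Cinder
    # (capacity for a volume), as long as host_state and request
    # agree on the resource keys they use.
    def host_passes(self, host_state, request):
        return host_state['free_ram_mb'] >= request['ram_mb']

def schedule(hosts, request, filters, weigher):
    """Generic filter scheduler: drop hosts that fail any filter,
    then pick the best-weighted survivor."""
    survivors = [h for h in hosts
                 if all(f.host_passes(h, request) for f in filters)]
    if not survivors:
        raise RuntimeError('No valid host found')
    return max(survivors, key=weigher)

hosts = [{'name': 'a', 'free_ram_mb': 512},
         {'name': 'b', 'free_ram_mb': 8192},
         {'name': 'c', 'free_ram_mb': 4096}]
best = schedule(hosts, {'ram_mb': 1024}, [RamFilter()],
                weigher=lambda h: h['free_ram_mb'])
# 'a' is filtered out; 'b' wins on free RAM
```

Nothing in `schedule` is specific to compute or volume scheduling, which is what makes a single shared "scheduler that works well" plausible.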
> >> So, Cinder (as well as Neutron, and potentially others) would
> need to be hooked to Nova rpc?
> As a first step, to prove approach yes, but I hope that we won't
> have "nova" or "cinder" scheduler at all. We will have just
> scheduler that works well.
So, do you envision this code being merged in Nova first and then moved
out? Or started as a new "thing" from the beginning?
Also, once it is separate (probably not in Icehouse?), will the
communication continue to be over RPC, or would we need to switch to REST?
This could be conceptually similar to the communication between cells
today, via a separate RPC.
By the way, since the relationships between resources are likely to reside
in Heat DB, it could make sense to have this "thing" as a new Engine under
Heat umbrella (as discussed in a couple of other threads, you are also
likely to need orchestration when dealing with groups of resources).
> >> Instances of memcached. In an environment with multiple
> schedulers. I think you mentioned that if we have, say, 10
> schedulers, we will also have 10 instances of memcached.
> Actually, we are going to make an implementation based on SQLAlchemy as
> well. In the case of memcached, I was just describing one possible
> architecture, where you could run a memcached instance on each server
> with a scheduler service. But it is not required; you could even have
> just one memcached instance for all schedulers (but that is not HA).
I am not saying that having multiple instances of memcached is wrong -
just that it would require some work. It seems that one possible approach
could be partitioning -- each scheduler will take care of a subset of the
environment (availability zone?). This way the data will be naturally
partitioned too, and the data in the memcached instances will not need to
be synchronized. Of course, making this HA would also require some effort
(something like ZooKeeper could be really useful to manage all of this --
configuration of each scheduler, ownership of underlying 'zones', leader
election, and so on).
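The partitioning idea above can be as simple as a stable mapping from availability zone to scheduler, so each scheduler (and its memcached) tracks only its own slice of the environment. A sketch under the assumption that zones are the partitioning unit (the function and names are hypothetical):

```python
import zlib

def owner(scheduler_ids, zone):
    """Deterministically map an availability zone to one scheduler,
    so every node computes the same ownership without coordination.
    (A real deployment would want consistent hashing, or a coordinator
    such as ZooKeeper, to rebalance when a scheduler fails.)"""
    ids = sorted(scheduler_ids)
    return ids[zlib.crc32(zone.encode()) % len(ids)]

schedulers = ['sched-1', 'sched-2', 'sched-3']
# Each zone lands on exactly one scheduler; that scheduler's memcached
# holds host state only for the zones it owns, so nothing needs to be
# synchronized between the memcached instances.
assignment = {z: owner(schedulers, z) for z in ('az1', 'az2', 'az3')}
```

The determinism is the point: two schedulers never claim the same zone, which is exactly why the per-scheduler memcached data never has to be reconciled.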
> Best regards,
> Boris Pavlovic
> Mirantis Inc.