[openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo schduler/filters for nova and cinder
Boris Pavlovic
bpavlovic at mirantis.com
Mon Nov 18 06:31:20 UTC 2013
Khanh-Toan,
>> There is a need for a scheduler that can schedule a group of resources
as a whole, which is difficult to realize due to the separation of the Nova
and Cinder schedulers. Thus I'm in favor of a dedicated scheduler component.
What we need is one scheduler that is able to store all of the data (from
Nova, Cinder, etc.) effectively.
>> However, before talking about API or implementation, wouldn't it be
better to see if the nova/cinder schedulers are independent enough to be
separated from the core, in particular the data that they require to make a
proper scheduling decision? It is reasonable to look again at the current
architecture of Nova and Cinder to see what relation nova-scheduler and
cinder-scheduler have with the rest of the nova/cinder components, which
data they take from the nova/cinder DB, and whether these data can be
separated from Nova/Cinder.
Before starting the implementation of the new scheduler we did exactly this
investigation. The schedulers in Nova and Cinder are actually almost the
same, and they are already fairly well separated from the core of the
projects:
1) They are separate services
2) Other services (e.g. the compute API) already call the scheduler through
rpc
3) The scheduler services call other services (e.g. the compute manager)
through rpc
4) The code base for the scheduler service is distinct from that of the
other services (the common parts are already mostly in oslo)
The only thing that binds the scheduler tightly to a project is the project
DB. But after switching to separate storage (e.g. memcached), the scheduler
won't depend on the project DB.
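To make this concrete, here is a minimal sketch (not the code from the patch
under review) of how host state could be kept in memcached instead of the
project DB. The HostStateStore class, the "host_state:" key prefix and the
record layout are illustrative assumptions:

    import json
    import time

    import memcache  # python-memcached

    class HostStateStore(object):
        """Keep per-host capability snapshots outside nova.db/cinder.db."""

        def __init__(self, servers=('127.0.0.1:11211',),
                     prefix='host_state:'):
            self.client = memcache.Client(list(servers))
            self.prefix = prefix

        def update_host(self, host, capabilities):
            # Called whenever a host reports its state (e.g. via the new
            # rpc method mentioned later in this thread).
            record = {'capabilities': capabilities,
                      'updated_at': time.time()}
            self.client.set(self.prefix + host, json.dumps(record))

        def get_host(self, host):
            raw = self.client.get(self.prefix + host)
            return json.loads(raw) if raw else None

With something like this, the scheduler reads and writes host state through
its own store and never touches the project DB.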
>> Is it really OK to drop these tables? Could Nova work without them
(e.g. for rollback)? And what if Ceilometer wants to ask nova for host state
metrics?
Yes, it is OK, because Ceilometer and other projects could then ask the
scheduler about host state. (I don't see any problems with this.)
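For example, another service could ask the scheduler over rpc roughly like
this (a sketch only: the transport setup is the standard oslo.messaging
pattern, but the 'get_host_state' method and the topic/version values are
hypothetical, not existing Nova or Cinder rpc calls):

    from oslo.config import cfg
    from oslo import messaging

    # Assumes the usual rpc options (rabbit etc.) are present in the config.
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='scheduler', version='1.0')
    client = messaging.RPCClient(transport, target)

    # 'get_host_state' is a hypothetical method the unified scheduler
    # would expose.
    host_state = client.call({}, 'get_host_state', host='compute-1')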
Alex,
>> So, Cinder (as well as Neutron, and potentially others) would need to be
hooked to Nova rpc?
As a first step, to prove the approach, yes. But I hope that eventually we
won't have a "nova" or "cinder" scheduler at all; we will have just one
scheduler that works well.
>> I was referring to external (REST) APIs. E.g., to specify affinity.
Yes, this should be moved to the scheduler API as well.
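For reference, affinity/anti-affinity is expressed today through Nova's
scheduler hints (e.g. the different_host hint handled by
DifferentHostFilter); under this proposal the same information would be
accepted by the standalone scheduler's API instead. A rough client-side
illustration, with placeholder credentials and IDs:

    from novaclient.v1_1 import client

    # All credentials and IDs below are placeholders.
    nova = client.Client('USER', 'PASSWORD', 'TENANT',
                         'http://keystone:5000/v2.0')

    nova.servers.create(
        name='web-2',
        image='IMAGE_UUID',
        flavor='FLAVOR_ID',
        # Anti-affinity: don't place this instance on the host that runs
        # the instance below.
        scheduler_hints={'different_host': ['EXISTING_SERVER_UUID']})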
>> Instances of memcached. In an environment with multiple schedulers. I
think you mentioned that if we have, say, 10 schedulers, we will also have
10 instances of memcached.
Actually, we are going to provide an implementation based on SQLAlchemy as
well. With memcached I was just describing one possible architecture: you
could run a memcached instance on each server that runs a scheduler service.
But that is not required; you could even have just one memcached instance
for all schedulers (though that is not HA).
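In client terms the two layouts only differ in the server list each
scheduler is pointed at; a sketch with made-up host names:

    import memcache

    # Layout 1: every scheduler node runs its own memcached and each
    # scheduler client is pointed at all of them, so keys are spread
    # across the pool and every scheduler sees the same shared state.
    pool = memcache.Client(['scheduler-1:11211',
                            'scheduler-2:11211',
                            'scheduler-3:11211'])

    # Layout 2: one memcached instance shared by all schedulers.
    # Simpler, but a single point of failure (not HA), as noted above.
    single = memcache.Client(['memcached-1:11211'])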
Best regards,
Boris Pavlovic
---
Mirantis Inc.
On Sun, Nov 17, 2013 at 9:27 PM, Alex Glikson <GLIKSON at il.ibm.com> wrote:
> Boris Pavlovic <bpavlovic at mirantis.com> wrote on 15/11/2013 05:57:20 PM:
>
>
> > >> How do you envision the life cycle of such a scheduler in terms
> > of code repository, build, test, etc?
>
> >
> > As a first step we could just build it inside nova; when we finish
> > and prove that this approach works well, we could split it out of
> > nova into a separate project, integrate it with devstack, and so on...
>
> So, Cinder (as well as Neutron, and potentially others) would need to be
> hooked to Nova rpc?
>
> > >> What kind of changes to provisioning APIs do you envision to
> > 'feed' such a scheduler?
> >
> > At this moment nova.scheduler is already a separate service with an amqp
> > queue; what we need at this moment is to add one new rpc method to it
> > that will update the state of a given host.
>
> I was referring to external (REST) APIs. E.g., to specify affinity.
>
> > >> Also, there are some interesting technical challenges (e.g.,
> > state management across potentially large number of instances of
> memcached).
> >
> > > 10-100k key-value pairs is nothing for memcached. So what kind of
> > > instances do you mean?
>
> Instances of memcached. In an environment with multiple schedulers. I
> think you mentioned that if we have, say, 10 schedulers, we will also have
> 10 instances of memcached.
>
> Regards,
> Alex
>
> > Best regards,
> > Boris Pavlovic
> >
> >
>
> > On Sun, Nov 10, 2013 at 4:20 PM, Alex Glikson <GLIKSON at il.ibm.com>
> wrote:
> > Hi Boris,
> >
> > This is a very interesting approach.
> > How do you envision the life cycle of such a scheduler in terms of
> > code repository, build, test, etc?
> > What kind of changes to provisioning APIs do you envision to 'feed'
> > such a scheduler?
> > Any particular reason you didn't mention Neutron?
> > Also, there are some interesting technical challenges (e.g., state
> > management across potentially large number of instances of memcached).
> >
> > Thanks,
> > Alex
> >
> >
> > Boris Pavlovic <bpavlovic at mirantis.com> wrote on 10/11/2013 07:05:42 PM:
> >
> > > From: Boris Pavlovic <bpavlovic at mirantis.com>
> > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > <openstack-dev at lists.openstack.org>,
> > > Date: 10/11/2013 07:07 PM
> > > Subject: Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to
> > > leverage oslo schduler/filters for nova and cinder
> > >
> > > Jay,
> > >
> > > Hi Jay, yes, we were working on putting all the common stuff into
> > > oslo-scheduler (not only the filters).
> > >
> > > As a result of this work we understood that this is the wrong
> > > approach, because it makes the resulting code very complex and
> > > unclear, and we actually didn't find a way to put all the common
> > > stuff inside oslo. Instead of making life too complex, we found a
> > > better approach: implement a scheduler-as-a-service that can scale
> > > (the current solution has some scaling issues) and store all data
> > > from nova, cinder & probably other places.
> > >
> > > To implement such an approach we should change the current
> > > architecture a bit:
> > > 1) The scheduler should store all of its data itself (not in nova.db
> > > & cinder.db)
> > > 2) The scheduler should always have its own snapshot of the "world"
> > > state, and sync it with the other schedulers using something that is
> > > quite fast (e.g. memcached)
> > > 3) Merge the scheduler rpc methods from nova & cinder into one
> > > scheduler (this is possible if we store all data from cinder & nova
> > > in one scheduler)
> > > 4) Drop the cinder and nova tables that store host states (as we
> > > won't need them)
> > >
> > > We have already implemented a starting point (the mechanism that
> > > stores a snapshot of the world state & syncs it between different
> > > schedulers):
> > >
> > > https://review.openstack.org/#/c/45867/ (it is still a bit WIP)
> > >
> > > Best regards,
> > > Boris Pavlovic
> > > ---
> > > Mirantis Inc.
> > >
> > >
> >
> > > On Sun, Nov 10, 2013 at 1:59 PM, Jay Lau <jay.lau.513 at gmail.com>
> wrote:
> > > I noticed that there is already a blueprint in oslo tracking what I
> > > want to do:
> > > https://blueprints.launchpad.net/oslo/+spec/oslo-scheduler
> >
> > > Thanks,
> >
> > > Jay
> >
> > >
> >
> > > 2013/11/9 Jay Lau <jay.lau.513 at gmail.com>
> > > Greetings,
> > >
> > > We have already put some scheduler filter/weight logic into oslo, and
> > > cinder is using the oslo scheduler filter/weight logic; it seems we
> > > want both nova & cinder to use this logic in the future.
> > >
> > > I found the following problems:
> > > 1) In cinder, some filter/weight logic resides in
> > > cinder/openstack/common/scheduler and some in cinder/scheduler. This
> > > is inconsistent and will also confuse cinder hackers: where should a
> > > scheduler filter/weight go?
> > > 2) Nova is not using the filters/weights from oslo and is also not
> > > using entry points to handle filters/weights.
> > > 3) There are not enough filters in oslo; we may need to add more
> > > there, such as a same-host filter, different-host filter, retry
> > > filter, etc.
> > >
> > > So my proposal is as follows:
> > > 1) Add more filters to oslo, such as a same-host filter,
> > > different-host filter, retry filter, etc.
> > > 2) Move all of cinder's filter/weight logic from cinder/scheduler to
> > > cinder/openstack/common/scheduler
> > > 3) Have nova use the filter/weight logic from oslo (move all filter
> > > logic to nova/openstack/common/scheduler) and also use entry points
> > > to handle all filter/weight logic.
> > >
> > > Comments?
> > >
> > > Thanks,
> > >
> > > Jay
> > >
> > >
> >
> >
>
>