[openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo scheduler/filters for nova and cinder
Alex Glikson
GLIKSON at il.ibm.com
Sun Nov 17 17:27:31 UTC 2013
Boris Pavlovic <bpavlovic at mirantis.com> wrote on 15/11/2013 05:57:20 PM:
> >> How do you envision the life cycle of such a scheduler in terms
> of code repository, build, test, etc?
>
> As a first step we could just build it inside Nova; once we finish
> and prove that this approach works well, we could split it out of
> Nova into a separate project, integrate it with devstack, and so on...
So, Cinder (as well as Neutron, and potentially others) would need to be
hooked into Nova's RPC?
> >> What kind of changes to provisioning APIs do you envision to
> 'feed' such a scheduler?
>
> At this moment nova.scheduler is already a separate service with an
> AMQP queue; all we need is to add one new RPC method to it that
> updates the state of a host.
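>
> For example, something along these lines (a hypothetical sketch of
> that method, not the actual patch):
>
>     class SchedulerManager(object):
>         def __init__(self):
>             self.host_states = {}  # host name -> last reported state
>
>         def update_host_state(self, context, host, state):
>             # New RPC endpoint: record the latest report for a host.
>             self.host_states[host] = state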
I was referring to external (REST) APIs. E.g., to specify affinity.
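For instance, today affinity is expressed through scheduler hints at boot
time, roughly like this (illustrative; the hint names come from Nova's
existing SameHostFilter/DifferentHostFilter):

    nova boot --image <image> --flavor <flavor> \
        --hint same_host=<server-uuid> my-server

i.e. "os:scheduler_hints": {"same_host": ["<server-uuid>"]} in the request
body. Would a standalone scheduler keep this model, or expose its own API?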
> >> Also, there are some interesting technical challenges (e.g.,
> >> state management across a potentially large number of memcached
> >> instances).
>
> 10-100k key-value pairs is nothing for memcached. So what kind of
> instances do you mean?
Instances of memcached, in an environment with multiple schedulers. I
think you mentioned that if we have, say, 10 schedulers, we would also
have 10 instances of memcached.
Regards,
Alex
> Best regards,
> Boris Pavlovic
>
>
> On Sun, Nov 10, 2013 at 4:20 PM, Alex Glikson <GLIKSON at il.ibm.com>
> wrote:
> Hi Boris,
>
> This is a very interesting approach.
> How do you envision the life cycle of such a scheduler in terms of
> code repository, build, test, etc?
> What kind of changes to provisioning APIs do you envision to 'feed'
> such a scheduler?
> Any particular reason you didn't mention Neutron?
> Also, there are some interesting technical challenges (e.g., state
> management across a potentially large number of memcached instances).
>
> Thanks,
> Alex
>
>
> Boris Pavlovic <bpavlovic at mirantis.com> wrote on 10/11/2013 07:05:42 PM:
>
> > From: Boris Pavlovic <bpavlovic at mirantis.com>
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev at lists.openstack.org>,
> > Date: 10/11/2013 07:07 PM
> > Subject: Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to
> > leverage oslo scheduler/filters for nova and cinder
> >
> > Hi Jay,
> >
> > Yes, we were working on putting all the common stuff (not only
> > filters) into oslo-scheduler.
> >
> > As a result of this work we understood that this is the wrong
> > approach, because it makes the resulting code very complex and
> > unclear, and we didn't actually find a way to put all the common
> > stuff inside oslo. Instead of making life too complex, we found a
> > better approach: implement the scheduler as a service that can
> > scale (the current solution has some scaling issues) and that
> > stores all data from Nova, Cinder, and probably other places.
> >
> > To implement such an approach we should change the current
> > architecture a bit:
> > 1) The scheduler should store all its data itself (not in nova.db
> > or cinder.db)
> > 2) The scheduler should always have its own snapshot of the "world"
> > state, and sync it with the other schedulers using something quite
> > fast (e.g. memcached); a rough sketch of this follows below
> > 3) Merge the scheduler RPC methods from Nova and Cinder into one
> > scheduler (this is possible once we store all data from Cinder and
> > Nova in one scheduler)
> > 4) Drop the Cinder and Nova tables that store host states (as we
> > won't need them)
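> >
> > To illustrate 1) and 2), here is a minimal sketch of how the
> > world-state sync could work (all names are made up; this is not the
> > code from the review):
> >
> >     import json
> >     import time
> >
> >     import memcache  # python-memcached
> >
> >
> >     class WorldState(object):
> >         """Scheduler-local snapshot of host states, synced via
> >         memcached."""
> >
> >         def __init__(self, servers=('127.0.0.1:11211',)):
> >             self.mc = memcache.Client(list(servers))
> >             self.hosts = {}  # local snapshot: host name -> state
> >
> >         def update_host_state(self, host, state):
> >             # The new RPC method lands here: publish the update to
> >             # memcached so the other schedulers can pick it up.
> >             state['updated_at'] = time.time()
> >             self.hosts[host] = state
> >             self.mc.set('host_state:%s' % host, json.dumps(state))
> >             known = self.mc.get('host_list') or []
> >             if host not in known:
> >                 known.append(host)
> >                 self.mc.set('host_list', known)
> >
> >         def sync(self):
> >             # Refresh the local snapshot from memcached; the newest
> >             # timestamp wins, so all schedulers converge on one view.
> >             for host in self.mc.get('host_list') or []:
> >                 raw = self.mc.get('host_state:%s' % host)
> >                 if not raw:
> >                     continue
> >                 state = json.loads(raw)
> >                 mine = self.hosts.get(host)
> >                 if (mine is None or
> >                         state['updated_at'] > mine['updated_at']):
> >                     self.hosts[host] = state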
> >
> > We have already implemented a starting point (the mechanism that
> > stores a snapshot of the world state and syncs it between the
> > different schedulers):
> >
> > https://review.openstack.org/#/c/45867/ (it is still a bit WIP)
> >
> > Best regards,
> > Boris Pavlovic
> > ---
> > Mirantis Inc.
> >
> >
>
> > On Sun, Nov 10, 2013 at 1:59 PM, Jay Lau <jay.lau.513 at gmail.com>
> > wrote:
> > I noticed that there is already a bp in oslo tracking what I want
> > to do:
> > https://blueprints.launchpad.net/oslo/+spec/oslo-scheduler
>
> > Thanks,
>
> > Jay
>
> >
>
> > 2013/11/9 Jay Lau <jay.lau.513 at gmail.com>
> > Greetings,
> >
> > In oslo we have already put some scheduler filter/weight logic, and
> > Cinder is using it; it seems we want both Nova and Cinder to use
> > this logic in the future.
> >
> > I found some problems:
> > 1) In Cinder, some filter/weight logic resides in
> > cinder/openstack/common/scheduler and some in cinder/scheduler.
> > This is inconsistent and will confuse Cinder hackers: where should
> > a scheduler filter/weight go?
> > 2) Nova is not using the filter/weight logic from oslo, and is not
> > using entry points to handle all filters/weights.
> > 3) There are not enough filters in oslo; we may need to add more,
> > such as a same-host filter, a different-host filter, a retry
> > filter, etc.
> >
> > So my proposal is as follows:
> > 1) Add more filters to oslo, such as a same-host filter, a
> > different-host filter, a retry filter, etc.
> > 2) Move all filter/weight logic in Cinder from cinder/scheduler to
> > cinder/openstack/common/scheduler
> > 3) Make Nova use the filter/weight logic from oslo (move all filter
> > logic to nova/openstack/common/scheduler) and also use entry points
> > to handle all filter/weight logic; a rough sketch follows below.
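> >
> > For example, a filter would then be a small class plus an entry
> > point, along these lines (a sketch only; the module path and entry
> > point group are made up, and the real SameHostFilter in Nova
> > matches by instance UUIDs rather than host names):
> >
> >     # cinder/openstack/common/scheduler/filters/same_host_filter.py
> >     from cinder.openstack.common.scheduler import filters
> >
> >
> >     class SameHostFilter(filters.BaseHostFilter):
> >         """Pass only hosts named in the 'same_host' hint."""
> >
> >         def host_passes(self, host_state, filter_properties):
> >             hints = filter_properties.get('scheduler_hints') or {}
> >             wanted = hints.get('same_host', [])
> >             # With no hint given, nothing is restricted.
> >             return not wanted or host_state.host in wanted
> >
> > registered in setup.cfg so it is discovered via an entry point:
> >
> >     [entry_points]
> >     cinder.scheduler.filters =
> >         same_host = cinder.openstack.common.scheduler.filters.same_host_filter:SameHostFilter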
> >
> > Comments?
> >
> > Thanks,
> >
> > Jay
> >
> >