[openstack-dev] [Nova] support for multiple active scheduler policies/drivers
Alex Glikson
GLIKSON at il.ibm.com
Mon Jul 29 15:15:22 UTC 2013
It is certainly an interesting idea to have a policy service managed via
APIs, and to have the scheduler as a potential consumer of such a service.
However, I suspect that this requires more discussion, and certainly can't
be added for Havana (you can count on me to suggest it as a topic for the
upcoming design summit).
Moreover, I think the currently proposed implementation (incorporating
some of the initial feedback provided in this thread) delivers roughly 80%
of the value with 20% of the effort and complexity.
If anyone has specific suggestions on how to make it better without adding
another 1000 lines of code -- I would be more than glad to adjust.
IMO, it is better to start simple in Havana, get feedback from the field
on specific usability/feature requirements sooner rather than later, and
improve incrementally going forward. The current design
provides clear added value, while not introducing anything that would be
conceptually difficult to change in the future (e.g., no new APIs, no
schema changes, fully backwards compatible).
By the way, the inspiration for the current design was the multi-backend
support in Cinder, where a similar approach is used to define multiple
Cinder backends in cinder.conf, and simple logic selects the appropriate
one at runtime based on the name of the corresponding section.
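For reference, here is a minimal sketch of that Cinder pattern (the
backend names and driver choice are purely illustrative):

  [DEFAULT]
  # one config section per backend
  enabled_backends = lvm-fast,lvm-slow

  [lvm-fast]
  volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
  volume_backend_name = LVM_fast

  [lvm-slow]
  volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
  volume_backend_name = LVM_slow

The Cinder scheduler then picks a backend whose volume_backend_name
matches the extra spec of the requested volume type (e.g. "cinder
type-key fast set volume_backend_name=LVM_fast"); the current proposal
applies the same section-per-policy idea to nova.conf.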
Regards,
Alex
P.S. The code is ready for review. Jenkins is still failing, but this
seems to be due to a bug that has already been reported and fixed; the fix
should be merged soon.
"Day, Phil" <philip.day at hp.com> wrote on 28/07/2013 01:29:22 PM:
> From: "Day, Phil" <philip.day at hp.com>
> To: OpenStack Development Mailing List
> <openstack-dev at lists.openstack.org>,
> Date: 28/07/2013 01:36 PM
> Subject: Re: [openstack-dev] [Nova] support for multiple active
> scheduler policies/drivers
>
>
>
> > From: Joe Gordon [mailto:joe.gordon0 at gmail.com]
> > Sent: 26 July 2013 23:16
> > To: OpenStack Development Mailing List
> > Subject: Re: [openstack-dev] [Nova] support for multiple active
> > scheduler policies/drivers
> >
> >
> >>
> >> On Wed, Jul 24, 2013 at 6:18 PM, Alex Glikson <GLIKSON at il.ibm.com>
> >> wrote:
> >>> Russell Bryant <rbryant at redhat.com> wrote on 24/07/2013 07:14:27 PM:
> >>
> >>
> >>> I really like your point about not needing to set things up via a
> >>> config file. That's fairly limiting since you can't change it on the
> >>> fly via the API.
>
> >> True. As I pointed out in another response, the ultimate goal would
> >> be to have policies as 'first class citizens' in Nova, including a DB
> >> table, API, etc. Maybe even a separate policy service? But in the
> >> meantime, it seems that the approach with a config file is a
> >> reasonable compromise in terms of usability, consistency and
> >> simplicity.
>
> I think we need to be looking ahead to being able to delegate large
> parts of the functionality that is currently "admin only" in Nova, and
> a large part of that is moving things like this from the config file
> into APIs. Once we have the Domain capability in Keystone fully
> available to services like Nova, we need to think more about ownership
> of resources like hosts, and about being able to delegate this kind of
> capability.
>
>
> > I do like your idea of making policies first class citizens in Nova,
> > but I am not sure doing this in Nova is enough. Wouldn't we need
> > similar things in Cinder and Neutron? Unfortunately this does tie
> > into how to do good scheduling across multiple services, which is
> > another rabbit hole altogether.
> >
> > I don't like the idea of putting more logic in the config file; as it
> > is, the config files are already too complex, making running any
> > OpenStack deployment require some config file templating and some
> > metadata magic (like Heat). I would prefer to keep things like this
> > in aggregates, or something else with a REST API. So why not build a
> > tool on top of aggregates to push the appropriate metadata into the
> > aggregates? This will give you a central point to manage policies,
> > one that can easily be updated on the fly (unlike config files).
>
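For concreteness, a rough sketch of the aggregate-based approach
described above, using the existing nova CLI together with the
AggregateInstanceExtraSpecsFilter (the "policy" key, the aggregate and
host names, and the flavor are illustrative; older clients may expect
the aggregate id rather than its name):

  # group hosts into an aggregate and tag it with policy-like metadata
  nova aggregate-create cpu-bound-hosts
  nova aggregate-add-host cpu-bound-hosts compute-01
  nova aggregate-set-metadata cpu-bound-hosts policy=cpu_bound

  # require a matching extra spec on the flavor
  nova flavor-key m1.large set aggregate_instance_extra_specs:policy=cpu_bound

With AggregateInstanceExtraSpecsFilter enabled in
scheduler_default_filters, instances of that flavor are placed only on
hosts in the matching aggregate, and the metadata can be updated via the
API at any time, without touching nova.conf or restarting the scheduler.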
> I agree with Joe on this point, and this is the approach we're taking
> with the Pcloud / whole-host-allocation blueprint:
>
> https://review.openstack.org/#/c/38156/
> https://wiki.openstack.org/wiki/WholeHostAllocation
>
> I don't think, realistically, that we'll be able to land this in Havana
> now (as much as anything, I don't think it has had enough air time yet
> to be sure we have a consensus on all of the details), but Rackspace
> are now helping with part of this and we do expect to have something
> in a PoC / demonstrable state for the Design Summit, to provide a more
> focused discussion. Because the code is layered on top of existing
> aggregate and scheduler features, it's pretty easy to keep it as
> something we can just keep rebasing.
>
> Regards,
> Phil