[openstack-dev] [Nova] support for multiple active scheduler policies/drivers

Alex Glikson GLIKSON at il.ibm.com
Wed Jul 24 21:33:56 UTC 2013


"Day, Phil" <philip.day at hp.com> wrote on 24/07/2013 12:39:16 PM:
> 
> If you want to provide a user with a choice about how much overcommit
> they will be exposed to then doing that in flavours and the 
> aggregate_instance_extra_spec filter seems the more natural way to 
> do this, since presumably you'd want to charge differently for those
> and the flavour list is normally what is linked to the pricing model. 

So, there are two aspects here. First, whether the policy should be part 
of the flavor definition or kept separate. I claim that in some cases it 
makes sense to specify it separately. For example, if we want to support 
multiple policies for the same virtual hardware configuration, making the 
policy part of the flavor extra specs would multiply the number of 
virtual hardware configurations (which is what flavors essentially are) 
by the number of policies -- contributing to an explosion in the number 
of flavors in the system. Moreover, although in some cases you would want 
the user to be aware of and distinguish between policies, this is not 
always the case. For example, the admin may want to apply a 
consolidation/packing policy in one aggregate, and a spreading policy in 
another. Showing the user two different flavors does not seem reasonable 
in such a case.
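
To illustrate the explosion (the flavor names and the 'policy' extra 
spec key below are made up for the example): with just two policies and 
three hardware configurations, encoding the policy in the extra specs 
already forces six flavors:

    nova flavor-create small.gold    101 2048 20 1
    nova flavor-create small.silver  102 2048 20 1
    nova flavor-create medium.gold   103 4096 40 2
    nova flavor-create medium.silver 104 4096 40 2
    nova flavor-create large.gold    105 8192 80 4
    nova flavor-create large.silver  106 8192 80 4
    nova flavor-key small.gold set policy=gold
    (...and similarly for the other five)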

Secondly, even if the policy *is* defined in the flavor extra specs, I 
can see value in having a separate filter to handle it. I personally see 
the main use-case for the extra spec filter as matching of capabilities. 
A resource management policy is something that should be hidden, or at 
least abstracted, from the user, and enforcing it with a separate filter 
could be a 'cleaner' design, and also more convenient -- from both the 
developer perspective and the admin perspective.
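
For concreteness, here is a minimal sketch of such a filter, written 
against the current filter scheduler interface. This is purely 
illustrative (not the blueprint's actual code), and the 'policy' extra 
spec / aggregate metadata key name is an assumption:

    from nova import db
    from nova.scheduler import filters


    class AggregatePolicyFilter(filters.BaseHostFilter):
        """Pass only hosts whose aggregate is tagged with the policy
        requested for this instance (illustrative sketch)."""

        def host_passes(self, host_state, filter_properties):
            spec = filter_properties.get('request_spec', {})
            extra = spec.get('instance_type', {}).get('extra_specs', {})
            wanted = extra.get('policy')  # assumed key name
            if not wanted:
                # no policy requested -- do not restrict placement
                return True
            context = filter_properties['context'].elevated()
            # returns {'policy': set of values} across the host's aggregates
            metadata = db.aggregate_metadata_get_by_host(
                context, host_state.host, key='policy')
            return wanted in metadata.get('policy', set())

The point is that the user only ever sees the flavor; the policy 
enforcement stays entirely on the admin/operator side.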

> I also like the approach taken by the recent changes to the ram 
> filter where the scheduling characteristics are defined as 
> properties of the aggregate rather than separate stanzas in the 
> configuration file.

Indeed, a subset of the scenarios we had in mind can be implemented by 
making each property of each filter/weight an explicit key-value pair on 
the aggregate, and making each of the filters/weights aware of those 
aggregate properties.
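
For example, with the aggregate-aware ram filter, per-aggregate 
overcommit looks roughly like this (the aggregate IDs are illustrative):

    # nova.conf: use the aggregate-aware variant of the ram filter
    scheduler_default_filters = AggregateRamFilter,ComputeFilter,...

    # per-aggregate overcommit is then plain aggregate metadata
    nova aggregate-set-metadata 1 ram_allocation_ratio=1.0
    nova aggregate-set-metadata 2 ram_allocation_ratio=2.0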
However, our design has several potential advantages, such as:
1) different policies can have different sets of filters/weights
2) different policies can even be enforced by different drivers
3) the configuration is more maintainable -- the admin defines policies 
in one place, and not in 10 places (if you have a large environment with 
10 aggregates). One of the side-effects is improved consistency: if the 
admin needs to change a policy, he does it in one place, and he can be 
sure that all the aggregates comply with one of the valid policies (see 
the configuration sketch after this list).
4) the developer of filters/weights does not need to care where the 
parameters are persisted -- in nova.conf or as aggregate properties
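
To make (3) concrete, the shape we have in mind is roughly the 
following. The section and option names here are illustrative, not 
necessarily the blueprint's exact syntax:

    # nova.conf: each policy is defined exactly once...
    [policy:gold]
    scheduler_default_filters = AggregatePolicyFilter,RamFilter
    ram_allocation_ratio = 1.0

    [policy:silver]
    scheduler_default_filters = AggregatePolicyFilter,RamFilter
    ram_allocation_ratio = 2.0

    # ...and each aggregate merely points at a policy by name:
    nova aggregate-set-metadata 1 policy=gold
    nova aggregate-set-metadata 2 policy=silver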

> An alternative, and the use case I'm most interested in at the 
> moment, is where we want the user to be able to define the 
> scheduling policies on a specific set of hosts allocated to them (in
> this case they pay for the host, so if they want to oversubscribe on
> memory/cpu/disk then they should be able to). 
[...]
> It's not clear to me if what you're proposing addresses an additional 
> gap between this and the combination of the aggregate_extra_spec 
> filter + revised filters to get their configurations from aggregates?

IMO, this can be done with our proposed implementation. 
Going forward, I think that policies should be first-class citizens 
(rather than static sections in nova.conf, or just sets of key-value pairs 
associated with aggregates). Then we can provide APIs to manage them in a 
more flexible manner.
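
Purely as a hypothetical illustration of the direction (no such API 
exists today), managing a policy as a first-class resource might look 
something like:

    POST /v2/{tenant_id}/os-scheduler-policies
    {"policy": {"name": "gold",
                "filters": ["AggregatePolicyFilter", "RamFilter"],
                "properties": {"ram_allocation_ratio": 1.0}}}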

Regards,
Alex

> Cheers,
> Phil
> 
> > -----Original Message-----
> > From: Russell Bryant [mailto:rbryant at redhat.com]
> > Sent: 23 July 2013 22:32
> > To: openstack-dev at lists.openstack.org
> > Subject: Re: [openstack-dev] [Nova] support for multiple active scheduler
> > policies/drivers
> > 
> > On 07/23/2013 04:24 PM, Alex Glikson wrote:
> > > Russell Bryant <rbryant at redhat.com> wrote on 23/07/2013 07:19:48 PM:
> > >
> > >> I understand the use case, but can't it just be achieved with 2
> > >> flavors and without this new aggregate-policy mapping?
> > >>
> > >> flavor 1 with extra specs to say aggregate A and policy Y
> > >> flavor 2 with extra specs to say aggregate B and policy Z
> > >
> > > I agree that this approach is simpler to implement. One of the
> > > differences is the level of enforcement that instances within an
> > > aggregate are managed under the same policy. For example, nothing
> > > would prevent the admin from defining 2 flavors with conflicting
> > > policies that can be applied to the same aggregate. Another aspect
> > > of the same problem is the case when the admin wants to apply 2
> > > different policies in 2 aggregates with the same
> > > capabilities/properties. A natural way to distinguish between the
> > > two would be to add an artificial property that would be different
> > > between the two -- but then just specifying the policy would make
> > > most sense.
> > 
> > I'm not sure I understand this.  I don't see anything here that
> > couldn't be accomplished with flavor extra specs.  Is that what
> > you're saying?  Or are you saying there are cases that can not be set
> > up using that approach?
> > 
> > >> > Well, I can think of a few use-cases where the selection
> > >> > approach might be different. For example, it could be based on
> > >> > tenant properties (derived from some kind of SLA associated with
> > >> > the tenant, determining the over-commit levels), or image
> > >> > properties (e.g., I want to determine placement of Windows
> > >> > instances taking into account Windows licensing considerations),
> > >> > etc.
> > >>
> > >> Well, you can define tenant-specific flavors that could have
> > >> different policy configurations.
> > >
> > > Would it be possible to express something like 'I want CPU
> > > over-commit of 2.0 for tenants with SLA=GOLD, and 4.0 for tenants
> > > with SLA=SILVER'?
> > 
> > Sure.  Define policies for sla=gold and sla=silver, and the flavors
> > for each tenant would refer to those policies.
> > 
> > >> I think I'd rather hold off on the extra complexity until there is
> > >> a concrete implementation of something that requires and justifies
> > >> it.
> > >
> > > The extra complexity is actually not that huge... we reuse the
> > > existing mechanism of generic filters.
> > 
> > I just want to see something that actually requires it before it
> > goes in.  I take exposing a pluggable interface very seriously.  I
> > don't want to expose more random plug points than necessary.
> > 
> > > Regarding both suggestions -- I think the value of this blueprint
> > > will be somewhat limited if we keep just the simplest version. But
> > > if people think that it makes a lot of sense to do it in small
> > > increments -- we can probably split the patch into smaller pieces.
> > 
> > I'm certainly not trying to diminish value, but I am looking for
> > specific cases that can not be accomplished with a simpler solution.
> > 
> > --
> > Russell Bryant