[openstack-dev] [Nova] support for multiple active scheduler policies/drivers
rbryant at redhat.com
Tue Jul 23 21:32:24 UTC 2013
On 07/23/2013 04:24 PM, Alex Glikson wrote:
> Russell Bryant <rbryant at redhat.com> wrote on 23/07/2013 07:19:48 PM:
>> I understand the use case, but can't it just be achieved with 2 flavors
>> and without this new aggregate-policy mapping?
>> flavor 1 with extra specs to say aggregate A and policy Y
>> flavor 2 with extra specs to say aggregate B and policy Z
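The flavor-to-aggregate mapping described above can be sketched in miniature as the kind of matching Nova's aggregate extra-specs filtering performs. This is a simplified illustration, not real Nova code; the `policy` key and the flavor/aggregate values are assumptions made up for the example:

```python
# Simplified sketch: a host passes only if every scoped extra spec on
# the flavor matches the metadata of the host's aggregate.
# (Keys and values below are illustrative, not taken from the thread.)

SCOPE = "aggregate_instance_extra_specs:"

def host_passes(flavor_extra_specs, aggregate_metadata):
    for key, value in flavor_extra_specs.items():
        if not key.startswith(SCOPE):
            continue  # specs aimed at other filters are ignored here
        if aggregate_metadata.get(key[len(SCOPE):]) != value:
            return False
    return True

# Flavor 1 targets aggregate A / policy Y; flavor 2 targets B / Z.
flavor1 = {SCOPE + "policy": "Y"}
flavor2 = {SCOPE + "policy": "Z"}
agg_a = {"policy": "Y"}
agg_b = {"policy": "Z"}

print(host_passes(flavor1, agg_a))  # True
print(host_passes(flavor1, agg_b))  # False
```

In this sketch, nothing stops two flavors from carrying conflicting policy specs that both match the same aggregate, which is the enforcement gap the reply below raises.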
> I agree that this approach is simpler to implement. One of the
> differences is the level of enforcement that instances within an
> aggregate are managed under the same policy. For example, nothing would
> prevent the admin from defining 2 flavors with conflicting policies that can
> be applied to the same aggregate. Another aspect of the same problem is
> the case when admin wants to apply 2 different policies in 2 aggregates
> with same capabilities/properties. A natural way to distinguish between
> the two would be to add an artificial property that would be different
> between the two -- but then just specifying the policy would make most sense.
I'm not sure I understand this. I don't see anything here that couldn't
be accomplished with flavor extra specs. Is that what you're saying?
Or are you saying there are cases that can not be set up using that approach?
>> > Well, I can think of few use-cases when the selection approach might be
>> > different. For example, it could be based on tenant properties (derived
>> > from some kind of SLA associated with the tenant, determining the
>> > over-commit levels), or image properties (e.g., I want to determine
>> > placement of Windows instances taking into account Windows licensing
>> > considerations), etc.
>> Well, you can define tenant specific flavors that could have different
>> policy configurations.
> Would it be possible to express something like 'I want CPU over-commit of
> 2.0 for tenants with SLA=GOLD, and 4.0 for tenants with SLA=SILVER'?
Sure. Define policies for sla=gold and sla=silver, and the flavors for
each tenant would refer to those policies.
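That per-SLA lookup can be sketched as a small policy table keyed on an `sla` property. Everything here is hypothetical (the table, the key names, and the default ratio are assumptions for illustration, not Nova's actual configuration):

```python
# Hypothetical policy table mapping an SLA level to a CPU over-commit
# ratio, as in the gold=2.0 / silver=4.0 example above.  This is not
# real Nova code; it only illustrates resolving a policy from an
# aggregate's metadata.
POLICIES = {
    "gold": {"cpu_allocation_ratio": 2.0},
    "silver": {"cpu_allocation_ratio": 4.0},
}

def cpu_ratio_for(aggregate_metadata, default=16.0):
    """Return the over-commit ratio for the aggregate's SLA level."""
    sla = aggregate_metadata.get("sla")
    return POLICIES.get(sla, {}).get("cpu_allocation_ratio", default)

print(cpu_ratio_for({"sla": "gold"}))    # 2.0
print(cpu_ratio_for({"sla": "silver"}))  # 4.0
```

A tenant-specific flavor would then only need to reference the right SLA-tagged aggregate for the corresponding ratio to apply.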
>> I think I'd rather hold off on the extra complexity until there is a
>> concrete implementation of something that requires and justifies it.
> The extra complexity is actually not that huge; we reuse the existing
> mechanism of generic filters.
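The "generic filters" mechanism referred to here is the scheduler's filter plug point: each filter gets a chance to accept or reject every candidate host. A stripped-down illustration of that shape (the class and method names are assumptions in the spirit of Nova's filter scheduler, not its actual API):

```python
# Stripped-down illustration of a scheduler filter plug point.
# Class and method names are illustrative, not the real Nova classes.
class BaseHostFilter:
    def host_passes(self, host_state, filter_properties):
        raise NotImplementedError

class PolicyFilter(BaseHostFilter):
    """Pass only hosts whose aggregate policy matches the request."""
    def host_passes(self, host_state, filter_properties):
        wanted = filter_properties.get("policy")
        if wanted is None:
            return True  # no policy requested, so don't restrict
        return host_state.get("policy") == wanted

hosts = [{"name": "h1", "policy": "Y"}, {"name": "h2", "policy": "Z"}]
f = PolicyFilter()
survivors = [h["name"] for h in hosts if f.host_passes(h, {"policy": "Y"})]
print(survivors)  # ['h1']
```

The point under debate is whether a new plug point like this is justified, given that the matching could also be expressed through existing flavor extra specs.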
I just want to see something that actually requires it before it goes
in. I take exposing a pluggable interface very seriously. I don't want
to expose more random plug points than necessary.
> Regarding both suggestions -- I think the value of this blueprint will
> be somewhat limited if we keep just the simplest version. But if people
> think that it makes a lot of sense to do it in small increments -- we
> can probably split the patch into smaller pieces.
I'm certainly not trying to diminish value, but I am looking for
specific cases that can not be accomplished with a simpler solution.