[openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

Solly Ross sross at redhat.com
Tue Jun 10 14:49:28 UTC 2014


Response inline

----- Original Message -----
> From: "Alex Glikson" <GLIKSON at il.ibm.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Sent: Monday, June 9, 2014 3:13:52 PM
> Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler
> 
> >> So maybe the problem isn’t having the flavors so much, but in how the user
> >> currently has to specify an exact match from that list.
> If the user could say “I want a flavor with these attributes” and then the
> system would find a “best match” based on criteria set by the cloud admin,
> then would that be a more user-friendly solution?
> 
> Interesting idea.. Thoughts how this can be achieved?

Well, that is *essentially* what a scheduler does -- you give it a set of parameters
and it finds a chunk of resources (in this case, a flavor) to match those parameters.
I'm *not* suggesting that we reuse any scheduling code; it's just one way to think
about it.

Another way to think about it would be to produce a "distance" score and choose the
flavor with the smallest "distance", discounting flavors that couldn't fit the target
configuration.  The "distance" score would simply be a sum of distances between the individual
resources for the target and flavor.

Best Regards,
Solly Ross

> 
> Alex
> 
> 
> 
> 
> From: "Day, Phil" <philip.day at hp.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>,
> Date: 06/06/2014 12:38 PM
> Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation
> ratio out of scheduler
> 
> 
> 
> 
> 
> From: Scott Devoid <devoid at anl.gov>
> Sent: 04 June 2014 17:36
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation
> ratio out of scheduler
> 
> Not only live upgrades but also dynamic reconfiguration.
> 
> Overcommitting affects the quality of service delivered to the cloud user. In
> this situation in particular, as in many situations in general, I think we
> want to enable the service provider to offer multiple qualities of service.
> That is, enable the cloud provider to offer a selectable level of
> overcommit. A given instance would be placed in a pool that is dedicated to
> the relevant level of overcommit (or, possibly, a better pool if the
> selected one is currently full). Ideally the pool sizes would be dynamic.
> That's the dynamic reconfiguration I mentioned preparing for.
> 
> +1 This is exactly the situation I'm in as an operator. You can do different
> levels of overcommit with host-aggregates and different flavors, but this
> has several drawbacks:
> 1. The nature of this is slightly exposed to the end-user, through
> extra-specs and the fact that two flavors cannot have the same name. One
> scenario we have is that we want to document our flavor names (what each
> name means) but provide different QoS standards for different projects.
> Since flavor names must be unique, we have to create different flavors for
> different levels of service. Sometimes you do want to lie to your users!
> [Day, Phil] I agree that there is a problem with having every new option we
> add in extra_specs leading to a new set of flavors. There are a number of
> changes up for review to expose more hypervisor capabilities via extra_specs
> that also have this potential problem. What I’d really like to be able to
> ask for as a user is something like “a medium instance with a side order of
> overcommit”, rather than have to choose from a long list of variations. I
> did spend some time trying to think of a more elegant solution – but since
> the user wants to know what combinations are available, it pretty much comes
> down to needing that full list of combinations somewhere. So maybe the
> problem isn’t having the flavors so much, but in how the user currently has
> to specify an exact match from that list.
> If the user could say “I want a flavor with these attributes” and then the
> system would find a “best match” based on criteria set by the cloud admin
> (for example, I might or might not want to allow a request for an
> overcommitted instance to use my not-overcommitted flavor, depending on the
> roles of the tenant), then would that be a more user-friendly solution?
> 
> 2. If I have two pools of nova-compute HVs with different overcommit
> settings, I have to manage the pool sizes manually. Even if I use puppet to
> change the config and flip an instance into a different pool, that requires
> me to restart nova-compute. Not an ideal situation.
> [Day, Phil] If the pools are aggregates, and the overcommit is defined by
> aggregate meta-data then I don’t see why you need to restart nova-compute.
> 3. If I want to do anything complicated, like 3 overcommit tiers with "good",
> "better", "best" performance and allow the scheduler to pick "better" for a
> "good" instance if the "good" pool is full, this is very hard and
> complicated to do with the current system.
> [Day, Phil] Yep, a combination of filters and weighting functions would allow
> you to do this – it's not really tied to whether the overcommit is defined in
> the scheduler or the host, though, as far as I can see.
> 
> I'm looking forward to seeing this in nova-specs!
> ~ Scott
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


