[openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

Alex Glikson GLIKSON at il.ibm.com
Mon Jun 9 19:13:52 UTC 2014


>> So maybe the problem isn't having the flavors so much, but in how the
user currently has to specify an exact match from that list.
If the user could say "I want a flavor with these attributes" and then the
system would find a "best match" based on criteria set by the cloud admin,
would that be a more user-friendly solution?

Interesting idea... Thoughts on how this can be achieved?

Alex




From:   "Day, Phil" <philip.day at hp.com>
To:     "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev at lists.openstack.org>, 
Date:   06/06/2014 12:38 PM
Subject:        Re: [openstack-dev] [nova] Proposal: Move CPU and memory 
allocation ratio out of scheduler



 
From: Scott Devoid [mailto:devoid at anl.gov] 
Sent: 04 June 2014 17:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory 
allocation ratio out of scheduler
 
Not only live upgrades but also dynamic reconfiguration. 

Overcommitting affects the quality of service delivered to the cloud user. 
 In this situation in particular, as in many situations in general, I 
think we want to enable the service provider to offer multiple qualities 
of service.  That is, enable the cloud provider to offer a selectable 
level of overcommit.  A given instance would be placed in a pool that is 
dedicated to the relevant level of overcommit (or, possibly, a better pool 
if the selected one is currently full).  Ideally the pool sizes would be 
dynamic.  That's the dynamic reconfiguration I mentioned preparing for. 
 
+1 This is exactly the situation I'm in as an operator. You can do 
different levels of overcommit with host-aggregates and different flavors, 
but this has several drawbacks:
1.      The nature of this is slightly exposed to the end user, through 
extra_specs and the fact that two flavors cannot have the same name. One 
scenario we have is that we want to be able to document our flavor 
names (what each name means), but we want to provide different QoS 
standards for different projects. Since flavor names must be unique, we 
have to create different flavors for different levels of service. 
Sometimes you do want to lie to your users!
[Day, Phil] I agree that there is a problem with having every new option 
we add in extra_specs leading to a new set of flavors. There are a 
number of changes up for review to expose more hypervisor capabilities via 
extra_specs that also have this potential problem. What I'd really like 
to be able to ask for as a user is something like "a medium instance with 
a side order of overcommit", rather than have to choose from a long list 
of variations. I did spend some time trying to think of a more elegant 
solution, but as the user wants to know what combinations are available 
it pretty much comes down to needing that full list of combinations 
somewhere. So maybe the problem isn't having the flavors so much, but 
in how the user currently has to specify an exact match from that list.
If the user could say "I want a flavor with these attributes" and the 
system would find a "best match" based on criteria set by the cloud admin 
(for example, I might or might not want to allow a request for an 
overcommitted instance to use my non-overcommitted flavors, depending on 
the roles of the tenant), would that be a more user-friendly solution?
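
To make the "best match" idea a bit more concrete, here is a minimal,
purely illustrative sketch (not existing Nova code; the function and the
admin_allows policy callback are assumptions) of resolving a set of
requested attributes to the closest available flavor, with the admin's
criteria applied as a veto:

def best_match_flavor(requested, flavors, admin_allows):
    """requested:    dict of numeric minimums, e.g. {'vcpus': 2, 'ram_mb': 4096}
    flavors:      iterable of flavor dicts carrying at least those keys
    admin_allows: callable(requested, flavor) -> bool encoding admin policy
    """
    candidates = [
        f for f in flavors
        if all(f.get(attr, 0) >= minimum for attr, minimum in requested.items())
        and admin_allows(requested, f)
    ]
    if not candidates:
        raise LookupError("no flavor satisfies the requested attributes")
    # "Best match" here simply means the smallest flavor that still
    # satisfies the request.
    return min(candidates, key=lambda f: (f['vcpus'], f['ram_mb']))

flavors = [
    {'name': 'm1.medium', 'vcpus': 2, 'ram_mb': 4096, 'overcommit': False},
    {'name': 'm1.medium.oc', 'vcpus': 2, 'ram_mb': 4096, 'overcommit': True},
]
pick = best_match_flavor({'vcpus': 2, 'ram_mb': 4096}, flavors,
                         admin_allows=lambda req, f: f['overcommit'])
# Picks 'm1.medium.oc' when the policy only allows overcommitted flavors
# for this tenant.

The open design question is what "best" means; the sketch just prefers the
smallest flavor that still satisfies the request, and the policy callback
is where role-based rules like the one described above would live.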
 
2.      If I have two pools of nova-compute hypervisors with different 
overcommit settings, I have to manage the pool sizes manually. Even if I 
use Puppet to change the config and flip an instance into a different 
pool, that requires me to restart nova-compute. Not an ideal situation.
[Day, Phil] If the pools are aggregates, and the overcommit is defined by 
aggregate metadata, then I don't see why you need to restart 
nova-compute.
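
For context, this is roughly the mechanism behind Nova's aggregate-based
scheduler filters (e.g. AggregateCoreFilter), which read
cpu_allocation_ratio from aggregate metadata at scheduling time. A
simplified sketch of the idea, with illustrative names rather than Nova's
actual code:

DEFAULT_CPU_ALLOCATION_RATIO = 16.0  # stand-in for the global config default

def cpu_ratio_for_host(host_aggregates):
    """Pick the CPU allocation ratio for a host from its aggregates' metadata."""
    ratios = [float(agg['metadata']['cpu_allocation_ratio'])
              for agg in host_aggregates
              if 'cpu_allocation_ratio' in agg.get('metadata', {})]
    # If the host sits in several aggregates, take the most conservative value.
    return min(ratios) if ratios else DEFAULT_CPU_ALLOCATION_RATIO

def host_passes(vcpus_requested, vcpus_total, vcpus_used, host_aggregates):
    """Core-filter style check: does the host still have room under its ratio?"""
    limit = vcpus_total * cpu_ratio_for_host(host_aggregates)
    return vcpus_used + vcpus_requested <= limit

Changing the overcommit for a pool then amounts to an aggregate metadata
update that is picked up on the next scheduling pass, with no nova-compute
restart.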
3.      If I want to do anything complicated, like three overcommit tiers 
with "good", "better", and "best" performance, and allow the scheduler to 
pick "better" for a "good" instance if the "good" pool is full, this is 
very hard to do with the current system.
[Day, Phil] Yep, a combination of filters and weighting functions would 
allow you to do this; it's not really tied to whether the overcommit is 
defined in the scheduler or the host, though, as far as I can see.
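
As a rough illustration of that filter-plus-weigher combination (the tier
names, ranking, and host list below are hypothetical, not Nova's API): the
filter admits hosts in the requested tier or better, and the weigher
prefers an exact tier match so that a "good" instance only spills over
into "better" hosts once the "good" pool is full:

TIER_RANK = {'good': 0, 'better': 1, 'best': 2}

def tier_filter(requested_tier, host_tier):
    """Admit hosts in the requested tier or a better one."""
    return TIER_RANK[host_tier] >= TIER_RANK[requested_tier]

def tier_weight(requested_tier, host_tier):
    """Higher is better: an exact tier match outranks a better-than-asked tier."""
    return -(TIER_RANK[host_tier] - TIER_RANK[requested_tier])

hosts = [('hv01', 'good'), ('hv02', 'better'), ('hv03', 'best')]
requested = 'good'
eligible = [h for h in hosts if tier_filter(requested, h[1])]
ranked = sorted(eligible, key=lambda h: tier_weight(requested, h[1]), reverse=True)
# ranked -> [('hv01', 'good'), ('hv02', 'better'), ('hv03', 'best')], so a
# "good" request lands on the "good" pool first and only falls through to
# "better"/"best" hosts once capacity filters have removed 'hv01'.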
 
I'm looking forward to seeing this in nova-specs!
~ Scott


