[placement][ptg] Allocation Partitioning

melanie witt melwittt at gmail.com
Wed Apr 17 00:33:29 UTC 2019


On Tue, 16 Apr 2019 19:18:03 -0500, Matt Riedemann <mriedemos at gmail.com> 
wrote:
> On 4/16/2019 4:03 PM, melanie witt wrote:
>> Consumer types would also enable us to implement the quota usage
>> behavior we want during a resize, to take max('instance' usage,
>> 'migration' usage).
> 
> To be clear (and you did the testing on this recently), this is how
> quota usage worked pre-Pike when counting quotas in nova, correct?
> And that counting behavior changed in Pike: we now only count the new
> flavor's resources for vcpus/ram while the server is in VERIFY_RESIZE
> status, even if it was a resize down. That could prevent the user from
> reverting the resize if the revert would put them over quota, because
> we didn't track quota properly during the resize.

Yes, that is how quota usage worked pre-Pike. And yes, the counting 
behavior in Pike means we only count the vcpus, memory_mb, and 
disk_gb attributes that are set on the instance. Since those 
attributes are updated to the new flavor on the instance at 
finish_resize time (before confirm/revert), we count only the new 
flavor, regardless of whether it was an upsize or a downsize. To 
reserve room for reverting a downsize, we would have to defer saving 
the new flavor's attributes on the instance until resize confirm, and 
only for downsizes. I'm not sure whether that's worth the complexity.
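To make that concrete, here is a minimal sketch (hypothetical names, 
not Nova's actual counting code) of why counting from the instance 
attributes only ever sees the new flavor:

    # Minimal sketch, not Nova's real implementation. Assumes each
    # instance's vcpus/memory_mb attributes were overwritten with the
    # new flavor's values at finish_resize time.
    def count_usage(instances):
        usage = {'vcpus': 0, 'memory_mb': 0}
        for inst in instances:
            # During VERIFY_RESIZE these attributes already hold the
            # new flavor's values, so a downsize appears to free
            # quota before the resize is confirmed or reverted.
            usage['vcpus'] += inst.vcpus
            usage['memory_mb'] += inst.memory_mb
        return usage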

> So "behavior we want" meaning, "get us back to pre-pike behavior" but at
> the same time no one has really complained about this slight wrinkle
> since it happened in pike (I think it dawned on me last week). Meaning,
> it's low priority, yeah?

Yes, back to pre-Pike behavior is the ideal state, I think. But my 
point was that getting to pre-Pike behavior would be a significant 
improvement over the current counting from placement proposal. The 
current proposal necessarily results in "doubled" allocations for 
upsizes and downsizes, because during a resize the old flavor's 
allocation (held by the migration consumer) and the new flavor's 
allocation (held by the instance consumer) are both counted. If we're 
interested in making counting from placement the default in the 
future, getting away from "doubled" allocations seems important.

For completeness, here are the results of the tests I did with Ocata and 
Train devstacks:

* Ocata behavior, uses max of old flavor and new flavor until 
confirmed/reverted: http://paste.openstack.org/show/749221

* Train behavior, uses new flavor until confirmed/reverted: 
http://paste.openstack.org/show/749266

* Train behavior + counting from placement, uses old flavor + new flavor 
until confirmed/reverted: http://paste.openstack.org/show/749267

-melanie
