[openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

Balázs Gibizer balazs.gibizer at ericsson.com
Fri Aug 17 10:10:37 UTC 2018



On Thu, Aug 16, 2018 at 5:34 PM, Eric Fried <openstack at fried.cc> wrote:
> Thanks for this, gibi.
> 
> 
> TL;DR: a).
> 
> I didn't look, but I'm pretty sure we're not caching allocations in
> the report client. Today, nobody outside of nova (specifically the
> resource tracker via the report client) is supposed to be mucking with
> instance allocations, right? And given the global lock in the resource
> tracker, it should be pretty difficult to race e.g. a resize and a
> delete in any meaningful way. So short term, IMO it is reasonable to
> treat any generation conflict as an error. No retries. Possible
> wrinkle on delete, where it should be a failure unless forced.

Yes, today the instance_uuid and migration_uuid consumers in placement
are only changed from nova.

Right now I don't have any examples of nova racing with itself on an
instance or migration consumer. We could try hitting the Nova API in
parallel with different server lifecycle operations against the same
server to see if we can find races. But until such a race is
discovered we can go with option a).

> 
> Long term, I also can't come up with any scenario where it would be
> appropriate to do a narrowly-focused GET+merge/replace+retry. But
> implementing the above short-term plan shouldn't prevent us from
> adding retries for individual scenarios later if we do uncover places
> where it makes sense.
> 

Later, when resources consumed by a server are handled outside of
nova, like bandwidth from neutron or accelerators from cyborg, we
might see cases where nova is not the only module changing an
instance_uuid consumer. Then we have to decide how to handle that. I
think one solution could be to make sure nova knows about the
bandwidth and accelerator resource needs of a server even if they are
provided by neutron or cyborg. This knowledge is necessary anyway to
support atomic resource claims in the scheduler. For neutron ports
this will be done through the resource_request attribute of the port.
So even if the resource need of a port changes, nova can go back to
neutron and query the current need. This way nova can implement the
following generic algorithm for every operation where nova wants to
change the instance_uuid consumer in placement (sketched right after
this list):
* collect the server's current resource needs (which might involve
reading them from the flavor, from the neutron port, or from the
cyborg accelerator) and apply the change nova wants to make (e.g.
delete, move, resize)
* GET the current consumer view from placement
* merge the two and push the result back to placement
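
Roughly, in Python (a sketch only, not the actual report client code;
'session' is assumed to be an HTTP client already pointed at the
placement endpoint, and all names are made up):

    API_VERSION = {'OpenStack-API-Version': 'placement 1.28'}

    class AllocationUpdateConflict(Exception):
        pass

    def update_allocations(session, consumer_uuid, project_id, user_id,
                           apply_change):
        # GET the current consumer view, including its generation
        resp = session.get('/allocations/%s' % consumer_uuid,
                           headers=API_VERSION)
        body = resp.json()
        # apply_change is the collect-and-apply step from the list
        # above: it merges nova's own view of the resource needs into
        # the current placement state
        payload = {
            'allocations': apply_change(body['allocations']),
            'consumer_generation': body['consumer_generation'],
            'project_id': project_id,
            'user_id': user_id,
        }
        resp = session.put('/allocations/%s' % consumer_uuid,
                           json=payload, headers=API_VERSION)
        if resp.status_code == 409:
            # somebody changed the consumer between our GET and PUT;
            # with option a) we fail the lifecycle operation here and
            # let the end user retry it
            raise AllocationUpdateConflict(consumer_uuid)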


> Here's some stream-of-consciousness that led me to the above opinions:
> 
> - On spawn, we send the allocation with a consumer gen of None because
> we expect the consumer not to exist. If it exists, that should be a
> hard fail. (Hopefully the only way this happens is a true UUID
> conflict.)
> 
> - On migration, when we create the migration UUID, ditto above ^

I agree on both. I suggest returning HTTP 500 as we need a bug report 
about these cases.
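
For reference, the spawn case would be roughly (a sketch, reusing
API_VERSION and AllocationUpdateConflict from my sketch above; the
resource values are made up):

    def claim_for_new_consumer(session, instance_uuid, rp_uuid,
                               project_id, user_id):
        # consumer_generation None tells placement "I expect this
        # consumer not to exist yet"; if it does exist, placement
        # returns 409 Conflict
        payload = {
            'allocations': {
                rp_uuid: {'resources': {'VCPU': 1, 'MEMORY_MB': 512}},
            },
            'consumer_generation': None,
            'project_id': project_id,
            'user_id': user_id,
        }
        resp = session.put('/allocations/%s' % instance_uuid,
                           json=payload, headers=API_VERSION)
        if resp.status_code == 409:
            # a true UUID conflict; hard fail so we get a bug report
            raise AllocationUpdateConflict(instance_uuid)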

> 
> - On migration, when we transfer the allocations in either direction,
> a conflict means someone managed to resize (or otherwise change
> allocations?) since the last time we pulled data. Given the global
> lock in the report client, this should have been tough to do. If it
> does happen, I would think any retry would need to be done all the way
> back at the claim, which I imagine is higher up than we should go. So
> again, I think we should fail the migration and make the user retry.

Do we want to fail the whole migration or just the migration step
(e.g. confirm, revert)?
The latter means that a failure during confirm or revert would put the
instance back to VERIFY_RESIZE. The former would mean that on a
conflict at confirm we try an automatic revert. But for a conflict at
revert we can only put the instance into ERROR state.
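
For context, the allocation move itself can be done atomically with
POST /allocations, which checks the generation of every consumer in
the payload. Roughly (a sketch only, assuming the placement 1.28
payload format; names are made up):

    def move_source_allocation(session, instance_uuid,
                               instance_generation, migration_uuid,
                               source_allocations, project_id, user_id):
        # one atomic request; a conflict on either consumer fails the
        # whole move with 409
        payload = {
            instance_uuid: {
                'allocations': {},  # emptied on the instance consumer
                'consumer_generation': instance_generation,
                'project_id': project_id,
                'user_id': user_id,
            },
            migration_uuid: {
                'allocations': source_allocations,
                'consumer_generation': None,  # a new consumer
                'project_id': project_id,
                'user_id': user_id,
            },
        }
        resp = session.post('/allocations', json=payload,
                            headers=API_VERSION)
        if resp.status_code == 409:
            raise AllocationUpdateConflict(migration_uuid)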

> 
> - On destroy, a conflict again means someone managed a resize despite
> the global lock. If I'm deleting an instance and something about it
> changes, I would think I want the opportunity to reevaluate my
> decision to delete it. That said, I would definitely want a way to
> force it (in which case we can just use the DELETE call explicitly).
> But neither case should be a retry, and certainly there is no destroy
> scenario where I would want a "merging" of allocations to happen.

Good idea about allowing the delete to be forced. So a plain DELETE
/servers/{instance_uuid} could fail on a consumer generation conflict,
but a POST /servers/{instance_uuid}/action with a forceDelete body
would use DELETE /allocations and therefore ignore any consumer
generation (see the sketch below).
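
A sketch of the two delete flavors (made-up names again; the forced
path maps to the existing DELETE /allocations/{consumer_uuid} API):

    def delete_allocations_checked(session, consumer_uuid, project_id,
                                   user_id):
        # plain delete: read the generation, then PUT an empty
        # allocation under that generation; a concurrent change makes
        # this fail with 409
        resp = session.get('/allocations/%s' % consumer_uuid,
                           headers=API_VERSION)
        body = resp.json()
        payload = {
            'allocations': {},
            'consumer_generation': body['consumer_generation'],
            'project_id': project_id,
            'user_id': user_id,
        }
        resp = session.put('/allocations/%s' % consumer_uuid,
                           json=payload, headers=API_VERSION)
        if resp.status_code == 409:
            raise AllocationUpdateConflict(consumer_uuid)

    def delete_allocations_forced(session, consumer_uuid):
        # forced delete: DELETE takes no generation, so it cannot
        # conflict and simply removes whatever allocation is there
        session.delete('/allocations/%s' % consumer_uuid,
                       headers=API_VERSION)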

Cheers,
gibi

> 
> Thanks,
> efried
> 
> 
> On 08/16/2018 06:43 AM, Balázs Gibizer wrote:
>>  reformatted for readability, sorry:
>> 
>>  Hi,
>> 
>>  tl;dr: To properly use consumer generations (placement 1.28) in
>>  Nova we need to decide how to handle a consumer generation conflict
>>  from Nova's perspective:
>>  a) Nova reads the current consumer_generation before the allocation
>>    update operation and uses that generation in the allocation
>>    update operation. If the allocation is changed between the read
>>    and the update then nova fails the server lifecycle operation and
>>    lets the end user retry it.
>>  b) Like a), but in case of conflict nova blindly retries the
>>    read-and-update operation pair a couple of times and only fails
>>    the lifecycle operation if it runs out of retries.
>>  c) Nova stores its own view of the allocation. When a consumer's
>>    allocation needs to be modified then nova reads the current state
>>    of the consumer from placement. Then nova combines the two
>>    allocations to generate the new expected consumer state. In case
>>    of a generation conflict nova retries the read-combine-update
>>    operation triplet.
>> 
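To make the difference concrete, option b) would look roughly like
this (a sketch only, reusing API_VERSION and AllocationUpdateConflict
from my sketches above; option a) is the max_retries=1 case, and c)
differs only in how the new allocation is computed):

    def update_allocations_with_retries(session, consumer_uuid,
                                        project_id, user_id,
                                        apply_change, max_retries=3):
        for _ in range(max_retries):
            resp = session.get('/allocations/%s' % consumer_uuid,
                               headers=API_VERSION)
            body = resp.json()
            payload = {
                'allocations': apply_change(body['allocations']),
                'consumer_generation': body['consumer_generation'],
                'project_id': project_id,
                'user_id': user_id,
            }
            resp = session.put('/allocations/%s' % consumer_uuid,
                               json=payload, headers=API_VERSION)
            if resp.status_code != 409:
                return
            # conflict: somebody changed the consumer in between;
            # re-read and retry
        raise AllocationUpdateConflict(consumer_uuid)
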
>>  Which way should we go now?
>> 
>>  What should be our long-term goal?
>> 
>> 
>>  Details:
>> 
>>  There are plenty of affected lifecycle operations. See the patch
>>  series starting at [1].
>> 
>>  For example:
>> 
>>  The current patch [1] that handles the delete server case implements
>>  option b). It simply reads the current consumer generation from
>>  placement and uses that to send a PUT /allocations/{instance_uuid}
>>  with "allocations": {} in its body.
>> 
>>  Here implementing option c) would mean that during server delete
>>  nova needs:
>>  1) to compile its own view of the resource needs of the server
>>    (currently based on the flavor but in the future based on the
>>    attached ports' resource requests as well)
>>  2) then read the current allocation of the server from placement
>>  3) then subtract the server's resource needs from the current
>>    allocation and send the resulting allocation back in the update
>>    to placement
>> 
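The subtraction in step 3) could look like this hypothetical helper
(not real nova code; the 'needs' format is assumed):

    import copy

    def subtract_needs(current_allocations, needs):
        # 'needs' is nova's own view of the server's resource needs,
        # e.g. {rp_uuid: {'VCPU': 1, 'MEMORY_MB': 512}}
        remaining = copy.deepcopy(current_allocations)
        for rp_uuid, resources in needs.items():
            for rc, amount in resources.items():
                left = remaining[rp_uuid]['resources'][rc] - amount
                if left > 0:
                    remaining[rp_uuid]['resources'][rc] = left
                else:
                    del remaining[rp_uuid]['resources'][rc]
            if not remaining[rp_uuid]['resources']:
                del remaining[rp_uuid]
        return remaining
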
>>  In the simple case this subtraction results in an empty allocation
>>  being sent to placement. Also, in this simple case c) has the same
>>  effect as b) as currently implemented in [1].
>> 
>>  However, if somebody outside of nova modifies the allocation of
>>  this consumer in a way that nova does not know about, then b) and
>>  c) will result in different placement states after the server
>>  delete.
>> 
>>  I only know of one example: the change of a neutron port's resource
>>  request while the port is attached. (Note, it is out of scope in the
>>  first step of the bandwidth implementation.) In this specific
>>  example option c) can work if nova re-reads the port's resource
>>  request during delete when it recalculates its own view of the
>>  server's resource needs. But I don't know if every other resource
>>  (e.g. accelerators) used by a server can be / will be handled this
>>  way.
>> 
>> 
>>  Other examples of affected lifecycle operations:
>> 
>>  During a server migration, moving the source host allocation from
>>  the instance_uuid to the migration_uuid fails with a consumer
>>  generation conflict because of the instance_uuid consumer
>>  generation. [2]
>> 
>>  Confirming a migration fails as the deletion of the source host
>>  allocation fails due to the consumer generation conflict of the
>>  migration_uuid consumer that is being emptied. [3]
>> 
>>  During scheduling of a new server, putting the allocation to the
>>  instance_uuid fails as the scheduler assumes that it is a new
>>  consumer and therefore uses consumer_generation: None for the
>>  allocation, but placement reports a generation conflict. [4]
>> 
>>  During a non-forced evacuation the scheduler tries to claim the
>>  resources on the destination host with the instance_uuid, but that
>>  consumer already holds the source allocation, therefore the
>>  scheduler cannot assume that the instance_uuid is a new consumer.
>>  [4]
>> 
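One way to handle the evacuation case is the same read-before-write
pattern (a sketch only, made-up names, reusing the helpers from my
sketches above):

    def claim_for_evacuation(session, instance_uuid, dest_rp_uuid,
                             dest_resources, project_id, user_id):
        # read the existing (source host) allocation and its generation
        resp = session.get('/allocations/%s' % instance_uuid,
                           headers=API_VERSION)
        body = resp.json()
        # double up: keep the source allocation and add the destination
        allocations = body['allocations']
        allocations[dest_rp_uuid] = {'resources': dest_resources}
        payload = {
            'allocations': allocations,
            'consumer_generation': body['consumer_generation'],
            'project_id': project_id,
            'user_id': user_id,
        }
        resp = session.put('/allocations/%s' % instance_uuid,
                           json=payload, headers=API_VERSION)
        if resp.status_code == 409:
            raise AllocationUpdateConflict(instance_uuid)
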
>> 
>>  [1] https://review.openstack.org/#/c/591597
>>  [2] https://review.openstack.org/#/c/591810
>>  [3] https://review.openstack.org/#/c/591811
>>  [4] https://review.openstack.org/#/c/583667