[nova][ops] What should the compute service delete behavior be wrt resource providers with allocations?

Matt Riedemann mriedemos at gmail.com
Thu Jun 13 18:00:39 UTC 2019


On 6/12/2019 7:05 PM, Sean Mooney wrote:
>> If we can distinguish between the migratey ones and the evacuatey ones,
>> maybe we fail on the former (forcing them to wait for completion) and
>> automatically delete the latter (which is almost always okay for the
>> reasons you state; and recoverable via heal if it's not okay for some
>> reason).
> for a cold migration the allocation will be associated with a migration object.
> for evacuate, which is basically a rebuild to a different host, we do not have a
> migration object, so the consumer uuid for the allocations is still associated with
> the instance uuid, not a migration uuid. so technically yes, we can tell,
> but only if we pull back the allocations from placement and then iterate over
> them and check whether we have a migration object or an instance with the same
> uuid.

Evacuate operations do have a migration record but you're right that we 
don't move the source node allocations from the instance to the 
migration prior to scheduling (like we do for cold and live migration). 
So after the evacuation, the instance consumer has allocations on both 
the source and dest node.
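
To make that concrete, the check Sean describes would look roughly like the
sketch below. This is only an illustration: get_allocations_for_provider(),
migration_exists() and instance_exists() are made-up helper names standing in
for the placement report client and the Migration/Instance object lookups, not
actual nova APIs.

# Illustrative sketch only: the helpers used here are hypothetical
# stand-ins for the placement report client and nova object lookups.
def classify_source_allocations(context, rp_uuid):
    migration_consumers = []  # cold/live migrations: consumer is a migration uuid
    instance_consumers = []   # evacuations (and running instances): consumer is the instance uuid
    for consumer_uuid in get_allocations_for_provider(context, rp_uuid):
        if migration_exists(context, consumer_uuid):
            migration_consumers.append(consumer_uuid)
        elif instance_exists(context, consumer_uuid):
            # For evacuate the allocations were never moved to a
            # migration record, so all we see here is the instance
            # uuid and we have to cross-check the migration records
            # to know this came from an evacuation.
            instance_consumers.append(consumer_uuid)
    return migration_consumers, instance_consumers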

If we did what Eric is suggesting, which is kind of a mix between option 
1 and option 2, then I'd do the same query we have on restart of the 
compute service [1] to find evacuation migration records in a certain 
status that concern the host we're being asked to delete, clean those 
up, and then (re)try the resource provider delete. If that fails, we 
punt and fail the request to delete the compute service because we 
couldn't safely delete the resource provider (and we don't want to 
orphan it for the reasons mnaser pointed out).

[1] 
https://github.com/openstack/nova/blob/61558f274842b149044a14bbe7537b9f278035fd/nova/compute/manager.py#L651
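
Put differently, the delete path would be something like the following 
hand-wavy sketch. The helper and exception names (get_evacuation_migrations, 
remove_allocations, delete_resource_provider, ResourceProviderInUse, 
ComputeServiceDeleteFailed) are made up for illustration and are not the 
actual manager or report client methods.

# Hypothetical sketch of the proposed compute service delete behavior;
# helper and exception names are illustrative, not real nova APIs.
def delete_compute_service(context, service, rp_uuid):
    # Find evacuation migration records for the host being deleted that
    # are in a status we consider safe to clean up (statuses illustrative).
    for migration in get_evacuation_migrations(
            context, source_host=service.host,
            statuses=('done', 'accepted')):
        # Drop the leftover source-node allocations still held by the
        # evacuated instance's consumer.
        remove_allocations(context, migration.instance_uuid, rp_uuid)
    try:
        # With the evacuation leftovers gone, (re)try deleting the
        # resource provider for this compute node.
        delete_resource_provider(context, rp_uuid)
    except ResourceProviderInUse:
        # Something still has allocations we could not account for, so
        # punt and fail the compute service delete rather than orphan
        # the provider.
        raise ComputeServiceDeleteFailed(host=service.host)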

-- 

Thanks,

Matt
