On 11/10/2019 2:09 AM, Balázs Gibizer wrote:
> * Check ongoing migrations and reject the delete if a migration exists
>   with this compute as the source node. Let the operator confirm the
>   migrations.

To be clear, the suggestion here is to call [1] from the API, around
[2]? That's a behavior change, but so was blocking the delete when the
compute was hosting instances [3], and we added a release note for
that. Anyway, that's a pretty simple change and not really something I
thought about in earlier threads on this problem. A rough sketch of
what that check could look like is at the end of this mail.

Regarding evacuate migration records, that should also work, since the
final states for an evacuate migration are done, failed or error, which
[1] accounts for.

> * Cascade delete providers and allocations in placement.
>   * In case of evacuated instances this is the right thing to do.

OK, this seems to confirm my TODO here [4]; a second sketch of what the
cascade delete looks like against the placement API is also at the end
of this mail.

>   * In any other dangling allocation case nova has the final truth, so
>     nova has the authority to delete them.

So this would build on the first idea above about blocking the service
delete if there are in-progress migrations involving the node (either
incoming or outgoing), right? So if we get to the point of deleting the
provider, we know (1) there are no in-progress migrations and (2) there
are no instances on the host (outside of evacuated instances, which we
can clean up automatically per [4]). Given that, I'm not sure there is
really anything else to do here.

> * Document possible ways to reconcile Placement with Nova using
>   heal_allocations and eventually the audit command once it's merged.

Done (merged yesterday) [5].

[1] https://github.com/openstack/nova/blob/20.0.0/nova/objects/migration.py#L240
[2] https://github.com/openstack/nova/blob/20.0.0/nova/api/openstack/compute/services.py#L254
[3] https://review.opendev.org/#/c/560674/
[4] https://review.opendev.org/#/c/678100/2/nova/scheduler/client/report.py@2165
[5] https://docs.openstack.org/nova/latest/admin/troubleshooting/orphaned-allocations.html

--

Thanks,

Matt
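
Here is that rough sketch. It is not an actual change, just an
illustration of the idea, assuming [1] is
objects.MigrationList.get_in_progress_by_host_and_node and that a
helper like this would be called from the service delete handler near
[2] (the helper name is made up):

    # Sketch only: reject the compute service delete while the node is
    # involved in any in-progress migration (incoming or outgoing).
    import webob.exc

    from nova import objects


    def assert_no_in_progress_migrations(context, host, nodename):
        # The in-progress query excludes terminal migration states
        # (done/failed/error), so completed evacuations do not block
        # the delete.
        migrations = objects.MigrationList.get_in_progress_by_host_and_node(
            context, host, nodename)
        if migrations:
            msg = ('Unable to delete compute service %s because it is '
                   'involved in in-progress migrations. Complete or abort '
                   'the migrations and retry the delete.' % host)
            raise webob.exc.HTTPConflict(explanation=msg)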
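
And purely as a sketch of the "cascade" part (the placement endpoint
and token below are placeholders; real code would go through
keystoneauth / the scheduler report client): deleting the compute
node's provider means removing each consumer's allocations against it
first, since placement returns 409 for a provider that still has
allocations.

    # Sketch only: cascade-delete a compute node's resource provider by
    # first removing the allocations of every consumer (instance or
    # migration UUID) that still holds allocations against it.
    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # placeholder
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}  # placeholder


    def delete_provider_cascade(rp_uuid):
        # List the allocations currently held against this provider,
        # keyed by consumer UUID.
        resp = requests.get(
            '%s/resource_providers/%s/allocations' % (PLACEMENT, rp_uuid),
            headers=HEADERS)
        resp.raise_for_status()
        for consumer_uuid in resp.json()['allocations']:
            # DELETE /allocations/{consumer} drops all of that consumer's
            # allocations, which is what we want for dangling consumers.
            requests.delete(
                '%s/allocations/%s' % (PLACEMENT, consumer_uuid),
                headers=HEADERS).raise_for_status()
        # With no allocations left, placement allows the provider delete.
        requests.delete(
            '%s/resource_providers/%s' % (PLACEMENT, rp_uuid),
            headers=HEADERS).raise_for_status()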