Hi,

This just came up on IRC: when nova-compute gets killed halfway through a volume attach (i.e. no graceful shutdown), things get stuck in a bad state, such as volumes stuck in the "attaching" state.

This looks like a new addition to this conversation:
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082683.html

And it brings us back to this discussion:
https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova

What if we move our attention towards automatically recovering from the above issue? I am wondering if we can look at making our usual recovery code deal with this situation:
https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24c79f4bf615/nova/compute/manager.py#L934

Did we get the Cinder APIs in place that enable the force-detach? I think we did, and it was this one:
https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force-detach-needs-cinderclient-api

I think diablo_rojo might be able to help dig for any bugs we have related to this.

I just wanted to get this idea out there before I head out.

Thanks,
John
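
P.S. Just to make the idea concrete, here is a rough sketch of what a start-up recovery hook could look like. This is purely illustrative: the names StubCinderClient and recover_stuck_attachments are invented, and it assumes the cinderclient force-detach API from the blueprint above is available (in the real code this would live alongside the init_host cleanup in nova/compute/manager.py and call the actual Cinder client rather than a stub).

```python
# Hypothetical sketch of init_host-time cleanup for volumes left
# stuck in 'attaching' by a nova-compute crash. All names here are
# illustrative, not actual Nova/Cinder code.

class StubCinderClient:
    """Stand-in for a Cinder client; records force-detach calls."""

    def __init__(self, volumes):
        # volumes: dict of volume_id -> status
        self.volumes = volumes
        self.force_detached = []

    def force_detach(self, volume_id):
        # The real client would issue the os-force_detach action;
        # here we just reset the status locally.
        self.volumes[volume_id] = 'available'
        self.force_detached.append(volume_id)


def recover_stuck_attachments(cinder, bdm_volume_ids):
    """On compute start-up, reset volumes left in 'attaching'.

    bdm_volume_ids: volume IDs this host believes it was attaching
    (e.g. from its block-device mappings) when nova-compute died.
    Returns the list of volume IDs that were force-detached.
    """
    recovered = []
    for vol_id in bdm_volume_ids:
        if cinder.volumes.get(vol_id) == 'attaching':
            cinder.force_detach(vol_id)
            recovered.append(vol_id)
    return recovered


cinder = StubCinderClient({'vol-1': 'attaching', 'vol-2': 'in-use'})
print(recover_stuck_attachments(cinder, ['vol-1', 'vol-2']))
# -> ['vol-1']  (vol-2 is healthily in-use, so it is left alone)
```

The key design point is that only volumes the host itself was mid-attach on get touched, so a healthy in-use volume is never force-detached.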