[openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

Daniel P. Berrange berrange at redhat.com
Thu Feb 11 10:23:03 UTC 2016

On Tue, Feb 09, 2016 at 11:49:33AM -0800, Walter A. Boring IV wrote:
> Hey folks,
>    One of the challenges we have faced with the ability to attach a single
> volume to multiple instances, is how to correctly detach that volume.  The
> issue is a bit complex, but I'll try and explain the problem, and then
> describe one approach to solving one part of the detach puzzle.
> Problem:
>   When a volume is attached to multiple instances on the same host, there
> are two scenarios:
>   1) Some Cinder drivers export a new target for every attachment on a
> compute host.  This means that you will get a new unique volume path on a
> host, which is then handed off to the VM instance.
>   2) Other Cinder drivers export a single target for all instances on a
> compute host.  This means that every instance on a single host, will reuse
> the same host volume path.

This problem isn't actually new. It is a problem we already have in Nova
even with single attachments per volume. E.g., with NFS and SMBFS there
is a single mount set up on the host, which can serve up multiple volumes.
We have to avoid unmounting that until no VM is using any volume provided
by that mount point. Except we pretend the problem doesn't exist, just
try to unmount every single time a VM stops, and rely on the kernel
failing umount() with EBUSY. This has a race condition if one VM
is stopping right as another VM is starting.

There is a patch up to try to solve this for SMBFS, but I don't much
like it, because it only solves it for one driver.

I think we need a general solution that solves the problem for all
cases, including multi-attach.

AFAICT, the only real answer here is to have nova record more info
about volume attachments, so it can reliably decide when it is safe
to release a connection on the host.
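To make this concrete, here is a minimal sketch of what "record more info
about volume attachments" could look like: a per-host reference count on
each shared connection, so teardown only happens for the last user. This
is purely illustrative; the class and method names are mine, not Nova's.

```python
import threading
from collections import defaultdict


class HostConnectionTracker:
    """Hypothetical sketch: reference-count users of each shared host
    connection (an iSCSI target, an NFS/SMBFS mount point, ...) so the
    disconnect/unmount is only performed by the last user, instead of
    relying on umount() failing with EBUSY."""

    def __init__(self):
        self._lock = threading.Lock()
        self._refcounts = defaultdict(int)

    def connect(self, host, connection_id):
        """Record one more user of this connection; return the new count."""
        with self._lock:
            self._refcounts[(host, connection_id)] += 1
            return self._refcounts[(host, connection_id)]

    def disconnect(self, host, connection_id):
        """Drop one user. Return True only when the caller was the last
        user and should actually tear down the host connection."""
        with self._lock:
            key = (host, connection_id)
            self._refcounts[key] -= 1
            if self._refcounts[key] <= 0:
                del self._refcounts[key]
                return True
            return False
```

Holding the count and the teardown decision under one lock is what closes
the race between one VM stopping and another starting on the same mount.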

> Proposed solution:
>   Nova needs to determine if the volume that's being detached is a shared
> or non-shared volume.  Here is one way to determine that.
>   Every Cinder volume has a list of its attachments.  In those attachments
> it contains the instance_uuid that the volume is attached to.  I presume
> Nova can find which of the volume attachments are on the same host.  Then
> Nova can call Cinder's initialize_connection for each of those attachments
> to get the target's connection_info dictionary.  This connection_info
> dictionary describes how to connect to the target on the Cinder backend.  If
> the target is shared, then the connection_info dicts for each attachment on
> that host will be identical.  Nova would then know that it's a shared
> target, and only call os-brick's disconnect_volume if it's the last
> attachment on that host.  I think at most 2 calls to Cinder's
> initialize_connection would suffice to determine if the volume is a shared
> target.  This would only need to be done if the volume is multi-attach
> capable and there is more than one attachment on the host where the detach
> is happening.
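The check proposed in the quote above could be sketched roughly as follows.
This is an assumption-laden illustration, not Nova or Cinder code: the
helper names are mine, and the connection_info dicts stand in for whatever
Cinder's initialize_connection actually returns for a given backend.

```python
def is_shared_target(connection_infos):
    """Given the connection_info dicts returned for each attachment of
    one volume on one host, the target is shared when they are all
    identical (as the proposal notes, comparing two is enough)."""
    first = connection_infos[0]
    return all(info == first for info in connection_infos[1:])


def should_disconnect(volume_is_multiattach, host_attachments,
                      connection_infos):
    """Decide whether to call os-brick's disconnect_volume: always for
    single-attach volumes or the last attachment on the host; for a
    shared target with other attachments remaining, skip the disconnect."""
    if not volume_is_multiattach or len(host_attachments) <= 1:
        return True
    return not is_shared_target(connection_infos)
```

For example, two attachments whose connection_info dicts carry the same
target would skip the disconnect, while per-attachment targets (scenario 1
above) would still be torn down individually.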

As above, we need to solve this more generally than just multi-attach,
even single-attach is flawed today.

|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
