[openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

Walter A. Boring IV walter.boring at hpe.com
Tue Feb 16 17:05:47 UTC 2016


On 02/12/2016 04:35 PM, John Griffith wrote:
>
>
> On Thu, Feb 11, 2016 at 10:31 AM, Walter A. Boring IV 
> <walter.boring at hpe.com> wrote:
>
>     There seem to be a few discussions going on here wrt
>     detaches.  One is what to do on the Nova side with calling
>     os-brick's disconnect_volume, and the other is when to (or not to)
>     call Cinder's terminate_connection and detach.
>
>     My original post was simply to discuss a mechanism to try and
>     figure out the first problem: when should Nova call brick to remove
>     the local volume, prior to calling Cinder to do anything?
>
>     Nova needs to know if it's safe to call disconnect_volume or not.
>     Cinder already tracks each attachment, and it can return the
>     connection_info for each attachment with a call to
>     initialize_connection.  If two of those connection_info dicts are
>     the same, it's a shared volume/target.  Don't call
>     disconnect_volume while any other attachment still uses that target.
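
Roughly, that check could look something like this on the Nova side (just a
sketch, assuming iSCSI-style connection_info dicts; the helper name is
hypothetical, not existing Nova code):

    def safe_to_disconnect(my_info, other_infos):
        """True if no other attachment on this host shares my target."""
        def target_of(info):
            data = info.get('data', {})
            return (data.get('target_portal'),
                    data.get('target_iqn'),
                    data.get('target_lun'))

        # Only call os-brick's disconnect_volume when nothing else on
        # this host resolves to the same target.
        return all(target_of(info) != target_of(my_info)
                   for info in other_infos)
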
>
>     On the Cinder side of things, when terminate_connection and detach
>     are called, the volume manager can find the list of attachments for
>     a volume and compare that to the attachments on a host.  The
>     problem is, Cinder doesn't track the host along with the
>     instance_uuid in the attachments table.  I plan on allowing that
>     as an API change after microversion support lands, so we know how
>     many times a volume is attached/used on a particular host.  The
>     driver can decide what to do with it at terminate_connection/detach
>     time.  This helps account for the differences between the Cinder
>     backends, which we will never get all aligned to the same model.
>     Each array/backend handles attachments differently, and only the
>     driver knows whether it's safe to remove the target or not,
>     depending on how many attachments/usages it has on the host
>     itself.  This is the same thing as a reference counter, which we
>     don't need, because we have the count in the attachments table
>     once we allow setting the host and the instance_uuid at the same
>     time.
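
To make that concrete, the driver-side decision could look something like
the sketch below (hypothetical field and function names; assumes the
attachments table records both instance_uuid and the attached host, as
proposed above):

    def host_attachment_count(attachments, host):
        """How many attachments of this volume live on the given host."""
        return sum(1 for a in attachments
                   if a.get('attached_host') == host)

    def should_remove_target(attachments, host):
        """True only when this is the last usage of the volume on the host."""
        return host_attachment_count(attachments, host) <= 1

    # Example: two instances on the same compute host share the volume,
    # so the backend should leave the export/target in place.
    attachments = [
        {'instance_uuid': 'uuid-1', 'attached_host': 'compute-1'},
        {'instance_uuid': 'uuid-2', 'attached_host': 'compute-1'},
    ]
    should_remove_target(attachments, 'compute-1')   # False: still in use
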
>
> Not trying to drag this out or be difficult, I promise.  But this
> seems like it is in fact the same problem, and I'm not exactly
> following; if you store the info on the compute side during the attach
> phase, why would you need/want to then create a split-brain scenario
> and have Cinder do any sort of tracking on the detach side of things?
>
> Like the earlier posts said, just don't call terminate_connection if 
> you don't want to really terminate the connection?  I'm sorry, I'm 
> just not following the logic of why Cinder should track this and 
> interfere with things?  It's supposed to be providing a service to 
> consumers and "do what it's told" even if it's told to do the wrong thing.

The only reason to store the connector information on the Cinder 
attachments side is for the few use cases where there is no way to get 
that connector any more, such as nova evacuate and force detach, where 
Nova has no information about the original attachment because the 
instance is gone.  Cinder backends still need the connector at 
terminate_connection time to find the right exports/targets to remove.
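
Just to illustrate what would get stored and reused (example values only,
not real data): the connector dict the compute host reports at attach time
is roughly what a backend needs again later, e.g.

    stored_connector = {
        'host': 'compute-1',
        'initiator': 'iqn.1994-05.com.redhat:example',
        'ip': '192.0.2.10',
        'multipath': False,
    }

    # Later, with the instance (and its compute-side state) gone, Cinder
    # could still hand this to the driver, e.g.:
    # driver.terminate_connection(volume, stored_connector)
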

Walt