[openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

Ildikó Váncsa ildiko.vancsa at ericsson.com
Tue Feb 9 22:23:18 UTC 2016


Hi Walt,

> -----Original Message-----
> From: Walter A. Boring IV [mailto:walter.boring at hpe.com]
> Sent: February 09, 2016 23:15
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume
> 
> On 02/09/2016 02:04 PM, Ildikó Váncsa wrote:
> > Hi Walt,
> >
> > Thanks for starting this thread. It is a good summary of the issue and the proposal also looks feasible to me.
> >
> > I have a quick, hopefully not too wild idea based on the earlier discussions we had. Earlier we were considering storing the target
> identifier together with the other items of the attachment info. The problem with this idea is that when Nova calls initialize_connection,
> Cinder does not get the relevant information, like the instance_id, to tie the target to a particular attachment. This means we cannot do
> that with the functionality we have today.
> >
> > My idea here is to extend the Cinder API so that Nova can send the missing information after a successful attach. Nova should have
> all the information including the 'target', which means that it could update the attachment information through the new Cinder API.
> I think what we need to do is to allow the connector to be passed at
> os-attach time.  Then Cinder can save it in the attachment's table entry.
> 
> We will also need a new cinder API to allow that attachment to be updated during live migration, or the connector for the attachment
> will get stale and incorrect.

When I said below that this would be good for live migration as well, I meant that the update is part of the proposed API.
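
A rough sketch of what I have in mind, just to make the data flow concrete. The endpoint and field names below are made up for illustration only, this is not an existing Cinder API:

    # Connector properties Nova already has for the compute host (this is
    # what os-brick's get_connector_properties() returns there).
    connector = {
        'host': 'compute-01',
        'ip': '192.168.0.10',
        'initiator': 'iqn.1993-08.org.debian:01:abcdef',
        'multipath': False,
    }

    # Hypothetical update Nova would send to Cinder right after a successful
    # attach, and again after a live migration, e.g. something like
    # PUT /v2/{project_id}/volumes/{volume_id}/attachments/{attachment_id}
    update_body = {
        'attachment': {
            'instance_uuid': 'f2c3a1de-8e5b-4d0a-9c6e-1b2d3e4f5a6b',
            'connector': connector,
            # the 'target' info that initialize_connection returned for this
            # attachment, so Cinder knows which export this instance uses
            'connection_info': {'driver_volume_type': 'iscsi', 'data': {}},
        }
    }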

Ildikó

> 
> Walt
> >
> > It would mean that when we request the volume info from Cinder at detach time, the 'attachments' list would contain all the
> required information for each attachment the volume has. If we don't have the 'target' information for any reason, we can
> still use the approach described below as a fallback. This approach could even be used in the case of live migration, I think.
> >
> > The Cinder API extension would need to be added with a new microversion to avoid problems when a new Nova talks to an older
> Cinder.
> >
> > The advantage of this direction is that we can reduce the round trips to Cinder at detach time. The extra round trip after a successful
> attach should not have an impact on normal operation: if it fails, the only issue is that we need to use the fallback method
> to be able to detach properly. This would still affect only multi-attached volumes where we have more than one attachment on the
> same host. By having the information stored in Cinder as well, we can also avoid removing a target while there are still active
> attachments connected to it.
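
At detach time the lookup could then be roughly the following (a sketch only; the per-attachment 'target' field is an assumption based on the idea above, not something Cinder returns today):

    # 'cinder' is a python-cinderclient Client; my_host, instance_uuid,
    # connection_info and device_info stand in for what Nova already has here.
    volume = cinder.volumes.get(volume_id)
    local = [a for a in volume.attachments if a.get('host_name') == my_host]
    detaching = next(a for a in local if a.get('server_id') == instance_uuid)

    # hypothetical 'target' identifier saved through the new API
    still_in_use = any(a.get('target') == detaching.get('target')
                       for a in local if a is not detaching)
    if not still_in_use:
        # no other attachment on this host uses the same target
        brick_connector.disconnect_volume(connection_info['data'], device_info)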
> >
> > What do you think?
> >
> > Thanks,
> > Ildikó
> >
> >
> >> -----Original Message-----
> >> From: Walter A. Boring IV [mailto:walter.boring at hpe.com]
> >> Sent: February 09, 2016 20:50
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: [openstack-dev] [Nova][Cinder] Multi-attach, determining
> >> when to call os-brick's connector.disconnect_volume
> >>
> >> Hey folks,
> >>      One of the challenges we have faced with the ability to attach a
> >> single volume to multiple instances is how to correctly detach that
> >> volume.  The issue is a bit complex, but I'll try to explain the problem, and then describe one approach to solving one part of the
> detach puzzle.
> >>
> >> Problem:
> >>     When a volume is attached to multiple instances on the same host,
> >> there are 2 scenarios.
> >>
> >>     1) Some Cinder drivers export a new target for every attachment
> >> on a compute host.  This means that you will get a new unique volume path on a host, which is then handed off to the VM
> instance.
> >>
> >>     2) Other Cinder drivers export a single target for all instances
> >> on a compute host.  This means that every instance on a single host will reuse the same host volume path.
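
(To illustrate the difference with made-up values: for two instances on the same compute host, the connection_info returned by initialize_connection might look like this.)

    # Scenario 1: a new target per attachment, so the dicts differ and each
    # instance gets its own device path on the host.
    info_instance_a = {'driver_volume_type': 'iscsi',
                       'data': {'target_iqn': 'iqn.2010-10.org.openstack:vol-X-a',
                                'target_portal': '10.0.0.5:3260',
                                'target_lun': 1}}
    info_instance_b = {'driver_volume_type': 'iscsi',
                       'data': {'target_iqn': 'iqn.2010-10.org.openstack:vol-X-b',
                                'target_portal': '10.0.0.5:3260',
                                'target_lun': 2}}

    # Scenario 2: one target shared by the whole host, so the dicts are
    # identical and both instances use the same host device path.
    info_instance_a = info_instance_b = {
        'driver_volume_type': 'iscsi',
        'data': {'target_iqn': 'iqn.2010-10.org.openstack:vol-X',
                 'target_portal': '10.0.0.5:3260',
                 'target_lun': 1}}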
> >>
> >>
> >> When a user issues a request to detach a volume, the workflow boils
> >> down to first calling os-brick's connector.disconnect_volume before
> >> calling Cinder's terminate_connection and detach. disconnect_volume's job is to remove the local volume from the host OS and
> close any sessions.
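
(For reference, a simplified sketch of that flow on the Nova side; get_connector_properties and InitiatorConnector.factory are real os-brick entry points, everything else here is a schematic placeholder:)

    from os_brick.initiator import connector as brick

    root_helper = 'sudo'  # Nova really goes through rootwrap
    # placeholders: 'cinder' is a python-cinderclient Client, connection_info
    # came from initialize_connection, device_info from connect_volume
    connection_info = {'driver_volume_type': 'iscsi', 'data': {}}
    device_info = {'path': '/dev/sdb'}

    props = brick.get_connector_properties(root_helper, my_ip='192.168.0.10',
                                           multipath=False,
                                           enforce_multipath=False)
    conn = brick.InitiatorConnector.factory(
        connection_info['driver_volume_type'], root_helper)

    # 1) remove the device and close the session on the host
    conn.disconnect_volume(connection_info['data'], device_info)
    # 2) tell the backend to stop exporting the target to this host
    cinder.volumes.terminate_connection(volume_id, props)
    # 3) mark the attachment gone in Cinder
    cinder.volumes.detach(volume_id)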
> >>
> >> There is no problem under scenario 1.  Each disconnect_volume only
> >> affects the attached volume in question and doesn't affect any other
> >> VM using that same volume, because they are using a different path that has shown up on the host.  It's a different target
> exported from the Cinder backend/array.
> >>
> >> The problem comes under scenario 2, where that single volume is
> >> shared by every instance on the same compute host.  Nova needs to be
> >> careful not to call disconnect_volume while other instances still use that shared volume, otherwise the first disconnect_volume call
> will nuke every instance's access to that volume.
> >>
> >>
> >> Proposed solution:
> >>     Nova needs to determine if the volume that's being detached is a shared or non-shared volume.  Here is one way to determine
> that.
> >>
> >>     Every Cinder volume has a list of its attachments.  Each attachment contains the instance_uuid of the instance the volume is attached
> to.
> >> I presume Nova can find which of the volume attachments are on the
> >> same host.  Then Nova can call Cinder's initialize_connection for each of those attachments to get the target's connection_info
> dictionary.
> >> This connection_info dictionary describes how to connect to the
> >> target on the cinder backend.  If the target is shared, then each of
> >> the connection_info dicts for each attachment on that host will be
> >> identical.  Then Nova would know that it's a shared target, and would
> >> only call os-brick's disconnect_volume if it's the last attachment on that host.  I think at most 2 calls to Cinder's initialize_connection
> would suffice to determine whether the volume is a shared target.  This would only need to be done if the volume is multi-attach capable and
> if there is more than one attachment on the host where the detach is happening.
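
(A rough sketch of that check, with made-up helper and variable names; exactly which connection_info fields are safe to compare is an open question:)

    def uses_shared_target(cinder, volume, connector_props, this_host):
        """Return True if the attachments on this_host share one target.

        Sketch only: compare the connection_info Cinder hands back; if two
        attachments on the host get identical dicts, the backend exported a
        single target for the whole host (scenario 2).
        """
        local = [a for a in volume.attachments
                 if a.get('host_name') == this_host]
        if len(local) < 2:
            return False  # nothing to share with

        # two calls are enough, per the above
        first = cinder.volumes.initialize_connection(volume.id, connector_props)
        second = cinder.volumes.initialize_connection(volume.id, connector_props)
        return first['data'] == second['data']

    # Nova would then call os-brick's disconnect_volume only when this is
    # the last attachment on the host, if uses_shared_target() is True.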
> >>
> >> Walt
> >>


