[openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

Matt Riedemann mriedem at linux.vnet.ibm.com
Wed Jan 27 11:01:42 UTC 2016



On 1/26/2016 5:55 AM, Avishay Traeger wrote:
> OK great, thanks!  I added a suggestion to the etherpad as well, and
> found this link helpful: https://review.openstack.org/#/c/266095/
>
> On Tue, Jan 26, 2016 at 1:37 AM, D'Angelo, Scott
> <scott.dangelo at hpe.com> wrote:
>
>     There is currently no simple way to clean up Cinder attachments if
>     the Nova node (or the instance) has gone away. We've put this topic
>     on the agenda for the Cinder mid-cycle this week:
>
>     https://etherpad.openstack.org/p/mitaka-cinder-midcycle (L#113)
>
>     *From:* Avishay Traeger [mailto:avishay at stratoscale.com]
>     *Sent:* Monday, January 25, 2016 7:21 AM
>     *To:* OpenStack Development Mailing List (not for usage questions)
>     *Subject:* [openstack-dev] [Nova][Cinder] Cleanly detaching volumes
>     from failed nodes
>
>     Hi all,
>
>     I was wondering if there is any way to cleanly detach volumes from
>     failed nodes.  In the case where the node is up, nova-compute will
>     call Cinder's terminate_connection API with a "connector" that
>     includes information about the node - e.g., hostname, IP, iSCSI
>     initiator name, FC WWPNs, etc.
>
>     If the node has died, this information is no longer available, and
>     so the attachment cannot be cleaned up properly.  Is there any way
>     to handle this today?  If not, does it make sense to save the
>     connector elsewhere (e.g., DB) for cases like these?
>
>     Thanks,
>     Avishay
>
> --
> *Avishay Traeger, PhD*
> /System Architect/
>
> Mobile: +972 54 447 1475
> E-mail: avishay at stratoscale.com
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I've replied on https://review.openstack.org/#/c/266095/ and the
related cinder change https://review.openstack.org/#/c/272899/. Both
are adding a new key to the volume connector dict that gets passed
around between nova and cinder, which is not ideal.
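
For anyone who hasn't dug into it, here's roughly what that connector
dict looks like; a minimal sketch with made-up values, since the exact
keys vary by os-brick version and by the transports available on the
host:

    from cinderclient import client

    # Rough sketch of the connector nova-compute builds from live
    # host state (which is exactly why it can't be rebuilt once the
    # node dies) and passes to Cinder on detach. Values are made up.
    connector = {
        'host': 'compute-01',                      # compute hostname
        'ip': '192.168.0.10',                      # storage network IP
        'initiator': 'iqn.1993-08.org.debian:01:deadbeef',  # iSCSI
        'wwpns': ['500143802426baf4'],             # FC port WWNs
        'multipath': False,
        'platform': 'x86_64',
        'os_type': 'linux2',
    }

    # Hypothetical cleanup call via python-cinderclient:
    cinder = client.Client('2', 'user', 'password', 'project',
                           'http://keystone:5000/v2.0')
    cinder.volumes.terminate_connection('volume-uuid', connector)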

I'd really like to see us start modeling the volume connector with
versioned objects so we can (1) tell what's actually in this mystery
connector dict in the nova virt driver interface and (2) handle version
compat when adding new keys to it.
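
Something along these lines with oslo.versionedobjects; a rough sketch
only, with illustrative field names, not an agreed design:

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields

    @base.VersionedObjectRegistry.register
    class VolumeConnector(base.VersionedObject):
        # 1.0: initial fields mirroring what os-brick produces today
        VERSION = '1.0'

        fields = {
            'host': fields.StringField(),
            'ip': fields.StringField(nullable=True),
            'initiator': fields.StringField(nullable=True),
            'wwpns': fields.ListOfStringsField(nullable=True),
            'multipath': fields.BooleanField(default=False),
        }

        def obj_make_compatible(self, primitive, target_version):
            # When a 1.1 adds a new field, strip it here when talking
            # to an older service, instead of leaking an unknown key
            # through a free-form dict.
            super(VolumeConnector, self).obj_make_compatible(
                primitive, target_version)

That would also give us an obvious thing to persist (e.g., in the DB
at attach time, as Avishay suggests) so the connector is still around
when the node isn't.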

-- 

Thanks,

Matt Riedemann



