[openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

Ildikó Váncsa ildiko.vancsa at ericsson.com
Thu Feb 11 08:30:13 UTC 2016


Hi,

As far as I can see, volume attachments are handled at the attachment level in Cinder today, as opposed to the host level. How the volume is technically exposed to a host is another question, but conceptually Cinder is the ultimate source of truth regarding how many attachments a volume has and which driver takes care of that volume.
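
To illustrate (a rough sketch using python-cinderclient; the field names can vary by API version, and the session setup is assumed):

    from cinderclient import client as cinder_client

    # 'keystone_session' is assumed to be an already authenticated session.
    cinder = cinder_client.Client('2', session=keystone_session)

    vol = cinder.volumes.get(volume_id)
    # Each entry describes one attachment; Cinder, not Nova, owns this list.
    for attachment in vol.attachments:
        print(attachment.get('server_id'), attachment.get('host_name'))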

In this sense, my understanding is that what you are suggesting below would need a redesign and refactoring so that the concept and the implementation are in line with each other. We talked about this at the beginning of Mitaka, and as far as I remember that discussion reached the same conclusion.

I don't think tracking the connector info is Nova's responsibility, nor is keeping track of which back end provides a volume to it. I think we need to find the solution within the current concept and architecture and then refactor towards the intended design. We cannot use what we have today and solve our issues according to what we would like to have; that will not match, and it will only bring additional complexity to these modules.

Would the new API and the connector info record in the Cinder database cause any problems, conceptually and/or technically?
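
For reference, the connector info in question is the properties dict that os-brick builds on the compute host and that Nova hands over when a connection is initialized; a rough sketch of how it is produced (the root helper and IP values are environment specific and shown only as examples):

    from os_brick.initiator import connector

    # Gather this host's connector properties (hostname, IP, iSCSI
    # initiator name, FC WWPNs, ...); the values below are illustrative.
    props = connector.get_connector_properties(
        root_helper='sudo',   # environment specific
        my_ip='192.0.2.10',   # this compute host's IP
        multipath=False,
        enforce_multipath=False)
    # e.g. {'ip': '192.0.2.10', 'host': 'compute-1',
    #       'initiator': 'iqn.1993-08.org.debian:01:...', ...}

As I understand it, the new API would amount to persisting one such record per attachment on the Cinder side.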

Thanks,
Ildikó

> -----Original Message-----
> From: Avishay Traeger [mailto:avishay at stratoscale.com]
> Sent: February 11, 2016 07:43
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume
> 
> I think Sean and John are headed in the right direction.  Nova and Cinder need to be more decoupled in the area of volume attachments.
> 
> I think some of the mess here is due to different Cinder backend behavior - with some Cinder backends you actually attach volumes to
> a host (e.g., FC, iSCSI), with some you attach to a VM (e.g., Ceph), and with some you attach an entire pool of volumes to a host (e.g.,
> NFS).  I think this difference should all be contained in the Nova drivers that do the attachments.
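> 
> As a purely illustrative sketch (none of these names exist in the code today), that difference could be modeled as an attachment
> "scope" that the Nova drivers consult:
> 
>     from enum import Enum
> 
>     class AttachScope(Enum):
>         HOST = 'host'      # e.g. FC, iSCSI: the target is shared by the whole host
>         INSTANCE = 'vm'    # e.g. Ceph: each VM gets its own connection
>         POOL = 'pool'      # e.g. NFS: one mount serves a whole pool of volumes
> 
>     # Hypothetical mapping; a driver could key its disconnect decision on
>     # the scope instead of hard-coding per-protocol behavior.
>     SCOPE_BY_PROTOCOL = {
>         'fibre_channel': AttachScope.HOST,
>         'iscsi': AttachScope.HOST,
>         'rbd': AttachScope.INSTANCE,
>         'nfs': AttachScope.POOL,
>     }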
> 
> On Thu, Feb 11, 2016 at 6:06 AM, John Griffith <john.griffith8 at gmail.com> wrote:
> 
> 	On Wed, Feb 10, 2016 at 5:12 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
> 
> 
> 		But the issue is, when told to detach, some of the drivers do bad things. Is it then the driver's
> responsibility to refcount to fix the issue, or is it Nova's to refcount so that it doesn't call the release before all users are done
> with it? I think solving it in the middle, in Cinder, is probably not the right place to track it; but if it's to be solved on Nova's
> side, Nova needs to know when it needs to do it, and Cinder might have to relay some extra info from the backend.
> 
> 		Either way, on the driver side there probably needs to be a mechanism for a driver to say either that it
> can refcount properly, so it's multiattach compatible (or that Nova should refcount), or to default to never allowing multiattach, so
> existing drivers don't break.
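> 
> 		A sketch of what such a capability flag, plus a Nova-side fallback refcount, could look like (all names
> here are hypothetical):
> 
> 		    # Hypothetical flag a driver could report via connection_info:
> 		    #   'driver_refcounts': True  -> safe to disconnect on every detach
> 		    #   'driver_refcounts': False -> Nova must refcount first
> 		    ATTACH_COUNTS = {}  # (volume_id, host) -> users on that host
> 
> 		    def detach(volume_id, host, connection_info, disconnect_volume):
> 		        key = (volume_id, host)
> 		        ATTACH_COUNTS[key] = ATTACH_COUNTS.get(key, 1) - 1
> 		        if connection_info.get('driver_refcounts') or ATTACH_COUNTS[key] <= 0:
> 		            # Either the driver copes with repeated disconnects itself,
> 		            # or this was the last user on the host.
> 		            disconnect_volume(connection_info)
> 		            ATTACH_COUNTS.pop(key, None)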
> 
> 		Thanks,
> 		Kevin
> 		________________________________________
> 		From: Sean McGinnis [sean.mcginnis at gmx.com]
> 		Sent: Wednesday, February 10, 2016 3:25 PM
> 		To: OpenStack Development Mailing List (not for usage questions)
> 		Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's
> connector.disconnect_volume
> 
> 
> 		On Wed, Feb 10, 2016 at 11:16:28PM +0000, Fox, Kevin M wrote:
> 		> I think part of the issue is that whether to count or not is Cinder-driver specific, and only Cinder
> knows whether it should be done.
> 		>
> 		> But if Cinder told Nova that particular multiattach endpoints must be refcounted, that might resolve
> the issue?
> 		>
> 		> Thanks,
> 		> Kevin
> 
> 		In this case (the point John and I were making, at least) it doesn't
> 		matter. Nothing is driver specific, so it wouldn't matter which backend
> 		is being used.
> 
> 		If a volume is needed, request it to be attached. When it is no longer
> 		needed, tell Cinder to take it away. Simple as that.
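> 
> 		A sketch of that contract from the consumer's point of view (the
> 		call names are illustrative, not today's API):
> 
> 		    # Attach when needed, detach when done; Cinder owns all
> 		    # backend- and target-level bookkeeping behind these calls.
> 		    conn = cinder.attach(volume_id, connector=props)
> 		    use_volume(conn['data'])
> 		    cinder.detach(volume_id, attachment_id=conn['id'])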
> 
> 
> 
> 	Hey Kevin,
> 
> 	So I think what Sean M pointed out is still valid in your case.  It's not really that some drivers do bad things; the
> problem is actually the way attach/detach works in OpenStack as a whole.  The original design (which we haven't strayed very far
> from) was that you could only attach a single resource to a single compute node.  That was it; there was no concept of multi-attach.
> 
> 	Now, however, folks want to introduce multi-attach, which means all of the old assumptions that the code was written
> on and designed around are "bad assumptions" now.  It's true, as you pointed out, that there are some drivers that behave or deal
> with targets in a way that makes things complicated, but they're completely in line with the SCSI standards and aren't doing
> anything *wrong*.
> 
> 	The point Sean M and I were trying to make is that for the specific use case of a single volume being attached to a
> compute node, BUT being passed through to more than one instance, it might be worth looking at just ensuring that the compute
> node doesn't call detach unless it's *done* with all of the instances that it was passing that volume through to.
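> 
> 	A minimal sketch of that idea (hypothetical structure, not today's Nova code):
> 
> 	    import collections
> 
> 	    # volume_id -> set of instance uuids using it on this compute node
> 	    VOLUME_USERS = collections.defaultdict(set)
> 
> 	    def attach_to_instance(volume_id, instance_uuid):
> 	        VOLUME_USERS[volume_id].add(instance_uuid)
> 
> 	    def detach_from_instance(volume_id, instance_uuid, brick_connector, conn_info):
> 	        VOLUME_USERS[volume_id].discard(instance_uuid)
> 	        if not VOLUME_USERS[volume_id]:
> 	            # Only now is this host truly *done* with the target.
> 	            brick_connector.disconnect_volume(conn_info['data'], None)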
> 
> 	You're absolutely right, there are some *weird* things that a couple of vendors do with targets, in cases like
> replication, where they may actually create a new target and attach; those sorts of things are ABSOLUTELY Cinder's problem, and
> Nova should not have to know anything about that as a consumer of the target.
> 
> 	My view is that maybe we should look at addressing the multiple-use-of-a-single-target case in Nova, and then
> absolutely figure out how to make things work correctly on the Cinder side for all the different behaviors that may come from the
> various vendors.
> 
> 	Make sense?
> 
> 
> 	John
> 
> --
> 
> Avishay Traeger, PhD
> 
> System Architect
> 
> Mobile: +972 54 447 1475
> E-mail: avishay at stratoscale.com
> 
> Web <http://www.stratoscale.com/>  | Blog <http://www.stratoscale.com/blog/>  | Twitter <https://twitter.com/Stratoscale>  |
> Google+ <https://plus.google.com/u/1/b/108421603458396133912/108421603458396133912/posts>  | Linkedin
> <https://www.linkedin.com/company/stratoscale>


