[openstack-dev] [cinder] About Read-Only volume support
John Griffith
john.griffith at solidfire.com
Wed May 15 05:22:18 UTC 2013
On Tue, May 14, 2013 at 9:19 AM, Martin, Kurt Frederick (ESSN Storage MSDU)
<kurt.f.martin at hp.com> wrote:
> Hi Zhi Yan,
> I was working with Kiran, who already has a good start on the Nova-side
> code for the VMware hypervisor. I was going to take care of the small
> Cinder-side changes that would be required, as you documented below, with
> the main one being the database change to keep a list of hosts connected
> to a volume instead of a single host.
> ~Kurt Martin
>
> -----Original Message-----
> From: lzy.dev at gmail.com [mailto:lzy.dev at gmail.com]
> Sent: Tuesday, May 14, 2013 2:35 AM
> To: John Griffith; Huang Zhiteng; Vaddi, Kiran Kumar
> Cc: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [cinder] About Read-Only volume support
>
> Thanks Zhiteng, I agree with you: option 'a' (from my mail above) - doing
> R/O control on the Nova/hypervisor side - is much simpler and makes it
> easy to keep unified capabilities across all the back-ends for R/O volume
> attach support. My only concern here is that under this design, R/O volume
> support is just an *attach method* for Nova rather than a *volume type*
> for Cinder.
>
> John, what are your comments on this?
>
> Of course, if we choose option 'a', most of the change/work will be
> handled on the Nova side; Cinder would only need minor work (not yet fully
> determined, but IMO) to track that a volume has multiple hosts attached to
> it - possibly changing the database entry that records the single host a
> volume is attached to into a list of hosts, and adding a column to mark
> the attach mode: "r" or "w".
>
> We can split the above idea into two changes to land, one each for Nova
> and Cinder. Maybe Kiran's
> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
> can cover the Cinder part of the change.
>
> Thanks,
> Zhi Yan
>
> On Tue, May 14, 2013 at 3:21 PM, Huang Zhiteng <winston.d at gmail.com>
> wrote:
> >
> > On Tue, May 14, 2013 at 2:47 PM, lzy.dev at gmail.com <lzy.dev at gmail.com>
> > wrote:
> >>
> >> John and all, about the shared-volume implementation details based on
> >> R/O volumes, for now it seems there are multiple choices for us:
> >>
> >> a. Support R/O volumes from the Nova side. Nova, not Cinder, takes care
> >> of R/O volume attaching, using hypervisor-specific methods to keep the
> >> volume attached in read-only mode while the backend volume (in Cinder)
> >> actually stays in R/W mode. It can be implemented as the shared-volume
> >> blueprint says: "introduce a Read Only option that could be specified
> >> during attach". So, as we know, this is not a real R/O volume.
> >>
> > Implementing R/O control here allows Cinder/Nova to maintain unified
> > capabilities across all the back-ends (given that the hypervisors
> > support R/O control over volumes).
> >
> >>
> >> b. Support R/O volumes from both the Cinder and Nova sides. The Nova
> >> part is just like I mentioned in section 'a' above. On the Cinder side,
> >> we can give Cinder a native R/O volume support capability; in this
> >> case, Cinder can pass the read-write mode argument to the backend
> >> driver/storage, so the volume can be attached in "real" read-only mode.
> >> We also have two choices here:
> >> i. Allow the client to set this "read-only" mode option in the
> >> volume-create API call, and Cinder will not allow it to be modified
> >> after the volume is created.
> >
> >
> > Any use cases for this? A lot of changes would need to be made to
> > achieve it: modification of the API and of all the Cinder back-end
> > drivers.
> >
> >> ii. Allow the client to mark a volume with a "read-only" flag on
> >> demand (necessary checks are needed, e.g. an already-attached volume
> >> must not be allowed to change its read-write mode); the client can
> >> change a volume from R/O to R/W or back as needed.
> >>
> > While this option has the best flexibility, it also implies the most
> > changes in Cinder. Doing R/O control on the Nova/hypervisor side seems
> > much simpler and cleaner, unless there are special use cases that
> > Nova-side control isn't able to fulfill?
> >
> >>
> >> What are your thoughts?
> >>
> >> Thanks,
> >> Zhi Yan
> >>
> >> On Tue, May 14, 2013 at 12:09 PM, John Griffith
> >> <john.griffith at solidfire.com> wrote:
> >> >
> >> >
> >> >
> >> > On Mon, May 13, 2013 at 9:47 PM, lzy.dev at gmail.com
> >> > <lzy.dev at gmail.com>
> >> > wrote:
> >> >>
> >> >> Hi, Guys
> >> >>
> >> >> From the link below, it seems Xen can support R/O volume attaching as well:
> >> >> http://backdrift.org/xen-disk-hot-add-block-device-howto
> >> >>
> >> >> "xm block-attach <Domain> <BackDev> <FrontDev> <Mode> [BackDomain]"
> >> >> the "mode" can be R/O and R/W (r and w).
> >> >>
> >> >> Any thoughts? if not I will update the etherpad to adding xen.
> >> >>
> >> >> Thanks,
> >> >> Zhi Yan
> >> >>
> >> >> On Tue, May 14, 2013 at 2:26 AM, Martin, Kurt Frederick (ESSN
> >> >> Storage
> >> >> MSDU) <kurt.f.martin at hp.com> wrote:
> >> >> > Thanks Alessandro, I have also updated the etherpad
> >> >> > (https://etherpad.openstack.org/summit-havana-cinder-multi-attach-and-ro-volumes)
> >> >> > to include the latest findings regarding R/O volumes. It appears
> >> >> > that a number of hypervisors do indeed allow for setting the
> >> >> > volumes to read only.
> >> >> >
> >> >> > Regards,
> >> >> >
> >> >> > Kurt Martin
> >> >> >
> >> >> >
> >> >> >
> >> >> > From: Alessandro Pilotti [mailto:ap at pilotti.it]
> >> >> > Sent: Monday, May 13, 2013 10:46 AM
> >> >> > To: OpenStack Development Mailing List
> >> >> > Subject: Re: [openstack-dev] [cinder] About Read-Only volume
> >> >> > support
> >> >> > Importance: High
> >> >> >
> >> >> >
> >> >> >
> >> >> > Hi guys,
> >> >> >
> >> >> >
> >> >> >
> >> >> > "Summit feedback: Not doing R/O volumes due to the limited
> >> >> > hypervisor that can support setting the volume to R/O, currently
> >> >> > only KVM has this capability".
> >> >> >
> >> >> >
> >> >> >
> >> >> > Hyper-V supports mounting R/O iSCSI volumes as well.
> >> >> >
> >> >> >
> >> >> >
> >> >> > Alessandro
> >> >> >
> >> >> >
> >> >> >
> >> >> >
> >> >> >
> >> >> > On May 13, 2013, at 13:22 , lzy.dev at gmail.com wrote:
> >> >> >
> >> >> >
> >> >> >
> >> >> > Hi All,
> >> >> >
> >> >> > In
> >> >> > https://etherpad.openstack.org/summit-havana-cinder-multi-attach-and-ro-volumes,
> >> >> > I saw a comment there:
> >> >> > "Summit feedback: Not doing R/O volumes due to the limited
> >> >> > hypervisor that can support setting the volume to R/O, currently
> >> >> > only KVM has this capability".
> >> >> >
> >> >> > I agree there are probably some troubles that make R/O volume
> >> >> > support hard to implement.
> >> >> > But maybe, since I did not attend the summit, the Nova and Cinder
> >> >> > folks did not notice there is a blueprint planning to implement a
> >> >> > Cinder backend driver for Glance
> >> >> > (https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver,
> >> >> > which I proposed), so I think R/O volume support can be
> >> >> > implemented gracefully.
> >> >> > In that case, the R/O volume stored in Cinder would be created as
> >> >> > an image; clients can access it through Glance via the standard
> >> >> > API, and Nova can prepare the R/W image (based on the R/O volume)
> >> >> > for the instance normally.
> >> >> >
> >> >> > Moreover, I consider R/O volume support and the Cinder driver
> >> >> > for Glance valuable because, on the Nova side, we can make some
> >> >> > code changes to let Nova prepare the instance disk via a COW
> >> >> > mechanism based on the particular Cinder backend store's
> >> >> > capabilities in a more efficient way, such as efficient snapshots.
> >> >> >
> >> >> > Thanks,
> >> >> > Zhi Yan
> >> >> >
> >> >
> >> >
> >> > Thanks Zhi Yan, I had some conversations with folks at the summit
> >> > and the general consensus seemed to be that it was possible.
> >> > There's a BP for this that met a bit of objection:
> >> > https://blueprints.launchpad.net/cinder/+spec/shared-volume
> >> >
> >> > Perhaps we can work off of that and add some details to it.
> >> >
> >> > Thanks,
> >> > John
> >> >
> >
> >
> >
> >
> > --
> > Regards
> > Huang Zhiteng
> >
> >
>
>
I chatted with Zhi Yan over IRC this evening and he pointed out that I
should update this thread with my thoughts on this :)
There are multiple use cases here that are sort of getting lumped together,
so for now I want to set aside the multi-attach R/W that may be in the works
for the FC folks.
As far as the R/O use case goes, I had something in mind that would be a
combination of Nova and Cinder functionality. On the Nova side, I think
we're all on the same page that this would be handled via the hypervisor on
attach.
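For KVM/libvirt, that more or less comes down to adding a <readonly/>
element to the disk XML that gets generated on attach; roughly along these
lines (just an illustration, not the actual Nova code path):

    import xml.etree.ElementTree as ET

    def make_disk_readonly(disk_xml):
        """Add a <readonly/> element to a libvirt <disk> definition."""
        disk = ET.fromstring(disk_xml)
        if disk.find('readonly') is None:
            disk.append(ET.Element('readonly'))
        return ET.tostring(disk)

    # The resulting XML would then be handed to the usual attach call
    # (e.g. virDomain.attachDevice) so the guest sees a read-only disk.
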
What I had in mind for Cinder was the addition of a property/column on the
Volume object: a Read Only flag. The use case here would be that a tenant
could take a volume that they've populated with data and set it to R/O
explicitly. This would mean providing an API call to enable setting and
clearing of this flag, of course.
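Roughly what I'm picturing on the API side (just a sketch; the 'read_only'
column and this call don't exist yet, so all of the names are placeholders):

    def update_readonly_flag(db, context, volume_id, readonly):
        """Set or clear the proposed read-only flag on a volume."""
        volume = db.volume_get(context, volume_id)
        if volume['status'] != 'available':
            # Don't allow flipping the flag while the volume is attached.
            raise ValueError("volume must be available to change the "
                             "read-only flag")
        return db.volume_update(context, volume_id,
                                {'read_only': readonly})
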
In addition, I'd like to modify the Cinder attach code so that if, for
example, a volume is "in use" (attached to an instance) and a subsequent
attach is requested, rather than failing and stating that the volume must
be available, we allow the attach but set it as Read Only. This would be
the default behavior. If we get to a point where we are interested in
multi-attach R/W (which is quite frankly rather scary to me), I would
propose that there be explicit settings a user must request to do this,
and that it be done via an extension or disabled by default via the
cinder.conf file.
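In pseudo-code terms, the attach path would end up picking the mode roughly
like this (again, only a sketch of the idea, nothing final):

    def _choose_attach_mode(volume, requested_mode='rw'):
        """Decide the attach mode for a volume (proposed behavior)."""
        if volume.get('read_only'):
            # The tenant explicitly flagged the volume read-only.
            return 'ro'
        if volume['status'] == 'in-use':
            # Instead of rejecting a second attach, allow it read-only.
            return 'ro'
        return requested_mode

    # The chosen mode would be recorded with the attachment and handed back
    # to Nova so the hypervisor attaches the device read-only.
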
Hopefully that helps. I know the folks working on fibre channel have some
other use cases and concerns here and I'd like to keep that separate for
the time being and sync up with them later.
Thanks,
John