[openstack-dev] [cinder] About Read-Only volume support

lzy.dev at gmail.com
Tue May 14 09:34:56 UTC 2013


Thanks Zhiteng, I agree with you: option 'a' (from my mail above) -
doing R/O control on the Nova/hypervisor side - is much simpler and
makes it easy to keep unified capabilities across all the back-ends for
R/O volume attaching. My only concern is that under this design, R/O
volume support is just an *attaching method* for Nova and not a
*volume type* for Cinder.

John, what are your comments on that?

Of course, if we choose option 'a', most of the change/work will be
handled on the Nova side; Cinder would only require minor work (not yet
determined, but IMO) to track that a volume has multiple hosts attached
to it - possibly changing the database entries from a single attached
host per volume to a list of hosts, and adding a column to mark the
attach mode: "r" or "w".
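
To sketch that schema change (table and column names here are
hypothetical, not the actual Cinder schema), the single attached-host
field becomes an attachment table with one row per host plus an
attach-mode column:

```python
import sqlite3

# A minimal sketch (table and column names hypothetical, not the actual
# Cinder schema): the single "attached host" field per volume becomes an
# attachment table with one row per host plus an attach-mode column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE volume_attachments (
        volume_id   TEXT REFERENCES volumes(id),
        host        TEXT NOT NULL,
        attach_mode TEXT NOT NULL CHECK (attach_mode IN ('r', 'w'))
    )
""")
conn.execute("INSERT INTO volumes VALUES ('vol-1')")
# The same volume attached read-write on one host, read-only on another.
conn.executemany(
    "INSERT INTO volume_attachments VALUES (?, ?, ?)",
    [("vol-1", "compute-1", "w"), ("vol-1", "compute-2", "r")],
)
modes = dict(conn.execute(
    "SELECT host, attach_mode FROM volume_attachments"
    " WHERE volume_id = 'vol-1'"))
```

The CHECK constraint keeps the mode column restricted to the two values
("r" or "w") discussed above.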

We can split the above idea into two changes to land, one for Nova and
one for Cinder. Maybe Kiran's
"https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume"
can cover the Cinder part.
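
On the Cinder side, the attach-state check mentioned in option (b)(ii)
of the quoted discussion below could look roughly like this (names
hypothetical, not Cinder's actual code):

```python
# A minimal sketch (names hypothetical, not Cinder's actual code) of the
# option (b)(ii) check from the quoted discussion below: the read-only
# flag of a volume may only be changed while the volume is detached.
class VolumeStateError(Exception):
    pass

def set_readonly_flag(volume, readonly):
    """Mark a volume R/O or R/W; refuse if any attachment exists."""
    if volume["attachments"]:
        raise VolumeStateError(
            "cannot change the read-write mode of an attached volume")
    volume["readonly"] = readonly

vol = {"id": "vol-1", "attachments": [], "readonly": False}
set_readonly_flag(vol, True)  # detached, so the change is allowed
vol["attachments"].append({"host": "compute-1", "attach_mode": "r"})
```

Once the attachment is recorded, any further call to flip the flag
raises, which is exactly the "necessary checking" point below.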

Thanks,
Zhi Yan

On Tue, May 14, 2013 at 3:21 PM, Huang Zhiteng <winston.d at gmail.com> wrote:
>
> On Tue, May 14, 2013 at 2:47 PM, lzy.dev at gmail.com <lzy.dev at gmail.com>
> wrote:
>>
>> John and all, regarding the shared-volume implementation details based
>> on the R/O volume, it seems there are multiple choices for us:
>>
>> a. Support R/O volumes from the Nova side. Have Nova, not Cinder, take
>> care of R/O volume attaching, using hypervisor-specific methods to keep
>> the volume attached in read-only mode while the backend volume (in
>> Cinder) actually remains in R/W mode. It can be implemented as the
>> shared-volume blueprint says: "introduce a Read Only option that could
>> be specified during attach". So, as we know, this is not a real R/O
>> volume.
>>
> Implementing R/O control here allows Cinder/Nova to maintain unified
> capabilities across all the back-ends (given that the hypervisors support
> R/O control over volumes).
>
>>
>> b. Support R/O volumes from both the Cinder and Nova sides. The Nova
>> part is just as I mentioned in section 'a' above. On the Cinder side, we
>> can give Cinder a native R/O volume support capability; in this case,
>> Cinder can pass the read-write mode argument to the backend
>> driver/storage, so the volume can be attached in "real" read-only mode.
>> We also have two choices here:
>> i. Allow the client to set this "read-only" mode option in the
>> volume-create API call, and Cinder will not allow it to be modified
>> after creation.
>
>
> Any use cases for this?  A lot of changes would be needed to achieve
> this: modification of the API and of all Cinder back-end drivers.
>
>> ii. Allow the client to set a "read-only" flag on a volume on demand
>> (with the necessary checks, e.g. an already-attached volume must not be
>> allowed to change its read-write mode); the client can then switch a
>> volume between R/O and R/W as needed.
>>
> While this option has the best flexibility, it implies the most changes
> in Cinder.  Doing R/O control on the Nova/hypervisor side seems much
> simpler and cleaner, unless there are special use cases that Nova-side
> control cannot fulfill?
>
>>
>> What's your thoughts?
>>
>> Thanks,
>> Zhi Yan
>>
>> On Tue, May 14, 2013 at 12:09 PM, John Griffith
>> <john.griffith at solidfire.com> wrote:
>> >
>> >
>> >
>> > On Mon, May 13, 2013 at 9:47 PM, lzy.dev at gmail.com <lzy.dev at gmail.com>
>> > wrote:
>> >>
>> >> Hi, Guys
>> >>
>> >> From the link below, it seems Xen can also support R/O volume attaching:
>> >> http://backdrift.org/xen-disk-hot-add-block-device-howto
>> >>
>> >> "xm block-attach <Domain> <BackDev> <FrontDev> <Mode> [BackDomain]"
>> >> where <Mode> can be R/O or R/W (r or w).
>> >>
>> >> Any thoughts? If not, I will update the etherpad to add Xen.
>> >>
>> >> Thanks,
>> >> Zhi Yan
>> >>
>> >> On Tue, May 14, 2013 at 2:26 AM, Martin, Kurt Frederick (ESSN Storage
>> >> MSDU) <kurt.f.martin at hp.com> wrote:
>> >> > Thanks Alessandro, I have also updated the etherpad
>> >> >
>> >> >
>> >> > (https://etherpad.openstack.org/summit-havana-cinder-multi-attach-and-ro-volumes)
>> >> > to include the latest findings regarding R/O volumes. It appears that
>> >> > a
>> >> > number of hypervisors do indeed allow for setting the volumes to read
>> >> > only.
>> >> >
>> >> > Regards,
>> >> >
>> >> > Kurt Martin
>> >> >
>> >> >
>> >> >
>> >> > From: Alessandro Pilotti [mailto:ap at pilotti.it]
>> >> > Sent: Monday, May 13, 2013 10:46 AM
>> >> > To: OpenStack Development Mailing List
>> >> > Subject: Re: [openstack-dev] [cinder] About Read-Only volume support
>> >> > Importance: High
>> >> >
>> >> >
>> >> >
>> >> > Hi guys,
>> >> >
>> >> >
>> >> >
>> >> > "Summit feedback: Not doing R/O volumes due to the limited hypervisor
>> >> > that can support setting the volume to R/O, currently only KVM has
>> >> > this capability".
>> >> >
>> >> >
>> >> >
>> >> > Hyper-V supports mounting R/O iSCSI volumes as well.
>> >> >
>> >> >
>> >> >
>> >> > Alessandro
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > On May 13, 2013, at 13:22 , lzy.dev at gmail.com wrote:
>> >> >
>> >> >
>> >> >
>> >> > Hi All,
>> >> >
>> >> > In
>> >> >
>> >> >
>> >> > https://etherpad.openstack.org/summit-havana-cinder-multi-attach-and-ro-volumes,
>> >> > I saw a comment there:
>> >> > "Summit feedback: Not doing R/O volumes due to the limited hypervisor
>> >> > that can support setting the volume to R/O, currently only KVM has
>> >> > this capability".
>> >> >
>> >> > I agree there are probably some difficulties that make R/O volume
>> >> > support hard to implement.
>> >> > But maybe, since I did not attend the summit, the Nova and Cinder
>> >> > folks did not notice that there is a blueprint planning to implement
>> >> > a Cinder backend driver for Glance
>> >> > (https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver,
>> >> > which I proposed), so I consider that R/O volume support can be
>> >> > implemented gracefully.
>> >> > In that case, the R/O volume stored in Cinder would be created as
>> >> > an image, clients could access it through Glance via the standard
>> >> > API, and Nova could prepare the R/W image (based on the R/O volume)
>> >> > for the instance normally.
>> >> >
>> >> > Moreover, I consider R/O volume support and the Cinder driver for
>> >> > Glance valuable because, on the Nova side, we can make some code
>> >> > changes to let Nova prepare instance disks via a particular COW
>> >> > mechanism, based on the capability of a particular Cinder backend
>> >> > store, in a more efficient way, e.g. efficient snapshots.
>> >> >
>> >> > Thanks,
>> >> > Zhi Yan
>> >> >
>> >> > _______________________________________________
>> >> > OpenStack-dev mailing list
>> >> > OpenStack-dev at lists.openstack.org
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >>
>> >
>> >
>> > Thanks Zhi Yan, I had some conversations with folks at the summit and
>> > the general consensus seemed to be that it was possible.  There's a BP
>> > for this that met a bit of objection:
>> > https://blueprints.launchpad.net/cinder/+spec/shared-volume
>> >
>> > perhaps we can work off of that and add some details to it.
>> >
>> > Thanks,
>> > John
>> >
>> >
>>
>
>
>
>
> --
> Regards
> Huang Zhiteng
>
>
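
To make option (a) from the thread above concrete: a rough sketch
(function and device names hypothetical, not Nova's actual libvirt
driver code) of hypervisor-side read-only enforcement, where the
back-end volume stays R/W in Cinder and the disk XML handed to libvirt
carries a <readonly/> element:

```python
# A rough sketch (function and device names hypothetical, not Nova's
# actual libvirt driver code) of option (a): the back-end volume stays
# read-write in Cinder, and the hypervisor enforces read-only at attach
# time -- for libvirt, by adding <readonly/> to the disk device XML.
from xml.etree import ElementTree as ET

def build_disk_xml(source_dev, target_dev, attach_mode):
    disk = ET.Element("disk", type="block", device="disk")
    ET.SubElement(disk, "source", dev=source_dev)
    ET.SubElement(disk, "target", dev=target_dev, bus="virtio")
    if attach_mode == "ro":
        # Guest sees the disk as read-only even though the Cinder volume
        # itself remains R/W on the back end.
        ET.SubElement(disk, "readonly")
    return ET.tostring(disk, encoding="unicode")

xml = build_disk_xml("/dev/disk/by-path/ip-10.0.0.5-iscsi-lun-1", "vdb", "ro")
```

This mirrors the Xen `xm block-attach ... <Mode>` behaviour quoted in the
thread: the attach mode is a property of the attachment, not of the
volume itself.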


