[cinder] ceph multiattach details?
Martinx - ジェームズ
thiagocmartinsc at gmail.com
Tue May 28 19:07:26 UTC 2019
Last time I checked OpenStack-Ansible, Manila wasn't there... I believe
support for it was added in Stein, but I'm not sure whether it supports the
CephFS backend (and the required Ceph metadata server containers, since
CephFS needs them, I believe).
I'll definitely give it a try!
Initially, I was planning to multi-attach an RBD block device to 2 or more
instances and run OCFS2 on top of it (roughly as sketched below), but Manila
with CephFS looks way simpler.
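
For the record, a rough sketch of the generic Cinder multi-attach workflow
from Python (untested; the keystoneauth1 session `sess`, the server UUIDs,
and the type/volume names are placeholders of mine):

    from cinderclient import client as cinder_client
    from novaclient import client as nova_client

    # Multiattach needs Cinder API microversion >= 3.50 and Nova >= 2.60.
    cinder = cinder_client.Client('3.50', session=sess)
    nova = nova_client.Client('2.60', session=sess)

    # Create a volume type flagged as multiattach-capable.
    vtype = cinder.volume_types.create('multiattach-type')
    vtype.set_keys({'multiattach': '<is> True'})

    vol = cinder.volumes.create(10, name='shared-disk',
                                volume_type='multiattach-type')

    # Attach the same volume to two instances; a cluster filesystem
    # such as OCFS2 has to coordinate concurrent writes on top of it.
    nova.volumes.create_server_volume(server_a_id, vol.id)
    nova.volumes.create_server_volume(server_b_id, vol.id)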
Cheers!
Thiago
On Mon, 27 May 2019 at 22:30, Erik McCormick <emccormick at cirrusseven.com>
wrote:
>
>
> On Mon, May 27, 2019, 5:33 PM Martinx - ジェームズ <thiagocmartinsc at gmail.com>
> wrote:
>
>> Hello,
>>
>> I'm very curious about this as well!
>>
>> It would be awesome to support Cinder multi-attach when using Ceph... If
>> the code is already there, how do we use it?!
>>
>> Cheers,
>> Thiago
>>
>> On Mon, 27 May 2019 at 03:52, Bernd Bausch <berndbausch at gmail.com> wrote:
>>
>>> The Stein release notes mention that the RBD driver now supports
>>> multiattach, but I have not found any details. Are there limitations? Is
>>> there a need to configure anything?
>>>
>>> In the RBD driver
>>> <https://opendev.org/openstack/cinder/src/branch/master/cinder/volume/drivers/rbd.py#L767>,
>>> I find this:
>>>
>>> def _enable_multiattach(self, volume):
>>>     multipath_feature_exclusions = [
>>>         self.rbd.RBD_FEATURE_JOURNALING,
>>>         self.rbd.RBD_FEATURE_FAST_DIFF,
>>>         self.rbd.RBD_FEATURE_OBJECT_MAP,
>>>         self.rbd.RBD_FEATURE_EXCLUSIVE_LOCK,
>>>     ]
>>>
>>> This seems to mean that journaling and the other listed features (to me,
>>> it's not quite clear what they do) will be disabled automatically when
>>> multiattach is switched on.
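>>>
>>> If so, a minimal sketch of that disabling step done directly with the
>>> rbd Python bindings might look like this (just my reading of the driver,
>>> not tested; the pool and volume names are made up):
>>>
>>>     import rados
>>>     import rbd
>>>
>>>     EXCLUSIONS = [rbd.RBD_FEATURE_JOURNALING,
>>>                   rbd.RBD_FEATURE_FAST_DIFF,
>>>                   rbd.RBD_FEATURE_OBJECT_MAP,
>>>                   rbd.RBD_FEATURE_EXCLUSIVE_LOCK]
>>>
>>>     with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
>>>         with cluster.open_ioctx('volumes') as ioctx:
>>>             with rbd.Image(ioctx, 'volume-1234') as image:
>>>                 # Disable each feature bit that is currently set; the
>>>                 # order matters (fast-diff depends on object-map, and
>>>                 # object-map depends on exclusive-lock).
>>>                 for feature in EXCLUSIONS:
>>>                     if image.features() & feature:
>>>                         image.update_features(feature, False)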
>>>
>>> Further down in the code I see that replication and multiattach are
>>> mutually exclusive.
>>>
>>> Is there some documentation about the Ceph multiattach feature, even an
>>> email thread?
>>>
>>> Thanks,
>>>
>>> Bernd
>>>
>>
> There isn't really a Ceph multi-attach feature in Cinder yet. The code
> comment is stating that, while the OpenStack side of things is in place,
> Ceph doesn't yet support it with RBD due to replication issues with
> multiple clients. The Ceph community is aware of it, but has thus far
> focused on CephFS as the shared file system instead.
>
> This could possibly be done with the NFS Cinder driver talking to Ganesha
> with CephFS mounted behind it. You may also want to look at OpenStack's
> Manila project to orchestrate that.
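>
> As a very rough sketch (untested; the share type name and the
> authenticated session `sess` are assumptions of mine), creating a native
> CephFS share through Manila from Python looks something like:
>
>     from manilaclient import client
>
>     manila = client.Client('2', session=sess)
>     share = manila.shares.create('CEPHFS', 1, name='shared-data',
>                                  share_type='cephfs-type')
>     # Guests then mount the share's export location with the CephFS
>     # kernel client or ceph-fuse.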
>
> -Erik
>
>>