The Stein release notes mention that the RBD driver now supports multiattach, but I have not found any details. Are there limitations? Is there a need to configure anything?

In the RBD driver <https://opendev.org/openstack/cinder/src/branch/master/cinder/volume/drivers/rbd.py#L767>, I find this:

    def _enable_multiattach(self, volume):
        multipath_feature_exclusions = [
            self.rbd.RBD_FEATURE_JOURNALING,
            self.rbd.RBD_FEATURE_FAST_DIFF,
            self.rbd.RBD_FEATURE_OBJECT_MAP,
            self.rbd.RBD_FEATURE_EXCLUSIVE_LOCK,
        ]

This seems to mean that journaling and other features (to me, it's not quite clear what they are) will be automatically disabled when multiattach is switched on. Further down in the code I see that replication and multiattach are mutually exclusive.

Is there some documentation about the Ceph multiattach feature, even an email thread?

Thanks,
Bernd
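[Editor's note] The exclusion list in the snippet above can be read as plain bitmask arithmetic: enabling multiattach clears those feature bits on the image. Below is a minimal sketch of that reading, not the actual driver code; the feature bit values are the librbd constants as I understand them (e.g. RBD_FEATURE_EXCLUSIVE_LOCK = 4, RBD_FEATURE_OBJECT_MAP = 8), so treat them as illustrative rather than authoritative.

```python
# Hedged sketch of what "these features get disabled" means, using
# feature bit values believed to match librbd's headers (illustrative).
RBD_FEATURE_LAYERING = 1
RBD_FEATURE_EXCLUSIVE_LOCK = 4
RBD_FEATURE_OBJECT_MAP = 8
RBD_FEATURE_FAST_DIFF = 16
RBD_FEATURE_JOURNALING = 64

# Mirrors the driver's multipath_feature_exclusions list.
MULTIATTACH_EXCLUSIONS = (
    RBD_FEATURE_JOURNALING,
    RBD_FEATURE_FAST_DIFF,
    RBD_FEATURE_OBJECT_MAP,
    RBD_FEATURE_EXCLUSIVE_LOCK,
)

def features_after_enabling_multiattach(enabled_features):
    """Return the feature bitmask left after the excluded bits are cleared."""
    for feature in MULTIATTACH_EXCLUSIONS:
        enabled_features &= ~feature
    return enabled_features

# An image with layering, exclusive-lock, object-map and fast-diff enabled
# keeps only layering once multiattach is turned on.
before = (RBD_FEATURE_LAYERING | RBD_FEATURE_EXCLUSIVE_LOCK
          | RBD_FEATURE_OBJECT_MAP | RBD_FEATURE_FAST_DIFF)
after = features_after_enabling_multiattach(before)
```

In the real driver the clearing is done on the RBD image itself (via librbd's feature-update call), not on a local integer; this sketch only shows which bits the exclusion list takes away.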
Hello,

I'm very curious about this as well! It would be awesome to support Cinder multi-attach when using Ceph... If the code is already there, how to use it?

Cheers,
Thiago

On Mon, 27 May 2019 at 03:52, Bernd Bausch <berndbausch@gmail.com> wrote:
On Mon, May 27, 2019, 5:33 PM Martinx - ジェームズ <thiagocmartinsc@gmail.com> wrote:
There isn't really a Ceph multi-attach feature using Cinder. The code comment is stating that, while the OpenStack side of things is in place, Ceph doesn't yet support it with RBD due to replication issues with multiple clients. The Ceph community is aware of it, but has thus far focused on CephFS as the shared file system instead.

This could possibly be done with the NFS Cinder driver talking to Ganesha with CephFS mounted. You may also want to look at OpenStack's Manila project to orchestrate that.

-Erik
Thanks for clarifying this, Erik. Let me point out, then, that the release notes [1] are worded rather unequivocally, and what you are saying contradicts them:

    RBD driver has added multiattach support. It should be noted that
    replication and multiattach are mutually exclusive, so a single RBD
    volume can only be configured to support one of these features at a
    time. Additionally, RBD image features are not preserved which
    prevents a volume being retyped from multiattach to another type.
    This limitation is temporary and will be addressed soon.

Bernd.

On 5/28/2019 11:30 AM, Erik McCormick wrote:
[1] https://docs.openstack.org/releasenotes/cinder/stein.html
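[Editor's note] The "replication and multiattach are mutually exclusive" wording in the release note corresponds to a guard on the volume type's extra specs. The sketch below is illustrative, not the driver's own code; the extra-spec keys `multiattach` and `replication_enabled` with the `"<is> True"` boolean convention are standard Cinder volume-type specs, but the helper names here are hypothetical.

```python
# Hedged sketch of a mutual-exclusion guard like the one the release note
# describes. Helper names are illustrative; only the extra-spec keys and
# the "<is> True" convention come from Cinder's documented usage.

def _spec_is_true(specs, key):
    """Cinder encodes boolean extra specs as the string '<is> True'."""
    return specs.get(key, "").strip().lower() == "<is> true"

def check_multiattach_allowed(extra_specs):
    """Raise if a volume type asks for both replication and multiattach."""
    wants_multiattach = _spec_is_true(extra_specs, "multiattach")
    wants_replication = _spec_is_true(extra_specs, "replication_enabled")
    if wants_multiattach and wants_replication:
        raise ValueError(
            "replication and multiattach are mutually exclusive for RBD")
    return wants_multiattach

# A type requesting only multiattach passes the guard...
ok = check_multiattach_allowed({"multiattach": "<is> True"})
# ...while one requesting both replication and multiattach is rejected.
```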
On Tue, May 28, 2019, 8:29 AM Bernd Bausch <berndbausch@gmail.com> wrote:
I haven't even looked at Stein yet so I could be wrong, but I think this is referring to replication like RBD mirroring. There's still an issue in Ceph, as far as I know, where Ceph's object replication would have a problem with multi-attach. I haven't come across any Ceph release note saying this has been addressed.

Hopefully someone from the Cinder team can straighten us out.

-Erik
On Tue, May 28, 2019 at 08:57:17AM -0400, Erik McCormick wrote:
Multiattach support has indeed been enabled for RBD as of the Stein release, though there are the known caveats that you point out.

I thought there was a pending patch to add some details on this to the RBD driver configuration reference, but I am not finding anything at the moment. I don't have all the details on that myself, but hopefully one of the RBD driver maintainers can chime in here with better details.

Sean
On 5/28/2019 8:33 AM, Sean McGinnis wrote:
Note that the devstack-plugin-ceph repo also enabled multiattach testing after it was enabled for the RBD volume driver in Cinder [1], so it should be getting tested in the ceph job upstream as well.

[1] https://github.com/openstack/devstack-plugin-ceph/commit/b69c941d5ccafb70240...

--
Thanks,
Matt
Last time I checked OpenStack-Ansible, Manila wasn't there... I believe they added support for it in Stein, but I'm not sure if it supports the CephFS backend (and the required Ceph metadata containers, since CephFS needs them, I believe). I'll definitely give it a try!

Initially, I was planning to multi-attach an RBD block device to two or more instances and run OCFS2 on top of it, but Manila with CephFS looks way simpler.

Cheers!
Thiago

On Mon, 27 May 2019 at 22:30, Erik McCormick <emccormick@cirrusseven.com> wrote:
Hi,

Yep, OSA has had Manila support since Stein: https://docs.openstack.org/openstack-ansible/latest/contributor/testing.html...

So you may give it a try. Feedback regarding the role is highly appreciated :)

28.05.2019, 22:13, "Martinx - ジェームズ" <thiagocmartinsc@gmail.com>:
--
Kind Regards,
Dmitriy Rabotyagov
participants (6)

- Bernd Bausch
- Dmitriy Rabotyagov
- Erik McCormick
- Martinx - ジェームズ
- Matt Riedemann
- Sean McGinnis