[cinder]multi backend SAN and Ceph
Rajat Dhasmana
rdhasman at redhat.com
Thu Apr 6 07:33:04 UTC 2023
Hi Nguyễn,
Cinder does support configuring multiple storage backends where each
backend runs under its own cinder-volume service.
Different deployment tools have their own ways of configuring multiple
backends, so refer to the documentation of the tool you are using.
You can create a volume type for each backend and retype volumes between
the two backends where possible (there are checks that will block a retype
under certain conditions).
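For illustration, a minimal cinder.conf sketch could look like the following
(the backend section names are arbitrary, and the SAN driver class and its
options are placeholders you would replace with the ones from your vendor's
documentation):

  [DEFAULT]
  enabled_backends = ceph-rbd,san-backend

  [ceph-rbd]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph-rbd
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder

  [san-backend]
  # placeholder: use the driver class and options from your SAN vendor's docs
  volume_driver = <vendor driver class>
  volume_backend_name = san-backend

You would then create one volume type per backend and tie it to the backend
via the volume_backend_name extra spec, for example:

  openstack volume type create ceph
  openstack volume type set --property volume_backend_name=ceph-rbd ceph
  openstack volume type create san
  openstack volume type set --property volume_backend_name=san-backend san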
You can follow this[1] documentation for reference. Hope that helps.
[1] https://docs.openstack.org/cinder/latest/admin/multi-backend.html
Thanks
Rajat Dhasmana
On Sun, Apr 2, 2023 at 11:31 AM Nguyễn Hữu Khôi <nguyenhuukhoinw at gmail.com>
wrote:
> I ask because I haven't seen any docs that talk about this.
> I will test it in my lab and update you.
> Many thanks for the quick response. Have a nice weekend.
>
>
>
> On Sun, Apr 2, 2023, 12:43 PM Dmitriy Rabotyagov <noonedeadpunk at gmail.com>
> wrote:
>
>> I don't see any reason why this wouldn't be possible. Though I would likely
>> use separate sets of cinder-volume services for that: while Ceph does
>> support an active-active setup, I'm not sure your SAN driver does, so it's
>> worth checking your specific driver against this matrix:
>> https://docs.openstack.org/cinder/latest/reference/support-matrix.html
>>
>> Regarding changing volume types, I assume you basically mean volume
>> retypes. Whether that works depends on the format of the images stored on
>> the backend, as Cinder does not convert the image format during a retype.
>> So if your SAN backend stores volumes in RAW, as Ceph does, this should
>> work, I assume.
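>>
>> As a rough sketch (the volume ID and type name are placeholders), a retype
>> with migration between the two backends would then be triggered with
>> something like:
>>
>>   cinder retype --migration-policy on-demand <volume-id> <new-volume-type>
>>
>> With the default policy of never, the retype is rejected whenever the data
>> would have to move to a different backend.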
>>
>> Sun, 2 Apr 2023, 06:48 Nguyễn Hữu Khôi <nguyenhuukhoinw at gmail.com>:
>>
>>> Hello.
>>> I have a question: can we use both a SAN and Ceph as multiple backends?
>>> If yes, can we change the volume type from SAN to Ceph and vice versa?
>>> Thanks.
>>>
>>