[kolla-ansible] [cinder] Setting up multiple LVM cinder backends located on different servers

Alan Bishop abishop at redhat.com
Mon Jan 23 15:29:12 UTC 2023


On Sat, Jan 21, 2023 at 4:39 AM A Monster <amonster369 at gmail.com> wrote:

> First of all, thank you for your answer; it's exactly what I was looking
> for.
> What is still ambiguous for me is the name of the volume group I specified
> in the globals.yml file before running the deployment. The default value is
> cinder-volumes; however, after I added the second LVM backend, I kept the
> same volume group for lvm-1 but chose another name for lvm-2. Was it
> possible to keep the same name for both? If not, how can I specify the
> different backends directly from the globals.yml file?
>

The LVM driver's volume_group option is specific to each LVM backend, and
only needs to be unique among the LVM backends on the same controller. In
other words, two controllers can each have an LVM backend using the same
"cinder-volumes" volume group. But if a single controller is configured with
multiple LVM backends, each of those backends must be configured with a
unique volume_group. So the answer to your question, "was it possible to
keep the same name for both?" is yes.
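
For example, this layout is valid (a sketch; the hostnames are only
illustrative):

# cinder-volume on controller-A
[lvm-1]
volume_group = cinder-volumes

# cinder-volume on controller-B
[lvm-2]
volume_group = cinder-volumes   # same VG name is fine on a different host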

I'm not familiar with kolla-ansible and its globals.yml file, so I don't
know if that file can be leveraged to provide a different volume_group
value to each controller. The file name suggests it contains global
settings that would be common to every node. You'll need to find a way to
specify the value for the lvm-2 backend (the one that doesn't use
"cinder-volumes"). Also bear in mind that "cinder-volumes" is the default
value [1], so you don't even need to specify that for the backend that *is*
using that value.

[1]
https://github.com/openstack/cinder/blob/4c9b76b9373a85f8dfae28f240bb130525e777af/cinder/volume/drivers/lvm.py#L48
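
That said, if kolla-ansible supports host-scoped config overrides (I haven't
verified this, so treat the path below as an assumption to check against the
kolla-ansible docs), something along these lines might let you define the
second backend on just one controller:

# Assumed per-host override file, merged into that node's cinder.conf:
# /etc/kolla/config/cinder/<controller-hostname>/cinder-volume.conf
# (that node's enabled_backends must also list lvm-2)
[lvm-2]
volume_group = cinder-volumes-ssd   # hypothetical VG name for the second backend
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-2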

Alan

On Fri, Jan 20, 2023, 20:51 Alan Bishop <abishop at redhat.com> wrote:
>
>>
>>
>> On Wed, Jan 18, 2023 at 6:38 AM A Monster <amonster369 at gmail.com> wrote:
>>
>>> I have an OpenStack deployment with 3 controller nodes and multiple
>>> compute nodes. One of the controllers has LVM storage backed by HDD
>>> drives, while another has an SSD-backed one. When I tried to configure
>>> the two different types of storage as cinder backends I faced a dilemma:
>>> according to the documentation I have to specify the two different
>>> backends in the cinder configuration, as explained here
>>> <https://docs.openstack.org/cinder/latest/admin/multi-backend.html>.
>>> Since I want to separate the disk types when creating volumes, I
>>> specified different backend names, but I don't know whether this
>>> configuration should be written on both storage nodes, or whether I
>>> should give each storage node only the configuration related to its own
>>> type of disks.
>>>
>>
>> The key factor in understanding how to configure the cinder-volume
>> services for your use case is knowing how the volume services operate and
>> how they interact with the other cinder services. In short, you only define
>> backends in the cinder-volume service that "owns" that backend. If
>> controller-X only handles lvm-X, then you only define that backend on that
>> controller. Don't include any mention of lvm-Y if that one is handled by
>> another controller. The other services (namely the api and schedulers)
>> learn about the backends when each cinder-volume service reports its
>> status via cinder's internal RPC framework.
>>
>> This means the cinder-volume service on one controller should only have
>> the one lvm-1 backend (with enabled_backends=lvm-1), and NO mention at all
>> of the lvm-3 backend on the other controller. Likewise, the other
>> controller should only contain the lvm-3 backend, with its
>> enabled_backends=lvm-3.
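>>
>> As a minimal sketch, the [DEFAULT] section on each controller would look
>> like this (and each controller gets only its own backend section):
>>
>> # cinder-volume on the controller that owns lvm-1
>> [DEFAULT]
>> enabled_backends = lvm-1
>>
>> # cinder-volume on the controller that owns lvm-3
>> [DEFAULT]
>> enabled_backends = lvm-3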
>>
>>
>>> Now, I tried writing the same configuration on both nodes, but I found
>>> out that on server1 the volume service for server2's disks is down, and
>>> on server2 the volume service for server1's disks is also down.
>>>
>>> $ openstack volume service list
>>> +------------------+---------------------+------+---------+-------+----------------------------+
>>> | Binary           | Host                | Zone | Status  | State | Updated At                 |
>>> +------------------+---------------------+------+---------+-------+----------------------------+
>>> | cinder-scheduler | controller-01       | nova | enabled | up    | 2023-01-18T14:27:51.000000 |
>>> | cinder-scheduler | controller-02       | nova | enabled | up    | 2023-01-18T14:27:41.000000 |
>>> | cinder-scheduler | controller-03       | nova | enabled | up    | 2023-01-18T14:27:50.000000 |
>>> | cinder-volume    | controller-03@lvm-1 | nova | enabled | up    | 2023-01-18T14:27:42.000000 |
>>> | cinder-volume    | controller-01@lvm-1 | nova | enabled | down  | 2023-01-18T14:10:00.000000 |
>>> | cinder-volume    | controller-01@lvm-3 | nova | enabled | down  | 2023-01-18T14:09:42.000000 |
>>> | cinder-volume    | controller-03@lvm-3 | nova | enabled | down  | 2023-01-18T12:12:19.000000 |
>>> +------------------+---------------------+------+---------+-------+----------------------------+
>>>
>>>
>> Unless you do a fresh deployment, you will need to remove the invalid
>> services that will always be down. Those are the ones on controller-X
>> where the backend is actually on controller-Y. You'll use the cinder-manage
>> command to do that. From the data you supplied, it seems the lvm-1 backend
>> is up on controller-03, and the lvm-3 backend on that controller is down.
>> The numbering seems backwards, but I'll stick with this example. To delete
>> the lvm-3 backend, which is down because that backend is actually on
>> another controller, you'd issue this command:
>>
>> $ cinder-manage service remove cinder-volume controller-03@lvm-3
>>
>> Don't worry if you accidentally delete a "good" service; its entry will
>> reappear the next time that cinder-volume service reports its status.
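>>
>> Based on your output, the other stale entry would be removed the same way
>> (assuming controller-01 is the node that actually hosts lvm-3, so its
>> lvm-1 entry is the bogus one):
>>
>> $ cinder-manage service remove cinder-volume controller-01@lvm-1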
>>
>>
>>> This is the configuration I have written in the configuration files for
>>> cinder-api, cinder-scheduler, and cinder-volume on both servers:
>>>
>>> [DEFAULT]
>>> enabled_backends = lvm-1,lvm-3
>>>
>>> [lvm-1]
>>> volume_group = cinder-volumes
>>> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
>>> volume_backend_name = lvm-1
>>> target_helper = lioadm
>>> target_protocol = iscsi
>>> report_discard_supported = true
>>>
>>> [lvm-3]
>>> volume_group = cinder-volumes-ssd
>>> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
>>> volume_backend_name = lvm-3
>>> target_helper = lioadm
>>> target_protocol = iscsi
>>> report_discard_supported = true
>>>
>>
>> At a minimum, on each controller you need to remove all references to the
>> backend that's actually on the other controller. The cinder-api and
>> cinder-scheduler services don't need any backend configuration. That's
>> because the backend sections and enabled_backends options are only relevant
>> to the cinder-volume service, and are ignored by the other services.
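>>
>> Putting it together: assuming lvm-1/cinder-volumes lives on controller-03
>> and lvm-3/cinder-volumes-ssd lives on controller-01 (per your service
>> list), each controller's cinder-volume configuration would reduce to
>> roughly this:
>>
>> # controller-03: cinder-volume
>> [DEFAULT]
>> enabled_backends = lvm-1
>>
>> [lvm-1]
>> volume_group = cinder-volumes
>> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
>> volume_backend_name = lvm-1
>> target_helper = lioadm
>> target_protocol = iscsi
>> report_discard_supported = true
>>
>> # controller-01: cinder-volume
>> [DEFAULT]
>> enabled_backends = lvm-3
>>
>> [lvm-3]
>> volume_group = cinder-volumes-ssd
>> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
>> volume_backend_name = lvm-3
>> target_helper = lioadm
>> target_protocol = iscsi
>> report_discard_supported = true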
>>
>> Alan
>>
>>
>