<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Jan 21, 2023 at 4:39 AM A Monster <<a href="mailto:amonster369@gmail.com">amonster369@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">First of all, thank you for your answer; it's exactly what I was looking for.<div dir="auto">What is still ambiguous to me is the name of the volume group I specified in the globals.yml file before running the deployment. The default value is cinder-volumes; however, after I added the second LVM backend, I kept the same volume group for lvm-1 but chose another name for lvm-2. Was it possible to keep the same name for both? If not, how can I specify the different backends directly from the globals.yml file, if that's possible?</div></div></blockquote><div><br></div><div>The LVM driver's volume_group option is significant to each LVM backend, but only among the LVM backends on the same controller. In other words, two controllers can each have an LVM backend using the same "cinder-volumes" volume group. But if a controller is configured with multiple LVM backends, each backend must be configured with a unique volume_group. So, the answer to your question, "was it possible to keep the same name for both?" is yes.<br></div><div><br></div><div>I'm not familiar with kolla-ansible and its globals.yml file, so I don't know whether that file can be leveraged to provide a different volume_group value to each controller. The file name suggests it contains global settings that are common to every node. You'll need to find a way to specify the value for the lvm-2 backend (the one that doesn't use "cinder-volumes"). 
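<br><br>For example (just a sketch; the option names come from the cinder.conf excerpt you posted, and the "cinder-volumes-ssd" name is illustrative), a single controller hosting both backends would need two distinct volume groups:<br><br>[lvm-1]<br>volume_group = cinder-volumes<br>[lvm-2]<br>volume_group = cinder-volumes-ssd<br><br>whereas two different controllers, each hosting a single LVM backend, could both simply use volume_group = cinder-volumes.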
Also bear in mind that "cinder-volumes" is the default value [1], so you don't even need to specify that for the backend that *is* using that value.</div><div><br></div><div>[1] <a href="https://github.com/openstack/cinder/blob/4c9b76b9373a85f8dfae28f240bb130525e777af/cinder/volume/drivers/lvm.py#L48">https://github.com/openstack/cinder/blob/4c9b76b9373a85f8dfae28f240bb130525e777af/cinder/volume/drivers/lvm.py#L48</a></div><div><br></div><div>Alan<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jan 20, 2023, 20:51 Alan Bishop <<a href="mailto:abishop@redhat.com" target="_blank">abishop@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jan 18, 2023 at 6:38 AM A Monster <<a href="mailto:amonster369@gmail.com" rel="noreferrer" target="_blank">amonster369@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I have an openstack configuration, with 3 controller nodes and multiple compute nodes , one of the controllers has an LVM storage based on HDD drives, while another one has an SDD one, and when I tried to configure the two different types of storage as cinder backends I faced a dilemma since according to the documentation I have to specify the two different backends in the cinder configuration as it is explained <a href="https://docs.openstack.org/cinder/latest/admin/multi-backend.html" rel="noreferrer" target="_blank">here</a> however and since I want to separate disks type when creating volumes, I had to specify different backend names, but I don't know if this configuration should 
be written on both storage nodes, or whether I should give each storage node only the configuration related to its own type of disks.<br></div></blockquote><div><br></div><div><div>The key factor in understanding how to configure the cinder-volume
services for your use case is knowing how the volume services operate
and how they interact with the other cinder services. In short, you only
define backends in the cinder-volume service that "owns" that backend.
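<br><br>As a rough sketch (reusing your lvm-1/lvm-3 backend names; the volume group and driver options are taken from the config you posted), the cinder.conf on the controller that owns lvm-1 would contain only:<br><br>[DEFAULT]<br>enabled_backends = lvm-1<br>[lvm-1]<br>volume_group = cinder-volumes<br>volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver<br>volume_backend_name = lvm-1<br><br>and the other controller's cinder.conf would contain only the corresponding [lvm-3] section, with enabled_backends = lvm-3.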
If controller-X only handles lvm-X, then you only define that backend on
that controller. Don't include any mention of lvm-Y if that one is
handled by another controller. The other services (namely the api and
schedulers) learn about the backends when each of them reports its
status via cinder's internal RPC framework.</div><div><br></div><div>This
means your lvm-1 service running on one controller should only have the
one lvm-1 backend (with enabled_backends=lvm-1), and NO mention at all
to the lvm-3 backend on the other controller. Likewise, the other
controller should only contain the lvm-3 backend, with its
enabled_backends=lvm-3.</div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br>Now, I tried writing the same configuration on both nodes, but I found that the volume service on server1 for the backend whose disks are on server2 is down, and the volume service on server2 for the backend whose disks are on server1 is also down.<br><br>$ openstack volume service list<br>+------------------+---------------------+------+---------+-------+----------------------------+<br>| Binary | Host | Zone | Status | State | Updated At |<br>+------------------+---------------------+------+---------+-------+----------------------------+<br>| cinder-scheduler | controller-01 | nova | enabled | up | 2023-01-18T14:27:51.000000 |<br>| cinder-scheduler | controller-02 | nova | enabled | up | 2023-01-18T14:27:41.000000 |<br>| cinder-scheduler | controller-03 | nova | enabled | up | 2023-01-18T14:27:50.000000 |<br>| cinder-volume | controller-03@lvm-1 | nova | enabled | up | 2023-01-18T14:27:42.000000 |<br>| cinder-volume | controller-01@lvm-1 | nova | enabled | down | 2023-01-18T14:10:00.000000 |<br>| cinder-volume | controller-01@lvm-3 | nova | enabled | down | 2023-01-18T14:09:42.000000 |<br>| cinder-volume | controller-03@lvm-3 | nova | enabled | down | 2023-01-18T12:12:19.000000 |<br>+------------------+---------------------+------+---------+-------+----------------------------+<br><br></div></blockquote><div><br></div><div><div>Unless you do a
fresh deployment, you will need to remove the invalid services that will
always be down. Those would be the ones on controller-X where the
backend is actually on controller-Y. You'll use the cinder-manage
command to do that. From the data you supplied, it seems the lvm-1
backend is up on controller-03, and the lvm-3 backend on that controller
is down. The numbering seems backwards, but I'll stick with this example.
To delete the lvm-3 backend, which is down because that backend is
actually on another controller, you'd issue this command:</div><div><br></div><div>$ cinder-manage service remove cinder-volume controller-03@lvm-3</div><div><br></div><div>Don't
worry if you accidentally delete a "good" service. The list will be
refreshed each time the cinder-volume services refresh their status.</div><div> <br></div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">This is the configuration I have written in the configuration files for cinder_api, cinder_scheduler, and cinder_volume on both servers.<br><br>enabled_backends= lvm-1,lvm-3<br>[lvm-1]<br>volume_group = cinder-volumes<br>volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver<br>volume_backend_name = lvm-1<br>target_helper = lioadm<br>target_protocol = iscsi<br>report_discard_supported = true<br>[lvm-3]<br>volume_group=cinder-volumes-ssd<br>volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver<br>volume_backend_name=lvm-3<br>target_helper = lioadm<br>target_protocol = iscsi<br>report_discard_supported = true<br></div></blockquote><div><br></div><div><div>At a minimum, on each controller you need to remove all references to the backend that's actually on the other controller. The cinder-api and cinder-scheduler services don't need any backend configuration. That's because the backend sections and enabled_backends options are only relevant to the cinder-volume service, and are ignored by the other services.</div><div><br></div><div>Alan<br></div> </div></div></div>
</blockquote></div>
</blockquote></div></div>