Just to add to the cornucopia of configuration methods for Cinder backends: we use a variation of this for an active/active controller setup, which seems to work (on Pure, Ceph RBD and Nimble). We use both backend_host and volume_backend_name so that any of our controllers
can action a request for a given volume. We've not had any issues with this setup since Ussuri (we're on Zed / Yoga now).
...
$ openstack volume service list
+------------------+-------------------------+----------+---------+-------+----------------------------+
| Binary           | Host                    | Zone     | Status  | State | Updated At                 |
+------------------+-------------------------+----------+---------+-------+----------------------------+
| cinder-volume    | nimble@nimble-az1       | gb-lon-1 | enabled | up    | 2023-08-30T12:34:18.000000 |
| cinder-volume    | nimble@nimble-az2       | gb-lon-2 | enabled | up    | 2023-08-30T12:34:14.000000 |
| cinder-volume    | nimble@nimble-az3       | gb-lon-3 | enabled | up    | 2023-08-30T12:34:17.000000 |
| cinder-volume    | ceph-nvme@ceph-nvme-az3 | gb-lon-3 | enabled | up    | 2023-08-30T12:34:14.000000 |
| cinder-volume    | pure@pure-az1           | gb-lon-1 | enabled | up    | 2023-08-30T12:34:11.000000 |
| cinder-volume    | pure@pure-az2           | gb-lon-2 | enabled | up    | 2023-08-30T12:34:11.000000 |
+------------------+-------------------------+----------+---------+-------+----------------------------+
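
For anyone curious, here is a rough sketch of what the relevant cinder.conf pieces could look like for this pattern. The section names mirror the service list above, but the driver options, pool names and the volume-type wiring are illustrative assumptions rather than our exact config:

[DEFAULT]
# Same list on every controller so each one can serve every backend
enabled_backends = ceph-nvme-az3,pure-az1

[ceph-nvme-az3]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# Hypothetical pool name
rbd_pool = volumes-nvme
backend_availability_zone = gb-lon-3
# Identical on all controllers, so the service is always reported as
# ceph-nvme@ceph-nvme-az3 no matter which node runs cinder-volume
backend_host = ceph-nvme
# Matched by volume types via the volume_backend_name extra spec
volume_backend_name = ceph-nvme-az3

[pure-az1]
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
backend_availability_zone = gb-lon-1
backend_host = pure
volume_backend_name = pure-az1

A volume type then pins volumes to a backend by name, e.g.:

openstack volume type create ceph-nvme-az3 \
  --property volume_backend_name=ceph-nvme-az3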
Just to share our config: we don't use "backend_host" but only
"host" on all control nodes. Its value is the hostname (shortname)
pointing to the virtual IP, which migrates in case of a failure. We
only use Ceph as the storage backend, with different volume types.
The volume service list looks like this:
controller02:~ # openstack volume service list
+------------------+--------------------+------+---------+-------+----------------------------+
| Binary           | Host               | Zone | Status  | State | Updated At                 |
+------------------+--------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller         | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
| cinder-backup    | controller         | nova | enabled | up    | 2023-08-30T10:32:34.000000 |
| cinder-volume    | controller@rbd     | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
| cinder-volume    | controller@rbd2    | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
| cinder-volume    | controller@ceph-ec | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
+------------------+--------------------+------+---------+-------+----------------------------+
We haven't seen any issues with this setup during failovers in years.
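
To make that concrete, here is a minimal sketch of how this could look in cinder.conf (the pool names and per-backend details are assumptions for illustration, not our literal config):

[DEFAULT]
# Shortname that resolves to the virtual IP shared by the control nodes;
# identical on every control node, so volumes keep the same host after failover
host = controller
enabled_backends = rbd,rbd2,ceph-ec

[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# Hypothetical pool names throughout
rbd_pool = volumes
volume_backend_name = rbd

[rbd2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes2
volume_backend_name = rbd2

[ceph-ec]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-ec
volume_backend_name = ceph-ec

With that, the services always register as controller@rbd, controller@rbd2 and controller@ceph-ec, regardless of which node currently holds the virtual IP.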
Quoting Gorka Eguileor <geguileo@redhat.com>:
> On 29/08, Albert Braden wrote:
>> What does backend_host look like? Should it match my internal API
>> URL, i.e. api-int.qde4.ourdomain.com?
>> On Tuesday, August 29, 2023 at 10:44:48 AM EDT, Gorka Eguileor
>> <geguileo@redhat.com> wrote:
>>
>
> Hi,
>
> If I remember correctly you can set it to anything you want, but for
> convenience I would recommend setting it to the hostname that currently
> has more volumes in your system.
>
> Let's say you have 3 hosts:
> qde4-ctrl1.cloud.ourdomain.com
> qde4-ctrl2.cloud.ourdomain.com
> qde4-ctrl3.cloud.ourdomain.com
>
> And you have 100 volumes on ctrl1, 20 on ctrl2, and 10 on ctrl3.
>
> Then it would be best to set it to ctrl1:
> [rbd-1]
> backend_host = qde4-ctrl1.cloud.ourdomain.com
>
> And then use the cinder-manage command to modify the other 2.
>
> For your information the value you see as "os-vol-host-attr:host" when
> seeing the detailed information of a volume is in the form of:
> <HostName>@<BackendName>#<PoolName>
>
> In your case:
> <HostName> = qde4-ctrl1.cloud.ourdomain.com
> <BackendName> = rbd-1
> <PoolName> = rbd-1
>
> In the RBD case the poolname will always be the same as the backendname.
>
> Cheers,
> Gorka.
>
>> On 29/08, Albert Braden wrote:
>> > We’re replacing controllers, and it takes a few hours to build
>> > the new controller. We’re following this procedure to remove the
>> > old controller:
>> > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html
>> >
>> > After that the cluster seems to run fine on 2 controllers, but
>> > approximately 1/3 of our volumes can’t be attached to a VM. When we
>> > look at those volumes, we see this:
>> >
>> > | os-vol-host-attr:host | qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1 |
>> >
>> > Ctrl1 is the controller that is being replaced. Is it possible to
>> > change the os-vol-host-attr on a volume? How can we work around
>> > this issue while we are replacing controllers? Do we need to
>> > disable the API for the duration of the replacement process, or is
>> > there a better way?
>> >
>>
>> Hi,
>>
>> Assuming you are running cinder volume in Active-Passive mode (which I
>> believe was the only way back in Train) then you should be hardcoding
>> the host name in the cinder.conf file to avoid losing access to your
>> volumes when the volume service starts in another host.
>>
>> This is done with the "backend_host" configuration option within the
>> specific driver section in cinder.conf.
>>
>> As for how to change the value of all the volumes to the same host
>> value, you can use the "cinder-manage" command:
>>
>> cinder-manage volume update_host \
>> --currenthost <current host> \
>> --newhost <new host>
>>
>> Cheers,
>> Gorka.
>>
>>
>>