[kolla] [train] [cinder] Cinder issues during controller replacement

Danny Webb Danny.Webb@thehutgroup.com
Wed Aug 30 13:19:49 UTC 2023


Just to add to the cornucopia of configuration methods for cinder backends. We use a variation of this for an active/active controller setup, which seems to work (on Pure, Ceph RBD and Nimble). We use both backend_host and volume_backend_name so that any of our controllers can action a request for a given volume. We've not had any issues with this setup since Ussuri (we're on Zed/Yoga now).

For example:

[rbd]
backend_host = ceph-nvme
volume_backend_name = high-performance
...

[pure]
backend_host = pure
volume_backend_name = high-performance
...

which leaves us with the following set of services:

$ openstack volume service list
+------------------+-------------------------+----------+---------+-------+----------------------------+
| Binary           | Host                    | Zone     | Status  | State | Updated At                 |
+------------------+-------------------------+----------+---------+-------+----------------------------+
| cinder-volume    | nimble@nimble-az1       | gb-lon-1 | enabled | up    | 2023-08-30T12:34:18.000000 |
| cinder-volume    | nimble@nimble-az2       | gb-lon-2 | enabled | up    | 2023-08-30T12:34:14.000000 |
| cinder-volume    | nimble@nimble-az3       | gb-lon-3 | enabled | up    | 2023-08-30T12:34:17.000000 |
| cinder-volume    | ceph-nvme@ceph-nvme-az3 | gb-lon-3 | enabled | up    | 2023-08-30T12:34:14.000000 |
| cinder-volume    | pure@pure-az1           | gb-lon-1 | enabled | up    | 2023-08-30T12:34:11.000000 |
| cinder-volume    | pure@pure-az2           | gb-lon-2 | enabled | up    | 2023-08-30T12:34:11.000000 |
+------------------+-------------------------+----------+---------+-------+----------------------------+
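
On the volume-type side, a minimal sketch of how the shared volume_backend_name gets consumed (the type name here is illustrative):

$ openstack volume type create high-performance
$ openstack volume type set --property volume_backend_name=high-performance high-performance

Since both the rbd and pure sections advertise volume_backend_name = high-performance, the scheduler can place a volume of that type on either backend.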


________________________________
From: Eugen Block <eblock@nde.ag>
Sent: 30 August 2023 11:36
To: openstack-discuss@lists.openstack.org
Subject: Re: [kolla] [train] [cinder] Cinder issues during controller replacement

Just to share our config: we don't use "backend_host" but only "host"
on all control nodes. Its value is the short hostname pointing to the
virtual IP, which migrates in case of a failure. We only use Ceph as
a storage backend, with different volume types. The volume service
list looks like this:

controller02:~ # openstack volume service list
+------------------+--------------------+------+---------+-------+----------------------------+
| Binary           | Host               | Zone | Status  | State | Updated At                 |
+------------------+--------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller         | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
| cinder-backup    | controller         | nova | enabled | up    | 2023-08-30T10:32:34.000000 |
| cinder-volume    | controller@rbd     | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
| cinder-volume    | controller@rbd2    | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
| cinder-volume    | controller@ceph-ec | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
+------------------+--------------------+------+---------+-------+----------------------------+
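
For reference, a minimal sketch of the cinder.conf behind that (the driver options shown are illustrative, not our exact settings):

[DEFAULT]
host = controller
enabled_backends = rbd,rbd2,ceph-ec

[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd
rbd_pool = volumes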

We haven't seen any issue with this setup in years during failover.

Quoting Gorka Eguileor <geguileo@redhat.com>:

> On 29/08, Albert Braden wrote:
>> What does backend_host look like? Should it match my internal API
>> URL, i.e. api-int.qde4.ourdomain.com?
>> On Tuesday, August 29, 2023 at 10:44:48 AM EDT, Gorka Eguileor
>> <geguileo@redhat.com> wrote:
>>
>
> Hi,
>
> If I remember correctly you can set it to anything you want, but for
> convenience I would recommend setting it to the hostname that
> currently has the most volumes in your system.
>
> Let's say you have 3 hosts:
> qde4-ctrl1.cloud.ourdomain.com
> qde4-ctrl2.cloud.ourdomain.com
> qde4-ctrl3.cloud.ourdomain.com
>
> And you have 100 volumes on ctrl1, 20 on ctrl2, and 10 on ctrl3.
>
> Then it would be best to set it to ctrl1:
> [rbd-1]
> backend_host = qde4-ctrl1.cloud.ourdomain.com
>
> And then use the cinder-manage command to modify the other 2.
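>
> For example, something along these lines (illustrative; the host
> strings must match what your volumes actually report, without the
> "#pool" suffix):
>
>   cinder-manage volume update_host \
>     --currenthost qde4-ctrl2.cloud.ourdomain.com@rbd-1 \
>     --newhost qde4-ctrl1.cloud.ourdomain.com@rbd-1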
>
> For your information, the value you see as "os-vol-host-attr:host"
> when viewing a volume's detailed information is in the form of:
> <HostName>@<BackendName>#<PoolName>
>
> In your case:
> <HostName> = qde4-ctrl1.cloud.ourdomain.com
> <BackendName> = rbd-1
> <PoolName> = rbd-1
>
> In the RBD case the pool name will always be the same as the backend name.
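>
> You can check a volume's current value with admin credentials, e.g.:
>
>   openstack volume show <volume-id> -c os-vol-host-attr:host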
>
> Cheers,
> Gorka.
>
>> On 29/08, Albert Braden wrote:
>> > We’re replacing controllers, and it takes a few hours to build
>> the new controller. We’re following this procedure to remove the
>> old controller:
>> https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html
>> >
>> >  After that the cluster seems to run fine on 2 controllers, but
>> approximately 1/3 of our volumes can’t be attached to a VM. When we
>> look at those volumes, we see this:
>> >
>> > | os-vol-host-attr:host          | qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1 |
>> >
>> > Ctrl1 is the controller that is being replaced. Is it possible to
>> change the os-vol-host-attr on a volume? How can we work around
>> this issue while we are replacing controllers? Do we need to
>> disable the API for the duration of the replacement process, or is
>> there a better way?
>> >
>>
>> Hi,
>>
>> Assuming you are running cinder volume in Active-Passive mode (which I
>> believe was the only way back in Train) then you should be hardcoding
>> the host name in the cinder.conf file to avoid losing access to your
>> volumes when the volume service starts in another host.
>>
>> This is done with the "backend_host" configuration option within the
>> specific driver section in cinder.conf.
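>>
>> For example, with the backend section from your volume's host
>> attribute (rbd-1 in your case):
>>
>>   [rbd-1]
>>   backend_host = qde4-ctrl1.cloud.ourdomain.com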
>>
>> As for how to change the value of all the volumes to the same host
>> value, you can use the "cinder-manage" command:
>>
>>   cinder-manage volume update_host \
>>     --currenthost <current host> \
>>     --newhost <new host>
>>
>> Cheers,
>> Gorka.
>>
>>
>>


