[kolla] [train] [cinder] Cinder issues during controller replacement
We’re replacing controllers, and it takes a few hours to build the new controller. We’re following this procedure to remove the old controller: https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-host...

After that the cluster seems to run fine on 2 controllers, but approximately 1/3 of our volumes can’t be attached to a VM. When we look at those volumes, we see this:

| os-vol-host-attr:host | qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1 |

Ctrl1 is the controller that is being replaced. Is it possible to change the os-vol-host-attr on a volume? How can we work around this issue while we are replacing controllers? Do we need to disable the API for the duration of the replacement process, or is there a better way?
Hi,

Assuming you are running cinder-volume in Active-Passive mode (which I believe was the only way back in Train), you should hardcode the host name in cinder.conf to avoid losing access to your volumes when the volume service starts on another host. This is done with the "backend_host" configuration option within the specific driver section in cinder.conf.

As for changing the value on all of the volumes to the same host value, you can use the "cinder-manage" command:

cinder-manage volume update_host \
    --currenthost <current host> \
    --newhost <new host>

Cheers, Gorka.
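A minimal sketch of the configuration half of that, assuming the driver section is named [rbd-1] as in the host string above, with the existing driver options in that section left out; the value chosen here is just an example, any name that stays stable across controllers works:

# cinder.conf, identical on every controller
[rbd-1]
backend_host = qde4-ctrl1.cloud.ourdomain.com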
What does backend_host look like? Should it match my internal API URL, i.e. api-int.qde4.ourdomain.com?
Hi,

If I remember correctly you can set it to anything you want, but for convenience I would recommend setting it to the hostname that currently has the most volumes in your system.

Let's say you have 3 hosts:

qde4-ctrl1.cloud.ourdomain.com
qde4-ctrl2.cloud.ourdomain.com
qde4-ctrl3.cloud.ourdomain.com

And you have 100 volumes on ctrl1, 20 on ctrl2, and 10 on ctrl3. Then it would be best to set it to ctrl1:

[rbd-1]
backend_host = qde4-ctrl1.cloud.ourdomain.com

And then use the cinder-manage command to modify the other 2.

For your information, the value you see as "os-vol-host-attr:host" in the detailed information of a volume is in the form:

<HostName>@<BackendName>#<PoolName>

In your case:

<HostName> = qde4-ctrl1.cloud.ourdomain.com
<BackendName> = rbd-1
<PoolName> = rbd-1

In the RBD case the pool name will always be the same as the backend name.

Cheers, Gorka.
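Concretely, moving the ctrl2 and ctrl3 volumes onto the ctrl1 name would then look roughly like this (hostnames taken from the example above; the full host@backend#pool form follows the format described here, so double-check the exact strings against "openstack volume show" before running it):

cinder-manage volume update_host \
    --currenthost qde4-ctrl2.cloud.ourdomain.com@rbd-1#rbd-1 \
    --newhost qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1
cinder-manage volume update_host \
    --currenthost qde4-ctrl3.cloud.ourdomain.com@rbd-1#rbd-1 \
    --newhost qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1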
Just to share our config, we don't use "backend_host" but only "host" on all control nodes. Its value is the hostname (shortname) pointing to the virtual IP which migrates in case of a failure. We only use Ceph as storage backend with different volume types. The volume service list looks like this:

controller02:~ # openstack volume service list
+------------------+--------------------+------+---------+-------+----------------------------+
| Binary           | Host               | Zone | Status  | State | Updated At                 |
+------------------+--------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller         | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
| cinder-backup    | controller         | nova | enabled | up    | 2023-08-30T10:32:34.000000 |
| cinder-volume    | controller@rbd     | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
| cinder-volume    | controller@rbd2    | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
| cinder-volume    | controller@ceph-ec | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
+------------------+--------------------+------+---------+-------+----------------------------+

We haven't seen any issue with this setup in years during failover.
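As a sketch, that variant boils down to a single option in the [DEFAULT] section rather than a per-backend one, assuming "controller" is the shortname that resolves to the virtual IP as in the listing above:

# cinder.conf on every control node
[DEFAULT]
host = controller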
Just to add to the cornucopia of configuration methods for cinder backends. We use a variation of this for an active/active controller setup which seems to work (on Pure, Ceph RBD and Nimble). We use both backend_host and volume_backend_name so any of our controllers can action a request for a given volume. We've not had any issues with this setup since Ussuri (we're on Zed / Yoga now). E.g.:

[rbd]
backend_host = ceph-nvme
volume_backend_name = high-performance
...

[pure]
backend_host = pure
volume_backend_name = high-performance
...

which leaves us with a set of the following:

$ openstack volume service list
+------------------+-------------------------+----------+---------+-------+----------------------------+
| Binary           | Host                    | Zone     | Status  | State | Updated At                 |
+------------------+-------------------------+----------+---------+-------+----------------------------+
| cinder-volume    | nimble@nimble-az1       | gb-lon-1 | enabled | up    | 2023-08-30T12:34:18.000000 |
| cinder-volume    | nimble@nimble-az2       | gb-lon-2 | enabled | up    | 2023-08-30T12:34:14.000000 |
| cinder-volume    | nimble@nimble-az3       | gb-lon-3 | enabled | up    | 2023-08-30T12:34:17.000000 |
| cinder-volume    | ceph-nvme@ceph-nvme-az3 | gb-lon-3 | enabled | up    | 2023-08-30T12:34:14.000000 |
| cinder-volume    | pure@pure-az1           | gb-lon-1 | enabled | up    | 2023-08-30T12:34:11.000000 |
| cinder-volume    | pure@pure-az2           | gb-lon-2 | enabled | up    | 2023-08-30T12:34:11.000000 |
+------------------+-------------------------+----------+---------+-------+----------------------------+
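For context, a sketch of how that shared volume_backend_name is typically consumed on the scheduling side, via a volume type (the type name here is hypothetical; the property value matches the config above):

openstack volume type create high-performance
openstack volume type set --property volume_backend_name=high-performance high-performance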
I tried this in the lab, setting backend_host to "api-int.qde4.ourdomain.com", and my new volumes end up with:

os-vol-host-attr:host | api-int.qde4.cloud.ourdomain.com@rbd-1#rbd-1

And they work fine, but volumes that existed before the change cannot be attached, detached, or deleted. They still have the old setting:

os-vol-host-attr:host | qde4-ctrl2.cloud.ourdomain.com@rbd-1#rbd-1

I tried migrating them to the new host with:

openstack volume migrate --host api-int.qde4.cloud.ourdomain.com@rbd-1#rbd-1 <id>

But the migration never finishes:

| name                           | albert_testvol2                            |
| os-vol-host-attr:host          | qde4-ctrl2.cloud.ourdomain.com@rbd-1#rbd-1 |
| os-vol-mig-status-attr:migstat | starting                                   |

What am I missing? How can I fix my existing volumes after changing the backend_host?
Hi,

Use cinder-manage OR update the column with the new host name.

Cinder-manage solution:

(cinder-volume)[root@controller0 /]# cinder-manage volume update_host --help
usage: cinder-manage volume update_host [-h] --currenthost CURRENTHOST --newhost NEWHOST

optional arguments:
  -h, --help            show this help message and exit
  --currenthost CURRENTHOST
                        Existing volume host name in the format host@backend#pool
  --newhost NEWHOST     New volume host name in the format host@backend#pool

Kevko
Michal Arbet
Openstack Engineer
Ultimum Technologies a.s.
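A rough sketch of the direct-column variant, assuming the standard cinder database where the volumes table stores the full host@backend#pool string in its host column (take a database backup and stop cinder-volume before editing, and treat the table and column names as something to verify first):

mysql -D cinder -e "UPDATE volumes SET host = 'api-int.qde4.cloud.ourdomain.com@rbd-1#rbd-1' WHERE host = 'qde4-ctrl2.cloud.ourdomain.com@rbd-1#rbd-1' AND deleted = 0;"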
This seems to work fine for the most part. The host changed on the volumes, and I can delete them, except for the one that I tried to migrate. It seems to be stuck in "os-vol-mig-status-attr:migstat | starting" and I get an error trying to delete it. How can I get it out of "migrating" status?

| name                           | albert_testvol2                              |
| os-vol-host-attr:host          | api-int.qde4.cloud.ourdomain.com@rbd-1#rbd-1 |
| os-vol-mig-status-attr:migstat | starting                                     |
| os-vol-mig-status-attr:name_id | None                                         |

qde4:admin]$ os volume delete 4bd84ade-5711-476a-b1fc-019cc9d0371a
Failed to delete volume with name or ID '4bd84ade-5711-476a-b1fc-019cc9d0371a': Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-2ac17e8c-f9e5-48b8-b5c0-85f658f9fd37)
1 of 1 volumes failed to delete.
Hi,

You could try resetting the volume state by using:

openstack volume set --state error 4bd84ade-5711-476a-b1fc-019cc9d0371a

https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/volume.html#volume-set

Franciszek
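For the stuck migration specifically, resetting the state alone may not clear the migration flag; one possible sequence is sketched below (the --reset-migration-status option belongs to the legacy cinder client and may not be present in every release, so check "cinder help reset-state" before relying on it):

# clear the volume state, as suggested above
openstack volume set --state error 4bd84ade-5711-476a-b1fc-019cc9d0371a
# clear the stuck migration status, if the client supports it
cinder reset-state --state available --reset-migration-status 4bd84ade-5711-476a-b1fc-019cc9d0371a
# then retry the delete
openstack volume delete 4bd84ade-5711-476a-b1fc-019cc9d0371a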
participants (6)
- Albert Braden
- Danny Webb
- Eugen Block
- Franciszek Przewoźny
- Gorka Eguileor
- Michal Arbet