This seems to work fine for the most part. The host changed on the volumes, and I can delete them, except for the one that I tried to migrate. It seems to be stuck in "os-vol-mig-status-attr:migstat | starting" and I get an error trying to delete it. How can I get it out of "migrating" status?

| name                           | albert_testvol2                              |
| os-vol-host-attr:host          | api-int.qde4.cloud.ourdomain.com@rbd-1#rbd-1 |
| os-vol-mig-status-attr:migstat | starting                                     |
| os-vol-mig-status-attr:name_id | None                                         |

qde4:admin]$ os volume delete 4bd84ade-5711-476a-b1fc-019cc9d0371a
Failed to delete volume with name or ID '4bd84ade-5711-476a-b1fc-019cc9d0371a': Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-2ac17e8c-f9e5-48b8-b5c0-85f658f9fd37)
1 of 1 volumes failed to delete.


On Tuesday, October 17, 2023 at 07:54:02 AM EDT, Michal Arbet <michal.arbet@ultimum.io> wrote:


Hi,

Use cinder-manage, OR update the host column with the new host name directly in the database (a rough SQL sketch follows the cinder-manage help below).

cinder-manage solution:

(cinder-volume)[root@controller0 /]# cinder-manage volume update_host --help
usage: cinder-manage volume update_host [-h] --currenthost CURRENTHOST --newhost NEWHOST

optional arguments:
  -h, --help            show this help message and exit
  --currenthost CURRENTHOST
                        Existing volume host name in the format host@backend#pool
  --newhost NEWHOST     New volume host name in the format host@backend#pool
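
DB solution (a rough sketch from memory, assuming direct MySQL access to the cinder database; take a backup first and verify the column names against your release):

  # map the old host string to the new one for all non-deleted volumes
  mysql cinder -e "UPDATE volumes \
    SET host = 'api-int.qde4.cloud.ourdomain.com@rbd-1#rbd-1' \
    WHERE host = 'qde4-ctrl2.cloud.ourdomain.com@rbd-1#rbd-1' \
    AND deleted = 0;"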


Kevko
Michal Arbet
Openstack Engineer

Ultimum Technologies a.s.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic

+420 604 228 897 
michal.arbet@ultimum.io
https://ultimum.io



On Mon, 16 Oct 2023 at 22:15, Albert Braden <ozzzo@yahoo.com> wrote:
I tried this in the lab, setting backend_host to "api-int.qde4.ourdomain.com" and my new volumes end up with:

os-vol-host-attr:host | api-int.qde4.cloud.ourdomain.com@rbd-1#rbd-1

And they work fine, but volumes that existed before the change cannot be attached, detached, or deleted. They still have the old setting:

os-vol-host-attr:host | qde4-ctrl2.cloud.ourdomain.com@rbd-1#rbd-1

I tried migrating them to the new host with:

openstack volume migrate --host api-int.qde4.cloud.ourdomain.com@rbd-1#rbd-1 <id>

But the migration never finishes:

| name                           | albert_testvol2                            |
| os-vol-host-attr:host          | qde4-ctrl2.cloud.ourdomain.com@rbd-1#rbd-1 |
| os-vol-mig-status-attr:migstat | starting                                   |

What am I missing? How can I fix my existing volumes after changing the backend_host?
On Wednesday, August 30, 2023 at 02:59:00 AM EDT, Gorka Eguileor <geguileo@redhat.com> wrote:


On 29/08, Albert Braden wrote:
>  What does backend_host look like? Should it match my internal API URL, i.e. api-int.qde4.ourdomain.com?
>      On Tuesday, August 29, 2023 at 10:44:48 AM EDT, Gorka Eguileor <geguileo@redhat.com> wrote:
>

Hi,

If I remember correctly you can set it to anything you want, but for
convenience I would recommend setting it to the hostname that currently
has the most volumes in your system.

Let's say you have 3 hosts:
  qde4-ctrl1.cloud.ourdomain.com
  qde4-ctrl2.cloud.ourdomain.com
  qde4-ctrl3.cloud.ourdomain.com

And you have 100 volumes on ctrl1, 20 on ctrl2, and 10 on ctrl3.

Then it would be best to set it to ctrl1:
[rbd-1]
backend_host = qde4-ctrl1.cloud.ourdomain.com

And then use the cinder-manage command to modify the other 2.
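
For example, to move the ctrl2 volumes over to ctrl1 (the full host@backend#pool string is needed, so adjust it to whatever "os-vol-host-attr:host" shows on your volumes):

  cinder-manage volume update_host \
    --currenthost qde4-ctrl2.cloud.ourdomain.com@rbd-1#rbd-1 \
    --newhost qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1

and then the same again for ctrl3.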

For your information, the value you see as "os-vol-host-attr:host" when
viewing the detailed information of a volume is in the form:
<HostName>@<BackendName>#<PoolName>

In your case:
  <HostName> = qde4-ctrl1.cloud.ourdomain.com
  <BackendName> = rbd-1
  <PoolName> = rbd-1                         

In the RBD case the pool name will always be the same as the backend name.

Cheers,

Gorka.

>  On 29/08, Albert Braden wrote:
> > We’re replacing controllers, and it takes a few hours to build the new controller. We’re following this procedure to remove the old controller: https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html
> >
> >  After that the cluster seems to run fine on 2 controllers, but approximately 1/3 of our volumes can’t be attached to a VM. When we look at those volumes, we see this:
> >
> > | os-vol-host-attr:host          | qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1                            |
> >
> > Ctrl1 is the controller that is being replaced. Is it possible to change the os-vol-host-attr on a volume? How can we work around this issue while we are replacing controllers? Do we need to disable the API for the duration of the replacement process, or is there a better way?
> >
>
> Hi,
>
> Assuming you are running cinder volume in Active-Passive mode (which I
> believe was the only way back in Train), you should hardcode the
> host name in the cinder.conf file to avoid losing access to your
> volumes when the volume service starts on another host.
>
> This is done with the "backend_host" configuration option within the
> specific driver section in cinder.conf.
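>
> For example (a sketch; the section name must match your backend section, "rbd-1" in your case):
>
>   [rbd-1]
>   backend_host = <the host name you want all your volumes to use>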
>
> As for how to change the value of all the volumes to the same host
> value, you can use the "cinder-manage" command:
>
>   cinder-manage volume update_host \
>     --currenthost <current host> \
>     --newhost <new host>
>
> Cheers,
> Gorka.
>
>
>