Hi Eugen and Rajat,

Thanks for the feedback! The solution you provided solved the problem, specifically using the cinder-manage command.

After doing more research, I also found that when updating the host attribute via cinder-manage, it's necessary to update the internal service_uuid in the database [1], using cinder-manage volume update_service on the host to which the volumes were moved.

Also, as Rajat pointed out, this works because we use external Ceph, so the Cinder pools themselves are not on the hosts but in Ceph. Therefore, it's safe to update the host attribute logically (in the database only).

For reference, I wrote a blog post [2] showing the full procedure, including the Cinder part. It might help someone with an environment similar to mine.

Thanks again for the responses,
Isaac

[1] https://bugs.launchpad.net/cinder/+bug/1890278
[2] https://isaacvicente.github.io/posts/replacing-control-nodes-with-kolla-ansi...

On Wed, Mar 18, 2026, 07:14 Eugen Block <eblock@nde.ag> wrote:
Oh, I forgot about that. :-D That's definitely preferable compared to my approach with the direct DB manipulation. :-)
Quoting Rajat Dhasmana <rdhasman@redhat.com>:
Hi Isaac,
I'm not sure if the host entry should point to the controller but this might differ between deployments. If you just need a way to update the host entry from one node to another, we have a cinder-manage command to do so.
cinder-manage volume update_host --currenthost CURRENTHOST --newhost NEWHOST
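For example, to repoint all volumes from a decommissioned node to one of the remaining controllers (the host strings below are just placeholders, both in Cinder's host@backend#pool notation):

```shell
# Repoint volumes whose host attribute still references the old node
cinder-manage volume update_host \
    --currenthost old-controller@rbd-1#rbd-1 \
    --newhost new-controller@rbd-1#rbd-1
```

Since this only rewrites database entries, the new host string has to correspond to a backend that actually serves those volumes.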
Thanks Rajat Dhasmana
On Wed, Mar 18, 2026 at 2:01 PM Eugen Block <eblock@nde.ag> wrote:
Hi,
first a word of caution: I have never done this in a production environment, it's just an idea, so please take that with a grain of salt and take a backup of your DB if you're willing to test that.
From my understanding, this is just a database entry, and since your backend is Ceph (RBD) the underlying volume is not "physically" on that old controller.
What I tested in my lab environment was this:
MariaDB [cinder]> select host from volumes where id='2e82b488-1fc3-4712-a1ea-778cfd1ed59d';
+--------------------------+
| host                     |
+--------------------------+
| controller@hdd-ec#hdd-ec |
+--------------------------+
1 row in set (0,001 sec)
MariaDB [cinder]> update volumes set host='controller@rbd2#rbd2' where id='2e82b488-1fc3-4712-a1ea-778cfd1ed59d';
So I just replaced the "host" string with a different string. The volume is still intact and the VM attached to it works (as expected). So in theory, you could just replace the "host" entry with a valid backend. But as I said, there's no guarantee for anything.
What I'm wondering about is: if you have multiple control nodes, how come "host" is a specific control node and not a generic one? (For example, we use "controller", which points to the virtual IP of our OpenStack services.) I thought that with kolla-ansible (which we don't use) there would be a "backend_host" override in each backend section, which would also be some generic entry, for example:
backend_host = rbd:hdd
or
backend_host = rbd:ssd
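A backend section in cinder.conf with such an override might look like this (section, backend, and pool names here are only examples):

```ini
[rbd-hdd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-hdd
rbd_pool = hdd
# Generic host entry, independent of any specific control node
backend_host = rbd:hdd
```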
A different approach could be to "unmanage" that volume, but I don't know if that works if the backend is not available. And it requires the volume to be detached (or at least its status has to show that it is). Then you could unmanage it:
cinder unmanage ac314aa4-9b21-4a41-9c60-9a4ff123a6c5
cinder manage --name imported-volume controller@rbd2#rbd2 volume-ac314aa4-9b21-4a41-9c60-9a4ff123a6c5
This creates a new volume (new ID). That's all I got. :-)
Regards, Eugen
Quoting Isaac Vicente <isaacvicentsocial@gmail.com>:
Hello all,
In the past week I've added a new controller and decommissioned an old one, following the official documentation [1], and I wrote a version of my own [2], specific to my environment. In the section "Removing existing controllers", it's necessary to move the agents and remove the services of the node that will be decommissioned. So far so good, until I realized that a volume has the following property:
os-vol-host-attr:host: <old_node>@rbd-1#rbd-1
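For reference, this value follows Cinder's host@backend#pool format; a minimal sketch of how such a string decomposes (the host name here is a placeholder):

```python
def parse_volume_host(host_attr):
    """Split a Cinder os-vol-host-attr:host value into its parts.

    The format is <host>@<backend>#<pool>; the pool part is optional.
    """
    host, _, rest = host_attr.partition("@")
    backend, _, pool = rest.partition("#")
    return {"host": host, "backend": backend, "pool": pool or None}

print(parse_volume_host("old-controller@rbd-1#rbd-1"))
```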
This means the volume is still assigned to the backend of the removed node. So I tried to migrate it to another backend, but the migration status stayed at "starting" for a long time. In the cinder-scheduler logs:

WARNING cinder.scheduler.host_manager volume service is down. (host: <old_node>@rbd-1)
My guess is that the migration stays in this status because the corresponding backend is down. For context, I'm using Ceph as the Cinder backend. Also, there's no mention of the volume being migrated in the logs on the other controller nodes.
Is there any method to migrate all volumes of this backend without needing to re-deploy the old controller?
Versions:
- OpenStack: Caracal (2024.1)
- OS: Ubuntu 22.04
- kolla-ansible: 18.8.1
[1]
https://docs.openstack.org/kolla-ansible/latest/user/adding-and-removing-hos...
[2]
https://isaacvicente.github.io/posts/replacing-control-nodes-with-kolla-ansi...