Updating Monitor List (rbd section) using Virsh with no downtime
Hello, is there a way to update a monitor list of the rbd section in a virtual machine? We need to add or modify the following entries.

<source protocol='rbd' name='cinder/volume-343766ad-0086-4f8c-a557-bc60ba2210f4' index='1'>
  <host name='172.16.65.164' port='6789'/>
  <host name='172.16.65.165' port='6789'/>
  <host name='172.16.65.166' port='6789'/>
</source>

We want to replace the Monitor network and this means that we have to update every virtual machine with the new Ceph monitor IPs. The only way to replace them is with a live migration (then the live ceph configuration with the new IPs will be applied), but we would like to know if there is a faster way (because we have approximately 200-300 VMs) and some of them are so big that a live migration is impossible (only an offline migration could be done, resulting in a downtime). Another way to refresh the list is detaching the volume, but this only works for non-bootable volumes.

We tried modifying the virsh XML directly (virsh edit ...) and creating a separate XML file and applying it with "virsh define newfile.xml", but the list will not be updated.

Many thanks in advance!
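For context on why "virsh edit" / "virsh define" does not help here: libvirt keeps a live XML for the running guest and a separate persistent definition, and those commands only change the persistent copy, which never reaches the already-running QEMU process and is regenerated by Nova on the next lifecycle operation anyway. A minimal way to compare the two views (the domain name below is a placeholder):

virsh dumpxml instance-00000001 | grep -A5 "protocol='rbd'"              # live XML of the running guest
virsh dumpxml --inactive instance-00000001 | grep -A5 "protocol='rbd'"   # persistent definition that "virsh define" rewrites

The persistent copy is only consulted the next time libvirt starts the domain, which under Nova normally means a hard reboot, cold migration, or a similar lifecycle operation.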
On 18/11/2025 12:26, m.riudalbas@first-colo.net wrote:

Hello,
is there a way to update a monitor list of the rbd section in a virtual machine? We need to add or modify the following entries.
<source protocol='rbd' name='cinder/volume-343766ad-0086-4f8c-a557-bc60ba2210f4' index='1'>
  <host name='172.16.65.164' port='6789'/>
  <host name='172.16.65.165' port='6789'/>
  <host name='172.16.65.166' port='6789'/>
</source>

No, we do not support changing the IP addresses of the Ceph monitors for an OpenStack VM.
The only way to update this requires guest downtime or migrating the VM.
We want to replace the Monitor network and this means that we have to update every virtual machine with the new Ceph monitor IPs.
Yes, that is a known limitation of how this works today.
The only way to replace them is with a live migration (then the live ceph configuration with the new IPs will be applied), but we would like to know if there is a faster way (because we have approximately 200-300 VMs) and some of them are so big that a live migration is impossible (only an offline migration could be done, resulting in a downtime). Another way to refresh the list is detaching the volume, but this only works for non-bootable volumes.
Live migration is the optimal way; failing that, you would need to cold migrate, shelve, or hard reboot the instance to regenerate the XML.
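As a rough sketch of driving that in bulk with the OpenStack CLI (assuming appropriate credentials are sourced, that the client is recent enough for the --live-migration flag, and that the server UUIDs below are placeholders):

# live migrate where possible; the scheduler picks the destination host
openstack server migrate --live-migration <server-uuid>

# for instances that cannot be live migrated, a hard reboot is one of the
# options above for regenerating the libvirt XML, at the cost of a short outage
openstack server reboot --hard <server-uuid>

# example loop over all ACTIVE instances in the project the credentials are scoped to
for id in $(openstack server list --status ACTIVE -f value -c ID); do
    openstack server migrate --live-migration "$id"
done

Older python-openstackclient releases expose this as "openstack server migrate --live <host>" instead, so it is worth checking the installed client version first.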
We tried modifying the virsh XML directly (virsh edit ...) and creating a separate XML file and applying it with "virsh define newfile.xml", but the list will not be updated.
Correct, direct modification of the XML is not supported and, from an upstream point of view, would make the VM unmanageable by Nova. While this might be tempting, it would potentially cause other bugs. I believe the monitor addresses are stored in the Ceph attachment connection info, which is also stored in Nova's block device mapping object in the Nova DB, so this is not the correct approach.
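If it helps to see where that cached data lives, a minimal read-only check against the Nova database could look like the following; the instance UUID is a placeholder, the exact JSON layout of connection_info varies by release, and this is for inspection only, not something to edit by hand:

mysql -D nova -e "SELECT instance_uuid, connection_info FROM block_device_mapping WHERE deleted = 0 AND instance_uuid = '<instance-uuid>'\G"

For an RBD attachment the connection_info JSON normally carries the monitor hosts/ports list, which is why a fresh attachment (or one of the lifecycle operations mentioned above) is needed for the new monitor addresses to be picked up.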
Many thanks in advance!
participants (2)
- m.riudalbas@first-colo.net
- Sean Mooney