[cinder][nova] Migrating servers' root block devices from a cinder backend to another

Tobias Urdin tobias.urdin at binero.se
Thu Jan 30 08:34:15 UTC 2020


We did something similar recently: in our old platform all instances 
were booted from Cinder volumes (with "Delete on terminate" set).

So we added our new Ceph storage as a backend to the old platform, then 
removed the instances (after updating delete_on_termination to 0 in the 
Nova DB so the volumes were left behind).
Then we issued a retype so that cinder-volume performed a `dd` of each 
volume from the old storage to the new one.
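
In case it helps, here is a minimal Python sketch of that sequence, 
assuming a MySQL Nova database, pymysql and python-cinderclient; the 
hostnames, credentials, volume type name and IDs are all placeholders.

# Sketch only: flip delete_on_termination, delete the server through the
# API (not shown), then retype the volume so cinder-volume copies it to
# the new (Ceph) backend. All names and credentials are placeholders.
import pymysql
from keystoneauth1 import loading, session
from cinderclient import client as cinder_client

VOLUME_ID = "11111111-2222-3333-4444-555555555555"  # placeholder

# 1) Make sure deleting the instance will not delete the boot volume
#    (this is the Nova DB tweak mentioned above).
db = pymysql.connect(host="nova-db.example.com", user="nova",
                     password="secret", database="nova")
with db.cursor() as cur:
    cur.execute(
        "UPDATE block_device_mapping SET delete_on_termination = 0 "
        "WHERE volume_id = %s AND deleted = 0",
        (VOLUME_ID,),
    )
db.commit()

# 2) After the instance is deleted, retype the now-detached volume.
loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="https://old-cloud.example.com:5000/v3",
    username="admin", password="secret",
    project_name="admin", user_domain_name="Default",
    project_domain_name="Default",
)
cinder = cinder_client.Client("3", session=session.Session(auth=auth))

# "ceph" is a volume type that points at the new backend; the "on-demand"
# policy is what makes Cinder copy the data between backends.
cinder.volumes.retype(VOLUME_ID, "ceph", "on-demand")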

We then synced networks/subnets/security groups to the new platform, 
started the instances with the same fixed IPs, and moved the floating 
IPs over.
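
For the addressing part, a rough openstacksdk sketch is below; the cloud 
name, network/subnet/security group names and IP addresses are all 
placeholders, and creating a floating IP with a specific address needs 
admin rights and a free address on the new platform.

# Sketch only: recreate the instance's addressing on the new platform.
import openstack

conn = openstack.connect(cloud="new-cloud")  # clouds.yaml entry (placeholder)

net = conn.network.find_network("prod-net")
subnet = conn.network.find_subnet("prod-subnet")
secgroup = conn.network.find_security_group("web-sg")

# Port carrying the instance's original fixed IP.
port = conn.network.create_port(
    network_id=net.id,
    fixed_ips=[{"subnet_id": subnet.id, "ip_address": "10.0.0.15"}],
    security_group_ids=[secgroup.id],
)

# Recreate the floating IP with the same address and attach it to the port.
fip = conn.network.create_ip(
    floating_network_id=conn.network.find_network("public").id,
    floating_ip_address="203.0.113.15",
    port_id=port.id,
)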

Since you only have to swap storage, you should experiment with powering 
off the instances and trying a migrate of the volume, but I suspect you 
will need to either remove the instance or do some really nasty database 
operations.
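
If you try that experiment, the bare migrate call looks roughly like the 
sketch below with python-cinderclient; the credentials, volume ID and 
target backend string (host@backend#pool) are placeholders, and whether 
Cinder accepts it for an in-use or force-detached volume depends on the 
drivers involved.

# Sketch only: migrate a volume directly to another backend.
from keystoneauth1 import loading, session
from cinderclient import client as cinder_client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="https://old-cloud.example.com:5000/v3",
    username="admin", password="secret",
    project_name="admin", user_domain_name="Default",
    project_domain_name="Default",
)
cinder = cinder_client.Client("3", session=session.Session(auth=auth))

vol = cinder.volumes.get("11111111-2222-3333-4444-555555555555")

# Target backend in host@backend#pool form (placeholder).
cinder.volumes.migrate_volume(vol, "newstorage@ceph#ceph",
                              force_host_copy=False, lock_volume=False)

# The result can be followed via the volume's migration_status field.
print(cinder.volumes.get(vol.id).migration_status)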

I would suggest always going through the API and recreating the instance 
from the migrated volume instead of changing things in the DB.
We had to update delete_on_termination in the DB, but that was pretty 
trivial (and I think there is a spec, not yet implemented, that will 
allow doing that from the API).
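
A minimal openstacksdk sketch of that "recreate through the API" step 
could look like this; the cloud name, flavor, port and volume IDs are 
placeholders, and delete_on_termination is set explicitly so there is 
no DB edit needed afterwards.

# Sketch only: boot a replacement server from the migrated volume.
import openstack

conn = openstack.connect(cloud="new-cloud")  # clouds.yaml entry (placeholder)

flavor = conn.compute.find_flavor("m1.medium")
port_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"    # port with the old fixed IP
volume_id = "11111111-2222-3333-4444-555555555555"  # the migrated root volume

server = conn.compute.create_server(
    name="vm-web01",
    flavor_id=flavor.id,
    networks=[{"port": port_id}],
    # Boot from the existing volume instead of an image.
    block_device_mapping=[{
        "boot_index": 0,
        "uuid": volume_id,
        "source_type": "volume",
        "destination_type": "volume",
        "delete_on_termination": False,
    }],
)
conn.compute.wait_for_server(server)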

On 1/29/20 9:54 PM, Jean-Philippe Méthot wrote:
> Hi,
>
> We have several hundred VMs that were built on Cinder block devices 
> as root drives, backed by a SAN. Now we want to change their backend 
> from the SAN to Ceph.
> We can shut down the VMs, but we will not destroy them. I am aware 
> that there is a cinder migrate volume command to change a volume’s 
> backend, but it requires that the volume be completely detached. 
> Forcing a detached state on that volume does let the volume migration 
> take place, but the volume’s path in Nova block_device_mapping 
> doesn’t update, for obvious reasons.
>
> So, I am considering forcing the volumes to a detached status in 
> Cinder and then manually updating the Nova DB block_device_mapping 
> entry for each volume so that the VM can boot back up afterwards.
> However, before I start toying with the database and accidentally 
> break stuff, has anyone else ever done something similar? Got any tips 
> or hints on how best to proceed?
>
> Jean-Philippe Méthot
> Openstack system administrator
> Administrateur système Openstack
> PlanetHoster inc.
>
