[cinder][nova] Migrating servers' root block devices from a cinder backend to another

Tobias Urdin tobias.urdin at binero.se
Thu Jan 30 09:49:23 UTC 2020


Another approach would be to export the data to Glance, download the 
resulting image, and then upload it to the destination.
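
If you go that route, the flow is: upload the root volume to an image in 
the source cloud (on the CLI, for example, `openstack image create 
--volume <volume> <image-name>`), download that image, and upload it into 
the destination Glance. Below is a minimal openstacksdk sketch of the 
download/upload half; the clouds.yaml entries "old" and "new", the image 
name and the disk format are all assumptions for illustration:

    import openstack

    old = openstack.connect(cloud="old")   # source cloud (assumed name)
    new = openstack.connect(cloud="new")   # destination cloud (assumed name)

    # Fetch the image that was created from the root volume.
    image = old.image.find_image("exported-root")
    data = old.image.download_image(image)     # returns the image bytes
    with open("exported-root.raw", "wb") as f:
        f.write(data)

    # Upload it to the destination Glance; waits until the image is active.
    new.create_image(
        "exported-root",
        filename="exported-root.raw",
        disk_format="raw",
        container_format="bare",
        wait=True,
    )

For anything large you would want to stream the download instead of 
holding the whole image in memory.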

There is no ready-made tool that I know of. We used the openstacksdk to 
simply recreate the steps we did on the CLI:
create all the necessary resources on the other side, create new 
instances from the migrated volumes, and set a fixed IP on the Neutron 
port to keep the same IP address.
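
For reference, here is a minimal openstacksdk sketch of those last two 
steps on the destination cloud: create a Neutron port pinned to the old 
fixed IP and boot a server from the already-migrated volume. The cloud 
name, network/subnet/flavor names, address and volume UUID are all 
placeholders:

    import openstack

    conn = openstack.connect(cloud="new")  # hypothetical clouds.yaml entry

    network = conn.network.find_network("prod-net")    # placeholder names
    subnet = conn.network.find_subnet("prod-subnet")
    flavor = conn.compute.find_flavor("m1.medium")

    # Pre-create the port so the instance keeps its original address.
    port = conn.network.create_port(
        network_id=network.id,
        fixed_ips=[{"subnet_id": subnet.id, "ip_address": "10.0.0.15"}],
    )

    # Boot from the migrated volume: no image, boot_index 0 is the volume.
    server = conn.compute.create_server(
        name="migrated-instance",
        flavor_id=flavor.id,
        networks=[{"port": port.id}],
        block_device_mapping=[{
            "boot_index": 0,
            "uuid": "<migrated-volume-uuid>",   # fill in the real volume ID
            "source_type": "volume",
            "destination_type": "volume",
            "delete_on_termination": False,
        }],
    )
    conn.compute.wait_for_server(server)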

On 1/30/20 9:43 AM, Tony Pearce wrote:
> I want to do something similar soon and don't want to touch the DB (I 
> experimented with cloning the "controller" and it did not achieve the 
> desired outcome).
>
> Is there a way to export an instance from OpenStack, for example as a 
> script that could re-create it on another OpenStack like-for-like? I 
> guess this assumes that the instance is Linux-based and has cloud-init 
> enabled.
>
>
> On Thu, 30 Jan 2020 at 16:39, Tobias Urdin <tobias.urdin at binero.se 
> <mailto:tobias.urdin at binero.se>> wrote:
>
>     We did something similar recently: in an old platform, all
>     instances were booted from Cinder volumes (with "Delete on
>     terminate" set).
>
>     So we added our new Ceph storage to the old platform and deleted
>     the instances (after updating delete_on_termination to 0 in the
>     Nova DB so the volumes survived).
>     Then we issued a retype, so cinder-volume performed a `dd` of each
>     volume from the old storage to the new one.
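
A minimal sketch of that retype step with openstacksdk, assuming a 
release that exposes retype_volume on the block_storage proxy; the cloud 
name, volume name and volume type are placeholders, and the CLI 
equivalent is `openstack volume set --type <new-type> --retype-policy 
on-demand <volume>`:

    import openstack

    conn = openstack.connect(cloud="old")  # hypothetical clouds.yaml entry

    volume = conn.block_storage.find_volume("instance-root-volume")

    # "on-demand" allows Cinder to migrate the data to the new backend
    # (the copy between backends described above) rather than refuse.
    conn.block_storage.retype_volume(
        volume,
        new_type="ceph",               # placeholder type on the new backend
        migration_policy="on-demand",
    )
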
>
>     We then synced networks/subnets/security groups, started the
>     instances with the same fixed IPs, and moved the floating IPs to
>     the new platform.
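
Re-creating a floating IP with the same address on the destination is 
also possible, though requesting a specific floating_ip_address normally 
needs admin rights, since Neutron otherwise picks one itself. A hedged 
sketch with placeholder names:

    import openstack

    conn = openstack.connect(cloud="new")  # hypothetical clouds.yaml entry

    ext_net = conn.network.find_network("public")             # placeholder
    port = conn.network.find_port("migrated-instance-port")   # placeholder

    # Specifying floating_ip_address is usually restricted by policy.
    fip = conn.network.create_ip(
        floating_network_id=ext_net.id,
        floating_ip_address="203.0.113.42",  # address from the old platform
        port_id=port.id,
    )
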
>
>     Since you only have to swap storage, you could experiment with
>     powering off the instances and trying a volume migrate, but I
>     suspect you will need to either remove the instances or do some
>     really nasty database operations.
>
>     I would suggest always going through the API and recreating the
>     instance from the migrated volume instead of changing things in the
>     DB. We had to update delete_on_termination in the DB, but that was
>     pretty trivial (and I even think there is a spec, not yet
>     implemented, that will allow doing that through the API).
>
>     On 1/29/20 9:54 PM, Jean-Philippe Méthot wrote:
>>     Hi,
>>
>>     We have several hundred VMs which were built on Cinder block
>>     devices as root drives, backed by a SAN. Now we want to change
>>     their backend from the SAN to Ceph.
>>     We can shut down the VMs but we will not destroy them. I am aware
>>     that there is a cinder migrate volume command to change a
>>     volume’s backend, but it requires that the volume be completely
>>     detached. Forcing a detached state on
>>     that volume does let the volume migration take place, but the
>>     volume’s path in Nova’s block_device_mapping doesn’t update, for
>>     obvious reasons.
>>
>>     So, I am considering forcing the volumes to a detached status in
>>     Cinder and then manually updating the Nova DB
>>     block_device_mapping entry for each volume so that the VMs can
>>     boot back up afterwards.
>>     However, before I start toying with the database and accidentally
>>     break something, has anyone else ever done something similar? Got
>>     any tips or hints on how best to proceed?
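
For reference, the "forcing a detached status" part is a Cinder state 
reset; it only changes what Cinder believes about the volume and leaves 
Nova's block_device_mapping untouched, which is exactly the gap being 
asked about. A sketch of that reset using the raw volume action API 
through the SDK's adapter (cloud and volume names are placeholders; 
handle with care):

    import openstack

    conn = openstack.connect(cloud="old")  # hypothetical clouds.yaml entry

    volume = conn.block_storage.find_volume("instance-root-volume")

    # os-reset_status rewrites Cinder's view of the volume state only;
    # the instance's block_device_mapping entry in Nova is not updated.
    conn.block_storage.post(
        f"/volumes/{volume.id}/action",
        json={"os-reset_status": {"status": "available",
                                  "attach_status": "detached"}},
    )
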
>>
>>     Jean-Philippe Méthot
>>     Openstack system administrator
>>     Administrateur système Openstack
>>     PlanetHoster inc.
>>
>>
>>
>>
>
