I am using OpenStack Horizon to create a snapshot of an instance, then downloading that snapshot with the glance image-download command and importing it into the OpenStack B cloud. Good to know that I can use ceph rbd to take a snapshot and import it directly into ceph cluster B (I will run this experiment). Last question: when importing directly with the rbd command, how does the cinder service know, and how will the DB understand, that there is a volume available? (As you mentioned, I need to use the cinder manage command to add entries to the DB.) I will google and see if I can find some help there.

On Wed, Jan 24, 2024 at 1:57 PM Eugen Block <eblock@nde.ag> wrote:
Just to prevent misunderstandings, can you clarify how exactly you create the snapshots? Are we talking about nova snapshots (for example from the dashboard) or rbd snapshots? Nova snaps result in new glance images. Before I make too many assumptions, maybe you could clarify what exactly you're trying to do (commands, results etc.).
I would do it on the rbd side (rough sketch below):

- Stop the VM
- Create an rbd snapshot on cluster A
- Export/import the rbd snap into ceph cluster B --> you could import it directly into the cinder pool on site B and "cinder manage" it, then launch an instance from that volume
- Flatten the rbd image on cluster B to make it independent of the snap on site A
- Remove the snap on site A
- Delete the image on site A
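From memory as well, the export/import and cinder part could look roughly like this (the mon host, cinder host and backend names are placeholders, so please verify against your deployment before running anything):

# on cluster A, with the VM stopped
rbd snap create volumes/volume-<UUID>@migrate
rbd export volumes/volume-<UUID>@migrate - | ssh <mon-host-B> rbd import - volumes/volume-<UUID>

# on cluster B, let cinder adopt the imported image
cinder manage --id-type source-name <cinder-host>@<rbd-backend>#volumes volume-<UUID>

Note that a plain export/import creates a full standalone copy on site B, so the flatten step is only needed if "rbd info" still shows a parent there.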
This is written up from memory, so it might be incomplete or not fully applicable to your case. Please clarify if I'm making wrong assumptions.
Zitat von Satish Patel <satish.txt@gmail.com>:
Thanks for the explanation.
OpenStack A is very old and at its end of life. I am going to burn it to the ground. I just want to move some instances from OpenStack A to OpenStack B (these are all legacy applications and hard to rebuild). Once I move a snapshot to OpenStack B and spin up the VM, I don't need that snapshot hanging around; I can just delete it and keep the glance registry neat and clean.
Do I need to flatten the image in OpenStack A (the old cloud)?
On Wed, Jan 24, 2024 at 1:41 PM Eugen Block <eblock@nde.ag> wrote:
If I understand your workflow correctly, you just need to import the snapshot to cluster B; on cluster B you then flatten the base (glance) image:
rbd flatten glance/<SOME_UUID>
This should remove all references to the parent image from cluster A. Glance doesn't really care about that (as far as I know). You can then create a new snapshot (and protect it) for that image so glance on cluster B can create children:
rbd snap create glance/<SOME_UUID>@snap
rbd snap protect glance/<SOME_UUID>@snap
That should basically be it; now you can launch instances on site B from that imported image. What I don't understand yet is this part: "and how does glance understand [...] and how to delete it from glance?" I thought you wanted a new base image on site B, so why delete it from glance? Or are you referring to the snapshot on site A? If that's the case, you don't need to worry: flattening removes all references to site A, so afterwards you should be able to delete the snapshot on site A.
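To verify, you can check the image info on cluster B after the flatten; if it worked, the "parent:" line should be gone from the output:

rbd info glance/<SOME_UUID>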
Zitat von Satish Patel <satish.txt@gmail.com>:
Hi Eugen,
Could you give me example commands and the process to flatten? Do I need to flatten from the ceph storage side, and how does glance understand that the image is flattened? And how do I delete it from glance?
On Wed, Jan 24, 2024 at 1:06 PM Eugen Block <eblock@nde.ag> wrote:
You can just flatten the image to make it independent from the parent image. After flattening (rbd flatten) you could sparsify it to get some storage capacity back.
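For example (pool and image names are placeholders; rbd sparsify needs a reasonably recent Ceph release, Nautilus or newer if I remember correctly):

rbd flatten volumes/volume-<UUID>
rbd sparsify volumes/volume-<UUID>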
Zitat von Satish Patel <satish.txt@gmail.com>:
Folks,
I have two OpenStack clouds and both have their own Ceph backend storage. I am trying to migrate instances from OpenStack A to OpenStack B.
1. Take a snapshot on A
2. Export the snapshot and import it to B
3. Create an instance on B
4. Delete the snapshot - (I am getting an error because it's in use)
It fails because the volume has a parent reference to that snapshot. How do I remove the reference so it will let me delete the snapshot? The reason I am asking is that I have so many VMs to migrate, and I don't want glance to end up with hundreds of those snapshot entries.
What is the alternative here? I can try qcow2 if that is what it takes to keep things clean.
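For reference, the parent relationship is visible on the rbd side, and I believe something like this would list the children still referencing the glance snapshot (pool and UUID taken from the info output below):

rbd children images/3708f961-fb74-49f1-ab9b-40cf7954abed@snap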
# rbd -p volumes info volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a
rbd image 'volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a':
        size 40 GiB in 5120 objects
        order 23 (8 MiB objects)
        snapshot_count: 0
        id: 5473c827864fed
        block_name_prefix: rbd_data.5473c827864fed
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Fri Jan 19 19:03:25 2024
        access_timestamp: Wed Jan 24 15:51:04 2024
        modify_timestamp: Wed Jan 24 15:52:40 2024
        parent: images/3708f961-fb74-49f1-ab9b-40cf7954abed@snap
        overlap: 40 GiB