# rbd children images/3708f961-fb74-49f1-ab9b-40cf7954abed
volumes/volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a

# rbd du volumes/volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a
NAME                                         PROVISIONED  USED
volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a       40 GiB  8.1 GiB   <-- 8.1 GiB

# rbd flatten volumes/volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a
Image flatten: 100% complete...done.

# rbd du volumes/volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a
NAME                                         PROVISIONED  USED
volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a       40 GiB  8.5 GiB   <-- 8.5 GiB

# openstack image delete 3708f961-fb74-49f1-ab9b-40cf7954abed
#
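For reference, the sequence that worked above generalizes roughly as follows (a sketch using the names from this thread; once no clones remain, openstack image delete performs the unprotect and snapshot removal itself, so the last three rbd commands are only a manual fallback if Glance still reports the image as in use):

# rbd children images/3708f961-fb74-49f1-ab9b-40cf7954abed           <-- list clones still referencing the image's snapshot
# rbd flatten volumes/volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a     <-- repeat for every clone listed
# rbd snap unprotect images/3708f961-fb74-49f1-ab9b-40cf7954abed@snap
# rbd snap rm images/3708f961-fb74-49f1-ab9b-40cf7954abed@snap
# rbd rm images/3708f961-fb74-49f1-ab9b-40cf7954abed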
Hi Abhishek,

Do you think this error means I have a protected snapshot?

# rbd snap ls images/3708f961-fb74-49f1-ab9b-40cf7954abed
SNAPID  NAME  SIZE    PROTECTED  TIMESTAMP
    55  snap  40 GiB  yes        Wed Jan 17 13:41:35 2024

On Wed, Jan 24, 2024 at 2:05 PM Satish Patel <satish.txt@gmail.com> wrote:

I did apply this patch manually with the patch command - https://review.opendev.org/c/openstack/glance_store/+/896447

I am still getting errors. Do I need to do anything else, such as recreating the instance or the snapshot?

2024-01-24 19:03:53.165 47 WARNING glance_store._drivers.rbd [None req-c4d10e8a-1bc3-496c-8df1-6156582d760f 93e2d918bc7a4d92a93df927743d00ff 08cae850a5bb47d998da180a7f0e2660 - - default default] Snap Operating Exception [errno 16] RBD image is busy (error unprotecting snapshot b'3708f961-fb74-49f1-ab9b-40cf7954abed'@b'snap') Snapshot is in use.: rbd.ImageBusy: [errno 16] RBD image is busy (error unprotecting snapshot b'3708f961-fb74-49f1-ab9b-40cf7954abed'@b'snap')
2024-01-24 19:03:53.171 47 WARNING glance.api.v2.images [None req-c4d10e8a-1bc3-496c-8df1-6156582d760f 93e2d918bc7a4d92a93df927743d00ff 08cae850a5bb47d998da180a7f0e2660 - - default default] Image 3708f961-fb74-49f1-ab9b-40cf7954abed could not be deleted because it is in use: The image cannot be deleted because it is in use through the backend store outside of Glance.: glance_store.exceptions.InUseByStore: The image cannot be deleted because it is in use through the backend store outside of Glance.

On Wed, Jan 24, 2024 at 1:54 PM Abhishek Kekane <akekane@redhat.com> wrote:

Hi Satish,

This needs to be applied to glance_store, not to glance.

Thanks & Best Regards,
Abhishek Kekane

On Thu, Jan 25, 2024 at 12:18 AM Satish Patel <satish.txt@gmail.com> wrote:

Hi Abhishek,

I found a patch here - https://review.opendev.org/c/openstack/glance_store/+/896447
Can I simply apply this patch to my existing Glance installation and see whether it works?

On Wed, Jan 24, 2024 at 12:07 PM Abhishek Kekane <akekane@redhat.com> wrote:

Hi Satish,

Which version of OpenStack are you using?

In Bobcat (glance_store version 4.6.0) we added a feature [1], RBD trash, to cover the issue you described.
Pre-Bobcat (glance_store version < 4.6.0), deletion of the snapshot will fail with the error you described.

Thanks & Best Regards,
Abhishek Kekane

On Wed, Jan 24, 2024 at 10:26 PM Satish Patel <satish.txt@gmail.com> wrote:

Folks,

I have two OpenStack clouds, each with its own Ceph backend storage. I am trying to migrate instances from OpenStack A to OpenStack B:

1. Take a snapshot on A
2. Export the snapshot and import it into B
3. Create an instance on B
4. Delete the snapshot - (I am getting an error because it is in use)

The image cannot be deleted because the new volume still holds a parent reference to it. How do I remove that reference so the snapshot can be deleted? I am asking because I have many VMs to migrate and don't want Glance to accumulate hundreds of these snapshot images.

What is the alternative here? I can try qcow2 if that is the cleanest way to keep things tidy.

# rbd -p volumes info volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a
rbd image 'volume-f2b2aec2-cc57-49e5-aca1-54b5a7ee9f3a':
        size 40 GiB in 5120 objects
        order 23 (8 MiB objects)
        snapshot_count: 0
        id: 5473c827864fed
        block_name_prefix: rbd_data.5473c827864fed
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Fri Jan 19 19:03:25 2024
        access_timestamp: Wed Jan 24 15:51:04 2024
        modify_timestamp: Wed Jan 24 15:52:40 2024
        parent: images/3708f961-fb74-49f1-ab9b-40cf7954abed@snap
        overlap: 40 GiB
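Regarding the qcow2 route mentioned above: one way to avoid the parent reference entirely is to move the image between clouds as a file rather than letting Cinder clone it from RBD. A rough sketch follows (the image ID and image name are placeholders; it relies on the Cinder RBD driver only doing a copy-on-write clone when the Glance image is raw and stored in the same Ceph cluster, while a qcow2 image is downloaded and converted into a standalone volume with no parent link):

# openstack image save --file migrated.raw <image-id-on-cloud-A>
# qemu-img convert -f raw -O qcow2 migrated.raw migrated.qcow2
# openstack image create --disk-format qcow2 --container-format bare --file migrated.qcow2 <image-name-on-cloud-B>

The trade-off is that each volume created from the image on cloud B is a full convert/import rather than an instant clone, so volume creation is slower but the image can be deleted cleanly afterwards.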