Hi, OpenStack Community.
I'm part of the StarlingX community, and our starlingx-openstack distribution currently delivers the 2024.1 (Caracal) release. We are facing an issue when using Cinder with the Ceph RBD backend: volumes created from Glance images leave behind protected
snapshots and *.deleted volumes in the cinder-volumes pool after deletion, and these remnants are not cleaned up automatically, even though I have enabled deferred deletion and garbage collection. I'm writing to explain the issue
and get feedback on the Launchpad bug [1] I have opened against Cinder.
Environment:
conf:
  cinder:
    backend_defaults:
      rbd_flatten_volume_from_snapshot: True
      enable_deferred_deletion: True
      deferred_deletion_purge_interval: 10
    ceph:
      image_volume_cache_enabled: False
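These overrides can be double-checked against the rendered configuration inside the running cinder-volume pod; the namespace, deployment name, and config path below are from my setup and may differ elsewhere:
kubectl -n openstack exec deploy/cinder-volume -- \
  grep -E 'rbd_flatten_volume_from_snapshot|enable_deferred_deletion|deferred_deletion_purge_interval|image_volume_cache_enabled' \
  /etc/cinder/cinder.conf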
Issue description:
1) Create one or more volumes from a Glance image.
2) Cinder creates a protected snapshot of the source image and then clones from it (expected, since Ceph requires a snapshot to be protected before it can be cloned).
3) Delete the volume(s) via the OpenStack CLI (rough reproduction commands below).
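For reference, the rough sequence I use to reproduce this (the volume name, size, and image UUID are placeholders):
openstack volume create --size 100 --image <glance-image-uuid> test-vol
openstack volume delete test-vol
rbd ls -l -p cinder-volumes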
After deletion, the corresponding RBD image and its snapshot are not cleaned up:
NAME                                     SIZE     PARENT  FMT  PROT  LOCK
<uuid>.deleted                           100 GiB          2
<uuid>.deleted@<clone_snap>.clone_snap   1 GiB            2    yes
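In case it helps with triage, the leftover snapshot can also be inspected for remaining child clones (placeholder names below); as far as I know, a snapshot that still has children cannot be unprotected, which may be related:
rbd children -p cinder-volumes <uuid>.deleted@<clone_snap>.clone_snap
rbd info -p cinder-volumes <uuid>.deleted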
Even with the garbage collection features enabled (enable_deferred_deletion, rbd_flatten_volume_from_snapshot, and deferred_deletion_purge_interval), these protected snapshots and deleted volumes
persist indefinitely unless I clean them up manually:
- rbd snap unprotect <volume>.deleted@<snapshot> -p cinder-volumes
- rbd snap rm <volume>.deleted@<snapshot> -p cinder-volumes
- rbd rm <volume>.deleted -p cinder-volumes
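The same three commands can be batched with a small script along these lines; this is only a sketch, assuming every leftover follows the <uuid>.deleted naming, lives in the cinder-volumes pool, and that jq is available (worth reviewing the image list before running it):
# List leftover *.deleted images, remove their snapshots, then the images.
# Note: unprotect will fail if a snapshot still has child clones.
for img in $(rbd ls -p cinder-volumes | grep '\.deleted$'); do
  for snap in $(rbd snap ls -p cinder-volumes "$img" --format json | jq -r '.[].name'); do
    rbd snap unprotect -p cinder-volumes "$img@$snap"
    rbd snap rm -p cinder-volumes "$img@$snap"
  done
  rbd rm -p cinder-volumes "$img"
done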
Here are the logs that the Cinder pod shows every 10 seconds while trying to delete the volume:
2025-05-28 19:44:48.079 8 INFO cinder.volume.drivers.rbd [-] Purging trash for backend 'ceph-store'
2025-05-28 19:44:58.076 8 INFO cinder.volume.drivers.rbd [-] Purging trash for backend 'ceph-store'
2025-05-28 19:45:08.081 8 INFO cinder.volume.drivers.rbd [-] Purging trash for backend 'ceph-store'
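As I understand it, that message comes from the periodic deferred-deletion task, which only empties the RBD trash, so one data point that may help is whether the leftovers ever reach the trash at all:
# If the trash is empty while the *.deleted images still show up in the
# regular listing, the purge task has nothing it can act on.
rbd trash ls -p cinder-volumes
rbd ls -p cinder-volumes | grep '\.deleted$'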
Questions:
- Is this behavior expected in Cinder/Ceph integrations?
- Shouldn't Cinder unprotect and delete the snapshot when the last volume depending on it is removed?
Any insights or best practices would be greatly appreciated.
Thanks in advance,
Vinicius Lobo
StarlingX contributor
[1] https://bugs.launchpad.net/cinder/+bug/2114853