[Cinder] Odd Volume and Volume Snapshot Dependencies

Sean McGinnis sean.mcginnis at gmx.com
Mon Apr 15 13:00:46 UTC 2019


On Fri, Apr 12, 2019 at 07:44:15PM +0000, Jeremy Houser wrote:
> Currently running python-openstackclient functional tests against an
> openstack deployment. Ran into some odd functionality whilst
> manipulating volumes and snapshots in openstack via cli and gui.
>
> I create a volume, create a snapshot of that volume, and then create a
> volume from that snapshot. I can't delete the snapshot until I delete the
> volume that I spawned using that snapshot. I understand the rule that
> "a volume can't be deleted until the snapshots dependent on that volume
> are deleted first", but why can't a volume snapshot be deleted while
> volumes spawned from it still exist? Is this intentional? I am working
> with Cinder 3.27.
>
> Jeremy
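
For anyone following along, the sequence described above looks roughly like
this with the openstack CLI (volume and snapshot names here are just
placeholders, and the exact sizes will depend on your setup):

  openstack volume create --size <size> vol1
  openstack volume snapshot create --volume vol1 snap1
  openstack volume create --snapshot snap1 --size <size> vol2

  # this is the step that fails until vol2 is deleted first
  openstack volume snapshot delete snap1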

I think it may depend a little on the storage backend you are using through
Cinder. Many storage systems keep an internal dependency between a snapshot
and the volumes created from it. Some backends are able to handle this
transparently and mark the snapshot as deleted, even though they can't
actually remove it until the volume "using" it is also deleted.

This is an optimization that the storage devices do. Many are able to very
quickly create a volume from snapshot because it is really just a metadata
operation. Then any new writes to the new volume are really just a
copy-on-write operation or something similar. That is great for speed of
creation and for optimizing space consumption, but it does create a hard
dependency on that source snapshot.

If the backend you are using is not able to handle marking that source snapshot
as deleted while it is still in use (with actual cleanup happening later), then
a workaround for you might be a multi-step process: create a volume from the
snapshot, clone that new volume to get a completely independent one, then
delete the created-from-snapshot volume. Kind of a hacky process, but I believe
it should get around any backend storage restrictions.
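
Roughly, with the openstack CLI, that would be something like the following
(names are placeholders, and the size/flags will depend on your environment):

  # create a temporary volume from the snapshot
  openstack volume create --snapshot <snapshot-id> --size <size> temp-vol

  # clone it into a fully independent volume
  openstack volume create --source temp-vol --size <size> independent-vol

  # drop the intermediate volume; the snapshot should then be deletable
  openstack volume delete temp-vol
  openstack volume snapshot delete <snapshot-id>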

Sean


