[openstack-dev] [Cinder][Driver] Delete snapshot

Avishay Traeger avishay at stratoscale.com
Sun Jun 22 05:41:40 UTC 2014


This is what I thought of as well.  In the rbd driver, when a request comes
in to delete a volume whose backend object still has other objects depending
on it, the driver simply renames the object:
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/rbd.py#L657

There is also code to clean up those renamed objects.
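
To make that pattern concrete, here is a minimal, self-contained sketch of
the rename-and-defer-delete approach (this is not the actual rbd code; the
in-memory "backend" dict and every name in it are invented purely for
illustration):

DEFERRED_PREFIX = "deleted-"

# Fake backend: object name -> set of dependent (clone) names.
backend = {
    "vol-1": {"clone-a"},   # still has a dependent clone, cannot be removed yet
    "vol-2": set(),         # no dependents, safe to remove
}


def delete_volume(name):
    """Honor the Cinder delete API even when the backend object is still in use."""
    if backend[name]:
        # Other objects still depend on this one, so rename it out of the
        # user's namespace instead of failing the API call; a later cleanup
        # pass will remove it once the dependents are gone.
        backend[DEFERRED_PREFIX + name] = backend.pop(name)
    else:
        del backend[name]


def cleanup_deferred_deletions():
    """Periodic task: remove renamed objects whose dependents have disappeared."""
    for name in [n for n in backend if n.startswith(DEFERRED_PREFIX)]:
        if not backend[name]:
            del backend[name]


delete_volume("vol-1")   # renamed, because clone-a still depends on it
delete_volume("vol-2")   # removed immediately
cleanup_deferred_deletions()
print(backend)           # {'deleted-vol-1': {'clone-a'}}

The key point is that the user's delete request always succeeds; the backend
quirk (an object that cannot be removed while clones still reference it) is
hidden behind the rename and a later cleanup pass.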

The point is, Cinder has an API which should be consistent no matter what
storage is being used.  The driver must do whatever is necessary to implement
the API, rather than allowing quirks of the specific storage to show through
to the user.

Thanks,
Avishay


On Thu, Jun 19, 2014 at 8:13 PM, Duncan Thomas <duncan.thomas at gmail.com>
wrote:

> So these are all features that various other backends manage to
> implement successfully.
>
> Your best point of reference might be the ceph code - I believe it
> deals with very similar issues in various ways.
>
> On 19 June 2014 18:01, Amit Das <amit.das at cloudbyte.com> wrote:
> > Hi All,
> >
> > Thanks for clarifying the Cinder behavior w.r.t. a snapshot & its clones,
> > which seem to be independent/decoupled.
> > The current volume & snapshot based validations in Cinder hold true for
> > snapshot & its clones w.r.t. my storage requirements.
> >
> > Our storage is built on top of the ZFS filesystem.
> > The volume -> snapshot -> clone chain that I am referring to maps, in
> > turn, to a ZFS dataset -> ZFS snapshot -> ZFS clone.
> >
> > The best parts of ZFS-based snapshots & clones are:
> >
> > - they are almost instantaneous (i.e. copy-on-write based copies)
> > - they do not consume any additional disk space (initially)
> >
> > A clone initially shares all its disk space with the original snapshot,
> > so its "used" property is initially zero.
> > As changes are made to the clone, it uses more space.
> > The "used" property of the original snapshot does not consider the disk
> > space consumed by the clone.
> >
> > A further optimization (i.e. the cool feature):
> >
> > While creating VM clones, a hypervisor driver can delegate part of its
> > cloning process to the storage driver & hence the overall VM cloning will
> > be very fast.
> >
> >
> >
> >
> > Regards,
> > Amit
> > CloudByte Inc.
> >
> >
> > On Thu, Jun 19, 2014 at 9:16 PM, John Griffith <
> john.griffith at solidfire.com>
> > wrote:
> >>
> >>
> >>
> >>
> >> On Tue, Jun 17, 2014 at 10:50 PM, Amit Das <amit.das at cloudbyte.com>
> wrote:
> >>>
> >>> Hi Stackers,
> >>>
> >>> I have been implementing a Cinder driver for our storage solution &
> >>> am facing issues with the below scenario.
> >>>
> >>> Scenario - When a user/admin tries to delete a snapshot that has
> >>> associated clone(s), an error message/log should be shown to the user
> >>> stating that 'There are clones associated with this snapshot. Hence,
> >>> the snapshot cannot be deleted'.
> >>
> >>
> >> What's the use model of "clones associated with the snapshot"?  What are
> >> these "clones" from a Cinder perspective?  The easy answer is: don't
> >> create them, but I realize you probably have a cool feature or
> >> optimization that you're trying to leverage here.
> >>>
> >>>
> >>> Implementation issues - If the Cinder driver throws an exception, the
> >>> snapshot will have error_deleting status & will not be usable. If the
> >>> Cinder driver logs the error silently, then OpenStack will probably
> >>> mark the snapshot as deleted.
> >>
> >>
> >> So as others point out, from a Cinder perspective this is what I/we
> >> would expect.
> >>
> >> Scott made some really good points, but the point is we do not want to
> >> behave differently for every single driver.  The agreed-upon mission for
> >> Cinder is to provide a consistent API and set of behaviors to end users
> >> regardless of what backend device they're using (in other words, the
> >> backend should remain pretty much invisible to the end-user).
> >>
> >> What do you use the clones of the snapshot for?  Maybe we can come up
> >> with another approach that works and keeps consistency in the API.
> >>
> >>
> >>>
> >>> What is the appropriate procedure that needs to be followed for the
> >>> above use case?
> >>>
> >>> Regards,
> >>> Amit
> >>> CloudByte Inc.
> >>>
> >>
> >>
> >
> >
>
>
>
> --
> Duncan Thomas
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

