[openstack-dev] [Nova][Cinder] Guest Assisted Snapshots

John Griffith john.griffith at solidfire.com
Wed Aug 7 03:11:18 UTC 2013


On Tue, Aug 6, 2013 at 7:25 PM, Russell Bryant <rbryant at redhat.com> wrote:

> On 08/06/2013 06:55 PM, John Griffith wrote:
> >
> >
> >
> > On Tue, Aug 6, 2013 at 4:05 PM, Russell Bryant <rbryant at redhat.com
> > <mailto:rbryant at redhat.com>> wrote:
> >
> >     Greetings,
> >
> >     The following blueprint is targeted at Havana.  I was reading over the
> >     design notes today.  I wanted to check on the status of this as well as
> >     discuss some of the design details.
> >
> >         https://wiki.openstack.org/wiki/Cinder/GuestAssistedSnapshotting
> >
> https://blueprints.launchpad.net/nova/+spec/qemu-assisted-snapshots
> >
> >     As a quick overview, the purpose of this is to provide the ability to
> >     do volume snapshots for certain volume types that do not support it
> >     internally, such as the GlusterFS or NFS drivers.
> >
> >     Some comments/questions ...
> >
> >     On the Nova side, the wiki page lists adding an API to snapshot all
> >     attached volumes at once.  This seems fine, but I would personally put
> >     it at a lower priority than just making basic snapshots work.  The Nova
> >     patch I've seen come by so far [1] was for this API, but like I said, I
> >     would just come back to this once regular snapshots work.
> >
> >
> > +1 from me on this for sure
> >
> >
> >     The page also indicates that a snapshot request through the Cinder API
> >     will only work if the volume is not attached.  That seems fairly
> >     undesirable.  Can we try to address that with the first pass?  It seems
> >     like we could do something like:
> >
> >     on the cinder side:
> >
> >         cinder snapshot API
> >             if snapshot requires guest assist while in use, and is in use:
> >                 call nova's guest assisted snapshot API
> >             else:
> >                 do snapshot in cinder
> >
> >     on the nova side:
> >
> >         nova's new guest assisted snapshot API
> >             if volume type is a local file:
> >                 do local magic to create a snapshot and call
> >                 the new create-snapshot-metadata API call in cinder
> >             else:
> >                 do cinder API call to do a snapshot, but potentially
> >                 adding some guest assistance here (to get the filesystem
> >                 in a consistent state first, for example)
> >
> >
> > Part of this reminds me of my whole debate about mixing shared FS storage
> > into Cinder to begin with.  The capabilities are different, and I think
> > mixing around this "if this: do nova-xxxx; else: do cinder-xxx" results
> > in a poor end-user experience, to say the least.
> >
> > My proposal would be to change a couple of things here:
> >
> > 1. As Russell suggests leave the API calls in Cinder.  I don't see a
> > strong reason why we can't have a "nova" driver to send requests to
> > Compute for things like this.  Even if implementation wise we're still
> > looking at things spread between the two (which I still don't find very
> > appealing) at least from an end-users perspective it's not so confusing
> > as to what's going on.
> >
> > 2. Consider, rather than using the existing "cinder snapshot" command,
> > introducing something unique for this special case.  This would give the
> > ability to do what we want here and would be considered acceptable in
> > terms of the cinder minimum qualifications requirement.  I don't care
> > what it's called, "cinder snapshot-share" or whatever, but it seems like
> > it's a different semantic so should be a different call.
>
> I buy keeping a single API for snapshots (Cinder), but why should it be
> a new Cinder API?  Is it really different from the end user's
> perspective?  It's obviously wildly different on the backend.  It
> requires some cooperation from nova, but it's all done with the VM still
> running AFAIK.
>

So maybe that last part isn't all that necessary, but part of the reason I
mention it has to do with some history.  I'm pretty adamant about
behavior consistency and expectations regardless of the backend (a somewhat
unpopular stance with vendors).  Part of the reason for this suggestion is
the fact that I *believe* some of these back-ends actually require the
volume be attached to an instance in order to perform the snapshot.  Most
of the block devices don't care and do snapshots strictly internally on the
device itself.

I may be wrong about how this shakes out so I'm perfectly happy to be wrong
and not require an extension or new API call here.
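For what it's worth, Russell's smart-redirect dispatch could be sketched
roughly as below.  This is just an illustration; the backend/client objects
and names like `requires_guest_assist` and `guest_assisted_snapshot` are
stand-ins, not real Cinder or Nova interfaces:

```python
# Hypothetical sketch of the "smart redirect" in the Cinder snapshot API.
# All classes here are stand-ins to illustrate the branching, not real code.

class FileBackend:
    """Stand-in for a file-based driver (GlusterFS/NFS)."""
    requires_guest_assist = True

    def create_snapshot(self, volume_id):
        return ("cinder", volume_id)


class BlockBackend:
    """Stand-in for a block driver that snapshots internally on the device."""
    requires_guest_assist = False

    def create_snapshot(self, volume_id):
        return ("cinder", volume_id)


class NovaClient:
    """Stand-in for the proposed guest-assisted snapshot call into Nova."""

    def guest_assisted_snapshot(self, volume_id):
        return ("nova", volume_id)


def create_snapshot(volume, backend, nova):
    """Single Cinder entry point; redirect to Nova only when needed."""
    if backend.requires_guest_assist and volume["status"] == "in-use":
        # Attached file-backed volume: the hypervisor has to quiesce and
        # snapshot the disk, so hand the request off to Nova.
        return nova.guest_assisted_snapshot(volume["id"])
    # Everything else stays entirely inside Cinder.
    return backend.create_snapshot(volume["id"])
```

The end user still calls one Cinder API either way; only the backend path
differs.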

>
> >
> >     Similarly to create, I think having delete work through Nova, but not
> >     Cinder isn't ideal.  Can we address that as well with a similar
> >     smart-redirect approach?
> >
> >
> > Again, agree on this... having the mix between the two seems like it
> > will just create confusion IMO.  The same strategy as suggested above
> > could work here as well.
> >
> >
> >     My final comment on all of this is that I'm not a huge fan of having
> >     snapshot create/delete in both the nova and cinder APIs.  I can't think
> >     of a better way to accomplish this, though.  We don't have a nova API
> >     only exposed internally to the deployment, and I don't think this
> >     feature is enough to warrant adding one.
> >
> >
> > So the internal nova API direction is more along the lines of what I was
> > suggesting.  To be honest, I suspect there's the possibility of more
> > things fitting here in the future, but I see your point.  The problem I
> > have is: if these features aren't enough to justify it, then are these
> > features worth enough to justify the confusion of using multiple
> > services/APIs to do a volume snapshot?
>
> Yeah, you're right on this.  Let's do it.
>
> --
> Russell Bryant
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Thanks for the feedback on this,
John


More information about the OpenStack-dev mailing list