[OpenStack] How to add an RBD volume snapshot rollback operation to Cinder

Zuo Changqian dummyhacker85 at gmail.com
Wed Feb 26 06:46:15 UTC 2014


Sorry, Edward, it's been so long. Mail from the list grew day by day and I
didn't know how to handle it... until today, when I finally found a perfect
way to organize all those mails (ha, using Google "Label" and "Filter") and
found your message.

It has been done: I have added a new API to cinder-api and cinder-volume
myself to roll a volume back to a snapshot. cinder-backup is still under
consideration, and I knew about that "bootable volume" bug too, I just
didn't know there was a patch.
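For anyone digging through the archives later: the rollback itself maps onto
librbd's snapshot rollback call. Below is a minimal sketch using the
python-rados/python-rbd bindings; the image and snapshot names follow
Cinder's usual "volume-<uuid>" / "snapshot-<uuid>" convention, but the pool
name and conffile path are placeholder assumptions, and this is not the
actual code of my patch:

```python
def cinder_rbd_names(volume_id, snapshot_id):
    # Cinder's RBD driver names images "volume-<uuid>" and snapshots
    # "snapshot-<uuid>"; pure string helpers, no cluster needed.
    return 'volume-%s' % volume_id, 'snapshot-%s' % snapshot_id


def rollback_volume_to_snapshot(pool, volume_id, snapshot_id,
                                conffile='/etc/ceph/ceph.conf'):
    # Requires a reachable Ceph cluster plus the python-rados and
    # python-rbd bindings, so the imports live inside the function.
    import rados
    import rbd

    image_name, snap_name = cinder_rbd_names(volume_id, snapshot_id)
    with rados.Rados(conffile=conffile) as cluster:
        ioctx = cluster.open_ioctx(pool)
        try:
            with rbd.Image(ioctx, image_name) as image:
                # librbd reverts the image's data to the snapshot contents.
                image.rollback_to_snap(snap_name)
        finally:
            ioctx.close()
```

The real Cinder change also has to manage the volume's status in the
database (e.g. refusing a rollback while the volume is attached), which
this sketch deliberately leaves out.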

But for now I have been dispatched to some load-balancing work (digging
into the L3 agent, the LBaaS agent, and some other things), so it may be a
long while before I come back to Cinder and Ceph.


2014-01-23 20:52 GMT+08:00 Edward Hope-Morley <
edward.hope-morley at canonical.com>:

> Hi Changqian, see comments inline.
>
> On 02/01/14 07:38, Changqian Zuo wrote:
> > Hi,
> >
> > First time mailing to a list.
> >
> > I'm trying to add an RBD volume snapshot rollback operation to Cinder.
> > I've just worked through the WSGI part of the Cinder API code and have
> > read some pieces of cinder-volume. I do not have much time to go
> > through the whole code base (including the Ceph driver) very
> > carefully, so I need some advice.
> >
> > Is adding an Extension Controller the most suitable way to do this?
> > How should I handle the snapshot and volume state transitions, and
> > would that affect other parts of the Cinder code? Which parts of the
> > Cinder code should I pay special attention to?
> >
> > Many thanks.
> >
> > Some background information:
> >
> > We're planning to use Cinder bootable volumes (Ceph RBD as backend) as
> > instance root disks in my company's OpenStack environment: create a
> > volume snapshot from an image, and then use that snapshot to spawn
> > instance root volumes.
> >
> > To back up volume state, I've thought of the cinder-backup service
> > (I'd still need to add in-use backup to Cinder), but it takes too much
> > space (an 80G volume would take at least 248G of space for the base
> > backup image with the x3 Ceph replication ratio). The other way is
> > volume snapshots, but in this case, when restoring volume data, I have
> > to create a new volume from the snapshot, destroy the original
> > instance (there's no API to change an instance's bootable volume), and
> > create a new instance from the new volume.
> If you use the Ceph backup driver with cinder-backup (added
> in Havana), it is capable of doing incremental/differential backups,
> so it only stores the data that is actually in use; e.g. if you have
> an 80G volume with 1G of actual data, the backup volume will only be
> 1G on disk (x the number of replicas).
>
> You can then do a 'restore' which, again, will only transfer the actual
> data.
>
> Unfortunately, there is a bug in cinder-backup (all drivers) whereby
> bootable volumes cannot be restored, or more specifically, volumes
> cannot be restored as bootable. There is currently a patch in progress
> to resolve this, which we hope will land during the I cycle -
> https://review.openstack.org/#/c/51900/
>
> hope that helps,
>
> Ed.
> > The old volume cannot be removed, since a snapshot references it (and
> > the snapshot must be kept). If I do this again and again, there will
> > be many snapshots referencing many different volumes, and I will have
> > to keep those volumes (not used elsewhere), which is a mess.
> >
> >
> >
>
>
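To put rough numbers on the space savings Ed describes, here is a
back-of-the-envelope sketch; the 80G volume size, 1G of live data, and x3
replication are just the example figures from this thread, not measurements:

```python
REPLICAS = 3  # the x3 replication ratio mentioned in the thread


def raw_backup_space_gb(stored_gb, replicas=REPLICAS):
    # Raw cluster space consumed = data actually written to the backup
    # pool, multiplied by the replication factor.
    return stored_gb * replicas


# A naive full copy of an 80G volume stores all 80G of the image:
full_copy_gb = raw_backup_space_gb(80)    # 240 GB raw
# A differential backup of the same volume with 1G of live data
# stores only that 1G:
differential_gb = raw_backup_space_gb(1)  # 3 GB raw
```

Which is the difference between the 240G-plus figure that made cinder-backup
look unusable in the original question and the few gigabytes the Ceph
backup driver actually needs.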