[Openstack] snapshot question

Brian Lamar brian.lamar at rackspace.com
Sun Jun 26 19:27:45 UTC 2011


(There is a summary at the bottom!)

Jay/Kei/Everyone,

I thought the exact same thing but I had completely forgotten to respond. 

The term 'snapshot', as it is currently being used, is IMO incorrect and/or misleading. Perhaps it's the curse of knowledge, but I equate anything called a 'snapshot' with an LVM snapshot. A snapshot is taken (either automatically or manually) because you want to preserve the contents of an image/system at that exact moment in time.

For example, you snapshot a system before doing an upgrade so that rollback is as easy as rebooting from the base image. In this case the snapshot is nothing but a temporary disk buffer, thrown away once you either decide the upgrade worked or roll back.

Snapshots, in LVM, are not meant to be kept around forever, and will 'expire' once the diff has reached a predetermined size. For example, you might create an LVM snapshot with 1G of space. Writes continue to go to the base volume, but each block about to be overwritten is first copied into the snapshot so the original contents are preserved. Once more than 1G of blocks have changed, the snapshot is invalidated and only the base volume remains.
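To make that concrete, here's a rough shell sketch of the LVM lifecycle I'm describing. The volume group "vg0" and volume names are made up, and these commands need root and a real volume group, so treat this as illustration only:

```shell
# Create a 1G copy-on-write snapshot of a hypothetical "base" volume.
lvcreate --snapshot --size 1G --name base-snap /dev/vg0/base

# Writes keep going to /dev/vg0/base; each block about to be overwritten
# is first copied into the snapshot's 1G exception store. Once more than
# 1G of blocks have changed, LVM invalidates the snapshot.

# Roll back by merging the snapshot into the origin...
lvconvert --merge /dev/vg0/base-snap

# ...or decide the upgrade worked and just drop it.
lvremove /dev/vg0/base-snap
```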

(Warning, not expert on LVM or QEMU, so please correct me on anything!)

Snapshots, in OpenStack, start out as traditional snapshots: libvirt is used to create a true COW snapshot of the domain (as described above when I was talking about LVM). However, once the snapshot is taken, the image is converted from qcow2 to a raw image. This means that *technically* you've made a full copy of the existing image rather than a traditional snapshot. Part of me still wants to call it a snapshot... albeit an expensive one.

After looking at the code, I don't think we're destroying the snapshots via libvirt, so we might actually be accumulating true COW snapshots *and* faux full-copy snapshots. If that's the case, there are going to be big penalties down the line, since QEMU snapshots don't expire like their LVM counterparts.


Summary/TLDR/IMOs:

-We are making snapshots, but not the type of snapshots that I think of right away.

-This method should be called image_save, disk_save, image_copy, disk_copy, or something more obvious. Great call there, Kei!

-snapshot() should be kept, but altered to produce a true COW snapshot.

-This is not just a libvirt issue, but XenAPI as well. Both drivers explicitly ignore COW functionality and do true disk copies.

-I see more value in true disk copies (the way we're doing it now), but I think COW snapshot + rollback functionality in drivers would be wicked awesome.

-Kei, as to Q2 in your original message...I have absolutely no idea why the image is bigger, but hopefully one of these days I'll get time to get back into the libvirt code and help you out! :)


-Brian


-----Original Message-----
From: "Jay Pipes" <jaypipes at gmail.com>
Sent: Sunday, June 26, 2011 1:41pm
To: masumotok at nttdata.co.jp, "vishvananda" <vishvananda at gmail.com>, "Soren Hansen" <soren.hansen at rackspace.com>
Cc: openstack at lists.launchpad.net
Subject: Re: [Openstack] snapshot question

Ping. I think Kei's suggestion below is a good one... can someone
knowledgeable with qemu respond?

-jay

On Tue, Jun 21, 2011 at 9:55 PM,  <masumotok at nttdata.co.jp> wrote:
> Hello,
>
> I have a question about nova.virt.libvirt.connection.snapshot().
> In my understanding, this method is currently used for saving (cloning) VM images and uploading the cloned image to Glance.
>
> Q1) Is there any reason why the method name is snapshot(), rather than image_create, image_save, or something similar?
> I am just wondering whether there will be additional work to add VMware-like snapshotting (i.e. taking snapshots many times, letting the user select one, and rolling the VM state back to it).
>
> Q2) In current implementation original disk size is bigger after nova image-create. Please see below.
>
> [before image-create]
> root at testhost:/opt/openstack/instances/instance-00000015# ls -l
> total 151580
> -rw-r----- 1 libvirt-qemu kvm       2889 2011-06-23 11:53 console.log
> -rw-r--r-- 1 libvirt-qemu kvm  155189248 2011-06-23 11:56 disk
> -rw-r--r-- 1 libvirt-qemu kvm    6291968 2011-06-23 11:50 disk.local
> -rw-r--r-- 1 root         root      1728 2011-06-23 11:49 libvirt.xml
>
> [after image-create]
> root at testhost:/opt/openstack/instances/instance-00000015# ls -l
> total 3734664
> -rw-r----- 1 libvirt-qemu kvm        2889 2011-06-23 11:53 console.log
> -rw-r--r-- 1 libvirt-qemu kvm     2011-06-23 13:02 disk
> -rw-r--r-- 1 root         root     197120 2011-06-23 12:11 disk.diff
> -rw-r--r-- 1 libvirt-qemu kvm    10486272 2011-06-23 12:59 disk.local
> -rw-r--r-- 1 root         root       1728 2011-06-23 11:49 libvirt.xml
>
> That means that if users run "nova image-create" frequently, the original image on the compute node keeps growing and the compute node's available disk space keeps shrinking. So I would like to ask: is the operation below inappropriate here?
> a) taking diff backup: qemu-img create -b disk -f qcow2 disk.diff
> b) convert: qemu-img convert -f qcow2 -O qcow2 disk.diff new_img
>
> The disk does not grow this way. In addition, this approach can be used even when --use_cow_image=False.
> Any opinion on this? If I misunderstand somehow, please let me know.
>
> Regards,
> Kei
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack at lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
