[Openstack] snapshot question

masumotok at nttdata.co.jp masumotok at nttdata.co.jp
Thu Jun 30 10:32:20 UTC 2011


> > Hello,
> >
> > I have a question about nova.virt.libvirt.connection.snapshot().
> > In my understanding, this method is currently used to save (clone) the VM
> > image and upload the cloned image to Glance.
> >
> > Q1) Is there any reason why the method name is snapshot(), not image_create
> > or image_save or something?
> > I am just wondering whether there will be additional work to add
> > VMware-like snapshotting (meaning taking snapshots many times, letting the
> > user select one of them, and rolling the VM state back to it...)
> 
> This is solely because of the Rackspace definition in the OS API, which is to
> take an instance and back it up externally into Glance.
> >
> > Q2) In the current implementation the original disk size is bigger after
> > nova image-create. Please see below.
> 
> This is actually a bug, imo.  The snapshot in the kvm driver creates an internal
> snapshot, then exports it using qemu-img.  It should delete the internal
> snapshot after it is done exporting.  Also, we should probably switch to using
> qemu-img snapshot instead of libvirt's savevm, because we don't need to
> be saving the memory to disk.
> 
Hmm... I agree that we don't need to be saving memory. One thing I want to mention is that the situation does not change when I use qemu-img snapshot: the exported (converted) image is small, but the original image gets bigger. Even after I delete the internal snapshot, the image size does not shrink. Please see below. AFAIK, people have said for a long time that this is a limitation of qcow2; I have no idea whether it is a qcow2 bug or just a limitation.

[ before qemu-img snapshot ]
root@dev:/opt/openstack/instances/instance-00000019# ls -l
-rw-r--r-- 1 libvirt-qemu kvm  83886080 2011-07-01 19:13 disk

[ taking snapshot ]
root@dev:/opt/openstack/instances/instance-00000019# qemu-img snapshot -c testsnap disk
root@dev:/opt/openstack/instances/instance-00000019# qemu-img info disk
image: disk
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 76M
cluster_size: 2097152
backing file: /opt/openstack/instances/_base/c5f336a120f7e4c2f2a08df38c71b84527704846 (actual path: /opt/openstack/instances/_base/c5f336a120f7e4c2f2a08df38c71b84527704846)
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         testsnap                  0 2011-07-01 19:13:58   00:00:00.000

[ after snapshotting ]
root@dev:/opt/openstack/instances/instance-00000019# ls -l
-rw-r--r-- 1 libvirt-qemu kvm  85983744 2011-07-01 19:13 disk

[removing snapshot]
root@dev:/opt/openstack/instances/instance-00000019# qemu-img snapshot -d testsnap disk
root@dev:/opt/openstack/instances/instance-00000019# qemu-img info disk
image: disk
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 76M
cluster_size: 2097152
backing file: /opt/openstack/instances/_base/c5f336a120f7e4c2f2a08df38c71b84527704846 (actual path: /opt/openstack/instances/_base/c5f336a120f7e4c2f2a08df38c71b84527704846)

[confirm image size]
root@nova:/opt/openstack/instances/instance-00000019# ls -l
-rw-r--r-- 1 libvirt-qemu kvm  85983744 2011-07-01 19:14 disk
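
For reference, this is roughly the flow I understand is being proposed for the kvm driver: take an internal snapshot, export it, then delete the internal snapshot. The helper name, paths, and snapshot name below are only for illustration (this is a sketch, not the actual driver code), and it assumes a qemu-img build that supports "convert -s":

import subprocess

def export_and_cleanup(disk_path, out_path, snap_name='nova-snap'):
    # 1) create an internal qcow2 snapshot (no guest memory is saved)
    subprocess.check_call(['qemu-img', 'snapshot', '-c', snap_name, disk_path])
    try:
        # 2) export that snapshot as a standalone qcow2 image (this is what
        #    would be uploaded to Glance)
        subprocess.check_call(['qemu-img', 'convert', '-f', 'qcow2',
                               '-O', 'qcow2', '-s', snap_name,
                               disk_path, out_path])
    finally:
        # 3) delete the internal snapshot so they do not pile up; as the ls
        #    output above shows, this does not shrink the qcow2 file itself
        subprocess.check_call(['qemu-img', 'snapshot', '-d', snap_name,
                               disk_path])

Even with step 3, the original qcow2 file stays at the larger size, which is the behavior shown above.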

> > That means that if any users run "nova image-create", the original image on
> > the compute node gets bigger and the available disk space on the compute
> > node decreases. So I would like to ask: is the operation below inappropriate here?
> > a) taking a diff backup: qemu-img create -b disk -f qcow2 disk.diff
> > b) converting: qemu-img convert -f qcow2 -O qcow2 disk.diff new_img
> 
> This is another option.  I think this is functionally equivalent to the above
> option using qemu-img snapshot except it uses an external file.  There might
> be a performance penalty to converting from a backing file that is still active
> though?
At this point, I also agree that there might be a performance penalty if we use an external file. It feels like we have to choose between accepting that the image keeps growing and taking some performance penalty (or is there a better way?).
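
Just to make the comparison concrete, here is roughly what a) and b) above could look like. The helper and file names are illustrative only, not actual nova code:

import os
import subprocess

def export_via_overlay(disk_path, out_path, overlay_path='disk.diff'):
    # a) create an empty qcow2 overlay whose backing file is the live disk;
    #    the original disk is not written to by this step
    subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                           '-b', disk_path, overlay_path])
    try:
        # b) flatten the overlay plus its backing file into a standalone image
        subprocess.check_call(['qemu-img', 'convert', '-f', 'qcow2',
                               '-O', 'qcow2', overlay_path, out_path])
    finally:
        # the overlay is only a temporary export artifact
        os.remove(overlay_path)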

If I misunderstand somehow, please let me know.
Thanks for your reply!


