[Openstack-operators] Snapshots taking long time

Andreas Vallin andreas.vallin at it.uu.se
Thu Mar 3 07:53:03 UTC 2016


Hello Saverio,
Thanks for your answer. In that case the problem is that I thought the 
patch you are referring to had already been merged in Kilo.
Taking snapshots directly in Ceph is fast:

[root@ceph01: ~] # time rbd -p volumes snap create volume-fecc8258-e6d8-4d3c-9ac2-fe98b5dbbc2f@mytestsnap

real    0m1.062s
user    0m0.090s
sys     0m0.011s
[root@ceph01: ~] # rbd -p volumes info volume-fecc8258-e6d8-4d3c-9ac2-fe98b5dbbc2f
rbd image 'volume-fecc8258-e6d8-4d3c-9ac2-fe98b5dbbc2f':
         size 10240 MB in 2560 objects
         order 22 (4096 kB objects)
         block_name_prefix: rbd_data.d30dc62fe6784
         format: 2
         features: layering
         flags:

[root@ceph01: ~] # rbd -p volumes snap ls volume-fecc8258-e6d8-4d3c-9ac2-fe98b5dbbc2f
SNAPID NAME           SIZE
      4 mytestsnap 10240 MB
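
Removing the test snapshot afterwards works the same way (same pool and 
volume UUID as above):

[root@ceph01: ~] # rbd -p volumes snap rm volume-fecc8258-e6d8-4d3c-9ac2-fe98b5dbbc2f@mytestsnap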

Regards,
Andreas

On 03/03/2016 08:37 AM, Saverio Proto wrote:
> Hello Andreas,
>
> What kind of snapshot are you doing?
>
> 1) Snapshot of an instance running on an ephemeral volume?
> 2) Snapshot of an instance booted from a volume?
> 3) Snapshot of a volume?
>
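> The three cases also map to different CLI entry points; the names in
> angle brackets are placeholders:
>
> nova image-create <instance> <snapshot-name>    (cases 1 and 2)
> cinder snapshot-create <volume>                 (case 3)
>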
> In case 1 the ephemeral disk lives in the Ceph pool used by Nova, under
> the name <instanceUUID>_disk. When you snapshot, that image must first
> be read down to local disk, and only then is an image generated and
> uploaded to the Glance pool.
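>
> While such a snapshot runs, a quick way to confirm you are on this slow
> path is to watch the temporary files appear on the compute node (the
> path below is the one from your report):
>
> watch -n 2 'ls -lh /var/lib/nova/instances/snapshots/'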
> That download/upload cycle is slow, but the patch that makes this
> faster, keeping everything within Ceph, has already been merged in
> Mitaka.
> Look here:
> https://etherpad.openstack.org/p/MAN-ops-Ceph
> under the heading "Instance (ephemeral disk) snap CoW directly to
> Glance pool".
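>
> Conceptually, that patch replaces the download/upload cycle with an RBD
> clone done entirely inside Ceph. Roughly, and only as a sketch (the
> pool names vms and images and the final flatten are examples):
>
> rbd -p vms snap create <instanceUUID>_disk@<snap>
> rbd -p vms snap protect <instanceUUID>_disk@<snap>
> rbd clone vms/<instanceUUID>_disk@<snap> images/<imageUUID>
> rbd flatten images/<imageUUID>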
>
>
> You might also want to measure how fast your Ceph can take a snapshot,
> without OpenStack involved.
>
> Assuming that your Ceph pool for volumes is called volumespool,
>
> try to make a snapshot by hand, bypassing OpenStack, with the rbd CLI:
>
> rbd -p volumespool snap create volume-<UUID>@mytestsnapshotname
>
> Does it take a long time as well?
>
> Saverio
>
>
> 2016-03-03 8:01 GMT+01:00 Andreas Vallin <andreas.vallin at it.uu.se>:
>> We are currently installing a new OpenStack cluster (Liberty) with
>> openstack-ansible and an already existing Ceph cluster. We have both
>> images and volumes located in Ceph with RBD. My current problem is that
>> snapshots take a very long time, and I can see that snapshots are
>> temporarily created under /var/lib/nova/instances/snapshots/tmp on the
>> compute node. I thought this step would not be needed when using Ceph?
>> The instance that I am creating a snapshot of uses a raw image that is
>> protected. What can cause this behavior?
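>>
>> For reference, this is the kind of [libvirt] section we have in
>> nova.conf on the computes; the values below are placeholders, not our
>> real ones:
>>
>> [libvirt]
>> images_type = rbd
>> images_rbd_pool = vms
>> images_rbd_ceph_conf = /etc/ceph/ceph.conf
>> rbd_user = cinder
>> rbd_secret_uuid = <secret-uuid>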
>>
>> Thanks,
>> Andreas