[Openstack-operators] cinder volume_clear=zero makes sense with rbd ?

Saverio Proto zioproto at gmail.com
Wed Nov 4 14:46:43 UTC 2015


Hello there,

I am using Cinder with the rbd backend, and most volumes are created
from Glance images that are stored on rbd as well. Thanks to Ceph's
copy-on-write cloning, these volumes are CoW clones and only the
blocks that differ from the original parent image are really written.
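
To make this concrete, a volume cloned from a Glance image shows the
image snapshot as its parent in "rbd info". Something like this (pool
and image names below are only an illustration, not copied from my
cluster):

    # a Cinder volume that is a CoW clone of a Glance image
    rbd info volumes/volume-<uuid>
        ...
        parent: images/<glance-image-uuid>@snap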

Today I am debugging why deleting Cinder volumes has become very slow
in my production system. The problem seems to happen only at scale; I
cannot reproduce it on my small test cluster.

I read through the whole cinder.conf reference, and I found this default value
=>   volume_clear = zero
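
For reference, these are the options I am looking at in cinder.conf
(the values below are just the documented defaults as I read them,
please correct me if I got them wrong):

    [DEFAULT]
    # 'zero' overwrites the volume before deletion, 'none' skips the wipe
    volume_clear = zero
    # how many MiB to wipe; 0 means the whole volume
    volume_clear_size = 0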

Is this parameter evaluated when Cinder works with rbd?

This means that every time we delete a volume we first overwrite all
of its blocks with zeros, with a "dd"-like operation, and only then
really delete it. This default is clearly designed with the LVM
backend in mind: we don't want the next tenant to get a raw block
device that is dirty and to potentially read someone else's data out
of it.
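
On LVM my understanding is that the wipe boils down to something
roughly like the following before the logical volume is removed (the
device path and flags are made up for illustration, I have not checked
what Cinder actually passes to dd):

    # zero the whole logical volume, then drop it
    dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--<uuid> \
       bs=1M oflag=direct
    lvremove -f cinder-volumes/volume-<uuid>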

But what happens when we use Ceph rbd as the Cinder backend and our
volumes are CoW clones of Glance images most of the time, so that only
the blocks that differ from the original image are written to Ceph? I
hope Cinder is not writing zeros into all the rbd objects before
actually deleting the volume.

Does anybody have any advice on the volume_clear setting to use with rbd?
Or even better, how can I make sure that volume_clear is not evaluated
at all when using the rbd backend?
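
One idea I had to check this myself (not verified, just where I would
start looking) is to grep the drivers on the cinder node for the
option; the path below is whatever your distro uses:

    # does the rbd driver ever read volume_clear?
    grep -n volume_clear /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py
    # the LVM driver should reference it, for comparison
    grep -n volume_clear /usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py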

thank you

Saverio


