[Openstack-operators] cinder volume_clear=zero makes sense with rbd?

David Wahlstrom david.wahlstrom at gmail.com
Wed Nov 4 17:52:37 UTC 2015


Looking at the code in master (and ignoring tests), the only drivers I see
referencing volume_clear are the LVM and block device drivers:

$ git grep -l volume_clear
driver.py
drivers/block_device.py
drivers/lvm.py
utils.py

So other drivers (NetApp, SMB, Gluster, and of course Ceph/RBD) simply
ignore this option (or, more accurately, take no action on it).
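To make that concrete, here is a simplified, hypothetical sketch of the kind of dispatch a block-backed driver applies on delete. This is not cinder's actual code (the real logic lives in utils.py; names and signatures here are illustrative only), but it shows why 'zero' and 'shred' cost time proportional to the volume size:

```python
# Hypothetical sketch -- NOT cinder's actual implementation.
# Illustrates how a block-backed driver might dispatch on volume_clear.

def build_clear_command(volume_path, volume_clear, size_mb):
    """Return the wipe command for a volume, or None if no wipe is needed."""
    if volume_clear in (None, 'none'):
        return None  # skip wiping entirely
    if volume_clear == 'zero':
        # Overwrite the whole device with zeros, 1 MiB at a time.
        return ['dd', 'if=/dev/zero', 'of=%s' % volume_path,
                'bs=1M', 'count=%d' % size_mb, 'oflag=direct']
    if volume_clear == 'shred':
        # Overwrite with random data (three passes here).
        return ['shred', '-n', '3', '-s', '%dMiB' % size_mb, volume_path]
    raise ValueError('Invalid value for volume_clear: %s' % volume_clear)
```

The wipe only makes sense for drivers that expose a raw block device to reuse; the RBD driver never reaches code like this, since Ceph handles object lifetimes itself.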


On Wed, Nov 4, 2015 at 8:52 AM, Chris Friesen <chris.friesen at windriver.com>
wrote:

> On 11/04/2015 08:46 AM, Saverio Proto wrote:
>
>> Hello there,
>>
>> I am using cinder with rbd, and most volumes are created from glance
>> images on rbd as well.
>> Because of ceph features, these volumes are CoW and only blocks
>> different from the original parent image are really written.
>>
>> Today I am debugging why, in my production system, deleting cinder
>> volumes gets very slow. The problem seems to happen only at scale;
>> I can't reproduce it on my small test cluster.
>>
>> I read through the cinder.conf reference, and I found this default value
>> =>   volume_clear=zero.
>>
>> Is this parameter evaluated when cinder works with rbd?
>>
>
> I don't think that's actually used with rbd, since as you say Ceph uses
> CoW internally.
>
> I believe it's also ignored if you use LVM with thin provisioning.
>
> Chris
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
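For reference, the options under discussion look roughly like this in cinder.conf. The backend section name is illustrative, and option names are as given in the cinder.conf reference (check the reference for your release); per the grep above, only the LVM and block-device drivers honor them:

```ini
[lvm-1]                     # example backend section name
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_clear = zero         # none | zero | shred
volume_clear_size = 0       # MiB to wipe on delete; 0 means the whole volume
```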



-- 
David W.
Unix, because every barista in Seattle has an MCSE.