[openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.
P at draigBrady.com
Thu Jan 16 01:14:02 UTC 2014
On 12/26/2013 07:56 AM, cosmos cosmos wrote:
> My name is Rucia, from Samsung SDS.
> I am having trouble with volume deletion.
> I am developing big-data storage support (e.g. Hadoop) on LVM.
> Deleting a Cinder LVM volume uses the full disk I/O bandwidth because of dd, and that high disk I/O affects the other Hadoop instances on the same host.
> Wiping the volume with dd also takes too long: the Cinder volume is 200GB (it holds Hadoop master data), and
> 'dd if=/dev/zero of=$cinder-volume count=1000000 bs=1M' takes about 30 minutes.
> So when deleting the volume, I added an I/O scheduling class with ionice.
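For reference, a minimal sketch of the ionice approach described above. The volume path here is a stand-in file so the sketch is runnable anywhere; on a real host the target would be the LV device (e.g. a path under /dev/cinder-volumes/):

```shell
#!/bin/sh
# Sketch: wipe a volume in the idle I/O scheduling class, so the dd only
# gets disk time when no other process (e.g. a Hadoop instance) wants it.
# VOLUME is a small stand-in file here; in practice it is the LV device.
VOLUME=/tmp/example-volume.img
dd if=/dev/zero of="$VOLUME" bs=1M count=4 2>/dev/null   # create the stand-in

# -c3 = idle class: the wipe only runs when the disk is otherwise idle.
# (Honored by the CFQ scheduler; noop/deadline ignore I/O priorities.)
ionice -c3 dd if=/dev/zero of="$VOLUME" bs=1M count=4 conv=fsync 2>/dev/null
```

Note this trades deletion latency for throughput fairness: under sustained load from other tenants, an idle-class dd can take much longer than 30 minutes, which is part of why a global ionice default is questionable.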
Some history notes...

This was discussed a couple of years ago, when the volume code was still in Nova. I proposed ionice then, but thought better of it: the performance impact is very much dependent on system setup, and it's generally best for apps not to tweak I/O priorities in isolation, rather letting the higher-level system logic and configuration handle the various sharing strategies.
I did add global support for a volume wipe type, which can be set to wipe nothing, or only a limited portion of the volume; the latter can be useful for encrypted volumes, where overwriting the keys at the start is sufficient.
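Concretely, the wipe behavior is controlled from cinder.conf; a sketch of the relevant options (option names as I recall them, worth verifying against your Cinder release):

```
[DEFAULT]
# How to wipe LVM volumes on delete: 'zero', 'shred', or 'none'.
volume_clear = zero
# Wipe only the first N MiB of the volume; 0 means the whole volume.
# Limiting this is useful for encrypted volumes, where zeroing the
# region holding the keys is enough.
volume_clear_size = 100
```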
Perhaps an option would be to make this configurable on a per-volume basis?
Another option is to leverage provisioning systems that return zeros for unallocated portions, making an explicit wipe unnecessary.