[openstack-dev] [cinder] A problem with the implementation of limit-volume-copy-bandwidth
Yuzhou (C)
vitas.yuzhou at huawei.com
Mon Jun 30 09:35:37 UTC 2014
Hi stackers,
I found some problems with the current implementation of limit-volume-copy-bandwidth (the patch was merged last week).
Firstly, assume that I configure volume_copy_bps_limit=10M. If the path is a block device, cgroup blkio can limit the copy bandwidth separately for every volume.
But if the path is a regular file, with the current implementation cgroup blkio can only limit the total copy bandwidth of all volumes on the disk device that the file lies on.
The reason is in cinder/utils.py, in the method get_blkdev_major_minor:

    elif lookup_for_file:
        # lookup the mounted disk which the file lies on
        out, _err = execute('df', path)
        devpath = out.split("\n")[1].split()[0]
        return get_blkdev_major_minor(devpath, False)
If copy_volume is invoked concurrently for several volumes on the same device, the copy bandwidth each volume actually gets is less than 10M. In that case the meaning of the volume_copy_bps_limit parameter in cinder.conf is different.
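To make that concrete, here is a minimal sketch (my own simplified code, not the Cinder implementation; resolve_dev and the example paths are hypothetical) of how the df-based lookup collapses every file on the same filesystem to one "major:minor" key, so all concurrent copies end up sharing a single throttle rule:

    import os
    import subprocess

    def resolve_dev(path):
        # Simplified version of the df-based lookup quoted above: every file
        # on the same mounted filesystem resolves to the same block device.
        out = subprocess.check_output(['df', path]).decode()
        devpath = out.split("\n")[1].split()[0]
        st = os.stat(devpath)
        return '%d:%d' % (os.major(st.st_rdev), os.minor(st.st_rdev))

    # Two different volume files under the same mount point...
    dev_a = resolve_dev('/mnt/volumes/volume-a')
    dev_b = resolve_dev('/mnt/volumes/volume-b')

    # ...map to the same device number, so one blkio.throttle.write_bps_device
    # rule (e.g. "8:0 10000000") caps the sum of both copies, not each copy.
    assert dev_a == dev_b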
Secondly, on NFS, the output of the 'df' command looks like this:
[root@yuzhou yuzhou]# df /mnt/111
Filesystem 1K-blocks Used Available Use% Mounted on
186.100.8.144:/mnt/nfs_storage 51606528 14676992 34308096 30% /mnt
I think the method get_blkdev_major_minor cannot deal with the devpath '186.100.8.144:/mnt/nfs_storage',
i.e. it cannot limit the volume copy bandwidth in the NFS driver.
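As far as I understand, the recursive call just ends up doing os.stat() on that string, which cannot work for an NFS export path (a rough sketch of the failure, not the exact Cinder code):

    import os
    import stat

    devpath = '186.100.8.144:/mnt/nfs_storage'

    # There is no local file with the name of an NFS export, so stat() fails;
    # even if it existed, it would not be a block device we could throttle.
    try:
        st = os.stat(devpath)
        print(stat.S_ISBLK(st.st_mode))
    except OSError as e:
        print('cannot get major:minor for %s: %s' % (devpath, e))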
So I think we should modify the current implementation to make sure the copy bandwidth for every volume meets the configured limit.
I suggest we associate a loop device with the regular file (losetup /dev/loop0 /mnt/volumes/vm.qcow2),
then limit the bps of the loop device (cgset -r blkio.throttle.write_bps_device="7:0 10000000" test).
After copying the volume, detach the loop device (losetup --detach /dev/loop0).
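To make the proposal concrete, a rough sketch (my own names; it assumes a blkio cgroup called 'test' already exists and shells out to losetup/cgset/cgexec directly, while a real patch would of course go through Cinder's execute()/rootwrap and the existing throttling code):

    import os
    import subprocess

    def copy_volume_via_loop(srcdev, dstfile, bps_limit, cgroup='test'):
        # Attach a free loop device to the regular file that backs the volume.
        loopdev = subprocess.check_output(
            ['losetup', '--find', '--show', dstfile]).decode().strip()
        try:
            # Throttle writes on the loop device itself, so this volume gets
            # its own limit instead of sharing one rule for the whole disk.
            st = os.stat(loopdev)
            rule = '%d:%d %d' % (os.major(st.st_rdev),
                                 os.minor(st.st_rdev), bps_limit)
            subprocess.check_call(
                ['cgset', '-r',
                 'blkio.throttle.write_bps_device=%s' % rule, cgroup])
            # Do the copy through the loop device from inside the cgroup, so
            # the writes are accounted against the loop device's throttle.
            subprocess.check_call(
                ['cgexec', '-g', 'blkio:%s' % cgroup,
                 'dd', 'if=%s' % srcdev, 'of=%s' % loopdev,
                 'bs=1M', 'oflag=direct'])
        finally:
            # Detach the loop device once the copy has finished.
            subprocess.check_call(['losetup', '--detach', loopdev])

This way each volume copy has its own loop device, so the per-volume bps limit keeps its meaning even when several copies run concurrently on the same backing disk, and it should also work for files on NFS mounts.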
Any suggestions about this proposed improvement?
Thanks!
Zhou Yu