[openstack-dev] [cinder] a problem about the implementation of limit-volume-copy-bandwidth

Tomoki Sekiyama tomoki.sekiyama@hds.com
Tue Jul 1 22:23:31 UTC 2014


Hi Zhou,

>Hi stackers,
>
>    I found some problems with the current implementation of
>limit-volume-copy-bandwidth (this patch was merged last week).
>
>    Firstly, assume that I configure volume_copy_bps_limit=10M. If
>the path is a block device, cgroup blkio can limit the copy bandwidth
>separately for every volume.
>But if the path is a regular file, according to the current implementation,
>cgroup blkio has to limit the total copy bandwidth of all volumes on the
>disk device on which the file lies.
>The reason is:
>In cinder/utils.py, the method get_blkdev_major_minor
>
>    elif lookup_for_file:
>        # lookup the mounted disk which the file lies on
>        out, _err = execute('df', path)
>        devpath = out.split("\n")[1].split()[0]
>        return get_blkdev_major_minor(devpath, False)
>
>If the method copy_volume is invoked concurrently, the copy bandwidth for
>a single volume is less than 10M. In this case, the meaning of the param
>volume_copy_bps_limit in cinder.conf is different.

Thank you for pointing this out.
I think the goal of this feature is QoS, to mitigate the slowdown of
instances during volume copy. In order to assure the bandwidth for
instances, the total bandwidth used by all running volume copies should be
limited to less than volume_copy_bps_limit.

The current implementation satisfies this condition within each block
device, but it is still insufficient for limiting the total bandwidth of
concurrent volume copies across multiple block devices. From the viewpoint
of QoS, we may need to divide the value of volume_copy_bps_limit by the
number of running volume copies.

For example, when volume_copy_bps_limit is 100M and 2 copies are running,
  (1) copy an image on sda -> sdb
  (2) copy an image on sda -> sdc
to limit each copy's bps to 50M (= 100M / 2 concurrent copies), we should
set the limits to:
  sda (read)  = 100M
  sdb (write) =  50M
  sdc (write) =  50M
And when copy (2) is finished before the end of (1), the limit of sdb
(write) is increased to 100M.
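
To make the idea concrete, here is a minimal sketch of how the rebalancing
could be computed (the function _rebalance_limits and its data layout are
hypothetical, not part of the current cinder code; an actual patch would
also have to push the recomputed values into the blkio cgroup whenever a
copy starts or finishes):

    # Hypothetical helper: each running copy gets an equal share of
    # total_bps_limit, and a device used by several copies gets the sum
    # of their shares. This reproduces the sda/sdb/sdc numbers above.
    def _rebalance_limits(running_copies, total_bps_limit):
        """running_copies: list of (src_dev, dst_dev) tuples.
        Returns a dict of {(device, 'read'|'write'): bps}."""
        limits = {}
        if not running_copies:
            return limits
        per_copy = total_bps_limit // len(running_copies)
        for src_dev, dst_dev in running_copies:
            limits[(src_dev, 'read')] = limits.get((src_dev, 'read'), 0) + per_copy
            limits[(dst_dev, 'write')] = limits.get((dst_dev, 'write'), 0) + per_copy
        return limits

    # Two copies running: sda read = 100M, sdb and sdc write = 50M each
    print(_rebalance_limits([('sda', 'sdb'), ('sda', 'sdc')], 100 * 1024 ** 2))
    # Copy (2) finished: sda read = 100M, sdb write = 100M
    print(_rebalance_limits([('sda', 'sdb')], 100 * 1024 ** 2))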

I appreciate any opinions for/against this idea.

>   Secondly, in NFS, the output of the 'df' command looks like this:
>[root@yuzhou yuzhou]# df /mnt/111
>Filesystem                     1K-blocks      Used Available Use% Mounted on
>186.100.8.144:/mnt/nfs_storage  51606528  14676992  34308096  30% /mnt
>I think the method get_blkdev_major_minor cannot deal with the devpath
>'186.100.8.144:/mnt/nfs_storage',
>i.e. it cannot limit the volume copy bandwidth in the NFS driver.
>
>So I think maybe we should modify the current implementation to make sure
>the copy bandwidth for every volume meets the configuration requirement.
>I suggest we use a loop device associated with the regular file (losetup
>/dev/loop0 /mnt/volumes/vm.qcow2), then limit the bps of the loop device
>(cgset -r blkio.throttle.write_bps_device="7:0 10000000" test). After
>copying the volume, detach the loop device (losetup --detach /dev/loop0).

Interesting. I tried this locally and confirmed it's feasible.
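
For reference, what I tried is roughly the following standalone sketch (it
calls losetup and the libcgroup tools through subprocess; a real cinder
change would go through utils.execute with rootwrap, and it assumes a blkio
cgroup named 'cinder-copy' already exists and that the destination file
already exists with the desired size, since the loop device size is fixed
at attach time):

    import os
    import subprocess

    def copy_with_loop_throttle(src, dst_file, bps_limit, cgroup='cinder-copy'):
        # attach a free loop device to the regular file and get its name
        loop_dev = subprocess.check_output(
            ['losetup', '--find', '--show', dst_file]).decode().strip()
        try:
            st = os.stat(loop_dev)
            major, minor = os.major(st.st_rdev), os.minor(st.st_rdev)
            # throttle writes to the loop device via the blkio cgroup
            subprocess.check_call(
                ['cgset', '-r',
                 'blkio.throttle.write_bps_device=%d:%d %d'
                 % (major, minor, bps_limit), cgroup])
            # run the copy inside the cgroup, writing through the loop device
            subprocess.check_call(
                ['cgexec', '-g', 'blkio:%s' % cgroup,
                 'dd', 'if=%s' % src, 'of=%s' % loop_dev, 'oflag=direct'])
        finally:
            # detach the loop device once the copy is done
            subprocess.check_call(['losetup', '--detach', loop_dev])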

Thanks,
Tomoki Sekiyama



