[openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?
Liuji (Jeremy)
jeremy.liu at huawei.com
Thu Feb 27 07:41:35 UTC 2014
I agree with you that we should limit the disk IO bandwidth in copy_image() to avoid affecting other instances on the same host too much.
We found a similar issue in volume migration.
Sometimes it uses the "dd" command to migrate a volume without any IO/throughput limitation.
We tried using the "pv" command to throttle it.
Detail info in blueprint:
https://blueprints.launchpad.net/cinder/+spec/throttle-cinder-migrate
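For reference, a minimal sketch of the "pv" idea (the device paths and the 50 MB/s rate are illustrative assumptions, not from the blueprint):

    import subprocess

    src = '/dev/mapper/vol-src'   # illustrative source volume
    dst = '/dev/mapper/vol-dst'   # illustrative destination volume

    # dd reads the source, pv caps the pipe at 50 MB/s (-L), dd writes the dest.
    reader = subprocess.Popen(['dd', 'if=%s' % src, 'bs=1M'],
                              stdout=subprocess.PIPE)
    throttle = subprocess.Popen(['pv', '-q', '-L', '50m'],
                                stdin=reader.stdout, stdout=subprocess.PIPE)
    writer = subprocess.Popen(['dd', 'of=%s' % dst, 'bs=1M'],
                              stdin=throttle.stdout)
    reader.stdout.close()    # let dd get SIGPIPE if pv exits early
    throttle.stdout.close()
    writer.communicate()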
Thanks,
Jeremy Liu
From: Wangpan [mailto:hzwangpan at corp.netease.com]
Sent: Monday, February 17, 2014 10:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?
Hi yunhong,
I agree with you about taking I/O bandwidth as a resource, but it may not be so easy to implement.
Your other concern about the launch time may not be so terrible, since only the first boot will be affected.
2014-02-17
________________________________
Wangpan
________________________________
From: yunhong jiang <yunhong.jiang at linux.intel.com>
Sent: 2014-02-15 08:21
Subject: Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Cc:
On Fri, 2014-02-14 at 10:22 +0100, Sylvain Bauza wrote:
> Instead of limiting the consumed bandwidth by proposing a
> configuration flag (yet another one, and which default value should
> be set?), I would propose to only decrease the niceness of the process
> itself, so that other processes would get the I/O access first.
> That's not perfect I assume, but it's a quick workaround limiting
> the frustration.
>
>
> -Sylvain
>
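(For concreteness, a sketch of what that could look like in copy_image(), reusing the existing execute() helper; note the idle I/O class only takes effect with the CFQ scheduler:

    # run the copy with lowest I/O priority so other processes go first
    execute('ionice', '-c', '3', 'cp', src, dest)

)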
Decreasing niceness is good for the short term. Some small concerns are:
will that cause a long launch time if the host is I/O intensive? And if
launch time is billed as well, it's not fair to the new instance either.
I think the ideal solution is I/O QoS, e.g. through cgroups: take I/O
bandwidth as a resource, and treat copy_image as a consumer of the
I/O bandwidth resource.
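A rough sketch of that cgroup idea (assuming the cgroup v1 blkio controller is mounted at /sys/fs/cgroup/blkio; the group name, device numbers, and limit below are illustrative assumptions):

    import os

    CGROUP = '/sys/fs/cgroup/blkio/copy_image'  # hypothetical group name
    DEVICE = '8:0'                # major:minor of the backing disk (assumption)
    LIMIT = 50 * 1024 * 1024      # 50 MB/s, an example value

    if not os.path.exists(CGROUP):
        os.makedirs(CGROUP)
    # Throttle reads and writes on the device for members of this group.
    with open(os.path.join(CGROUP, 'blkio.throttle.read_bps_device'), 'w') as f:
        f.write('%s %d\n' % (DEVICE, LIMIT))
    with open(os.path.join(CGROUP, 'blkio.throttle.write_bps_device'), 'w') as f:
        f.write('%s %d\n' % (DEVICE, LIMIT))
    # Any process added to the group's tasks file is then throttled.
    with open(os.path.join(CGROUP, 'tasks'), 'w') as f:
        f.write(str(os.getpid()))  # illustration; nova would add the copy child's PID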
Thanks
--jyh
>
> 2014-02-14 4:52 GMT+01:00 Wangpan <hzwangpan at corp.netease.com>:
>         Currently nova doesn't limit the disk IO bandwidth in the
>         copy_image() method while creating a new instance, so the
>         other instances on this host may be affected by this high
>         disk IO consuming operation, and some time-sensitive
>         services (e.g. an RDS instance with heartbeat) may be
>         switched between master and slave.
>
>         So can we use the `rsync --bwlimit=${bandwidth} src dst`
>         command instead of `cp src dst` for copy_image in
>         create_image() of the libvirt driver? The remote image copy
>         operation can also be limited by `rsync --bwlimit=${bandwidth}`
>         or `scp -l ${bandwidth}`. This parameter ${bandwidth} can be
>         a new configuration option in nova.conf which allows the
>         cloud admin to configure it; its default value is 0, which
>         means no limitation, so the instances on this host will not
>         be affected while a new instance with an uncached image is
>         being created.
>
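The flag itself could be registered along the usual nova pattern (a sketch only; the option name matches the patch below, everything else is illustrative):

    from oslo.config import cfg

    throttle_opts = [
        cfg.IntOpt('mbps_in_copy_image',
                   default=0,
                   help='Maximum disk IO bandwidth (MB/s) used by '
                        'copy_image; 0 means no limitation'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(throttle_opts)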
>         the example code:
>         nova/virt/libvirt/utils.py:
>         diff --git a/nova/virt/libvirt/utils.py b/nova/virt/libvirt/utils.py
>         index e926d3d..5d7c935 100644
>         --- a/nova/virt/libvirt/utils.py
>         +++ b/nova/virt/libvirt/utils.py
>         @@ -473,7 +473,10 @@ def copy_image(src, dest, host=None):
>                  # sparse files. I.E. holes will not be written to DEST,
>                  # rather recreated efficiently. In addition, since
>                  # coreutils 8.11, holes can be read efficiently too.
>         -        execute('cp', src, dest)
>         +        if CONF.mbps_in_copy_image > 0:
>         +            execute('rsync', '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), src, dest)
>         +        else:
>         +            execute('cp', src, dest)
>              else:
>                  dest = "%s:%s" % (host, dest)
>                  # Try rsync first as that can compress and create sparse dest files.
>         @@ -484,11 +487,22 @@
>                      # Do a relatively light weight test first, so that we
>                      # can fall back to scp, without having run out of space
>                      # on the destination for example.
>         -            execute('rsync', '--sparse', '--compress', '--dry-run', src, dest)
>         +            if CONF.mbps_in_copy_image > 0:
>         +                execute('rsync', '--sparse', '--compress', '--dry-run',
>         +                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), src, dest)
>         +            else:
>         +                execute('rsync', '--sparse', '--compress', '--dry-run', src, dest)
>                  except processutils.ProcessExecutionError:
>         -            execute('scp', src, dest)
>         +            if CONF.mbps_in_copy_image > 0:
>         +                execute('scp', '-l', '%s' % (CONF.mbps_in_copy_image * 1024 * 8), src, dest)
>         +            else:
>         +                execute('scp', src, dest)
>                  else:
>         -            execute('rsync', '--sparse', '--compress', src, dest)
>         +            if CONF.mbps_in_copy_image > 0:
>         +                execute('rsync', '--sparse', '--compress',
>         +                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), src, dest)
>         +            else:
>         +                execute('rsync', '--sparse', '--compress', src, dest)
>
>
> 2014-02-14
>
> ______________________________________________________________
> Wangpan
>
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev