[openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

Sylvain Bauza sylvain.bauza at gmail.com
Fri Feb 14 09:22:13 UTC 2014


Instead of limiting the consumed bandwidth by proposing a configuration
flag (yet another one, and which default value should it get?), I would
propose to lower the I/O priority of the process itself (i.e. raise its
niceness, or use ionice), so that other processes get I/O access first.
That's not perfect, I admit, but it's a quick workaround that limits the
frustration.
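A minimal sketch of that workaround (hypothetical helper names, assuming the ionice(1) tool from util-linux is available on the compute host and the I/O scheduler honors priority classes):

```python
import subprocess


def ionice_wrap(cmd, io_class=3):
    """Prefix a command so the kernel deprioritizes its disk I/O.

    Class 3 (idle) means the command only gets disk time no other
    process wants; class 2 (best-effort) is a softer alternative.
    """
    return ['ionice', '-c', str(io_class)] + list(cmd)


def copy_image_idle(src, dest):
    # Hypothetical replacement for the plain `cp`: same copy, but run
    # at idle I/O priority so co-located instances are not starved.
    subprocess.check_call(ionice_wrap(['cp', src, dest]))
```

This avoids any new configuration flag, at the cost of giving the operator no hard cap on the copy's throughput.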

-Sylvain


2014-02-14 4:52 GMT+01:00 Wangpan <hzwangpan at corp.netease.com>:

>   Currently nova doesn't limit the disk I/O bandwidth in the copy_image()
> method while creating a new instance, so the other instances on the host
> may be affected by this I/O-intensive operation, and time-sensitive
> services (e.g. an RDS instance with a heartbeat) may fail over between
> master and slave.
>
> Could we use `rsync --bwlimit=${bandwidth} src dst` instead of `cp src dst`
> for copy_image in create_image() of the libvirt driver? The remote image
> copy could likewise be limited with `rsync --bwlimit=${bandwidth}` or
> `scp -l ${bandwidth}`. The ${bandwidth} parameter would be a new option in
> nova.conf that the cloud admin can configure, with a default of 0 meaning
> no limit; the instances on the host would then be unaffected while a new
> instance is created from an image that is not yet cached.
>
> the example code, nova/virt/libvirt/utils.py:
> diff --git a/nova/virt/libvirt/utils.py b/nova/virt/libvirt/utils.py
> index e926d3d..5d7c935 100644
> --- a/nova/virt/libvirt/utils.py
> +++ b/nova/virt/libvirt/utils.py
> @@ -473,7 +473,10 @@ def copy_image(src, dest, host=None):
>          # sparse files.  I.E. holes will not be written to DEST,
>          # rather recreated efficiently.  In addition, since
>          # coreutils 8.11, holes can be read efficiently too.
> -        execute('cp', src, dest)
> +        if CONF.mbps_in_copy_image > 0:
> +            execute('rsync', '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
> +                    src, dest)
> +        else:
> +            execute('cp', src, dest)
>      else:
>          dest = "%s:%s" % (host, dest)
>          # Try rsync first as that can compress and create sparse dest files.
> @@ -484,11 +487,22 @@ def copy_image(src, dest, host=None):
>              # Do a relatively light weight test first, so that we
>              # can fall back to scp, without having run out of space
>              # on the destination for example.
> -            execute('rsync', '--sparse', '--compress', '--dry-run', src, dest)
> +            if CONF.mbps_in_copy_image > 0:
> +                execute('rsync', '--sparse', '--compress', '--dry-run',
> +                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
> +                        src, dest)
> +            else:
> +                execute('rsync', '--sparse', '--compress', '--dry-run',
> +                        src, dest)
>          except processutils.ProcessExecutionError:
> -            execute('scp', src, dest)
> +            if CONF.mbps_in_copy_image > 0:
> +                execute('scp', '-l', '%s' % (CONF.mbps_in_copy_image * 1024 * 8),
> +                        src, dest)
> +            else:
> +                execute('scp', src, dest)
>          else:
> -            execute('rsync', '--sparse', '--compress', src, dest)
> +            if CONF.mbps_in_copy_image > 0:
> +                execute('rsync', '--sparse', '--compress',
> +                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
> +                        src, dest)
> +            else:
> +                execute('rsync', '--sparse', '--compress', src, dest)
>
>
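The unit conversions in a patch like this are easy to get wrong: rsync's --bwlimit takes KBytes/s while scp's -l takes Kbit/s, and Python's `%` formatting binds as tightly as `*`, so `'%s' % mbps * 1024` without parentheses repeats the string 1024 times instead of multiplying. A small hypothetical helper (illustrative names, not nova code) that isolates the flag construction:

```python
def throttle_flags(mbps, tool):
    """Return extra CLI flags to cap a copy at `mbps` MB/s.

    rsync --bwlimit expects KBytes per second; scp -l expects Kbits
    per second. A limit of 0 means unlimited, so no flag is added.
    Note the parentheses around the arithmetic: '%' and '*' have equal
    precedence in Python and associate left-to-right.
    """
    if mbps <= 0:
        return []
    if tool == 'rsync':
        return ['--bwlimit=%d' % (mbps * 1024)]
    if tool == 'scp':
        return ['-l', '%d' % (mbps * 1024 * 8)]
    raise ValueError('unknown tool: %s' % tool)
```

With a helper like this, each call site in copy_image() reduces to `execute(tool, *flags, src, dest)` and the conversion logic is tested in one place.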
> 2014-02-14
> ------------------------------
>  Wangpan
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>