[openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

Wangpan hzwangpan at corp.netease.com
Mon Feb 17 02:37:01 UTC 2014


Hi sahid, 
I have tested `scp -l xxx src dst` (a local scp copy) and found that the `-l` option has no effect in this case;
it seems `-l` is only honored for remote copies, apparently because scp simply execs cp for local-to-local transfers.


2014-02-17



Wangpan



From: sahid <sahid.ferdjaoui at cloudwatt.com>
Sent: 2014-02-14 17:58
Subject: Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Cc:

It could be a good idea, but as Sylvain said, how would this be configured? Then, what about using scp instead of rsync for a local copy? 

----- Original Message ----- 
From: "Wangpan" <hzwangpan at corp.netease.com> 
To: "OpenStack Development Mailing List" <openstack-dev at lists.openstack.org> 
Sent: Friday, February 14, 2014 4:52:20 AM 
Subject: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance? 

Currently nova doesn't limit the disk IO bandwidth in the copy_image() method while creating a new instance, so other instances on this host may be affected by this IO-intensive operation, and time-sensitive services (e.g. an RDS instance with a heartbeat) may fail over between master and slave. 

So could we use `rsync --bwlimit=${bandwidth} src dst` instead of `cp src dst` in copy_image(), called from create_image() of the libvirt driver? The remote image copy can be limited the same way, with `rsync --bwlimit=${bandwidth}` or `scp -l ${bandwidth}`. The ${bandwidth} parameter could be a new option in nova.conf for the cloud admin to configure; its default value would be 0, meaning no limit, so that the instances on a host are not affected while a new instance with an uncached image is being created. 
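
As a reference, here is a minimal sketch of how such an option could be registered with oslo.config; the option name matches the CONF.mbps_in_copy_image used in the diff below, but the exact module and placement within nova are only an assumption:

    # Hypothetical sketch, not actual nova code: registering the proposed option.
    from oslo.config import cfg

    copy_image_opts = [
        cfg.IntOpt('mbps_in_copy_image',
                   default=0,
                   help='Maximum disk bandwidth in MB/s to use when copying '
                        'an image in copy_image(); 0 means no limit.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(copy_image_opts)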

the example code: 
nova/virt/libvirt/utils.py: 
diff --git a/nova/virt/libvirt/utils.py b/nova/virt/libvirt/utils.py 
index e926d3d..5d7c935 100644 
--- a/nova/virt/libvirt/utils.py 
+++ b/nova/virt/libvirt/utils.py 
@@ -473,7 +473,10 @@ def copy_image(src, dest, host=None): 
         # sparse files.  I.E. holes will not be written to DEST, 
         # rather recreated efficiently.  In addition, since 
         # coreutils 8.11, holes can be read efficiently too. 
-        execute('cp', src, dest) 
+        if CONF.mbps_in_copy_image > 0: 
+            execute('rsync', '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), src, dest) 
+        else: 
+            execute('cp', src, dest) 
     else: 
         dest = "%s:%s" % (host, dest) 
         # Try rsync first as that can compress and create sparse dest files. 
@@ -484,11 +487,22 @@ def copy_image(src, dest, host=None): 
             # Do a relatively light weight test first, so that we 
             # can fall back to scp, without having run out of space 
             # on the destination for example. 
-            execute('rsync', '--sparse', '--compress', '--dry-run', src, dest) 
+            if CONF.mbps_in_copy_image > 0: 
+                execute('rsync', '--sparse', '--compress', '--dry-run', 
+                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), src, dest) 
+            else: 
+                execute('rsync', '--sparse', '--compress', '--dry-run', src, dest) 
         except processutils.ProcessExecutionError: 
-            execute('scp', src, dest) 
+            if CONF.mbps_in_copy_image > 0: 
+                execute('scp', '-l', '%s' % (CONF.mbps_in_copy_image * 1024 * 8), src, dest) 
+            else: 
+                execute('scp', src, dest) 
         else: 
-            execute('rsync', '--sparse', '--compress', src, dest) 
+            if CONF.mbps_in_copy_image > 0: 
+                execute('rsync', '--sparse', '--compress', 
+                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), src, dest) 
+            else: 
+                execute('rsync', '--sparse', '--compress', src, dest) 
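
A note on the unit conversions above (based on my reading of the man pages, worth double-checking): rsync's --bwlimit is in KBytes per second while scp's -l is in Kbits per second, which is why the two commands use different factors:

    mbps = CONF.mbps_in_copy_image   # configured limit, in MB/s
    rsync_bwlimit = mbps * 1024      # rsync --bwlimit expects KB/s
    scp_limit = mbps * 1024 * 8      # scp -l expects Kbit/s

Also note that the parentheses around the multiplications are required: since % and * have the same precedence in Python and group left to right, '%s' % mbps * 1024 would format first and then repeat the resulting string 1024 times.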


2014-02-14 



Wangpan 

_______________________________________________ 
OpenStack-dev mailing list 
OpenStack-dev at lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 