[Openstack] guest rbd block device speed limit

chagg at foxmail.com chagg at foxmail.com
Fri Aug 25 02:05:41 UTC 2017


This is my ceph performance data:

Total time run:         30.370630
Total writes made:      849
Write size:             4194304
Bandwidth (MB/sec):     111.819
Stddev Bandwidth:       21.6665
Max bandwidth (MB/sec): 124
Min bandwidth (MB/sec): 0
Average IOPS:           27
Average Latency(s):     0.572236
Stddev Latency(s):      0.421846
Max latency(s):         4.06547
Min latency(s):         0.121321
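
(This is from a rados bench write test, i.e. something along the lines of

    rados bench -p <pool> 30 write

with the default 4 MB write size over a 30-second run; the pool name depends
on the setup.)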

Thanks!


chagg at foxmail.com
 
From: Chris Friesen
Date: 2017-08-24 23:14
To: openstack at lists.openstack.org
Subject: Re: [Openstack] guest rbd block device speed limit
On 08/24/2017 01:04 AM, chagg at foxmail.com wrote:
> Hello:
>      I am using openstack + libvirt + qemu-kvm. The speed of copying files
> between virtual machines exceeds 300 MB per second, but the speed of the dd command
>   "watch dd oflag=direct,nonblock if=/dev/zero of=/opt/iotest1 bs=4M count=10"
> is around 20 MB per second. Every guest is the same, and there is no I/O
> tuning in libvirt. What can I do to unleash the disk I/O speed inside
> the guest?
>      Thanks!
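
(On the "no io tuning in libvirt" point: a block I/O throttle would show up as
an <iotune> element under the guest's <disk> in the libvirt domain XML, for
example a 20 MB/s write cap would look roughly like

    <iotune>
      <write_bytes_sec>20971520</write_bytes_sec>
    </iotune>

so "virsh dumpxml <instance> | grep -A4 iotune" is a quick way to confirm that
no throttle is set.)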
 
It looks like you're using ceph for your root disk.  What sort of performance do 
you get accessing the ceph volumes from the host?
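 
For example, something like this against a scratch image (the pool and image
names here are just placeholders):
 
    rbd create <pool>/bench-test --size 10240
    rbd bench-write <pool>/bench-test --io-size 4194304 --io-threads 1
    rbd rm <pool>/bench-test
 
or a rados bench write against the pool, and then compare that with what you
see inside the guest.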
 
Chris
 
 