[Openstack-operators] [nova] disk I/O perfomance
Warren Wang
warren at wangspeed.com
Wed Jul 8 14:33:18 UTC 2015
The only time we saw major performance issues with ephemeral (we're using
SSDs in RAID 0) was when we ran fio against a sparse file. It sounds like
you ran it against a properly filled file though, and it looks like you're
on a single spinning drive, based on the fio numbers. Can you confirm?
Also, what version of qemu are you running, and can you check if you have
iothread enabled? "qemu-system-x86_64 -device virtio-blk-device,help"
should show iothread somewhere. One other thing I can think of: did you
specify direct I/O in your fio run?
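For reference, a minimal sketch of a fio invocation with direct I/O forced
on (the file path, runtime and queue depth here are placeholder assumptions,
not your actual job file):

  # 4k random writes against the prefilled file, bypassing the page cache
  fio --name=randwrite --filename=/mnt/ephemeral/testfile --rw=randwrite \
      --bs=4k --size=90G --ioengine=libaio --iodepth=32 --direct=1 \
      --runtime=300 --time_based --group_reporting

Without direct=1, the numbers can reflect the guest page cache and writeback
behaviour as much as the disk itself.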
I notice you have the balloon driver on. Have you used it in a
production environment that neared full memory? Curious what the feedback is here.
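(For anyone checking their own guests: the balloon device is the <memballoon>
element in the libvirt domain XML, roughly as sketched below; setting
model='none' disables it. Whether that is a good idea depends on your memory
overcommit policy.)

  <devices>
    ...
    <memballoon model='virtio'/>  <!-- change to model='none' to disable ballooning -->
  </devices>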
Warren
On Wed, Jul 8, 2015 at 6:58 AM, Gleb Stepanov <gstepanov at mirantis.com>
wrote:
> Hello, all.
>
> We have measured disk I/O performance on OpenStack virtual machines
> with the aid of the FIO tool. We tested performance on the root disk
> device; the test consists of write operations in 4 KB blocks to a
> 90 GB file (prefilled in advance).
> We use a qcow2 image for the VM, an ephemeral drive, and the virtio driver.
> The full configuration is in the attachment.
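>
> (A job file along the lines below would match this description; apart
> from bs and size, the parameters are assumptions, since the actual
> attachment was scrubbed from the archive:)
>
>   [global]
>   ioengine=libaio
>   bs=4k
>   size=90G
>   filename=/path/to/prefilled_file
>   ; assumed random writes; sequential would be rw=write
>   rw=randwrite
>
>   [writers]
>   ; numjobs varied across runs: 1, 5, 10, 15, 20, 40
>   numjobs=1
>   group_reporting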
>
> There are some results:
>
> IOPS by thread count:
>
> threads    1    5   10   15   20   40
> test 1    72   58   49   60   94   72
> test 2    71   60   54   88   52   52
> test 3    71   49   58   51  128  130
> test 4    65   49   60   56   52   63
>
> As shown, performance degrades as the number of threads increases,
> and the variance of the results at 40 threads is very large.
> Do you have any ideas that would explain this behaviour?
>
> Kind regards, Gleb Stepanov.
>