[Openstack-operators] [nova] disk I/O performance
Gleb Stepanov
gstepanov at mirantis.com
Mon Jul 13 11:26:42 UTC 2015
Hello, Warren.
Yes, we use a properly prefilled file on a single spinning drive. All
tests are done with fio; here is a link to the full test report -
http://koder-ua.github.io/6.1GA/ephemeral_drive.html.
Here is a link to the report for the same test, executed directly on the
HDD used for ephemeral storage -
http://koder-ua.github.io/6.1GA/compute_node_HDD.html.
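For reference, a minimal fio invocation matching the workload described
further down the thread (4 KB writes to a prefilled 90 GB file; the exact
job options are in the linked report, so the values below are assumptions)
would look like:

  # --rw, --filename and --runtime are assumptions, not the exact job file;
  # --numjobs was varied over 1/5/10/15/20/40 in the tests
  fio --name=ephemeral-test --filename=/root/test.bin \
      --rw=randwrite --bs=4k --size=90G --direct=1 \
      --ioengine=libaio --iodepth=1 --numjobs=1 \
      --runtime=600 --time_based --group_reporting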
We are using QEMU 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.13).
Your command produces the following output:
virtio-blk-device.drive=drive
virtio-blk-device.logical_block_size=blocksize
virtio-blk-device.physical_block_size=blocksize
virtio-blk-device.min_io_size=uint16
virtio-blk-device.opt_io_size=uint32
virtio-blk-device.bootindex=int32
virtio-blk-device.discard_granularity=uint32
virtio-blk-device.cyls=uint32
virtio-blk-device.heads=uint32
virtio-blk-device.secs=uint32
virtio-blk-device.serial=str
virtio-blk-device.config-wce=on/off
virtio-blk-device.scsi=on/off
virtio-blk-device.x-iothread=iothread
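(For context, that per-property listing is the kind QEMU prints when asked
to enumerate a device's properties, e.g. with something like

  qemu-system-x86_64 -device virtio-blk-device,help

though the exact command suggested earlier in the thread is not quoted
here, so this is an assumption.)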
There is only one VM on this compute node, and there are a lot of free
resources, so the ballooning driver should not influence performance (Fuel 6.1).
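A quick way to confirm that on the compute node is to check the balloon
statistics for the domain (the instance name below is just an example):

  virsh dommemstat instance-00000001

If the reported "actual" value equals the instance's configured memory,
the balloon is not reclaiming anything and can be ruled out.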
Kind regards, Gleb Stepanov.
On Fri, Jul 10, 2015 at 11:10 PM, Konstantin Danilov
<kdanilov at mirantis.com> wrote:
>> ...spinning drive based on fio.
> spinning drive. All tests are done with fio; here is a link to the full test
> report - http://koder-ua.github.io/6.1GA/ephemeral_drive.html.
> Here is a link to the report for the same test, executed directly on the HDD
> used for ephemeral storage -
> http://koder-ua.github.io/6.1GA/compute_node_HDD.html.
>
>> We use following version of QEMU emulator version 2.0.0 (Debian
>> 2.0.0+dfsg-2ubuntu1.13).
> We are using QEMU 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.13).
>
>> We have not loaded the environment fully, so I guess there is no effect
>> from ballooning.
> There is only one VM on this compute node, and there are a lot of free
> resources, so the ballooning driver should not influence performance (Fuel 6.1).
>
>
> On Fri, Jul 10, 2015 at 11:00 PM, Gleb Stepanov <gstepanov at mirantis.com>
> wrote:
>>
>> Hello, Warren.
>>
>> Yes, we use a properly prefilled file on a single spinning drive, tested
>> with fio. We use QEMU emulator version 2.0.0 (Debian
>> 2.0.0+dfsg-2ubuntu1.13).
>> Your command produces the following output:
>>
>> virtio-blk-device.drive=drive
>> virtio-blk-device.logical_block_size=blocksize
>> virtio-blk-device.physical_block_size=blocksize
>> virtio-blk-device.min_io_size=uint16
>> virtio-blk-device.opt_io_size=uint32
>> virtio-blk-device.bootindex=int32
>> virtio-blk-device.discard_granularity=uint32
>> virtio-blk-device.cyls=uint32
>> virtio-blk-device.heads=uint32
>> virtio-blk-device.secs=uint32
>> virtio-blk-device.serial=str
>> virtio-blk-device.config-wce=on/off
>> virtio-blk-device.scsi=on/off
>> virtio-blk-device.x-iothread=iothread
>>
>> We have not loaded the environment fully, so I guess there is no effect
>> from ballooning.
>>
>> Kind regards, Gleb Stepanov.
>>
>> On Fri, Jul 10, 2015 at 6:01 PM, Gleb Stepanov <gstepanov at mirantis.com>
>> wrote:
>> > ---------- Forwarded message ----------
>> > From: Gleb Stepanov <gstepanov at mirantis.com>
>> > Date: Wed, Jul 8, 2015 at 1:58 PM
>> > Subject: [nova] disk I/O performance
>> > To: openstack-operators at lists.openstack.org,
>> > openstack-dev at lists.openstack.org
>> >
>> >
>> > Hello, all.
>> >
>> > We have measured disk I/O performance on OpenStack virtual machines
>> > with the aid of the fio tool. We tested performance on the root disk
>> > device; the test consists of write operations in 4 KB blocks to a 90 GB
>> > file (prefilled in advance).
>> > We use a qcow2 image for the VM, an ephemeral drive, and the virtio
>> > driver. The full configuration is in the attachment.
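>> > (To check which cache mode the ephemeral disk actually uses, the
>> > libvirt definition on the compute node can be inspected, e.g.:
>> >
>> >   virsh dumpxml instance-00000001 | grep -A 3 '<disk'
>> >
>> > The domain name is just an example; cache='none' versus
>> > cache='writethrough' makes a large difference for 4 KB write IOPS.)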
>> >
>> > Here are some results (write IOPS by thread count):
>> >
>> > threads:   1    5   10   15   20   40
>> > test 1:   72   58   49   60   94   72
>> > test 2:   71   60   54   88   52   52
>> > test 3:   71   49   58   51  128  130
>> > test 4:   65   49   60   56   52   63
>> >
>> > As shown, performance degrades as the number of threads increases, and
>> > the variance of the results at 40 threads is very large.
>> > Do you have any ideas that would explain this behaviour?
>> >
>> > Kind regards, Gleb Stepanov.
>
>
>
>
> --
> Kostiantyn Danilov aka koder.ua
> Principal software engineer, Mirantis
>
> skype:koder.ua
> http://koder-ua.blogspot.com/
> http://mirantis.com