[openstack-dev] [nova][cinder] Limits on volume read throughput?
Preston L. Bannister
preston at bannister.us
Wed Mar 2 00:29:20 UTC 2016
I need to benchmark the volume-read performance of an application running
in an instance, assuming extremely fast storage.
To simulate fast storage, I have an all-in-one (AIO) install of OpenStack
with local flash disks. The Cinder LVM volumes are striped across three
flash drives (all I have in the present setup).
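(For anyone wanting to reproduce: the striping was along these lines; the
device and volume names below are placeholders, not my exact commands.)

    # assumption for illustration: the VG spans the three flash drives
    pvcreate /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
    vgcreate cinder-volumes /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
    # stripe a test LV across all three PVs (3 stripes, 256KB stripe size)
    lvcreate -i 3 -I 256 -L 100G -n perf-test cinder-volumes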
Since I am only interested in sequential-read performance, the "dd" utility
is sufficient as a measure.
Running "dd" in the physical host against the Cinder-allocated volumes nets
~1.2GB/s (roughly in line with expectations for the striped flash volume).
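Roughly, the host-side run was of this form (the volume path is a
placeholder; reading with direct I/O versus dropping the page cache first
shifts the numbers a bit):

    # sequential read of the raw Cinder LV on the host, bypassing the page cache
    dd if=/dev/cinder-volumes/volume-XXXX of=/dev/null bs=1M count=16384 iflag=direct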
Running "dd" in an instance against the same volume (now attached to the
instance) got ~300MB/s, which was pathetic. (I was expecting 80-90% of the
raw host volume numbers, or better.) Upping read-ahead in the instance via
"hdparm" boosted throughput to ~450MB/s. Much better, but still sad.
In the second measure the volume data passes through iSCSI and then the
QEMU hypervisor. I expected to lose some performance, but not more than
half!
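If it helps anyone compare setups: the QEMU disk settings in play can be
read out of the libvirt domain XML on the compute host, e.g. (instance name
is a placeholder; the interesting bits are the cache= and io= attributes on
the driver element of the volume's disk):

    # dump the libvirt definition and look at the volume's <disk> element
    virsh dumpxml instance-00000001 | grep -A 6 '<disk'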
Note that as this is an all-in-one OpenStack node, iSCSI is strictly local
and not crossing a network. (I did not want network latency or throughput
to be a concern with this first measure.)
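(The loopback attachment is easy to confirm with something like:)

    # list active iSCSI sessions; the target portal should be the node's own address
    iscsiadm -m session -P 1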
I do not see any prior mention of performance issues of this sort on the web
or in the mailing list, though possibly I missed something.
What sort of numbers are you seeing out of high performance storage?
Is the huge drop in read rate within an instance something others have seen?
Is the default iSCSI configuration used by Nova and Cinder optimal?