[Openstack] cinder slow (nova issues?)
Dmitry Makovey
dmitry at athabascau.ca
Tue Dec 23 16:31:18 UTC 2014
On 12/23/2014 01:38 AM, Robert van Leeuwen wrote:
>> using RDO IceHouse packages I've set up an infrastructure atop of
>> RHEL6.6 and am seeing a very unpleasant performance for the
>> storage.
>>
>> cinder-volume # dd if=/dev/zero of=baloon bs=1048576 count=200
>>
>> Results are just miserable. Going from 1.2G/s down to 20M/s seems
>> to be a big degradation. What should I look for? I have also tried
>> running the same
>
> 2 suggestions to check:
>
> * The dd command you run will just push to the filesystem write cache
> and never actually hit the disk before the command finishes. This
> explains the 1.2G/s. Use a command that actually syncs to disk to do
> performance tests. You can use oflag=dsync with dd or, even better,
> use a tool like fio or filebench. I would also increase the file size
> to make sure you are not just hitting a RAID controller cache.
Thanks for the suggestions, Robert. Here are updated results (yes, I
know I should use a more sophisticated tool, but at the moment I'm
improvising/ballparking):
instance $ dd if=/dev/zero of=baloon bs=1048576 count=200 oflag=dsync
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 10.2613 s, 20.4 MB/s
nova-compute # dd if=/dev/zero of=baloon bs=1048576 count=200 oflag=dsync
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 3.91336 s, 53.6 MB/s
nova-compute # dd if=/dev/zero of=baloon bs=1048576 count=800 oflag=dsync
800+0 records in
800+0 records out
838860800 bytes (839 MB) copied, 14.9614 s, 56.1 MB/s
cinder-node # dd if=/dev/zero of=baloon bs=1048576 count=200 oflag=dsync
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 3.17956 s, 66.0 MB/s
libvirt # dd if=/dev/zero of=baloon bs=1048576 count=200 oflag=dsync
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 13.344 s, 15.7 MB/s
dsync did equalize things a bit... and it looks like my KVM instance
that sits on equally fast storage ("libvirt" above) also dropped in
performance with dsync. There's still a 2x drop in performance going
from bare metal to virtualized.
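When I get a chance I'll redo this properly with fio, as Robert
suggests; something along these lines should bypass the page cache and
use a file bigger than any controller cache (the size and iodepth
values are just my guess at a starting point - I haven't run this yet):

instance $ fio --name=seqwrite --filename=/home/cloud-user/fio.test \
               --size=4G --bs=1M --rw=write --ioengine=libaio \
               --direct=1 --iodepth=16 --group_reporting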
...and then I ran postmark:
pm>set location /home/cloud-user/pm
pm>set number 500
pm>set transactions 500
pm>set buffering false
pm>set size 500 1000000
pm>run
cinder-volume:
124.01 megabytes read (62.01 megabytes per second)
400.97 megabytes written (200.49 megabytes per second)
nova-compute:
124.01 megabytes read (20.67 megabytes per second)
400.97 megabytes written (66.83 megabytes per second)
instance:
124.01 megabytes read (1.19 megabytes per second)
400.97 megabytes written (3.86 megabytes per second)
libvirt:
124.01 megabytes read (41.34 megabytes per second)
400.97 megabytes written (133.66 megabytes per second)
All of the above are "averages" over several consecutive runs.
So it sounds like dd was "too generous" in reporting only a 50% drop;
the reality is that the drop is quite steep. After playing with the
parameters, write performance for the instance stays the same and read
performance can vary wildly.
In other words - the question stands: why such a huge drop going virtual?
> * Have a look at the storage node when you do the benchmark. Run atop
> or something similar to see if the storage is the actual bottleneck
> or if it is something on the client.
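Good point - on the next round I'll keep something like this running on
the cinder node for the duration of the test and watch %util and await
on the backing device (iostat comes from sysstat; atop would show much
the same thing):

cinder-node # iostat -xm 2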
That said, from the numbers above it doesn't look like the storage node
is the culprit - the 50% drop happens on the compute node going from
bare metal to virtual. So I'm inclined to think it comes down to virtio
tuning (if that is even possible).
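The first thing I'll look at is how nova/libvirt attach the disk - the
cache= and io= attributes on the virtio device. I still need to check
what nova actually generates on IceHouse, but what I have in mind is
something like this in the instance's libvirt XML (the source path
below is just illustrative):

<disk type='file' device='disk'>
  <!-- cache='none' + io='native' bypass the host page cache for guest I/O -->
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/nova/instances/<instance-uuid>/disk'/>
  <target dev='vda' bus='virtio'/>
</disk>

If I recall correctly, nova exposes this via the disk_cachemodes option
in the [libvirt] section of nova.conf (e.g.
disk_cachemodes = "file=none,block=none").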
--
Dmitry Makovey
Web Systems Administrator
Athabasca University
(780) 675-6245
---
Confidence is what you have before you understand the problem
Woody Allen
When in trouble when in doubt run in circles scream and shout
http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330