[Openstack] CEPH Speed Limit

John van Ommen john.vanommen at gmail.com
Wed Jan 20 01:28:17 UTC 2016


I have a client who isn't happy with the performance of their storage.
The client is currently running a mix of SAS HDDs and SATA SSDs.

They wanted to remove the SAS HDDs and replace them with SSDs, so the
entire array would be SSDs.

I ran benchmarks on the current hardware and found that the sequential
performance of the HDD array was close to the performance of the SSD array.
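
For context, here's a minimal sketch of the kind of sequential-read test I
was running (Python 3 on Linux; /dev/sdb is a placeholder for the device
backing the array under test, and note this goes through the page cache, so
a tool like fio with direct I/O gives more trustworthy numbers):

    # Minimal sequential-read benchmark sketch (Python 3, Linux).
    # /dev/sdb is a placeholder; point it at the array under test.
    import os
    import time

    DEVICE = "/dev/sdb"         # placeholder device path
    CHUNK = 4 * 1024 * 1024     # 4 MiB per read
    TOTAL = 1024 * 1024 * 1024  # read 1 GiB in total

    fd = os.open(DEVICE, os.O_RDONLY)
    read_bytes = 0
    start = time.monotonic()
    while read_bytes < TOTAL:
        buf = os.read(fd, CHUNK)
        if not buf:
            break  # hit the end of the device
        read_bytes += len(buf)
    elapsed = time.monotonic() - start
    os.close(fd)

    print(f"{read_bytes / elapsed / 1e6:.0f} MB/s sequential read")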

To me, this indicates that we're reaching the limits of the controller the
drives are attached to (an LSI RAID controller that's built into the system
board).

I was about to recommend that they add a controller, when I realized that
we may be reaching the limits of the PCI-E bus itself.

Before I go and make a bad recommendation, I have a few questions:

1) Am I correct in assuming that the RAID controller, though physically on
the system board, still runs through the PCI-E bus, just as if it were
plugged into a slot?
2) Am I correct in assuming that the limit for PCI-E 2.0 is 500 MB/s per
lane, per direction? (https://en.wikipedia.org/wiki/PCI_Express)

And if points one and two are correct, does that confirm my hypothesis that
adding more SSDs won't improve throughput?
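
For what it's worth, here's the back-of-the-envelope arithmetic I'm working
from (the x8 lane count is an assumption on my part; I believe onboard LSI
controllers are commonly x8):

    # Theoretical PCI-E bandwidth per direction, from the encoding overhead:
    # PCI-E 2.0: 5 GT/s/lane, 8b/10b encoding   -> 500 MB/s per lane
    # PCI-E 3.0: 8 GT/s/lane, 128b/130b encoding -> ~985 MB/s per lane
    MB_PER_LANE = {
        2: 5e9 * (8 / 10) / 8 / 1e6,     # 500 MB/s
        3: 8e9 * (128 / 130) / 8 / 1e6,  # ~985 MB/s
    }

    lanes = 8  # assumption: onboard LSI controllers are typically x8
    for gen, per_lane in MB_PER_LANE.items():
        print(f"PCI-E {gen}.0 x{lanes}: {per_lane * lanes:,.0f} MB/s per direction")
    # PCI-E 2.0 x8: 4,000 MB/s per direction
    # PCI-E 3.0 x8: 7,877 MB/s per direction

If that's right, then even on PCI-E 2.0 an x8 link would top out around
4 GB/s per direction, since the 500 MB/s figure on the Wikipedia page is
per lane.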

Right now my benchmarks are showing that sequential reads are hitting about
600 MB/s. (I haven't confirmed whether their server is PCI-E 2.0 or 3.0.)
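
If it helps, this is how I'd plan to confirm the link generation and width
from sysfs on Linux (the PCI address below is a placeholder; lspci shows
the controller's real address):

    # Read the negotiated PCI-E link speed/width from sysfs (Linux).
    # "0000:03:00.0" is a placeholder; find the real address with lspci.
    from pathlib import Path

    PCI_ADDR = "0000:03:00.0"  # placeholder PCI address
    dev = Path("/sys/bus/pci/devices") / PCI_ADDR

    for attr in ("current_link_speed", "max_link_speed",
                 "current_link_width", "max_link_width"):
        node = dev / attr
        if node.exists():
            print(f"{attr}: {node.read_text().strip()}")
    # "5.0 GT/s" means PCI-E 2.0; "8.0 GT/s" means PCI-E 3.0.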

John