[Openstack] CEPH Speed Limit

Van Leeuwen, Robert rovanleeuwen at ebay.com
Wed Jan 20 07:13:27 UTC 2016


Hi,

I think this question would be better suited to the ceph mailing list, but I will have a go at it.

> I have a client who isn't happy with the performance of their storage.
> The client is currently running a mix of SAS HDDs and SATA SSDs.

What part are they not happy about: throughput or IOPS?
Unless your workload consists almost entirely of very large writes, throughput alone does not say much about ceph performance.
In general it is easy to get high throughput, but IOPS are a whole different story.
For example, getting high IOPS on a single instance (e.g. one running a mysql database) backed by a ceph volume is problematic.
However, ceph is pretty good at serving lots of low-load (100-200 IOPS) instances.
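
If you want to see that difference on the current hardware, something like the sketch below illustrates why a high MB/s number can coexist with disappointing IOPS. It is only a rough illustration, not a proper benchmark: the file path is a placeholder, and without O_DIRECT you need a pre-created test file much larger than RAM or the page cache will flatter the numbers.

import os, random, time

PATH = "/mnt/ceph-volume/testfile"   # placeholder: a big pre-created file on the rbd-backed volume
DURATION = 10                        # seconds per test

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size

# Sequential reads in 4 MB chunks: this is what "600 MB/s" style numbers measure.
start, pos, read_bytes = time.time(), 0, 0
while time.time() - start < DURATION:
    chunk = os.pread(fd, 4 * 1024 * 1024, pos)
    read_bytes += len(chunk)
    pos = (pos + len(chunk)) % size
print("sequential: %.0f MB/s" % (read_bytes / (1024.0 * 1024.0) / DURATION))

# Random 4k reads: this is much closer to what a database actually does.
start, ops = time.time(), 0
while time.time() - start < DURATION:
    os.pread(fd, 4096, random.randrange(0, size - 4096))
    ops += 1
print("random 4k:  %.0f IOPS" % (ops / float(DURATION)))

os.close(fd)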

> They wanted to remove the SAS HDDs and replace them with SSDs, so the entire array would be SSDs.
> I was running benchmarks on the current hardware and I found that the performance of the HDD array was close to the performance of the SSD array.

The current ceph code does not make great use of SSDs yet.
There are some improvements being made (e.g. using tcmalloc), but afaik they have not landed in mainline ceph yet.

Also, do not forget that a write goes through quite a few network round-trips before it can be acked.
There is a nice thread about that here:
http://www.spinics.net/lists/ceph-users/msg24249.html
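
As a back-of-the-envelope illustration of what those round-trips cost (all numbers below are assumptions for illustration, not measurements of your cluster): a synchronous writer at queue depth 1 is capped by the per-write latency, no matter how fast the individual disks are.

# Illustrative figures only; replace them with real measurements from the cluster.
network_ms = 4 * 0.25          # client -> primary OSD -> replicas, plus the acks back
journal_ms = 1.0               # sync write to the journal device
latency_ms = network_ms + journal_ms
qd1_iops = 1000.0 / latency_ms
print("~%.1f ms per acked write -> ~%d IOPS for a single synchronous writer"
      % (latency_ms, qd1_iops))

Higher queue depths and more parallel clients help, but a single latency-sensitive application will not see that benefit.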


Something else to think about: not all SSDs are created equal, and ceph is very picky about its journal devices.
There are some benchmarks at the link below, but in general I would stick with the Intel DC S3700 series because of its good performance and endurance:
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
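
The blog post above tests journal candidates with dd and oflag=direct,dsync. If you prefer to script it, here is a rough Python equivalent of the same idea (O_DSYNC only, the device path is a placeholder, and it will happily overwrite whatever is on that device, so point it at a scratch SSD):

import os, time

DEV = "/dev/sdX"            # placeholder: a scratch SSD or partition whose data may be destroyed
BLOCK = b"\0" * 4096
SECONDS = 10

# O_DSYNC forces every write to wait for the device, similar to what the ceph
# journal does; without it you would mostly be benchmarking the page cache.
fd = os.open(DEV, os.O_WRONLY | os.O_DSYNC)

start, ops = time.time(), 0
while time.time() - start < SECONDS:
    os.write(fd, BLOCK)
    ops += 1
os.close(fd)

print("%.0f sync 4k writes/s" % (ops / SECONDS))

Consumer SSDs often collapse to a few hundred sync writes/s on this kind of test, while the DC-class drives stay far higher, which is exactly the difference the journal feels.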

> Right now my benchmarks are showing that sequential reads are hitting about 600Mb/s. (I haven't confirmed if their server is PCI-E 2.0 or 3.0)

As mentioned before: does the client really need MB/s, or do they need IOPS?
Check the workloads before investing any money.
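
One cheap way to check is to sample /proc/diskstats on the hosts (or inside the guests) while the real workload is running; the device name below is a placeholder:

import time

DEV = "vda"            # placeholder: the block device of the guest/volume to observe
INTERVAL = 10          # seconds between samples

def snapshot(dev):
    # /proc/diskstats fields per line: major, minor, name, reads, reads_merged,
    # sectors_read, ms_reading, writes, writes_merged, sectors_written, ms_writing, ...
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                reads, sectors_r = int(fields[3]), int(fields[5])
                writes, sectors_w = int(fields[7]), int(fields[9])
                return reads, sectors_r, writes, sectors_w
    raise RuntimeError("device %s not found" % dev)

r1, sr1, w1, sw1 = snapshot(DEV)
time.sleep(INTERVAL)
r2, sr2, w2, sw2 = snapshot(DEV)

iops = (r2 - r1 + w2 - w1) / float(INTERVAL)
mb_s = (sr2 - sr1 + sw2 - sw1) * 512 / float(INTERVAL) / (1024 * 1024)
print("workload: %.0f IOPS, %.1f MB/s" % (iops, mb_s))

If the real workload turns out to be small random I/O, the sequential MB/s number tells you very little about whether new SSDs will help.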

Cheers,
Robert van Leeuwen
