[Openstack] Ceph performance as volume & image store?

Jonathan Proulx jon at jonproulx.com
Tue Jul 24 03:24:51 UTC 2012


Hi All,

I've been looking at Ceph as a storage back end.  I'm running a
research cluster: people need it and want it 24x7, but I don't need
as many nines as a commercial customer-facing service does, so I
think I'm OK with Ceph's current maturity level on that front.  What
I have less of a sense of is how far along its performance is.

My OpenStack deployment is 768 cores across 64 physical hosts, which
I'd like to double in the next 12 months.  The workload varies widely
and is hard to classify: some uses are hundreds of tiny nodes, while
others try to monopolize the biggest physical system they can get.  I
think most of the really heavy IO currently goes to our NAS servers
rather than through nova-volumes, but that could change.
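(If volume traffic did move to Ceph, it would go through the
nova-volume RBD driver; as I understand it the wiring in nova.conf is
roughly the sketch below, where the pool name "volumes" is just a
placeholder, plus the usual ceph.conf and keyring on each host --
happy to be corrected if the setup has moved on:

    # nova.conf on the nova-volume host -- point the driver at RBD
    volume_driver=nova.volume.driver.RBDDriver
    rbd_pool=volumes
)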

Is anyone using Ceph at that scale (or preferably larger)?  Does it
keep up if you keep throwing hardware at it?  My proof-of-concept
Ceph cluster on crappy salvaged hardware has proved the concept to
me, but it has (unsurprisingly) crappy salvaged performance.  I'm
trying to get a sense of what performance to expect given decent
hardware before I decide whether to buy decent hardware for it...

Thanks,
-Jon



