[Openstack] Ceph performance as volume & image store?

Josh Durgin josh.durgin at inktank.com
Tue Jul 24 23:08:58 UTC 2012


On 07/23/2012 08:24 PM, Jonathan Proulx wrote:
> Hi All,
>
> I've been looking at Ceph as a storage back end.  I'm running a
> research cluster, and while people need to use it and want it 24x7,
> I don't need as many nines as a commercial customer-facing service
> does, so I think I'm OK with the current maturity level as far as
> that goes, but I have less of a sense of how far along performance is.
>
> My OpenStack deployment is 768 cores across 64 physical hosts, which
> I'd like to double in the next 12 months.  What it's used for varies
> widely and is hard to classify: some uses are hundreds of tiny nodes,
> while others look to monopolize the biggest physical system they can
> get.  I think most really heavy IO currently goes to our NAS servers
> rather than through nova-volumes, but that could change.
>
> Is anyone using Ceph at that scale (or preferably larger)?  Does it
> keep up if you keep throwing hardware at it?  My proof-of-concept
> Ceph cluster on crappy salvaged hardware has proved the concept to
> me, but has (unsurprisingly) crappy salvaged performance.  I'm trying
> to get a sense of what performance to expect on decent hardware
> before I decide whether to buy decent hardware for it...
>
> Thanks,
> -Jon

Hi Jon,

You might be interested in Jim Schutt's numbers on better hardware:

http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/7487
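
If you want to sanity-check raw RADOS throughput on your own candidate
hardware before buying anything, the built-in benchmark is the easiest
place to start (e.g. "rados bench -p <pool> 60 write", with -t to raise
the number of concurrent ops). Just as a rough sketch, a similar smoke
test through the python-rados bindings might look something like the
following -- the pool name, object size, and object count are
placeholders, and it's single-threaded, so it will understate what a
parallel workload would get:

    #!/usr/bin/env python
    # Crude sequential-write smoke test via librados. Assumes a
    # reachable cluster, /etc/ceph/ceph.conf, and an existing pool.
    import time
    import rados

    POOL = 'rbd'                  # placeholder; use a throwaway pool
    OBJ_SIZE = 4 * 1024 * 1024    # 4 MB, the default RBD object size
    COUNT = 64                    # 256 MB total; bump for a longer run

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx(POOL)

    payload = b'\0' * OBJ_SIZE
    start = time.time()
    for i in range(COUNT):
        ioctx.write_full('bench-obj-%d' % i, payload)
    elapsed = time.time() - start
    print('%.1f MB/s sequential write' % (COUNT * OBJ_SIZE / elapsed / 1e6))

    # remove the test objects so they don't linger in the pool
    for i in range(COUNT):
        ioctx.remove_object('bench-obj-%d' % i)

    ioctx.close()
    cluster.shutdown()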

You'll probably get more responses on the Ceph mailing list, though.
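
For what it's worth, once a cluster is up, wiring it in as both the
volume and image store is only a few lines of config. Roughly, as a
sketch (the pool and user names here are just examples, and with cephx
auth enabled there's a bit more to do -- client keys and a libvirt
secret):

    # nova.conf (nova-volume)
    volume_driver=nova.volume.driver.RBDDriver
    rbd_pool=volumes

    # glance-api.conf
    default_store = rbd
    rbd_store_user = glance
    rbd_store_pool = images
    rbd_store_ceph_conf = /etc/ceph/ceph.conf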

Josh
