[Openstack-operators] Cinder with Ceph which option is good ?

Tim Bell Tim.Bell at cern.ch
Fri Mar 21 06:27:10 UTC 2014

This is exactly how CERN uses Ceph: block storage with Cinder and Glance (using RBD). There is no need for an additional CephFS layer for these functions. We have a 3 PB Ceph pool (though not all of it is used for OpenStack).

Some details at http://indico.cern.ch/event/300076/contribution/0/material/slides/0.pdf and  http://www.slideshare.net/Inktank_Ceph/cern-ceph-day-london-2013


From: Abel Lopez [mailto:alopgeek at gmail.com]
Sent: 21 March 2014 06:52
To: gustavo panizzo <gfa>
Cc: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Cinder with Ceph which option is good ?

While you're at it, you can also get a benefit from using Ceph for Glance.
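For reference, pointing Glance at an RBD store in that era looked roughly like the fragment below in glance-api.conf. The pool and CephX user names are illustrative examples, not values from this thread; a sketch, not a definitive config:

```ini
# glance-api.conf — illustrative RBD store settings (Glance, ca. 2014).
# Pool and user names are example values; adjust for your site.
[DEFAULT]
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
# Expose image locations so RBD-backed Cinder/Nova can do copy-on-write clones
show_image_direct_url = True
```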

On Thursday, March 20, 2014, gustavo panizzo <gfa at zumbi.com.ar> wrote:
Use Ceph as the backend for Cinder, it works fine :) Docs are on the Inktank website.
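Wiring Cinder to RBD amounts to a few driver options in cinder.conf. The fragment below is a sketch of what that looked like around this time; pool name, CephX user, and the libvirt secret UUID are example values, not ones from this thread:

```ini
# cinder.conf — illustrative RBD backend settings (Cinder, ca. 2014).
# Pool, user, and UUID are example values; adjust for your deployment.
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# UUID of the libvirt secret holding the CephX key on compute nodes
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
glance_api_version = 2
```

The compute nodes also need the matching CephX keyring and a libvirt secret with that UUID so Nova can attach the volumes.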

I use CephFS for other stuff but I found it slow (FUSE is to blame).
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

sent from mobile
On March 20, 2014 9:24:08 AM GMT-03:00, Zeeshan Ali Shah <zashah at kth.se> wrote:

We are running Ceph for S3 and Swift.

For Cinder I was also thinking of using Ceph. Now there are a few options:

1. Should we drop Cinder entirely and use Ceph RBD directly? Does Ceph RBD integrate well with OpenStack Keystone and Horizon?
2. Can we somehow configure the Cinder backend to use RBD?
3. What if we use CephFS as the backend for Cinder, any performance issues?

Any suggestions?


Zeeshan Ali Shah
System Administrator - PDC HPC
PhD researcher (IT security)
Kungliga Tekniska Högskolan
+46 8 790 9115


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
