[Openstack-operators] OpenStack Storage Backend: Sheepdog, Ceph, or GlusterFS

Hossein Zabolzadeh zabolzadeh at gmail.com
Mon Nov 10 13:14:10 UTC 2014


Thanks very much, Jonathan, for your helpful answer.
I found that Ceph matches my business requirements most closely. I had
planned to use GlusterFS to implement the "on compute node storage -
shared file system" model for my shared Nova compute storage, before
your answer pointed me to CephFS. For now I want to get some hands-on
experience with CephFS, and I will switch my plan to CephFS for
ephemeral Nova storage if the results are acceptable.
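The rough idea I will try first (untested on my side; the monitor host
and keyring path below are just placeholders) is to mount CephFS as the
instances directory on every compute node, along the lines of:

    # on each compute node; "mon1" is a placeholder Ceph monitor host
    mount -t ceph mon1:6789:/ /var/lib/nova/instances \
        -o name=admin,secretfile=/etc/ceph/admin.secret
    chown nova:nova /var/lib/nova/instances
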
Regarding GlusterFS, I found a blog post discussing Gluster's
limitations and want to share it with others:
https://shellycloud.com/blog/2013/09/why-glusterfs-should-not-be-implemented-with-openstack

Thanks again.

On 11/7/14, Jonathan Proulx <jon at jonproulx.com> wrote:
> I see Ceph as the most unified storage solution for OpenStack. We're
> starting to use it for Cinder, and are currently using it for Glance
> and Nova. We haven't used Swift in the 2.5 years we've been running,
> but since we have recently deployed Ceph for these other uses, we do
> plan on rolling out access to the object store, probably through a
> Swift interface, though that's currently TBD. We do have a current
> use case, so it is near the top of our queue.
>
> Using Ceph as the storage backend for ephemeral nova instances is
> something no one else seems to have mentioned, but we find it a huge
> help. If you have a RAW image in glance's ceph rbd backend and use
> the ceph rbd backend for your nova instances, the instances will be
> copy-on-write clones of the glance image. This makes for very fast
> instance startup and efficient use of storage. Regardless of whether
> you have the CoW stuff plumbed together, the rbd storage does permit
> easy live migration (even if /var/lib/nova, which holds the
> libvirt.xml definition of the instance, is not on shared storage).
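>
> For reference, the relevant settings are roughly the following (only
> a sketch; the pool names and the libvirt secret UUID are placeholders,
> and the option names are the Icehouse-era ones):
>
>     # nova.conf on each compute node
>     [libvirt]
>     images_type = rbd
>     images_rbd_pool = vms
>     images_rbd_ceph_conf = /etc/ceph/ceph.conf
>     rbd_user = cinder
>     rbd_secret_uuid = <libvirt secret uuid>
>
>     # glance-api.conf, so images live in rbd and can be cloned
>     [DEFAULT]
>     default_store = rbd
>     rbd_store_pool = images
>     rbd_store_user = glance
>
> As far as I know, the CoW clone only happens when the glance image is
> RAW; other formats get downloaded and converted to a full copy instead.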
>
> As a caveat, this copy-on-write image creation requires a patched
> version of nova in Icehouse (which I've been running in production
> for a couple of months). The patches were meant to land in the
> released version of Juno, but I haven't yet personally verified that
> they have.
>
> On the block storage side, Ceph passed LVM (the default driver) in
> the most recent user survey as the most used Cinder driver (see
> http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
> ). This means that part is well exercised, and if you do run into any
> issues there should be plenty of people who have hit them before.
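>
> If it helps, the cinder side is just the RBD driver in cinder.conf,
> roughly like this (pool, user and secret UUID are placeholders for
> whatever your ceph cluster uses):
>
>     volume_driver = cinder.volume.drivers.rbd.RBDDriver
>     rbd_pool = volumes
>     rbd_user = cinder
>     rbd_secret_uuid = <libvirt secret uuid>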
>
> While I'm happy thus far with Ceph, I encourage you to consider your
> needs for each component and be sure a single choice will fit all of
> them. This is especially true if you have geographic distribution:
> Ceph's synchronous replication can be an issue in that case, I hear
> (it is not my case, so I don't know firsthand).
>
> -Jon
>
> On Fri, Nov 7, 2014 at 10:50 AM, Joe Topjian <joe at topjian.net> wrote:
>> Gluster indeed provides both block and object storage.
>>
>> We use the Gluster Cinder driver in production and, aside from some
>> initial hiccups, it's been running great.
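>>
>> For reference, the setup is roughly just the GlusterFS driver plus a
>> shares file, something like the below (the hostname and volume name
>> are placeholders, not our actual config):
>>
>>     # cinder.conf
>>     volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
>>     glusterfs_shares_config = /etc/cinder/glusterfs_shares
>>
>>     # /etc/cinder/glusterfs_shares
>>     gluster1.example.com:/cinder-volumes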
>>
>> I have no experience with Gluster's object storage support, though, so I
>> can't give an opinion about it -- just confirmation that it exists. :)
>>
>> My personal opinion is to just use Swift for object storage. Sure you
>> won't
>> have the whole unified storage thing going on, but you'll get Swift. :)
>>
>> On Thu, Nov 6, 2014 at 5:23 PM, Jesse Pretorius
>> <jesse.pretorius at gmail.com>
>> wrote:
>>>
>>> On 6 November 2014 13:01, Hossein Zabolzadeh <zabolzadeh at gmail.com>
>>> wrote:
>>>>
>>>> Thanks for your opinion. But I am looking for the real differences
>>>> between them...
>>>> - Which one has better support in OpenStack?
>>>> - Which one provides a better unified storage backend for all the
>>>> OpenStack storage services (cinder, swift and glance)?
>>>
>>>
>>> I don't think that Gluster or Sheepdog provide a storage back-end
>>> capable of serving both block storage (cinder) and object storage
>>> (swift). Only Ceph provides a properly unified back-end for both.
>>> Ceph has also been supported in cinder for over two years - it's
>>> very mature. The RADOS Gateway (the object storage / Swift
>>> interface) is much more recent, but I have yet to see problems with
>>> it - as long as you acknowledge that it is not Swift.
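>>>
>>> As a quick sanity check, the standard swift client works against
>>> the radosgw Swift interface, e.g. (the endpoint, user and key below
>>> are placeholders for a radosgw user with a Swift subuser):
>>>
>>>     swift -A http://radosgw.example.com/auth/1.0 \
>>>         -U testuser:swift -K '<swift secret key>' stat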
>>>