[Openstack-operators] ceph vs gluster for block

Mike Smith mismith at overstock.com
Thu Feb 16 18:39:41 UTC 2017


Same experience here.  Gluster ‘failover’ time was an issue for us as well (rebooting one of the Gluster nodes caused unacceptable locking/timeouts for a period of time).  Ceph has worked well for us for nova ephemeral and Cinder volumes as well as Glance.  Just make sure you stay well ahead of running low on disk space!  You never want to run low on a Ceph cluster, because it will block writes until you add more disks/OSDs.
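
A minimal sketch of how one might watch for this, assuming the "ceph" CLI is on PATH and the pre-Luminous "ceph df --format json" field layout (stats -> total_bytes / total_used_bytes); treat the threshold and field names as assumptions, not gospel:

# Warn before the cluster approaches Ceph's default 0.95 full ratio,
# at which point writes block until capacity is added.
import json
import subprocess

WARN_RATIO = 0.80  # alert long before the 0.95 default full ratio

def cluster_usage_ratio():
    # "ceph df --format json" reports cluster-wide byte counters.
    raw = subprocess.check_output(["ceph", "df", "--format", "json"])
    stats = json.loads(raw)["stats"]
    return stats["total_used_bytes"] / float(stats["total_bytes"])

usage = cluster_usage_ratio()
print("cluster is %.1f%% full" % (usage * 100))
if usage >= WARN_RATIO:
    print("WARNING: add disks/OSDs now, before Ceph blocks writes")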

Mike Smith
Lead Cloud Systems Architect
Overstock.com



On Feb 16, 2017, at 11:30 AM, Jonathan Abdiel Gonzalez Valdebenito <jonathan.abdiel at gmail.com> wrote:

Hi Vahric!

We tested GlusterFS a few years ago: latency was high, IOPS were poor, and every node ran at high CPU usage. Granted, that was a few years ago.

After a lot of fio tests we ended up with a Ceph cluster, so my advice is: use Ceph, without a doubt.

Regards,

On Thu, Feb 16, 2017 at 1:32 PM Vahric Muhtaryan <vahric at doruk.net.tr> wrote:
Hello All,

We have been testing Ceph for a long time, and today we also wanted to test GlusterFS.

The interesting thing is that with a single client we cannot get from Gluster the IOPS we get from the Ceph cluster (from Ceph we get a max of 35K IOPS for 100% random write; Gluster gave us 15-17K).
But interestingly, when we add a second client to the test it gets the same IOPS as the first client, meaning overall performance is doubled. We couldn't test with more clients. Also interesting: GlusterFS does not eat CPU the way Ceph does; only a few percent of CPU is used.
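
For reference, a sketch of the kind of 100% random write fio run behind these numbers; the block size, queue depth, job count, and target path are illustrative assumptions, not our exact parameters:

# Run a 100% random-write fio benchmark and report aggregate IOPS.
import json
import subprocess

TARGET = "/mnt/test/fio.dat"  # hypothetical file on the Ceph/Gluster mount

cmd = [
    "fio",
    "--name=randwrite",
    "--rw=randwrite",        # 100% random writes
    "--bs=4k",
    "--ioengine=libaio",
    "--direct=1",            # bypass the page cache
    "--iodepth=32",
    "--numjobs=4",
    "--size=4G",
    "--runtime=60",
    "--time_based",
    "--group_reporting",     # aggregate all jobs into one result
    "--filename=%s" % TARGET,
    "--output-format=json",
]
result = json.loads(subprocess.check_output(cmd))
print("write IOPS: %.0f" % result["jobs"][0]["write"]["iops"])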

I would like to ask: with OpenStack, does anybody use GlusterFS for instance workloads?
Has anybody used both of them in production and can compare, or share experiences?

Regards
Vahric Muhtaryan
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
