[Openstack-operators] ceph vs gluster for block
Alex Hübner
alex at hubner.net.br
Thu Feb 16 19:27:11 UTC 2017
Gluster for block storage is definitely not a good choice, especially for
VMs and OpenStack in general. There are also rumors all over the place
that Red Hat will start to "phase out" Gluster in favor of CephFS, the "last
frontier" of the so-called "Unicorn Storage" (Ceph does everything). But
when it comes to block, there's no better choice than Ceph for every single
scenario I can think of.
Regards,
Hubner
On Thu, Feb 16, 2017 at 4:39 PM, Mike Smith <mismith at overstock.com> wrote:
> Same experience here. Gluster ‘failover’ time was an issue for us as well
> (rebooting one of the Gluster nodes caused unacceptable locking/timeouts for
> a period of time). Ceph has worked well for us for nova ephemeral storage,
> Cinder volumes, and Glance. Just make sure you stay well ahead of
> running low on disk space! You never want to run low on a Ceph cluster,
> because it will block writes until you add more disks/OSDs.
>
> Mike Smith
> Lead Cloud Systems Architect
> Overstock.com <http://overstock.com>
>
>
>
> On Feb 16, 2017, at 11:30 AM, Jonathan Abdiel Gonzalez Valdebenito <
> jonathan.abdiel at gmail.com> wrote:
>
> Hi Vahric!
>
> We tested GlusterFS a few years ago and the latency was high, IOPS were
> poor, and every node had high CPU usage. That was a few years ago, though.
>
> We ended up with a Ceph cluster after a lot of testing with fio, so my
> advice is to use Ceph without a doubt.
>
> Regards,
>
> On Thu, Feb 16, 2017 at 1:32 PM Vahric Muhtaryan <vahric at doruk.net.tr>
> wrote:
>
>> Hello all,
>>
>> We have been testing Ceph for a long time, and today we also wanted to
>> test GlusterFS.
>>
>> Interestingly, with a single client we could not get the IOPS that we get
>> from the Ceph cluster (from Ceph we get a max of 35K IOPS for 100% random
>> write, while Gluster gave us 15-17K).
>> But also interesting: when we add a second client to the test, it gets the
>> same IOPS as the first client, meaning overall performance is doubled. We
>> couldn't test with more clients. Another interesting thing is that GlusterFS
>> does not use/eat CPU like Ceph does; only a few percent of CPU is used.
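The numbers above presumably came from a fio job along these lines. This is a sketch, not Vahric's actual job file; the device path, queue depth, and job count are assumptions to adjust for your setup:

```ini
; fio job file: 100% random-write IOPS test (assumed parameters)
[global]
ioengine=libaio
direct=1
bs=4k
rw=randwrite
iodepth=32
numjobs=4
runtime=60
time_based=1
group_reporting=1

[randwrite-test]
; point at the Ceph RBD or Gluster-backed block device under test
; (example path, not from the original post)
filename=/dev/vdb
```

Run with `fio randwrite.fio` and compare the aggregate write IOPS line between backends, keeping the job file identical across runs.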
>>
>> I would like to ask: with OpenStack, does anybody use GlusterFS for
>> instance workloads?
>> Has anybody used both of them in production and can compare, or share
>> their experience?
>>
>> Regards
>> Vahric Muhtaryan
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>