<div dir="ltr">Gluster for block storage is definitely not a good choice, specially for VMs and OpenStack in general. Also, there are rumors all over the place that RedHat will start to "phase out" Gluster in favor of CephFS, the "last frontier" of the so-called "Unicorn Storage" (Ceph does everything). But when it comes to block, there's no better choice than Ceph for every-single scenario I could think off.<div><br></div><div>[]'s</div><div>Hubner</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 16, 2017 at 4:39 PM, Mike Smith <span dir="ltr"><<a href="mailto:mismith@overstock.com" target="_blank">mismith@overstock.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word">
Same experience here. Gluster 'failover' time was an issue for us as well (rebooting one of the Gluster nodes caused unacceptable locking/timeouts for a period of time). Ceph has worked well for us for nova ephemeral disks and Cinder volumes, as well as Glance images.
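If it helps anyone wiring this up, a minimal cinder.conf sketch for an RBD backend looks something like this (backend name, pool, and user are illustrative; adjust to your deployment):

    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf
    # UUID of the libvirt secret that holds the cinder user's cephx key
    rbd_secret_uuid = <your-libvirt-secret-uuid>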
Just make sure you stay well ahead of running low on disk space! You never want to run low on a Ceph cluster, because once it hits its full ratio it will block writes until you add more disks/OSDs.
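A few commands are worth watching for exactly that (the 0.85/0.95 nearfull/full ratios below are the stock defaults at the time of writing; tune them to your comfort level):

    # overall and per-pool capacity
    ceph df
    # per-OSD utilization, to catch unbalanced OSDs before they fill up
    ceph osd df
    # nearfull/full warnings show up here
    ceph health detail

and in ceph.conf:

    mon osd nearfull ratio = 0.85   # HEALTH_WARN past this point
    mon osd full ratio = 0.95       # writes are blocked past this point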
Mike Smith
Lead Cloud Systems Architect
Overstock.com
On Feb 16, 2017, at 11:30 AM, Jonathan Abdiel Gonzalez Valdebenito <jonathan.abdiel@gmail.com> wrote:
<div dir="ltr">Hi Vahric!
<div><br>
</div>
<div>We tested GlusterFS a few years ago and the latency was high, poors IOPs and every node with a high cpu usage, well that was a few years ago.</div>
<div><br>
</div>
<div>We ended up after lot of tests using fio with Ceph cluster, so my advice it's use Ceph Cluster without doubts </div>
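For anyone repeating the comparison, the kind of fio run we mean looks roughly like this (block size, queue depth, and the target path are illustrative; point --filename at a file on the mounted Gluster volume or at a mapped RBD device):

    # 100% random write, 4k blocks, direct I/O, moderately deep queue
    fio --name=randwrite --rw=randwrite --bs=4k --size=10G \
        --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
        --runtime=120 --time_based --group_reporting \
        --filename=/mnt/test/fio.dat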
Regards,
On Thu, Feb 16, 2017 at 1:32 PM Vahric Muhtaryan <vahric@doruk.net.tr> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word;font-size:14px;font-family:Calibri,sans-serif" class="m_-82181735938660305gmail_msg">
<div class="m_-82181735938660305gmail_msg">Hello All , </div>
<div class="m_-82181735938660305gmail_msg"><br class="m_-82181735938660305gmail_msg">
</div>
<div class="m_-82181735938660305gmail_msg">For a long time we are testing Ceph and today we also want to test GlusterFS</div>
<div class="m_-82181735938660305gmail_msg"><br class="m_-82181735938660305gmail_msg">
</div>
<div class="m_-82181735938660305gmail_msg">Interesting thing is maybe with single client we can not get IOPS what we get from ceph cluster . (From ceph getting max 35 K IOPS for % 100 random write and gluster gave us 15-17K )</div>
<div class="m_-82181735938660305gmail_msg">But interesting thing when add additional client to test its get same IOPS with first client means overall performance is doubled , couldn’t test more client but also interesting things is glusterfs do not use/eat CPU like Ceph , a few
percent of CPU is used. </div>
<div class="m_-82181735938660305gmail_msg"><br class="m_-82181735938660305gmail_msg">
</div>
<div class="m_-82181735938660305gmail_msg">I would like to ask with Openstack , anybody use GlusterFS for instance workload ? </div>
<div class="m_-82181735938660305gmail_msg">Anybody used both of them in production and can compare ? Or share experience ? </div>
<div class="m_-82181735938660305gmail_msg"><br class="m_-82181735938660305gmail_msg">
</div>
<div class="m_-82181735938660305gmail_msg">Regards</div>
<div class="m_-82181735938660305gmail_msg">Vahric Muhtaryan</div>
</div>
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators