<div dir="ltr">We use OCFS2 backed by a NetApp, accessed via Fibre Channel. We chose it over GlusterFS because of a small performance increase. It is a little more finicky to manage, but once it's set up you don't really have to tinker with it any more.</div>
<div class="gmail_extra"><br clear="all"><div>------------------<br>Aubrey Wells<br>Director | Network Services<br>VocalCloud<br>678.248.2637<br><a href="mailto:support@vocalcloud.com">support@vocalcloud.com</a><br><a href="http://www.vocalcloud.com">www.vocalcloud.com</a></div>
<br><br><div class="gmail_quote">On Wed, Apr 17, 2013 at 5:41 PM, Razique Mahroua <span dir="ltr"><<a href="mailto:razique.mahroua@gmail.com" target="_blank">razique.mahroua@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word">I was about to use CephFS (Bobtail), but I can't resize instances without CephFS crashing.<div>I'm currently considering GlusterFS, which not only provides great performance but is also quite easy to administer :)</div>
<div><br></div><div><div><div>On 17 Apr 2013, at 22:07, JuanFra Rodriguez Cardoso <<a href="mailto:juanfra.rodriguez.cardoso@gmail.com" target="_blank">juanfra.rodriguez.cardoso@gmail.com</a>> wrote:</div><div><div class="h5">
<br><blockquote type="cite"><div dir="ltr">Glance and Nova with MooseFS.<br>Reliable, good performance and easy configuration.<br><div class="gmail_extra"><br clear="all"><div><div>---</div>JuanFra</div>
<br><br><div class="gmail_quote">2013/4/17 Jacob Godin <span dir="ltr"><<a href="mailto:jacobgodin@gmail.com" target="_blank">jacobgodin@gmail.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hi all,<div><br></div><div>Just a quick survey for all of you running distributed file systems for nova-compute instance storage. What are you running? Why are you using that particular file system?</div>
<div><br></div><div>We are currently running CephFS and chose it because we are already using Ceph for volume and image storage. It works great, except for snapshotting, where we see slow performance and high CPU load.</div>
<div><br></div></div>
<br>_______________________________________________<br>
OpenStack-operators mailing list<br>
<a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
<br></blockquote></div><br></div></div>
</blockquote></div></div></div><br></div></div>
<br></blockquote></div><br></div>