[Openstack-operators] Distributed Filesystem

Aubrey Wells aubrey at vocalcloud.com
Wed Apr 17 22:15:36 UTC 2013


We use OCFS2 backed by a NetApp array accessed over Fibre Channel. We chose it
over GlusterFS because it gave a small performance edge. It's a little more
finicky to manage, but once it's set up you don't really have to tinker with
it any more.
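
For anyone curious, the moving parts on the OCFS2 side are pretty small. A rough
sketch follows (node names, IPs and the multipath device are made up; adjust for
your own SAN and compute nodes):

    # /etc/ocfs2/cluster.conf -- identical on every compute node
    cluster:
        node_count = 2
        name = ocfs2

    node:
        name = compute1
        cluster = ocfs2
        number = 0
        ip_address = 10.0.0.11
        ip_port = 7777

    node:
        name = compute2
        cluster = ocfs2
        number = 1
        ip_address = 10.0.0.12
        ip_port = 7777

    # format the LUN once, from a single node, then mount it on every node
    mkfs.ocfs2 -N 8 -L nova /dev/mapper/netapp-lun0
    service o2cb start
    mount -t ocfs2 /dev/mapper/netapp-lun0 /var/lib/nova/instances

The cluster.conf dance is the finicky part I mentioned; after that it's
set-and-forget.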

------------------
Aubrey Wells
Director | Network Services
VocalCloud
678.248.2637
support at vocalcloud.com
www.vocalcloud.com


On Wed, Apr 17, 2013 at 5:41 PM, Razique Mahroua
<razique.mahroua at gmail.com> wrote:

> I was about to use CephFS (Bobtail), but I can't resize instances
> without CephFS crashing.
> I'm currently considering GlusterFS, which not only provides great
> performance, it's also quite easy to administer :)
>
> On 17 Apr 2013, at 22:07, JuanFra Rodriguez Cardoso <
> juanfra.rodriguez.cardoso at gmail.com> wrote:
>
> Glance and Nova with MooseFS.
> Reliable, good performance and easy configuration.
>
> ---
> JuanFra
>
>
> 2013/4/17 Jacob Godin <jacobgodin at gmail.com>
>
>> Hi all,
>>
>> Just a quick survey for all of you running distributed file systems for
>> nova-compute instance storage. What are you running? Why are you using that
>> particular file system?
>>
>> We are currently running CephFS and chose it because we are already using
>> Ceph for volume and image storage. It works great, except for snapshotting,
>> where we see slow performance and high CPU load.
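>>
>> For what it's worth, the instance-storage piece is just the kernel CephFS
>> client mounted under the nova state path on every compute node, roughly
>> like this (monitor addresses and the secret file are placeholders):
>>
>>     # /etc/fstab on each compute node
>>     10.0.0.1:6789,10.0.0.2:6789:/  /var/lib/nova/instances  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0
>>
>> The _netdev option just keeps the mount from being attempted before the
>> network is up.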
>>
>>