[Openstack] Suggestions for shared-storage cluster file system

Razique Mahroua razique.mahroua at gmail.com
Sun Feb 24 21:50:51 UTC 2013


Excellent, good to know,
thank you Diego :)

Razique Mahroua - Nuage & Co
razique.mahroua at gmail.com
Tel : +33 9 72 37 94 15



On 19 Feb 2013, at 22:22, Diego Parrilla Santamaría <diego.parrilla.santamaria at gmail.com> wrote:

> We had only used Gluster for small deployments, but lately we have changed our minds: we have bet on Gluster for 2013 because of:
> 
> - 10GbE everywhere, and Gluster MUST run on 10GbE (or InfiniBand)
> - The 3.3 release fixes some issues when locking big files: granular locking
> - libgfapi reborn: no more FUSE overhead
> - QEMU 1.3 comes with a GlusterFS block driver (see the sketch after this list): http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
> - Success cases everywhere
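> 
> For illustration, the native driver is used straight from the QEMU command line, roughly like this (a sketch only; the host gluster1.example.com and the volume vmstore are made-up names):
> 
>     # Boot a guest directly from an image stored on a Gluster volume (no FUSE mount involved)
>     qemu-system-x86_64 -enable-kvm -m 1024 \
>         -drive file=gluster://gluster1.example.com/vmstore/vm1.qcow2,if=virtio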
> 
> Regarding our tests, NetApp and Nexenta outperform Gluster, but we can now live with that performance penalty because the cost per bit and the horizontal scalability are really good.
> 
> Cheers
> Diego
> 
>  -- 
> Diego Parrilla
> CEO
> www.stackops.com |  diego.parrilla at stackops.com | +34 649 94 43 29 | skype:diegoparrilla
> 
> 
> 
> 
> 
> On Tue, Feb 19, 2013 at 9:57 PM, Razique Mahroua <razique.mahroua at gmail.com> wrote:
> Hey Marco, 
> have you been able to run some performance tests on your Gluster cluster?
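> 
> Even a rough sequential test from a client mount would help, something like this (a sketch; the mount point /mnt/gluster and the 1 GB size are just examples):
> 
>     # Rough sequential write test: 1 GB to the Gluster mount, flushed to the bricks at the end
>     dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fsync
>     # Drop the page cache so the read below actually hits Gluster, then read it back
>     echo 3 > /proc/sys/vm/drop_caches
>     dd if=/mnt/gluster/ddtest of=/dev/null bs=1M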
> 
> Thanks :)
> 
> Razique Mahroua - Nuage & Co
> razique.mahroua at gmail.com
> Tel : +33 9 72 37 94 15
> 
> 
> 
> On 18 Feb 2013, at 14:20, Marco CONSONNI <mcocmo62 at gmail.com> wrote:
> 
>> Hello Sam,
>> 
>> I've tried two of them: NFS and Gluster.
>> 
>> I had some problems with the former (migration didn't work properly), and no problems with the latter.
>> I vote for Gluster.
>> 
>> Hope it helps,
>> Marco.
>> 
>> 
>> 
>> On Fri, Feb 15, 2013 at 4:40 PM, Samuel Winchenbach <swinchen at gmail.com> wrote:
>> Hi All,
>> 
>> Can anyone give me a recommendation for a good shared-storage cluster filesystem? I am running KVM/libvirt and would like to enable live migration.
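>> 
>> For context, what I want to end up with is roughly the following, assuming every host mounts the shared filesystem at the same path (the guest and host names here are made up):
>> 
>>     # Live-migrate a running guest; this only works if both hosts see the same storage
>>     virsh migrate --live vm1 qemu+ssh://node02.example.com/system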
>> 
>> I have a number of hosts (up to 16) each with 2xTB drives.  These hosts are also my compute/network/controller nodes.  
>> 
>> The three I am considering are:
>> 
>> GlusterFS - I have the most experience with this, and it seems the easiest (rough setup sketched after this list).
>> 
>> CephFS/RADOS - Interesting because Glance supports the rbd backend. Slightly worried, though, by two notes in the Ceph docs: “Important: Mount the CephFS filesystem on the client machine, not the cluster machine.” (I wish it said why...) and “CephFS is not quite as stable as the block device and the object storage gateway.”
>> 
>> Lustre - A little hesitant now that Oracle is involved with it.
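>> 
>> For reference, my understanding is that the minimal GlusterFS setup for this is quite short (a sketch; the host names node01/node02, the brick paths, and the mount point are made up, using 2-way replication):
>> 
>>     # On one node: pool the peers and build a replicated volume from two bricks
>>     gluster peer probe node02
>>     gluster volume create vmstore replica 2 node01:/data/brick1 node02:/data/brick1
>>     gluster volume start vmstore
>>     # On every host: mount the volume where libvirt expects the images
>>     mount -t glusterfs node01:/vmstore /var/lib/libvirt/images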
>> 
>> If anyone has any advice, or can point out another that I should consider it would be greatly appreciated.
>> 
>> Thanks!
>> 
>> Sam
>> 
>> 
> 
> 
