[Openstack-operators] Distributed Filesystem

Sylvain Bauza sylvain.bauza at digimind.com
Thu Apr 25 09:48:46 UTC 2013


GlusterFS support is fully available since qemu-1.3, which makes it 
possible to directly address block devices in a GlusterFS namespace.
http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
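
For reference, the native access looks roughly like this from the 
command line (a minimal sketch - the host "gluster1" and the volume 
"vmstore" are placeholder names of mine):

    # create a disk image directly on the Gluster volume, no FUSE mount involved
    qemu-img create -f qcow2 gluster://gluster1/vmstore/disk0.qcow2 10G

    # boot a guest straight from the gluster:// URI (requires qemu >= 1.3)
    qemu-system-x86_64 -enable-kvm -m 1024 \
        -drive file=gluster://gluster1/vmstore/disk0.qcow2,if=virtio,format=qcow2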

Unfortunately, I'm not aware of any such integration effort in Nova. It 
would be great if one could specify GlusterFS as a backend for instances 
and let Nova use these qemu capabilities. Maybe it's already done or in 
progress, but I haven't heard of any blueprint for it.

Benchmarks with the native GlusterFS support in qemu are very 
impressive; I'll try to dig one up.

-Sylvain

On 25/04/2013 09:51, Razique Mahroua wrote:
> Oh heck John, that is one heck of a feature :D
>
> Jacob, that's what I do - I mount GlusterFS on /var/lib/nova/instances 
> and the backend is a distributed replicated volume (replica 4) that 
> the nodes themselves are part of.
> The only things you need to make sure of are to allocate enough space 
> and to disable the automatic removal of unused base images.
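>
> A minimal sketch of that layout (the node names node1..node4 and the 
> volume name "nova-instances" are placeholders, not the actual setup):
>
>     # distributed replicated volume, replica 4, with bricks on the
>     # compute nodes themselves (two bricks per node = distribute count 2)
>     gluster volume create nova-instances replica 4 \
>         node{1..4}:/bricks/nova0 node{1..4}:/bricks/nova1
>     gluster volume start nova-instances
>
>     # on every compute node, mount the volume where Nova keeps instance disks
>     mount -t glusterfs node1:/nova-instances /var/lib/nova/instances
>
> Disabling the automatic removal of unused base images presumably means 
> the image cache manager flag in nova.conf:
>
>     remove_unused_base_images=false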
> Performance-wise, I came up with good figures - the network was the 
> bottleneck, not so much GlusterFS itself. I'll soon run extra 
> benchmarks (I always use iozone3) on the production environment, which 
> will have the exact same setup. Make sure to use the latest stable 
> version as well :)
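>
> An iozone3 run on the mounted volume might look like this (the flags 
> here are illustrative, not necessarily the exact invocation used):
>
>     # automatic mode over all record sizes, files capped at 4 GB,
>     # with O_DIRECT to bypass the page cache on the Gluster mount
>     iozone -a -I -g 4g -f /var/lib/nova/instances/iozone.tmp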
>
> regards,
>
> Razique Mahroua - Nuage & Co
> razique.mahroua at gmail.com
> Tel : +33 9 72 37 94 15
>
>
> On 25 Apr 2013, at 02:41, Jacob Godin <jacobgodin at gmail.com> wrote:
>
>> Hey John,
>>
>> Thanks for the info! Have you had anyone simply mount GlusterFS on 
>> /var/lib/nova and run instances that way? I'd be interested in 
>> hearing about any results.
>>
>> Cheers,
>> Jacob
>>
>>
>> On Wed, Apr 24, 2013 at 4:51 PM, John Mark Walker 
>> <johnmark at redhat.com> wrote:
>>
>>
>>
>>     ------------------------------------------------------------------------
>>
>>         I ended up using Gluster as shared storage for instances, and
>>         Ceph for Cinder / Nova-volume and admin storage as well.
>>         Works perfectly!
>>
>>
>>     For the record, this is what I usually recommend. Whenever
>>     someone asks me "Gluster or Ceph?" I ask them "for which part?"
>>     We (being the Gluster team) devoted almost all of our time to
>>     developing a scale-out NAS solution, which works well for shared
>>     storage for tenants in the cloud. We didn't really start to work
>>     on GlusterFS for hosting VMs until last year.
>>
>>     For the curious among you, feel free to try out our new KVM/QEMU
>>     integration pieces, available in GlusterFS 3.4 (currently in
>>     alpha, with a beta coming soon):
>>     http://download.gluster.org/pub/gluster/glusterfs/qa-releases/
>>     NOTE: it's alpha! It will probably break! Do not run it in
>>     production!
>>
>>     I'd be very curious to hear how this works out for OpenStack
>>     users, particularly given our recent Cinder integration. Note
>>     that this requires QEMU >= 1.3 to use the new integration
>>     bits. To read a bit more about it, along with some links to
>>     howtos, see this blog post:
>>
>>     http://www.gluster.org/2012/11/integration-with-kvmqemu/
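>>
>>     A quick sanity check for the prerequisites, plus a sketch of the
>>     Cinder side (assuming the Grizzly-era GlusterFS driver; the share
>>     file path and volume name below are examples, not fixed names):
>>
>>         # verify the integration prerequisites
>>         qemu-system-x86_64 --version   # must report 1.3 or newer
>>         glusterfs --version            # 3.4 for the new qemu bits
>>
>>         # cinder.conf - point Cinder at the GlusterFS driver
>>         volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
>>         glusterfs_shares_config = /etc/cinder/glusterfs_shares
>>
>>         # /etc/cinder/glusterfs_shares - one "host:/volume" per line
>>         gluster1:/cinder-volumes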
>>
>>     Again - this is alpha code, so I can't recommend using it for any
>>     production system, anywhere. But if you're interested in putting
>>     it through its paces, I'd love to hear how it goes.
>>
>>     Happy hacking,
>>     John Mark Walker
>>     Gluster Community Lead
>>
