[Openstack-operators] Distributed Filesystem

Razique Mahroua razique.mahroua at gmail.com
Thu Apr 25 07:51:59 UTC 2013


Oh heck John, that is quite a feature :D

Jacob, that's what I do - I mount GlusterFS on /var/lib/nova/instances, and the backend is a distributed replicated volume (replica 4) that the compute nodes themselves are part of.
The only things you need to make sure of are allocating enough space and disabling the automatic removal of unused base images.
Performance-wise, I came up with good figures - the network was the bottleneck, not so much GlusterFS itself. I'll soon run extra benchmarks (I use iozone3 every time) on the production environment, which will have the exact same setup. Make sure to use the latest stable version as well :)
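
For reference, here is a rough sketch of that kind of setup. The hostnames and brick paths are placeholders, and the nova.conf option name is from memory, so double-check it against your release:

    # Create a 4-way replicated volume across the compute nodes and
    # mount it where nova keeps the instance directories.
    gluster volume create nova-instances replica 4 \
        node1:/export/brick node2:/export/brick \
        node3:/export/brick node4:/export/brick
    gluster volume start nova-instances
    mount -t glusterfs node1:/nova-instances /var/lib/nova/instances
    chown nova:nova /var/lib/nova/instances

    # nova.conf on each compute node - keep the cached base images,
    # since every node shares the same instance store:
    #   remove_unused_base_images=False

    # Quick sanity benchmark with iozone3 (auto mode, files up to 4 GB):
    iozone -a -g 4g -f /var/lib/nova/instances/iozone.tmp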

regards,

Razique Mahroua - Nuage & Co
razique.mahroua at gmail.com
Tel : +33 9 72 37 94 15



On 25 Apr 2013, at 02:41, Jacob Godin <jacobgodin at gmail.com> wrote:

> Hey John,
> 
> Thanks for the info! Have you had anyone simply using GlusterFS mounted to /var/lib/nova and running instances that way? I'd be interested in hearing some results if any.
> 
> Cheers,
> Jacob
> 
> 
> On Wed, Apr 24, 2013 at 4:51 PM, John Mark Walker <johnmark at redhat.com> wrote:
> 
> 
> I ended up using Gluster as shared storage for instances, and Ceph for Cinder/Nova-volume as well as admin storage.
> Works perfectly!
> 
> For the record, this is what I usually recommend. Whenever someone asks me "Gluster or Ceph?" I ask them "for which part?" We (being the Gluster team) devoted almost all of our time to developing a scale-out NAS solution, which works well for shared storage for tenants in the cloud. We didn't really start to work on GlusterFS for hosting VMs until last year. 
> 
> For the curious among you, feel free to try out the new KVM/QEMU integration pieces available in GlusterFS 3.4, which is currently in alpha with a beta coming soon:
> http://download.gluster.org/pub/gluster/glusterfs/qa-releases/  - NOTE: it's in alpha! It will probably break! Do not run in production!
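> 
> A minimal way to poke at the new libgfapi path once you have 3.4 and QEMU >= 1.3 installed - the server and volume names below are placeholders:
> 
>     # Create an image directly on a Gluster volume, then boot a guest
>     # from it without going through a FUSE mount.
>     qemu-img create -f qcow2 gluster://server1/testvol/vm0.qcow2 10G
>     qemu-system-x86_64 -m 1024 -enable-kvm \
>         -drive file=gluster://server1/testvol/vm0.qcow2,if=virtio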
> 
> I'd be very curious to hear how this works out for OpenStack users, particularly given our recent Cinder integration. Note that this requires QEMU >= 1.3 to utilize the new integration bits. To read a bit more about it, along with some links to howtos, see this blog post:
> 
>   http://www.gluster.org/2012/11/integration-with-kvmqemu/
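> 
> To give an idea of the Cinder side, here is a sketch of the GlusterFS volume driver configuration - the option names are as I recall them from the Grizzly driver, so treat this as a starting point rather than gospel:
> 
>     # /etc/cinder/cinder.conf
>     volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
>     glusterfs_shares_config = /etc/cinder/glusterfs_shares
>     glusterfs_mount_point_base = /var/lib/cinder/glusterfs
> 
>     # /etc/cinder/glusterfs_shares - one Gluster share per line:
>     server1:/cinder-volumes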
> 
> Again - this is alpha code, so I can't recommend using it for any production system, anywhere. But if you're interested in putting it through its paces, I'd love to hear how it goes.
> 
> Happy hacking,
> John Mark Walker
> Gluster Community Lead
> 
> 
> 
