You are describing the problems of using a shared-filesystem backend for Cinder instead of a driver that connects directly at the block-device level. GlusterFS has improved a lot in the last 18 months or so, especially if you want to use it as shared storage for your VMs. The snapshotting feature seems to be on the way:

https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/glusterfs.py

But the killer feature is direct access from QEMU to Gluster using libgfapi. It seems to have been added in Havana and has been in the master branch since mid-August:

https://review.openstack.org/#/c/39498/

(Two rough configuration sketches follow at the end of this message, below the quoted mail.)

If I had to pick a scalable storage solution for an OpenStack deployment for the next 10 years, I would consider Gluster.

Cheers
Diego

--
Diego Parrilla, CEO
StackOps | http://www.stackops.com/ | diego.parrilla at stackops.com | +34 649 94 43 29 | skype:diegoparrilla

On Tue, Sep 10, 2013 at 2:36 PM, Maciej Gałkiewicz <macias at shellycloud.com> wrote:

> Hello
>
> For everyone looking for some info regarding GlusterFS and OpenStack
> integration, I suggest my blog post:
>
> https://shellycloud.com/blog/2013/09/why-glusterfs-should-not-be-implemented-with-openstack
>
> regards
> --
> Maciej Gałkiewicz
> Shelly Cloud Sp. z o. o., Sysadmin
> http://shellycloud.com/, macias at shellycloud.com
> KRS: 0000440358 REGON: 101504426
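
P.S. Two untested sketches for anyone who wants to try the above. First, a minimal cinder.conf fragment for the GlusterFS driver linked above; the option names are taken from that driver, and the host and volume names are placeholders for your own setup:

    # /etc/cinder/cinder.conf (sketch -- verify option names against your release)
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares

    # /etc/cinder/glusterfs_shares -- one "host:/volume" entry per line;
    # gluster1.example.com and cinder-volumes are placeholders
    gluster1.example.com:/cinder-volumes

Cinder then mounts each listed share and carves volumes out as files on the Gluster volume.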
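
Second, the libgfapi side. My understanding (an assumption on my part, so check the release notes) is that in Havana you enable native Gluster attachment in Nova with:

    # /etc/nova/nova.conf (sketch -- option name is my recollection of Havana)
    qemu_allowed_storage_drivers = gluster

With that set, the guest's libvirt XML should carry a network disk that points QEMU straight at the Gluster server instead of at a file on a FUSE mount, roughly like this (host, port and volume path are placeholders):

    <disk type='network' device='disk'>
      <!-- protocol='gluster' is what makes QEMU open the volume via libgfapi -->
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='gluster' name='cinder-volumes/volume-0001'>
        <host name='gluster1.example.com' port='24007'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>

Skipping the kernel/FUSE round trip is where most of the performance win comes from.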