[Openstack] Storage decision

Julien De Freitas bada.boum at outlook.com
Mon Nov 4 09:46:23 UTC 2013


Hi Razique,
Thanks for the link! I read the full discussion, and as I thought, there is no real perfect solution so far. I think I'll continue to use Nexenta because it's a great solution, and I'll set up multi-backend storage for Cinder in order to test Ceph block storage. For metadata storage I'll run some tests with CephFS, because "not production ready" can mean a lot and nothing at the same time.

In your previous mail you said "the FS kept hanging on high load, so I considered it to be pretty unstable for OpenStack". But if it only hung under high load, shouldn't it be pretty stable under normal load? What was the load? Can you share more details with us? It's a pity that we could not find any neutral heavy-load tests out there.
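For reference, here is the kind of multi-backend cinder.conf I have in mind for the test (a sketch with Havana-era option names; the section names, pool and user are placeholders, and the Nexenta driver path may differ per release):

    [DEFAULT]
    enabled_backends=nexenta-iscsi,ceph-rbd

    [nexenta-iscsi]
    volume_driver=cinder.volume.drivers.nexenta.volume.NexentaDriver
    volume_backend_name=NEXENTA
    nexenta_host=SAN_IP

    [ceph-rbd]
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name=CEPH
    rbd_pool=volumes
    rbd_user=cinder
    rbd_secret_uuid=<libvirt secret uuid>

A volume type per backend (cinder type-key <type> set volume_backend_name=CEPH) should then let me pick where each volume lands.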
Thanks,
Julien

Date: Sun, 3 Nov 2013 12:02:41 -0800
From: razique.mahroua at gmail.com
To: openstack at lists.openstack.org; bada.boum at outlook.com
Subject: Re: [Openstack] Storage decision

Hi Julien, we did discuss that topic many times. With Havana, things are a bit different, but in our previous discussion we challenged a couple of technologies for the shared storage. It boils down in the end to:
- the resources you have
- what you are trying to achieve
• The Ceph cluster IS production ready; it's CephFS which is not, and that FS is the shared one. In my testing (supported by others), the FS kept hanging on high load, so I considered it to be pretty unstable for OpenStack.
• iSCSI gave me the best performance so far. What you need to do is first create the iSCSI LUN on your SAN and map it as a block device; libvirt is able to use that as storage (see the sequence after this list).
• NFS was too slow, and I ended up having locks and a stalled FS.
• MooseFS will give you good performance, but it's not advised to use it for storing and manipulating big images. Make sure to have a solid network backend for that cluster as well :)
• GlusterFS is easy to manage, but I only had bad experiences with Windows instances (aka big instances :D): the replication process was eating all the CPU and the I/O was very slow.
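For the iSCSI route, the rough sequence on the compute node looks like this (assuming open-iscsi; the portal IP and IQN below are placeholders for your SAN's values):

    # discover the targets the SAN exposes
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10:3260
    # log in to the LUN you created on the SAN
    iscsiadm -m node -T iqn.2013-11.com.example:lun1 -p 192.168.0.10:3260 --login
    # the LUN then shows up as a local block device (/dev/sdX),
    # which libvirt can consume directly as a block-backed disk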
Here is a previous topic:
http://lists.openstack.org/pipermail/openstack-operators/2013-July/003310.html

Regards,
Razique
 
On November 3, 2013 at 7:17:19, Julien De Freitas (bada.boum at outlook.com) wrote: 

Hi guys,
I know that question has been treated hundreds of times, but I
cannot find one good answer.
Moreover, since Havana extends support for Cinder and GlusterFS,
it could be nice to revisit the question.


What I currently use on my platform:


I configured Nexenta to provide NFS and iSCSI targets:
NFS for instance disks: I mounted a volume on each compute node and configured NFS in nova.conf.
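Concretely, on each compute node it is just the usual NFS mount plus pointing Nova at it (the export path below is a placeholder for my Nexenta share):

    # /etc/fstab on every compute node
    nexenta:/volumes/nova  /var/lib/nova/instances  nfs  defaults  0  0

    # nova.conf
    instances_path=/var/lib/nova/instances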
iSCSI for the Cinder back end: I configured iSCSI so that when I create a volume, it creates an iSCSI volume which I am then able to attach inside an instance.
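That is, the usual workflow (size and device name are just examples):

    cinder create --display-name testvol 10
    nova volume-attach <instance-id> <volume-id> /dev/vdb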
But the problem is that the replication module for Nexenta, needed to get an HA storage system, is expensive, and it is not a distributed file system.



My goal: store instances' ephemeral storage on a performant, highly available, and cheap storage system configured for live migration :D
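For the live migration part, I expect the relevant nova.conf bits on the compute nodes to be something like this (the flags commonly documented for KVM/libvirt; exact flags may vary per setup):

    live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

    # migrations are then triggered with:
    # nova live-migration <instance-id> <target-host>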


To achieve this, I read about CephFS and GlusterFS.
But Ceph is marked as not ready for production, and GlusterFS seems to have some concerns about performance.
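For the evaluation itself, I was planning something along these lines on the GlusterFS side (hostnames and brick paths are placeholders):

    # create and start a 2-way replicated volume
    gluster volume create nova-inst replica 2 node1:/bricks/nova node2:/bricks/nova
    gluster volume start nova-inst

    # mount it where Nova keeps the ephemeral disks
    mount -t glusterfs node1:/nova-inst /var/lib/nova/instances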



What do you think? Does anyone have production experience with GlusterFS or Ceph?


Thanks





_______________________________________________

Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Post to     : openstack at lists.openstack.org

Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

 -- 
Razique Mahroua
