[Openstack] Storage, glusterfs v ceph

John Ashford logica111 at hotmail.com
Thu Oct 3 09:04:27 UTC 2013


1 – GlusterFS vs Ceph

I'm reading a lot of different opinions about which of these is the best
storage backend. My need is for a fully stable product with fault tolerance
built in. It needs to support maybe 400 low-traffic web sites and a few very
high-traffic ones. I saw a Red Hat diagram suggesting throughput on a Gbit NIC
with two GlusterFS storage servers would be around 200 Mbps. I can put quad
NICs in the two or three storage machines to give extra breathing room.
GlusterFS is of course a mature product and has Red Hat pushing it forward,
but some people complain of speed issues. Does anyone have real-life
experience of throughput using Ceph? I know Ceph is newer, but there seems to
be considerable weight behind its development, so while some say it's not
production ready, I wonder if anyone has the experience to refute or confirm
that?


2 – VM instances on clustered storage

I'm reading that if you run your VM instances on Gluster/Ceph you benefit
from live migration and faster access times, since disk access is usually to
the local disk. I just need to clarify this, and this may fully expose my
ignorance, but surely the instance runs on the compute node, not the storage
node, so I don't see how people can claim it's faster to run VM instances on
the storage cluster, unless they are actually running compute on the storage
cluster, in which case you don't have proper separation of compute and
storage. And wouldn't you still have the networking overhead unless you run a
compute node on the storage cluster? What am I missing?!
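
To make my mental model concrete, here is how I picture live migration
working at the libvirt level when the disk image sits on shared storage: both
hypervisors see the same disk, so only the guest's memory has to move over
the wire. A minimal Python sketch using the libvirt bindings (the host URI
and instance name are hypothetical):

import libvirt

# source and destination compute nodes (hypothetical connection URIs)
src = libvirt.open("qemu:///system")
dst = libvirt.open("qemu+ssh://compute-02/system")

dom = src.lookupByName("instance-00000001")

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied;
# because the disk is on shared storage, no disk-copy flag is needed.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER,
            None, None, 0)

If that picture is right, then the win from shared storage is migration
convenience rather than raw disk speed, which is exactly why the "faster
access" claims confuse me.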


Thanks

John