[Openstack] Storage, glusterfs v ceph
Maciej Gałkiewicz
macias at shellycloud.com
Thu Oct 3 09:25:58 UTC 2013
On 3 October 2013 11:04, John Ashford <logica111 at hotmail.com> wrote:
> 1 – Glusterfs V Ceph
>
> I'm reading a lot of different opinions about which of these is the best
> storage backend. My need is for a fully stable product that has fault
> tolerance built in. It needs to support maybe 400 low traffic web sites and
> a few very high traffic. I saw a Red Hat diagram suggesting throughput on a Gbit
> NIC with 2 storage servers (GlusterFS) would be around 200 Mbps. I can put
> quad nics in the 2 or 3 storage machines to give extra breathing room.
> Gluster is of course a mature product and has Red Hat pushing it forward, but
> some people complain of speed issues. Any real life experiences of
> throughput using Ceph? I know Ceph is new but it seems there is considerable
> weight behind its development, so while some say it's not production ready I
> wonder if anyone has the experience to refute/concur?
CephFS is not production ready, unlike Ceph RBD.
http://ceph.com/docs/master/architecture/
What do you really need? Distributed filesystem? Block devices? Object storage?
This might be helpful:
https://news.ycombinator.com/item?id=6359519
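To make "block devices" concrete: with Ceph that means RBD images striped
across the whole cluster, which is what Cinder/Nova typically consume. A
minimal sketch with the python-rados/python-rbd bindings follows; the pool
name 'volumes' and image name 'vm-disk-0' are made-up examples, not anything
from a real setup:

    import rados
    import rbd

    # Connect using the client's ceph.conf and keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('volumes')   # RADOS pool backing the images
        try:
            # Create a 10 GiB virtual block device, thin-provisioned over the OSDs.
            rbd.RBD().create(ioctx, 'vm-disk-0', 10 * 1024**3)
            # Nova/KVM would attach such an image via librbd; CephFS is not involved.
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()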
> 2 – vm instances on clustered storage
>
> I'm reading how if you run your VM instances on Gluster/Ceph you benefit from
> live migration and faster access times since disk access is usually to the
> local disk. I just need to clarify this – and this may fully expose my
> ignorance - but surely the instance runs on the compute node, not storage
> node, so I don't get how people are claiming it's faster running a VM instance
> on the storage cluster unless they are actually running compute on the
> storage cluster in which case you don’t have proper separation of
> compute/storage. Also, you would have the networking overhead unless running
> a compute node on storage cluster? What am I missing?!
A 1 Gbit network is faster than a typical 7.2k RPM SATA disk (for RANDOM
read/write) and can sustain more IOPS. The VM accesses its files on the
storage cluster, which can read/write them simultaneously from/to many disks
across the cluster. At some point 10 Gbit becomes a must.
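Rough numbers behind that claim (illustrative assumptions, not measurements
from our cluster):

    io_size = 4 * 1024                     # 4 KiB random I/O
    # Single 7.2k RPM SATA disk: ~8 ms seek + rotational latency per random I/O
    sata_iops = 1 / 0.008                  # roughly 125 random IOPS
    sata_mb_s = sata_iops * io_size / 1e6  # ~0.5 MB/s of random 4 KiB I/O
    # 1 Gbit/s link: ~125 MB/s raw, so the wire can carry the aggregate
    # random I/O of many such disks sitting behind it
    gbe_transfers = (1_000_000_000 / 8) / io_size   # ~30,000 x 4 KiB per second
    print(sata_iops, sata_mb_s, gbe_transfers)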
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, macias at shellycloud.com
KRS: 0000440358 REGON: 101504426