[Openstack] Storage Nodes

Dmitri Maziuk dmaziuk at bmrb.wisc.edu
Tue Apr 8 12:49:07 UTC 2014


On 4/8/2014 7:05 AM, Darren Birkett wrote:
> Hi Ian,
>
> Unless you're going to use SSD drives in your cinder-volume nodes, why 
> do you expect to get any better performance out of this setup, versus 
> a ceph cluster?  If anything, performance would be worse since at 
> least ceph has the ability to stripe access across many nodes, and 
> therefore many more disks, per volume.

Last I looked, ceph's write i/o was roughly 1/num_replicas of raw: you 
could get performance or redundancy, not both. And with only one node I 
think a ceph cluster will stay "degraded" forever. Plus you may need 
fedora 37 with a 3.42 kernel and/or inktank's custom build of libvirt 
on your openstack nodes to actually use it.
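(For anyone who wants the arithmetic behind the "1/num_replicas" remark: this is just a back-of-the-envelope model, not ceph code, assuming every client write is re-sent once per replica over the same link.)

```python
# Crude illustration (not ceph code): effective client write throughput
# for a replicated pool, assuming replication traffic shares one link.
def effective_write_throughput(raw_mb_s, num_replicas):
    """Each client write gets written num_replicas times, so the
    usable bandwidth is divided by the replica count."""
    return raw_mb_s / num_replicas

# A link good for 300 MB/s, with the common 3-replica setting:
print(effective_write_throughput(300, 3))  # 100.0 MB/s usable
```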

I'd like to go ceph, too, but ATM it looks like I'll stick to lvm on a 
big raid box and maybe play with swift in my copious free time.
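(For the lvm-on-a-raid-box route, the usual setup is cinder's default LVM driver pointed at a volume group on the array; a minimal cinder.conf sketch, with the volume group name "cinder-volumes" assumed:)

```ini
# /etc/cinder/cinder.conf -- sketch only, adjust names to your install
[DEFAULT]
# LVM/iSCSI driver shipping with cinder at the time of writing
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
# VG created on top of the raid device, e.g. pvcreate/vgcreate on /dev/md0
volume_group = cinder-volumes
```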

Dima
