[Openstack-operators] Any serious stability and performance issues on RBD as ephemeral storage ?

Jonathan Proulx jon at csail.mit.edu
Wed Dec 7 15:25:33 UTC 2016


We've been using Ceph as our ephemeral backend (and glance store and
cinder backend) for more than 2 years (maybe 3) and have been very happy.
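
On the nova side it's only a handful of settings in the [libvirt]
section of nova.conf on the compute nodes. A minimal sketch (pool and
user names below are examples, use whatever matches your cluster):

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <uuid of the libvirt secret holding the cephx key>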

Cinder has been rock solid on the RBD side. Early on, when we had 6 OSD
servers, we lost one in production to a memory error.  1/6 is a large
fraction to lose, but Ceph handled it as designed and there was no
noticeable impact to any running instances.

The ability to start VMs from a snapshot of a glance image (which is
how it's implemented if glance and nova are both Ceph backed) makes
startup super fast.
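
For nova to do that copy-on-write clone the glance images need to be
in raw format and glance has to expose its RBD locations. Roughly
(again just a sketch, pool and user names are examples):

    # glance-api.conf
    [DEFAULT]
    show_image_direct_url = True

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf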

Having shared storage also makes live migration easy, so we can do
hardware maintenance (kernel and OS upgrades) without impacting
running VMs.
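
Since the disks are already in Ceph there's nothing to block-migrate;
a plain live migration is enough (instance and host names here are
made up):

    nova live-migration <instance-uuid> <target-hypervisor>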

As of Mitaka, snapshotting running VMs is also fast. For earlier
releases, VMs were suspended while the running RBD volume was copied
down to the hypervisor, then restarted while the downloaded image was
re-uploaded from the hypervisor to glance and back into Ceph.  This
could mean VMs were down for 15 minutes or more if they had large root
volumes.  As I said, this got fixed in Mitaka, so if you're current it
is no longer a problem.
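
Nothing special is needed to get the fast path; the normal snapshot
call does the right thing on Mitaka and later (the name here is made up):

    openstack server image create --name my-snapshot <instance>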

Overall we've been very happy with our switch to Ceph and I'd
definitely recommend it.

-Jon

On Wed, Dec 07, 2016 at 05:51:01PM +0300, Vahric Muhtaryan wrote:
:Hello All, 
:
:I would like to use ephemeral disks on Ceph instead of on the nova compute
:node. I saw that there is an option to configure it, but I found many
:different bugs and reports of it not working, being unstable, or failing
:at instance creation time.
:Does anybody on this list use Ceph as ephemeral storage without any problems?
:Could you please share your experiences?
:
:Regards
:VM
:
:


