[Openstack] Ceph + Nova

Razique Mahroua razique.mahroua at gmail.com
Wed Nov 21 21:40:40 UTC 2012


That's, I think, a clever approach - setting up a data cluster as the backend for the configuration files, which are de facto as important as the instances themselves.
Regarding performance, it should not be a problem - the only data that gets updated frequently is the database.
Regards,
Razique

Nuage & Co - Razique Mahroua 
razique.mahroua at gmail.com



On 21 Nov 2012, at 15:51, Dave Spano <dspano at optogenics.com> wrote:

> JuanFra,
> 
> I do use CephFS in production, but not for the /var/lib/nova/instances directory. I host the OpenStack database and the OpenStack configuration files on it for an HA cloud controller cluster, but I am probably crazier than most people, and I have a very small deployment. I have not had any problems with it so far, but due to the size of my cloud, I can afford to be very hands-on with it.
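(A minimal sketch of that kind of setup - the monitor address, secret file, and paths below are illustrative, not Dave's actual values - would be to mount CephFS via the kernel client and keep the shared state there:

    # /etc/fstab - mount CephFS on the HA controller nodes
    mon1:6789:/   /srv/cephfs   ceph   name=admin,secretfile=/etc/ceph/admin.secret,noatime   0   2

    # keep the shared pieces on that mount, e.g.:
    #   /srv/cephfs/etc-nova   bind-mounted onto /etc/nova
    #   /srv/cephfs/mysql      used as the MySQL datadir

Whether the database files belong on CephFS at all is exactly the stability question this thread is about; for a small, closely watched deployment it can be acceptable.)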
> 
> The reason I have not hosted the /var/lib/nova/instances directory there is that its data sees a lot more activity than my small database does. Instead, I prefer to perform block migrations rather than live ones until CephFS becomes more stable.
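(For reference, a block migration with the novaclient of that era looks like this - the instance and host names are placeholders:

    # copies the instance's disks to the target host, so no shared
    # /var/lib/nova/instances is required
    nova live-migration --block-migrate <instance> <target-host>

A plain "nova live-migration" without the flag assumes shared instance storage.)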
> 
> Dave Spano
> Optogenics
> Systems Administrator
> 
> 
> From: "Sébastien Han" <han.sebastien at gmail.com>
> To: "JuanFra Rodríguez Cardoso" <juanfra.rodriguez.cardoso at gmail.com>
> Cc: "Openstack" <openstack at lists.launchpad.net>, "ceph-devel" <ceph-devel at vger.kernel.org>
> Sent: Wednesday, November 21, 2012 4:03:48 AM
> Subject: Re: [Openstack] Ceph + Nova
> 
> Hi,
> 
> I don't think this is the best place to ask your question, since it's
> not directly related to OpenStack but more about Ceph; I've put the
> ceph ML in CC. Anyway, CephFS is not ready for production yet, but I
> have heard that some people use it. The people from Inktank (the
> company behind Ceph) don't recommend it; AFAIR they expect something
> more production-ready for Q2 2013. You can use it (I did, for testing
> purposes), but it's at your own risk.
> Besides that, RBD and RADOS are robust and stable now, so you can go
> with the Cinder and Glance integration without any problems.
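(A minimal sketch of that integration using the Folsom-style option names - the "volumes" and "images" pool and user names are the customary examples, adjust for your cluster:

    # cinder.conf - back Cinder volumes with RBD
    volume_driver=cinder.volume.driver.RBDDriver
    rbd_pool=volumes

    # glance-api.conf - store Glance images in RBD
    default_store=rbd
    rbd_store_ceph_conf=/etc/ceph/ceph.conf
    rbd_store_user=images
    rbd_store_pool=images

Both paths go straight to RADOS block storage, so CephFS is not involved here.)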
> 
> Cheers!
> 
> On Wed, Nov 21, 2012 at 9:37 AM, JuanFra Rodríguez Cardoso
> <juanfra.rodriguez.cardoso at gmail.com> wrote:
> > Hi everyone:
> >
> > I'd like to know your opinion as nova experts:
> >
> > Would you recommend CephFS as shared storage for /var/lib/nova/instances?
> > Another option would be to use GlusterFS or MooseFS for the
> > /var/lib/nova/instances directory and Ceph RBD for Glance and Nova volumes -
> > what do you think?
> >
> > Thanks for your attention.
> >
> > Best regards,
> > JuanFra
> >
