[Openstack] Ceph + Nova

Dave Spano dspano at optogenics.com
Mon Nov 26 15:45:12 UTC 2012


For the time being, my way of mitigating that risk is an active/passive cloud controller cluster with STONITH. You do bring up an interesting point regarding Puppet, though. 

I have not used it yet, but given the complexity that cloud computing brings, I think it's going to be a necessity in the near future. 

In practice, upgrading packages with a setup like mine can be problematic because it is active/passive. It would be much easier to keep all the package config files local and keep them up to date via Puppet. 
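
To make that concrete, something like the following Puppet resource is what I have in mind. This is just an untested sketch; the file paths, the module name, and the service name are illustrative, not taken from my actual setup.

    file { '/etc/nova/nova.conf':
      ensure => file,
      owner  => 'nova',
      group  => 'nova',
      mode   => '0640',
      # Served from a hypothetical "nova" module on the Puppet file server.
      source => 'puppet:///modules/nova/nova.conf',
      # Restart the API service whenever the config file changes.
      notify => Service['nova-api'],
    }

    service { 'nova-api':
      ensure => running,
      enable => true,
    }

That way each node in the active/passive pair carries its own up-to-date copy, and nothing critical lives on the shared storage.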


Dave Spano 
Optogenics 
Systems Administrator 


----- Original Message -----

From: "Sébastien Han" <han.sebastien at gmail.com> 
To: "Razique Mahroua" <razique.mahroua at gmail.com> 
Cc: "Dave Spano" <dspano at optogenics.com>, "ceph-devel" <ceph-devel at vger.kernel.org>, "Openstack" <openstack at lists.launchpad.net> 
Sent: Wednesday, November 21, 2012 4:54:01 PM 
Subject: Re: [Openstack] Ceph + Nova 

As far as I'm concerned, I would never put config files on shared storage (especially on a non-production-ready filesystem); they are too critical. I would only do it if the application specifically requires it, like shared web applications that need automatic vhost syncing (or something along those lines). 


If you want to keep them updated and in sync, simply manage them with git and Puppet ;) 
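
For example, something as simple as this (a rough sketch; the remote URL is hypothetical):

    # Track the config directory in git.
    cd /etc/nova
    git init
    git add nova.conf api-paste.ini
    git commit -m "Baseline nova configuration"

    # Push to a central repo that Puppet (or the other nodes) can pull from.
    git remote add origin git@config-host:nova-conf.git
    git push -u origin master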


My 2 cents... 



On Wed, Nov 21, 2012 at 10:40 PM, Razique Mahroua <razique.mahroua at gmail.com> wrote: 



That is, I think, a clever approach: using a data cluster as a backend for the configuration files, which are de facto as important as the instances themselves. 
Regarding performance, it should not be a problem, since the only data that gets updated frequently is the database. 
Regards, 
Razique 



Nuage & Co - Razique Mahroua 
razique.mahroua at gmail.com 



On 21 Nov 2012, at 15:51, Dave Spano <dspano at optogenics.com> wrote: 





JuanFra, 

I do use CephFS in production, but not for the /var/lib/nova/instances directory. I host the OpenStack database and the OpenStack configuration files on it for an HA cloud controller cluster, but I am probably crazier than most people, and I have a very small deployment. I have not had any problems with it so far, and given the size of my cloud, I can afford to be very hands-on with it. 
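
For reference, the shared directory is just a CephFS kernel-client mount, along the lines of the following; the monitor address, mount point, and secret file path here are illustrative rather than copied from my setup:

    mount -t ceph 192.168.0.10:6789:/ /srv/cloud \
        -o name=admin,secretfile=/etc/ceph/admin.secret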

The reason I have not hosted the /var/lib/nova/instances directory there is that its data sees a lot more activity than my small database does. Instead, I prefer to perform block migrations rather than live ones until CephFS becomes more stable. 
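
If I remember the novaclient flag correctly, a block migration is just the live-migration command with --block-migrate, so no shared /var/lib/nova/instances is needed (the instance and host names below are placeholders):

    nova live-migration --block-migrate <instance-uuid> <target-host>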


Dave Spano 
Optogenics 
Systems Administrator 




From: "Sébastien Han" < han.sebastien at gmail.com > 
To: "JuanFra Rodríguez Cardoso" < juanfra.rodriguez.cardoso at gmail.com > 
Cc: "Openstack" < openstack at lists.launchpad.net >, "ceph-devel" < ceph-devel at vger.kernel.org > 
Sent: Wednesday, November 21, 2012 4:03:48 AM 
Subject: Re: [Openstack] Ceph + Nova 

Hi, 

I don't think this is the best place to ask your question, since it's not 
directly related to OpenStack but more about Ceph, so I've put the Ceph ML 
in CC. Anyway, CephFS is not ready for production yet, although I've heard 
that some people use it. The people from Inktank (the company behind Ceph) 
don't recommend it; AFAIR they expect something more production-ready in 
Q2 2013. You can use it (I did, for testing purposes), but it's at your 
own risk. 
Besides this, RBD and RADOS are robust and stable now, so you can go with 
the Cinder and Glance integration without any problems. 
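
For Folsom, the wiring is only a few lines of configuration. A minimal sketch; the pool names, user names, and the secret UUID are examples to be replaced with your own values:

    # glance-api.conf
    default_store = rbd
    rbd_store_user = glance
    rbd_store_pool = images

    # cinder.conf
    volume_driver = cinder.volume.driver.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <uuid of your libvirt secret>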

Cheers! 

On Wed, Nov 21, 2012 at 9:37 AM, JuanFra Rodríguez Cardoso 
<juanfra.rodriguez.cardoso at gmail.com> wrote: 
> Hi everyone: 
> 
> I'd like to know your opinion as nova experts: 
> 
> Would you recommend CephFS as shared storage in /var/lib/nova/instances? 
> Another option would be to use GlusterFS or MooseFS for the 
> /var/lib/nova/instances directory and Ceph RBD for Glance and Nova 
> volumes, don't you think? 
> 
> Thanks for your attention. 
> 
> Best regards, 
> JuanFra 
> 









