[openstack-dev] [TripleO] UnderCloud & OverCloud

LeslieWang wqyuwss at hotmail.com
Sun Dec 29 03:54:32 UTC 2013


Hi Clint,
Thanks for your reply. Please see inline.
Best Regards
Leslie

> From: clint at fewbar.com
> To: openstack-dev at lists.openstack.org
> Date: Sat, 28 Dec 2013 08:23:45 -0800
> Subject: Re: [openstack-dev] [Spam]  [TripleO] UnderCloud & OverCloud
> 
> Excerpts from LeslieWang's message of 2013-12-24 19:19:52 -0800:
> > Dear All,
> > Merry Christmas & Happy New Year!
> > I'm new to TripleO. After some investigation, I have one question about the UnderCloud and OverCloud. Per my understanding, the UnderCloud pre-installs and sets up all of the baremetal servers used for the OverCloud, so it seems to assume that every baremetal server is installed in advance. My question comes from a green/elasticity point of view: initially the OverCloud should have zero baremetal servers. Based on user demand, the OverCloud Nova scheduler should decide whether more baremetal servers are needed, then ask the UnderCloud to allocate them, and the UnderCloud would use Heat to orchestrate powering the baremetal servers on. Does this make sense? Is it already planned in the roadmap?
> > If UnderCloud resources are created/deleted elastically, why not have the OverCloud talk to Ironic directly to allocate resources? It seems that could achieve the same goal. What other features will the UnderCloud provide? Thanks in advance.
> > Best Regards
> > Leslie
> 
> Having the overcloud scheduler ask for new servers would be pretty
> interesting. It takes most large scale servers several minutes just to
> POST though, so I'm not sure it is going to work out well if you care
> about latency for booting VMs.
Leslie - The Nova API could add one option (latency-sensitive or not) to aid the scheduler's decision. If the client is sensitive to VM boot latency, it can pass a parameter asking for the VM to boot immediately, and the scheduler can start the VM on a baremetal server that is already running. Otherwise, if the client is not latency-sensitive, the scheduler can power on new servers first and then start the VM on top of them.
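Leslie - To illustrate the idea, here is a minimal sketch of such a scheduling decision. The `latency_sensitive` hint name, the host-dict shape, and the filter logic are all assumptions for illustration, not an existing Nova API:

```python
# Hypothetical scheduling helper: prefer already-powered-on hosts when the
# boot request is marked latency-sensitive, since powering on a baremetal
# node can take several minutes just to POST.
# The "latency_sensitive" hint is an assumed name, not a real Nova hint.

def select_host(hosts, scheduler_hints):
    """Pick a host name for the new VM, or None if a new node must be powered on.

    hosts: list of dicts like
        {"name": str, "powered_on": bool, "free_ram_mb": int}
    scheduler_hints: dict of hints passed with the boot request
    """
    latency_sensitive = scheduler_hints.get("latency_sensitive", False)
    candidates = [h for h in hosts if h["free_ram_mb"] > 0]
    if latency_sensitive:
        # Only hosts that are already running can boot the VM immediately.
        candidates = [h for h in candidates if h["powered_on"]]
    if not candidates:
        # Signal the undercloud that a new baremetal node is needed.
        return None
    # Prefer powered-on hosts either way; break ties by free RAM.
    candidates.sort(key=lambda h: (not h["powered_on"], -h["free_ram_mb"]))
    return candidates[0]["name"]
```

With this shape, a non-latency-sensitive request can still land on a powered-off node pool (by returning None and letting the undercloud power one on), while a latency-sensitive request only ever considers running hosts.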
> 
> What might work is to use an auto-scaler in the undercloud though, perhaps
> having it informed by the overcloud in some way for more granular policy
> possibilities, but even just knowing how much RAM and CPU are allocated
> across the compute nodes would help to inform us when it is time for
> more compute nodes.
> 
> Also the scale-up is fun, but scaling down is tougher. One can only scale
> down off nodes that have no more compute workloads. If you have live
> migration then you can kick off live migration before scale down, but
> in a highly utilized cluster I think that will be a net loss over time
> as the extra load caused by a large scale live migration will outweigh
> the savings from turning off machines. The story might be different for
> a system built on network-based volumes like Ceph, I'm not sure.
Leslie - agree.
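Leslie - The two points above (scale up when aggregate RAM/CPU allocation is high; scale down only nodes with no compute workloads) could be sketched roughly like this. The 80%/30% thresholds and the node-dict shape are assumptions for illustration, not part of any TripleO component:

```python
# Toy capacity policy for the undercloud autoscaler discussed above.
# Thresholds and data shapes are illustrative assumptions only.

def scaling_decision(compute_nodes, scale_up_ratio=0.8, scale_down_ratio=0.3):
    """Return ("up", 1), ("down", [node names]) or ("hold", None).

    compute_nodes: list of dicts like
        {"name": str, "ram_used_mb": int, "ram_total_mb": int, "instances": int}
    """
    total = sum(n["ram_total_mb"] for n in compute_nodes)
    used = sum(n["ram_used_mb"] for n in compute_nodes)
    ratio = used / total if total else 1.0
    if ratio >= scale_up_ratio:
        # Ask Heat/Ironic to power on one more compute node.
        return ("up", 1)
    if ratio <= scale_down_ratio:
        # Only nodes with zero compute workloads can be turned off without
        # triggering live migrations, which may cost more than they save.
        idle = [n["name"] for n in compute_nodes if n["instances"] == 0]
        if idle:
            return ("down", idle)
    return ("hold", None)
```

Even this toy version shows the asymmetry Clint describes: scaling up is a simple threshold check, while scaling down is gated on nodes already being empty.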
> 
> Anyway, this is really interesting to think about, but it is not
> something we're quite ready for yet. We're just getting to the point
> of being able to deploy software updates using images, and then I hope
> to focus on improving usage of Heat with rolling updates and the new
> software config capabilities. After that it may be that we can look at
> how to scale down a compute cluster automatically. :)
Leslie - Understood. Rome was not built in a day.
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

