[OpenStack-Infra] Talking is the first step.

Jesse Keating jesse.keating at RACKSPACE.COM
Wed Apr 24 06:15:09 UTC 2013


On Apr 23, 2013, at 10:48 PM, Monty Taylor <mordred at inaugust.com> wrote:
> 
> I'd like to make some headway on the tripleo-based stuff that Robert and
> I discussed with you guys when we came to the castle. If we can get that
> bit in - then deploying a base xenserver image onto some metal once we
> have it won't be a special case - and doing the gobs of combinations
> like you're talking about should be simple.
> 
> It breaks down into a few different problem areas:
> 
> a) getting the process flow around using OpenStack to deploy OpenStack
> even on VMs working. pleia2 is working on that using the virtual bare
> metal code right now, and I think it's a great starting point.

We're working on something similar inside RAX for our automated dev environments (on-demand devstack, only much more than devstack: cells, control plane, etc.). The work we're doing is designed to align with Heat and nova-baremetal^W nova-truss.

On our end it may involve writing power plugins and deployment plugins to work with what we have to manage the bare metal -- but that's where we are headed. We want to be deploying our dev environments (and later our actual environments) in the same fashion that upstream is.
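
To make that concrete, here's a rough sketch of the shape of power plugin we'd be writing -- assuming an interface along the lines of nova-baremetal's PowerManager base class (activate/deactivate/reboot/is-power-on style hooks); the class name and the use of ipmitool here are illustrative, not our actual internal tooling:

# Rough sketch of a power plugin driving node power over IPMI.
# Assumes a nova-baremetal PowerManager-style interface; the class name
# and method set here are illustrative, not our real internal driver.
import subprocess


class IPMIPowerManager(object):
    """Flip a node's power state with ipmitool."""

    def __init__(self, address, user, password, **kwargs):
        self._base_cmd = ['ipmitool', '-I', 'lanplus',
                          '-H', address, '-U', user, '-P', password]

    def _power(self, *args):
        # Run "ipmitool ... power <subcommand>" and return its output.
        return subprocess.check_output(
            self._base_cmd + ['power'] + list(args)).decode()

    def activate_node(self):
        self._power('on')

    def deactivate_node(self):
        self._power('off')

    def reboot_node(self):
        self._power('cycle')

    def is_power_on(self):
        # ipmitool prints "Chassis Power is on" / "Chassis Power is off".
        return 'is on' in self._power('status')

The deployment plugin side would be the same idea: a small class that hands the node an image and kicks off the PXE/deploy dance using whatever we already have in house.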

> 
> b) figuring out how the OpenStack Infra team interacts with hardware
> from folks. We had a session about this at the summit (I think you were
> in that one? the week was a bit of a blur...) and some of the issues are
> that we'll need it to be isolated enough that dhcp/pxe-ing from one
> server to others needs to be ok - but it needs to be managed enough that
> there is a way for the infra team to deal with problems should they
> arise. jeblair brought up that if we can get hardware donations from
> multiple vendors, then perhaps the vendor response time on a single set
> of hardware won't be as important to the gate. This is a bit of a moot
> question until there is actual hardware to consider…

Which we are working on. I believe the outcome of that session was that a set of requirements would be written up for what the CI team is looking for with regard to hardware and access, so that folks like us can work on providing it. Turns out Rackspace has some experience with doing the managed hosting thing...

> 
> c) Getting the heat+nova based install of openstack working. This is a
> thing that we can start pointing you at in the #tripleo channel... and
> I'd love to follow up/re-engage with you on what steps we need to take to
> start moving iNova more towards OpenStack Bare Metal/TripleO as an
> underlying basis.

See above :) This is definitely where our work is going.
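
For concreteness, here's roughly what kicking off that kind of heat+nova deploy looks like from Python -- assuming python-heatclient's v1 client and a CFN-style template like the early TripleO ones; the endpoint, token, image and flavor names below are placeholders, not anything real:

# Rough sketch: create a Heat stack that boots one node via nova.
# Assumes python-heatclient's v1 API; all names and endpoints below are
# placeholders for illustration only.
from heatclient.v1.client import Client

TEMPLATE = {
    'HeatTemplateFormatVersion': '2012-12-12',
    'Description': 'Boot a single overcloud-style node via nova',
    'Resources': {
        'controller0': {
            'Type': 'AWS::EC2::Instance',
            'Properties': {
                'ImageId': 'overcloud-control',  # placeholder image name
                'InstanceType': 'baremetal',     # placeholder flavor
            },
        },
    },
}

heat = Client(endpoint='http://heat.example.com:8004/v1/TENANT_ID',
              token='KEYSTONE_TOKEN')  # placeholders

heat.stacks.create(stack_name='ci-overcloud',
                   template=TEMPLATE,
                   parameters={})

The idea (as Monty says above) being that once Heat owns the orchestration, swapping the image/flavor pair gets you from VMs for the gate to donated metal without either one being a special case.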

> 
> You'll notice I didn't mention puppet or mcollective. That's not because
> I hate you or those technologies - just that I think that the
> orchestration issues in terms of wrangling the resources are the much
> harder ones to solve at the moment - and I think that
> mcollective/puppet/chef/razor/crowbar are all highly problematic from
> the gate's perspective, because they make it even harder for devs to
> debug their gate failing issue.
> 
> Now I'm rambling - maybe let's have a quick chat on IRC tomorrow to
> get a good baseline, and then bring that back to the list here
> (recording our thinking is a great idea).



-jlk



