[OpenStack-Infra] Talking is the first step.

Monty Taylor mordred at inaugust.com
Wed Apr 24 05:48:27 UTC 2013



On 04/23/2013 01:52 PM, Brian Lamar wrote:
> (I sincerely apologize for the length of this email. I have a lot to
> ask/say and I promise to be more concise in future mailings.)

No worries - happy to have you here!

> Hey All,
> 
> I didn't get to talk to much of the Infrastructure team at the summit,
> but I'm pretty tired of not working more closely with OpenStack CI.
> 
> For reference, I work at Rackspace to deploy OpenStack.

Yup. Thanks, btw.

> Long story short: *I want to create environments, manage environments,
> and deploy code to environments under the auspices of the OpenStack
> Infrastructure team.*

Awesome.

> Now, I guess the first question I'd like answered is: Is this goal under
> the purview of OSCI (do you have a nickname/shortname?)? Is it
> reasonable? Perhaps you can't answer that without elaboration. Here is some.

Yes and yes - depending on what you mean. :) (clark answered the name
part I think)

> The devstack-gate is great -- but it is devstack nonetheless. We should
> have a devstack gate. We also need other, more realistic gates. You know
> this; I'm pretty sure we agree on this at least, but to be honest I
> can't remember with everything that's been happening the past week.
> 
> *We seem to have already:*
> 
>   * devstack-gate
>   * a very nice, flexible gating system
> 
> 
> *What I/we would like to help with:*
> 
>   * more gating scenarios
>   * deployment tool(s) standardization
> 
> 
> One issue is the vague term "more gating scenarios". Obviously I mean
> everything you've ever thought of. All permutations of $platform,
> $config_management, $packaging_method, $deployment_method, $use_cells I
> suppose. Since that's a lot, we'll choose the one we care about the
> most: XenServer, masterless puppet, venv, mcollective, 2 cells.
>
> Woah, woah, woah, you say. Where did masterless puppet, venv, and
> mcollective come from?! Well, those are what we're using now. If they
> have to change because they're not the technologies chosen for the
> future, then so be it. I'm not attached to technologies.

Awesome. Not being attached to technologies leads to happiness.

> We use no puppet masters because puppet did not scale for us (even with
> many puppet masters behind LBs). 

Correct. I doubt it will ever scale - it's not really designed to.
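
For anyone reading along in the archive who hasn't run puppet
masterless: the pattern is just to ship the manifests and modules to
each node (git checkout, tarball, whatever) and run puppet apply
locally, so there's no central puppetmaster to melt down. A rough,
untested sketch of the idea - the /opt/config layout here is made up
for illustration, not what Rackspace actually does:

    # Rough sketch of a masterless puppet run: the manifests and modules
    # have already been shipped to the node; we just apply them locally.
    # The /opt/config layout is a made-up example.
    import subprocess

    CHECKOUT = "/opt/config"  # hypothetical local checkout of manifests+modules

    rc = subprocess.call([
        "puppet", "apply",
        "--modulepath", CHECKOUT + "/modules",
        "--detailed-exitcodes",
        CHECKOUT + "/manifests/site.pp",
    ])

    # With --detailed-exitcodes, 0 means "no changes" and 2 means "changes
    # applied"; anything else is a failure.
    if rc not in (0, 2):
        raise SystemExit("puppet apply failed with exit code %d" % rc)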

> We use venv because it's more cross-platform and makes it easy to deploy
> multiple versions of software at the same time. 

Yup. Agree.
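
For the archive: the nice property of the venv approach is that every
release lands in its own isolated prefix, so the version you're about to
deploy never touches the one currently serving traffic, and rollback is
just repointing a symlink. A rough sketch of the pattern - the paths and
version string are hypothetical, not what Rackspace actually uses:

    # Sketch of the per-release virtualenv pattern: each version gets its
    # own prefix, and cutover/rollback is a symlink flip. Paths and the
    # version string are hypothetical.
    import subprocess

    VERSION = "2013.1"                      # hypothetical release to deploy
    PREFIX = "/opt/venvs/nova-" + VERSION   # hypothetical install location

    # Create an isolated environment for this release.
    subprocess.check_call(["virtualenv", PREFIX])

    # Install the service and its dependencies into that environment only.
    subprocess.check_call([PREFIX + "/bin/pip", "install", "nova==" + VERSION])

    # Activating the new release is then just repointing something like
    # /opt/nova/current at PREFIX - the old venv stays around for rollback.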

> We use mcollective to orchestrate the deployment because PSSH is slow
> when dealing with a large number of hosts.

I think you know where I'm going to go with two of these things...

> However, if you ignore all of the above, a great first step is XenServer
> and forget all the other things (for now). 

I'm going to propose a different first step, not because I hate
XenServer, but because it's tricky enough (you guys don't even automate
that part yourselves) that I think we need to solve a different portion
of this first - and if we get that done, the XenServer portion should be
easy.

I'd like to make some headway on the TripleO-based stuff that Robert and
I discussed with you guys when we came to the castle. If we can get that
bit in, then deploying a base XenServer image onto some metal once we
have it won't be a special case - and doing the gobs of combinations
you're talking about should be simple.

It breaks down into a few different problem areas:

a) Getting the process flow around using OpenStack to deploy OpenStack,
even on VMs, working. pleia2 is working on that using the virtual bare
metal code right now, and I think it's a great starting point.

b) Figuring out how the OpenStack Infra team interacts with hardware
donated by folks. We had a session about this at the summit (I think you
were in that one? the week was a bit of a blur...) and some of the
issues are that it will need to be isolated enough that dhcp/pxe-ing
from one server to others is ok - but managed enough that there is a way
for the infra team to deal with problems should they arise. jeblair
brought up that if we can get hardware donations from multiple vendors,
then perhaps the vendor response time on a single set of hardware won't
be as important to the gate. This is a bit of a moot question until
there is actual hardware to consider...

c) Getting the heat+nova based install of OpenStack working. This is a
thing we can start pointing you at in the #tripleo channel... and I'd
love to follow up / re-engage with you on what steps we need to take to
start moving iNova more towards OpenStack Bare Metal/TripleO as an
underlying basis.
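
To make (c) slightly more concrete: the end state is "hand Heat a
template describing the OpenStack you want and let it drive nova to
build it", so the thing a gate job would actually run ends up being
roughly the sketch below. I haven't tested this, and the endpoint,
token, template and parameter bits are all placeholders - the real
templates are what the tripleo work is producing:

    # Untested sketch of kicking off a heat-driven install from a script.
    # Endpoint, token and template contents are placeholders; the real
    # templates come out of the tripleo effort.
    from heatclient.client import Client

    heat = Client('1',
                  endpoint='http://heat.example.com:8004/v1/TENANT_ID',
                  token='KEYSTONE_TOKEN')

    template = open('overcloud.yaml').read()   # hypothetical template file

    # Ask Heat to build the stack; nova (or nova baremetal) does the
    # actual provisioning underneath.
    heat.stacks.create(stack_name='overcloud',
                       template=template,
                       parameters={})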

You'll notice I didn't mention puppet or mcollective. That's not because
I hate you or those technologies - it's just that I think the
orchestration issues around wrangling the resources are the much harder
ones to solve at the moment - and I think that
mcollective/puppet/chef/razor/crowbar are all highly problematic from
the gate's perspective, because they make it even harder for devs to
debug a failing gate job.

Now I'm rambling - maybe let's have a quick chat on IRC tomorrow to get
a good baseline, and then bring that back to the list here (recording
thinking is a great idea).

> So the question again becomes how can we get XenServer into the gate. I
> imagine it going something like this:
> 
> 1) Hardware is provided with XenServer 6.1 installed
> 2) Script for Jenkins will boot XenServer VM on hardware provided
> 3) VM will be configured using...devstack?
> 4) This script will be an unofficial gate (aka 3rd party gate) until it
> is considered stable
> 5) This script will be integrated into the official process either as a
> gate-gate or a periodic-gate 
> 
> As you can see I'm pretty fuzzy on the details, which is where this list
> comes in. Heck, some of my team is on this list and will probably be
> shouting at their monitors that I'm doing it all wrong. It isn't the
> first time and won't be the last time.
> 
> Ending before this email rambles on more,


