[openstack-dev] [TripleO] An experiment with Ansible

James Slagle james.slagle at gmail.com
Mon Jul 24 12:56:28 UTC 2017


On Mon, Jul 24, 2017 at 3:12 AM, Marios Andreou <mandreou at redhat.com> wrote:
>
>
> On Fri, Jul 21, 2017 at 1:21 AM, James Slagle <james.slagle at gmail.com>
> wrote:
>>
>> Following up on the previous thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
>>
>> I wanted to share some work I did around the prototype I mentioned
>> there. I spent a couple days exploring this idea. I came up with a
>> Python script that when run against an in progress Heat stack, will
>> pull all the server and deployment metadata out of Heat and generate
>> ansible playbooks/tasks from the deployments.
>>
>> Here's the code:
>> https://github.com/slagle/pump
>>
>> And an example of what gets generated:
>> https://gist.github.com/slagle/433ea1bdca7e026ce8ab2c46f4d716a8
>>
>> If you're interested in any more detail, let me know.
>>
>> It signals the deployments to completion with a dummy "ok" signal so
>> that the stack will complete. You can then use ansible-playbook to
>> apply the actual deployments (in the expected order, respecting the
>> steps across all roles, and in parallel across all the roles).
>>
>> Effectively, this treats Heat as nothing but a YAML cruncher. When
>> using it with deployed-server, Heat doesn't actually change anything
>> on an overcloud node; you're only using it to generate ansible.
>
>
>
> Hi James,
>
> FYI, this actually describes the current plan for the Pike minor update [1] -
> the idea is to use "openstack overcloud config download" (matbu++) to write
> the playbooks for each node from the deployed stack outputs. The minor
> update playbooks themselves will be generated from new 'update_tasks' added
> to each of the service manifests (akin to the current upgrade_tasks). The
> plan is to disable the actual service config deployment steps so that we
> just get the stack outputs for the playbook generation.
>
> The effort is led by shardy and he has posted reviews/comments on the
> etherpad at [1] (I know he is away this week so may not respond). I was
> struck by the similarity between what you described above and the consensus
> we seemed to reach towards the end of the week about the minor update plan,
> so I thought you and others may be interested to hear it.

Yes, I've been looking at that work as well. I'm not entirely sure
what the longer term goals are, although I like the approach we are
taking with updates. Looking at the patches that have been posted so
far, I'm not sure whether they are meant to be Docker/container
specific only, or whether they would also work with the puppet
services, or with any SoftwareConfig group type.
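
For anyone following along, by "SoftwareConfig group type" I just mean
the "group" property on the OS::Heat::SoftwareConfig resources, which
selects the heat-config hook that applies the config on the node. A
rough illustrative snippet (not taken from tripleo-heat-templates):

    ExampleConfig:
      type: OS::Heat::SoftwareConfig
      properties:
        # the group selects the hook that applies the config on the
        # node, e.g. script, puppet, ansible, hiera, docker-cmd, ...
        group: script
        config: |
          #!/bin/bash
          echo "configuring the node"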

I've pulled all the patches locally and was testing with a
puppet-only stack for the initial deployment (no stack update to
containers), and the generated config/playbooks are not correct (they
try to do something with containers when they shouldn't).

I'm not sure if that is intended to work or if there is a bug. I can
check with shardy when he returns about the goals and further context
around that approach.

I think one of the primary differences between that approach and what
I prototyped is that my goal was to completely eliminate the
os-collect-config -> Heat metadata Deployment "transport" for any
SoftwareConfig group type (puppet, script, hiera, ansible). IME, that
has been one of the most difficult aspects of TripleO for users and
operators to reason about, reproduce, troubleshoot, and understand.
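
To make that concrete: instead of each node polling Heat for deployment
metadata via os-collect-config and applying it with the heat-config
hooks, the generated playbooks push the config out and run it directly
over SSH. Something roughly along these lines (a simplified sketch, not
pump's exact output; the host group and file paths are made up):

    - hosts: Controller
      tasks:
        # one task per Heat SoftwareDeployment, ordered by step
        - name: ControllerDeployment_Step1
          script: deployments/ControllerDeployment_Step1.sh

        - name: ControllerDeployment_Step2
          script: deployments/ControllerDeployment_Step2.sh

The deployment data still comes out of Heat, but the transport to the
node is just ansible over SSH, which is much easier to see, rerun, and
debug than the polling/signaling loop.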

An additional goal is to see if it would be possible to do that
entirely external to Heat and/or tripleo-heat-templates. Just
considering all the reviews that are currently in progress for the
"config download" approach, there is a lot of refactoring, output
changes, and yaql churning in tripleo-heat-templates.

Certainly the approaches are similar, and could even co-exist,
although they are tackling the problem from different angles.

>
> Your review @ /#/c/485303/ is slightly different in that it doesn't disable
> the deployment/postdeploy steps but signals completion to Heat. I haven't
> checked that review in detail, but my first concern is whether you can catch
> it in time... I mean you start the heat stack update and then have to
> immediately call "openstack overcloud signal", if I understood correctly?

Yes, you'd have to signal the deployments before they time out on the Heat side.

You could also configure the signal_transport to NO_SIGNAL, in which
case Heat would just create the stack to completion without waiting
(and thus possibly timing out) for any signals.
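
For reference, that's just the signal_transport property on the
deployment resources. A minimal sketch (resource and parameter names
are illustrative):

    ExampleDeployment:
      type: OS::Heat::SoftwareDeployment
      properties:
        config: {get_resource: ExampleConfig}
        server: {get_param: server_id}
        # Heat marks the deployment COMPLETE immediately instead of
        # waiting for a signal from the node
        signal_transport: NO_SIGNAL

With NO_SIGNAL there's no race against the deployment timeout at all,
at the cost of Heat not knowing whether anything was actually applied.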





-- 
-- James Slagle
--


