[tripleo][operators] Removal of mistral from the TripleO Undercloud

Rabi Mishra ramishra at redhat.com
Sat Mar 14 12:06:20 UTC 2020

On Sat, Mar 14, 2020 at 2:10 AM John Fulton <johfulto at redhat.com> wrote:

> On Fri, Mar 13, 2020 at 3:27 PM Kevin Carter <kecarter at redhat.com> wrote:
> >
> > Hello stackers,
> >
> > In the pursuit to remove Mistral from the TripleO undercloud, we've
> discovered an old capability that we need to figure out how best to handle.
> Currently, we provide the ability for an end-user (operator / deployer) to
> pass in "N" Mistral workflows as part of a given deployment plan which is
> processed by python-tripleoclient at runtime [0][1]. From what we have
> documented, and what we can find within the code-base, we're not using this
> feature by default. That said, we do not remove something if it is valuable
> in the field without an adequate replacement. The ability to run arbitrary
> Mistral workflows at deployment time was first created in 2017 [2] and
> while present all this time, its documented [3] and intra-code-base uses
> are still limited to samples [4].

> As it stands now, we're on track to making Mistral inert this cycle and if
> our progress holds over the next couple of weeks the capability to run
> arbitrary Mistral workflows will be the only thing left within our codebase
> that relies on Mistral running on the Undercloud.
> > So the question is: what do we do with this functionality? Do we remove the
> ability outright, do we convert the example workflow [5] into a
> stand-alone Ansible playbook and change the workflow runner to an arbitrary
> playbook runner, or do we simply leave everything as-is and deprecate it to
> be removed within the next two releases?

Yeah, as John mentioned, tripleo.derive_params.v1.derive_parameters
workflow is surely being used for NFV (DPDK/SR-IOV) and HCI use cases and
can't be deprecated or dropped. Though we have a generic interface in
tripleoclient to run any workflow listed in the plan-environment, I have not
seen it being used for anything other than the mentioned workflow.

In the scope of the 'mistral-to-ansible' work, we seem to have two options:

1. Convert the workflow to an Ansible playbook 'as-is', i.e. calculate and
merge the derived parameters into the plan-environment and, as you've
mentioned, change the tripleoclient code to call any playbook listed in
plan-environment.yaml.

2. Move the functionality further down the component chain in TripleO to
have the required ansible host/group_vars set for them to be used by
config-download playbooks/ansible/puppet.
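For option 1, the playbook would end up doing essentially what the workflow does today: compute the derived values and fold them into the plan-environment's parameter_defaults. A minimal sketch of that merge step (the function name and structure here are illustrative, not the actual tripleo-common API):

```python
# Illustrative sketch only: fold derived parameter values into a
# plan-environment mapping under 'parameter_defaults', with derived
# values taking precedence over any pre-existing defaults.
def merge_derived_parameters(plan_env, derived):
    merged = dict(plan_env)  # shallow copy; don't mutate the caller's dict
    defaults = dict(merged.get("parameter_defaults", {}))
    defaults.update(derived)
    merged["parameter_defaults"] = defaults
    return merged
```

An Ansible task could then write the merged mapping back out as plan-environment.yaml, which keeps the on-disk contract with the rest of tripleoclient unchanged.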

I guess option 1 would be easier within the timelines. I've done some
preliminary work to move some of the functionality in the relevant mistral
actions to utils modules [1], so that they can be called from ansible
modules without depending on mistral/mistral-lib, and to use those in a
playbook that roughly replicates the tasks in the mistral workflow.
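As a rough illustration of that refactor (all names and the formula below are hypothetical; the real change is in the review linked at the end): the calculation moves into a plain function with no mistral-lib imports, so both the legacy Mistral action and a new Ansible module become thin wrappers around it:

```python
# Hypothetical sketch: a pure utils function that both a Mistral action
# and an Ansible module could call; it imports nothing from mistral-lib.
def derive_reserved_host_memory(osd_count, average_guest_count):
    # Illustrative HCI-style rule of thumb, NOT the real derivation:
    # reserve 5 GB per ceph OSD plus 0.5 GB of overhead per guest.
    per_osd_mb = 5 * 1024
    per_guest_overhead_mb = 512
    return osd_count * per_osd_mb + average_guest_count * per_guest_overhead_mb

# The Ansible module (or the legacy action) just marshals its input
# parameters and returns the derived parameter dict.
def derive_params_module(params):
    return {"NovaReservedHostMemory": derive_reserved_host_memory(**params)}
```

The point of the split is the dependency direction: the utils function sits below both callers, so dropping Mistral later only deletes a wrapper.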

Having said that, it would be good to know what the DFG:NFV folks think, as
they are the original authors/maintainers of that workflow.


> The Mistral based workflow took advantage of the deployment plan which
> was stored in Swift on the undercloud. My understanding is that too is
> going away.

I'm not sure that would be in the scope of the 'mistral-to-ansible' work.
Dropping swift would probably be a bit more complex, as we use it to store
templates, the plan-environment, plan backups (and possibly a few more
things), and it would require significant design rework (maybe possible once
we get rid of heat on the undercloud). Even with heat using the templates
from swift and merging environments on the client side, we've already had to
bump heat's REST API JSON body size limit (max_json_body_size) on the
undercloud to 4MB [2] from the default 1MB, and sending all required
templates as part of the API request would not be a good idea from an
undercloud scalability point of view.
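For reference, that limit is the max_json_body_size option in the undercloud's heat.conf; the bump described above corresponds to something like:

```ini
# /etc/heat/heat.conf on the undercloud
[DEFAULT]
# Default is 1048576 (1 MB); raised to 4 MB so that large merged
# template payloads are accepted by the heat REST API.
max_json_body_size = 4194304
```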

[1] https://review.opendev.org/#/c/709546/

Rabi Mishra