[openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters
Giulio Fidente
gfidente at redhat.com
Mon Jan 23 12:55:37 UTC 2017
On 01/23/2017 11:07 AM, Saravanan KR wrote:
> Thanks John for the info.
>
> I am going through the spec in detail. Before that, I had a few
> thoughts about how I wanted to approach this, which I have drafted in
> https://etherpad.openstack.org/p/tripleo-derive-params. It is not
> 100% ready yet; I am still working on it.
I've linked this etherpad for the session we'll have at the PTG.
> As of now, there are a few differences at the top of my mind which I
> want to highlight; I am still going through the spec in detail:
> * Profiles vs Features - Considering an overcloud node as a profile,
> rather than as a node which can host a set of features, would have
> limitations. For example, if I need a Compute node to host both
> Ceph (OSD) and DPDK, then the node will have multiple profiles, or we
> have to create a profile like
> hci_enterprise_many_small_vms_with_dpdk. The first option is not
> appropriate and the latter is not scalable; maybe you have something
> else in mind?
> * Independent - The initial plan was for this to run as an independent
> execution, which could also be added to the deploy if needed.
> * Not to expose/duplicate parameters which are straightforward; for
> example, the tuned profile name should be associated with a feature
> internally, and the workflows will decide it.
For all of the above, I think we need to decide if we want the
optimizations to be profile-based and gathered *before* the overcloud
deployment is started, or if we want to set these values during the
overcloud deployment based on the data we have at runtime.

It seems like both approaches have pros and cons; this would be a good
conversation to have with more people at the PTG.
> * And another thing which I couldn't figure out: where will the
> workflow actions be defined, in THT or tripleo_common?
To me it sounds like executing the workflows before stack creation is
started would be fine, at least for the initial phase.

Running workflows from Heat depends on the other blueprint/session we'll
have about the WorkflowExecution resource; once that is available, we
could trigger the workflow execution from THT if beneficial.
> The requirements which I thought of for the deriving workflow are:
> the parameter-deriving workflow should
> * be runnable independently
> * take basic parameter inputs; for easy deployment, keep a very
> minimal set of mandatory parameters and leave the rest optional
> * read introspection data from the Ironic DB and the Swift-stored blob
>
> I will add these comments as a starting point on the spec. We will
> work towards narrowing the differences, so that the operator's
> headache is greatly reduced.
thanks
--
Giulio Fidente
GPG KEY: 08D733BA