[openstack-dev] [tripleo] When to use parameters vs parameter_defaults
Jiří Stránský
jistr at redhat.com
Thu Nov 26 13:12:28 UTC 2015
<snip>
> My personal preference is to say:
>
> 1. Any templates which are included in the default environment (e.g.
> overcloud-resource-registry-puppet.yaml), must expose their parameters
> via overcloud-without-mergepy.yaml
>
> 2. Any templates which are included in the default environment, but via a
> "noop" implementation, *may* expose their parameters, provided they are
> common and not implementation/vendor-specific.
>
> 3. Any templates exposing vendor-specific interfaces (e.g. at least anything
> related to the OS::TripleO::*ExtraConfig* interfaces) must not expose any
> parameters via the top level template.
>
> How does this sound?
Pardon the longer e-mail please, but I think this topic is very far-reaching
and impactful on the future of TripleO, perhaps even strategic, and I'd like
to present some food for thought.
I think that as we progress towards a more composable/customizable overcloud,
using parameter_defaults will become a necessity in more and more places in
the templates.
Nowadays we can get away with hierarchical passing of some parameters
from the top-level template downwards because we can make very strong
assumptions about how the overcloud is structured, and what each piece
of the overcloud takes as its parameters. Even though we support
customization via the resource registry, it's still mostly just
switching between alternate implementations of the same thing, not
strong composability.
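To make the distinction concrete, here's a minimal environment file sketch
(the parameter names are illustrative, not taken from tripleo-heat-templates).
Values under "parameters" only reach parameters declared by the top-level
template, which then has to wire them down into the nested stacks explicitly,
while values under "parameter_defaults" apply to any template in the tree that
declares a parameter of that name, without the top level knowing about it:

parameters:
  # Must exist in the top-level template, which then has to pass it
  # down explicitly to whichever nested stack consumes it.
  NtpServer: pool.example.com

parameter_defaults:
  # Overrides the default of any parameter with this name, anywhere
  # in the nested stack tree, with no top-level plumbing required.
  VendorSpecialOption: some-value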
I would imagine that going forward, TripleO will receive feature requests to
add custom node types into the deployment, be it e.g. separating the neutron
network node functionality out of the controller node onto its own hardware,
or adding custom 3rd-party node types which need to integrate tightly with
the rest of the overcloud. When such a scenario is considered, even the most
code-static parameters, like a node-type-specific ExtraConfig or the nova
flavor to use for a node type, suddenly become dynamic on the code level
(think parameter_defaults), simply because we can't predict upfront what node
types we'll have.
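As a purely hypothetical illustration, a deployer adding a dedicated
"networker" node type would probably end up feeding something like this
through an environment file, because the top-level template cannot declare
parameters for node types it doesn't know about (all names below are made up):

parameter_defaults:
  # Hypothetical flavor parameter for the new node type.
  OvercloudNetworkerFlavor: networker
  # Hypothetical ExtraConfig bucket with a placeholder hiera key.
  NetworkerExtraConfig:
    tripleo::example::some_setting: true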
I think a parallel with how Puppet evolved can be observed here. It used to
be that the Puppet classes included in a deployment formed a sort-of
hierarchy and got their parameters fed in a top-down cascade. This limited
the composability of machine configuration manifests (collisions when
declaring the same class from multiple places, a huge number of parameters in
the higher-level manifests). Hiera was introduced to solve that problem, and
nowadays top-level Puppet manifests contain a lot of include statements,
parameter values are mostly read from external hiera data files, and hiera
values transcend the class hierarchy freely. This hinders easy
discoverability of "what settings can I tune within this machine's
configuration", but judging by the adoption of the approach, the benefits
probably outweigh the drawbacks. In Puppet's case, at least :)
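For comparison, the hiera side of that evolution looks roughly like this:
instead of a higher-level manifest declaring a class with explicit
parameters, the manifest just does "include ::ntp", and Puppet's automatic
parameter lookup fetches the values from a hiera data file, e.g.:

# hiera data file; picked up via automatic parameter lookup for the
# included ntp class (standard puppetlabs-ntp parameter shown)
ntp::servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org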
It seems TripleO is hitting similar composability and sanity limits with
the top-down approach, and the number of parameters which can only be
fed via parameter_defaults is increasing. (The disadvantage of
parameter_defaults is that, unlike hiera, we currently have no clear
namespacing rules, which means a higher chance of conflict. Perhaps the
unit tests suggested in another subthread would be a good start; maybe
we could even think about how to do proper namespacing.)
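Just to sketch what namespacing could mean here (purely illustrative, no such
convention exists today), prefixing parameter_defaults keys with the vendor
or component they belong to would at least make collisions less likely:

parameter_defaults:
  # collision-prone: a generic name that two unrelated templates
  # could each declare for different purposes
  EnableHA: true
  # less collision-prone: scoped by an (illustrative) vendor prefix
  AcmeNetworkerEnableHA: true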
Does what I described seem somewhat accurate? Should we maybe buy into
the concept of "composable templates, externally fed
hierarchy-transcending parameters" for the long term?
Thanks for reading this far :)
Jirka