[openstack-dev] [TripleO][Heat] Tuskar v. Heat responsibilities

Steven Hardy shardy at redhat.com
Sat Jun 27 09:29:11 UTC 2015


On Fri, Jun 26, 2015 at 03:05:31PM -0400, James Slagle wrote:
> On Thu, Jun 25, 2015 at 5:40 PM, Steven Hardy <shardy at redhat.com> wrote:
> > On Tue, Jun 23, 2015 at 04:05:08PM -0400, Jay Dobies wrote:
> >> On top of that, only certain templates can be used to fulfill certain
> >> resource types. For instance, you can't point CinderBackend to
> >> rhel-registration.yaml. That information isn't explicitly captured by Heat
> >> templates. I suppose you could inspect usages of a resource type in
> >> overcloud to determine the "api" of that type and then compare that to
> >> possible implementation templates' parameter lists to figure out what is
> >> compatible, but that seems like a heavy-weight approach.
> >>
> >> I mention that because part of the user experience would be knowing which
> >> resource types can have a template substitution made and what possible
> >> templates can fulfill it.
> >
> > This is an interesting observation - assuming we wanted to solve this in
> > Heat (and I'm not saying we necessarily should), one can imagine a
> > resource_registry feature which works like constraints do for parameters,
> > e.g:
> >
> > parameters:
> >   my_param:
> >     type: string
> >     constraints:
> >     - allowed_values: [dog, cat]
> >
> > We could do likewise in the environment:
> >
> > resource_registry:
> >   OS::TripleO::ControllerConfig: puppet/controller-config.yaml
> >   ...
> >   constraints:
> >     OS::TripleO::ControllerConfig:
> >     - allowed_values:
> >       - puppet/controller-config.yaml
> >       - foo/other-config.yaml
> >
> > These constraints would be enforced at stack validation time such that the
> > environment would be rejected if the optional constraints were not met.
> 
> I like this approach.
> 
> Originally, I was thinking it might be cleaner to encode the
> relationship in the opposite direction. Something like this in
> puppet/controller-config.yaml:
> 
> implements:
>   OS::TripleO::ControllerConfig
> 
> But then, you leave it up to the external tools (a UI, etc) to know
> how to discover these implementing templates. If they're explicitly
> listed in a list as in your example, that helps UI's / API's more
> easily present these choices. Maybe it could work both ways.

Yeah the strict interface definition is basically the TOSCA approach
referenced by Thomas in my validation thread, and while I'm not opposed to
that, it just feels like overkill for this particular problem.

I don't see any mutually exclusive logic here - we could probably consider
adding resource_registry constraints now and still add interfaces later if
it becomes apparent we really need them.  Atm I'm just slightly wary of
adding more complexity to already complex templates, and of relying on deep
introspection to match up interfaces (when we've got no deep validation
capabilities at all in heat atm), vs some simple rules in the environment.
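
To illustrate the enforcement I have in mind, with the constraints above an
environment which maps the alias to a template outside its allowed_values
list (reusing Jay's rhel-registration.yaml example):

resource_registry:
  OS::TripleO::ControllerConfig: rhel-registration.yaml

would simply be rejected at stack create/update validation time, so any
UI/CLI consuming the environment gets that check for free.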

Sounds like we've got enough consensus on this idea for it to be worth
raising a spec - I'll do that next week.

> Regardless, I think this does need to be solved in Heat. We've got
> these features in Heat now that are enabling all this flexibility, but
> what we're finding is that when you try to use things like the
> resource registry at scale, it becomes difficult to manage. So, we end
> up writing a new tool, API, or whatever to do that management. If Heat
> doesn't solve it, it's likely that Tuskar could evolve into your
> "Resource registry index service". I'm not saying that's what Heat
> needs to do now, but whatever it can enable such that consumers don't
> feel that they need to write an API that has to encode a bunch of
> additional logic, would be a good thing IMO.
> 
> Historically, TripleO has leaned on new features in Heat extensively,
> and where things aren't available, new tooling is written to address
> those needs. Then Heat matures, and ends up solving just a slightly
> different problem in just a slightly different way, and we're having
> this same conversation about how we need to move forward with the
> TripleO tooling built on top of Heat (merge.py, now tuskar, etc).

+1, I'm *really* trying to break the cycle of papering over feature gaps in
heat elsewhere :)

I understand why that's happened in the past, but I think it's better for
everyone long term if we manage to just mature heat faster rather than
maintaining workarounds elsewhere (particularly TripleO-specific ones, as
time has taught us that TripleO requirements are normally just requirements
nobody else has discovered yet...)

> > That leaves the problem of encoding "this is the resource which selects the
> > CinderBackends", which I currently think can be adequately expressed via
> > resource namespacing/naming?
> 
> I don't think I'm following how the constraints example wouldn't solve
> this part as well...

I guess I just meant we'll have to rely on naming e.g
OS::TripleO::CinderBackend or something to express that the thing being
included/configured is a CinderBackend, as opposed to some other chunk of
logic.  And you probably want to configure this setting after you've, say,
chosen the implementation for your Controller node deployment (e.g puppet
vs containers or whatever).

This comes back to the interfaces discussion above - we don't encode
"this template provides a cinder backend" inside the template itself;
instead we rely on logic outside the resource_registry to know and present
the choices in the correct order.

And we don't express composite constraints, e.g CinderBackend X is only
valid with Controller implementation Y, so you'd have to infer that from,
say, the path to the implementation (if you choose puppet/controller.yaml
then you can only choose puppet/* for all the other ControllerFoo choices,
etc, which is probably fine).
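
As a rough sketch (the cinder backend filename below is invented for the
example), the alias name is what tells the user what is being selected, and
the path prefix is what groups the compatible implementations:

resource_registry:
  OS::TripleO::Controller: puppet/controller.yaml
  OS::TripleO::ControllerConfig: puppet/controller-config.yaml
  # "CinderBackend" in the alias name is the only hint this selects a
  # backend, and the puppet/ prefix implies it matches the puppet Controller
  OS::TripleO::CinderBackend: puppet/cinder-netapp-backend.yaml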

You're probably right though, maybe it's enough for a UX to just present
the choices based on the constraints described in the resource_registry
fairly transparently (this will certainly be much easier than deep
inspection of a massive tree of interfaces IMO).

> >> == Responsibility ==
> >>
> >> Where should that be implemented? That's a good question.
> >>
> >> The idea of resolving resource type uses against candidate template
> >> parameter lists could fall under the model Steve Hardy is proposing of
> >> having Heat do it (he suggested the validate call, but this may be leading
> >> us more towards template inspection sorts of APIs supported by Heat).
> >>
> >> It is also possibly an addition to HOT, to somehow convey an interface so
> >> that we can more easily programmatically look at a series of templates and
> >> understand how they play together. We used to be able to use the
> >> resource_registry to understand those relationships, but that's not going to
> >> work if we're trying to find substitutions into the registry.
> >>
> >> Alternatively, if Heat/HOT has no interest in any of this, this is something
> >> that Tuskar (or a Tuskar-like substitute) will need to solve going forward.
> >
> > I think this problem of template composition is a general one, and I'm keen
> > to at least partially solve it via some (hopefully relatively simple and
> > incremental) additions to heat.
> >
> > Clearly there's still plenty of scope for "application builder" type API's
> > on top of any new interfaces, and I'm not trying to solve that problem,
> > but by exposing some richer interfaces around both template
> > inspection/validation and composition, hopefully we make such tasks
> > easier.
> 
> Yeah, I think this is in line with my thinking as well. Heat doesn't have
> to do it *all*, there's always going to be rich modeling and UX flows
> that are going to need separate API's. But +1 to Heat making that as
> easy as possible, especially when it's just enabling its own features
> to be more easily used.
> 
> >> = Consolidated Parameter List =
> >>
> > Basically, this boils down to a heat validation pass which exposes any
> > parameters not defined by the parent template, which I think should be
> > possible via the recursive validation approach I outlined in my other mail.
> > Do we have sufficient consensus to raise a spec for that and potentially
> > work up a PoC patch (I can do this)?
> 
> FWIW, I liked what you were proposing in the other thread. In thinking
> about the deployment flow in the Tuskar-UI, I think it would enable
> exposing and setting the nested stack parameters easily (you choose
> various resources as displayed in a widget, click a reload/refresh
> button, and new parameters are exposed).
> 
> What might also be neat is if something like heatclient then had
> support to automatically generate stub yaml environment files based on
> the output of the template-validate. So it could spit out a yaml file
> that had a parameter_defaults: section with all the expected
> parameters and their default values, so that the user could then just
> edit that stub to complete the required inputs.

Ah, yeah that's a nice idea!  I think we've reached enough consensus on the
deep validation that I'll raise a spec for that too - I'll include your
heatclient idea, thanks! :)
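
Just to sketch what such a stub could look like (the parameter names and
defaults here are invented for illustration), heatclient might emit
something like:

parameter_defaults:
  # parameters surfaced by the recursive validate which the parent
  # template doesn't already provide values for
  CinderNetappLogin: ''
  CinderNetappPassword: ''
  ControllerExtraConfig: {}

which the user could then edit and pass back in with an extra -e on the
next deploy.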

> 
> >> = Saving Undeployed Stack Configuration =
> >> == Responsibility ==
> >>
> >> From what I understand, Heat doesn't have any interest in storing plan
> >> parameters in this fashion. Though that comes from a while ago, so it's
> >> possible the direction has changed.
> >>
> >> Otherwise, this one likely still falls in Tuskar. It's possible it's done
> >> through client-side calls directly to some sort of storage, but I really
> >> don't like the idea of having that much logic tied to a client (more
> >> specifically, tied into a Python client in the event that shops looking to
> >> integrate aren't using Python).
> >
> > This is true, Heat doesn't want to store stack definitions other than for
> > live stacks, backups during update, and snapshots.
> >
> > As outlined above, I think there are many (better) ways to store a series
> > of text files than inside heat, which is one reason why we've resisted
> > these sorts of "catalog" requirements in the past.
> >
> > I don't really think it's tied to a client, if we can shape things such
> > that it's just pushing some collection of data to e.g swift after following
> > a well defined series of pre-deployment validation steps.
> 
> I agree that there is some aspect to this that is just based around
> yaml files and git/swift/whatever. I don't think that we necessarily
> need an API for that either.
> 
> What TripleO does have though is an expected set of complex steps,
> that need to be executed in a correct order. That sounds like workflow
> to me. And when you think about integration with external tooling
> trying to drive a TripleO deployment, you want to avoid reimplementing
> that workflow every time. So, you do have some need for either a
> deployment API, or a generic workflow API that can handle a
> deployment, if you want to avoid the reimplementation of that logic
> (and I think we do).

Yeah, although we have to be careful with the definition of "workflow"
here - we're really talking about collecting a series of user inputs (either
interactively or programmatically), which is different from, say, defining a
repeatable non-interactive workflow for orchestrating upgrades.

Obviously the generic workflow API we could investigate is Mistral, but atm
I'm not sure if it's a good fit for the "series of inputs" use case,
whereas it probably is for the upgrades.  I guess I see the former as more
of a user interface workflow, but we can certainly discuss where this logic
lives further.

Thanks for all the feedback!

Steve


