[openstack-dev] [TripleO][Heat] Tuskar v. Heat responsibilities
James Slagle
james.slagle at gmail.com
Fri Jun 26 19:05:31 UTC 2015
On Thu, Jun 25, 2015 at 5:40 PM, Steven Hardy <shardy at redhat.com> wrote:
> On Tue, Jun 23, 2015 at 04:05:08PM -0400, Jay Dobies wrote:
>> On top of that, only certain templates can be used to fulfill certain
>> resource types. For instance, you can't point CinderBackend to
>> rhel-registration.yaml. That information isn't explicitly captured by Heat
>> templates. I suppose you could inspect usages of a resource type in
>> overcloud to determine the "api" of that type and then compare that to
>> possible implementation templates' parameter lists to figure out what is
>> compatible, but that seems like a heavy-weight approach.
>>
>> I mention that because part of the user experience would be knowing which
>> resource types can have a template substitution made and what possible
>> templates can fulfill it.
>
> This is an interesting observation - assuming we wanted to solve this in
> Heat (and I'm not saying we necessarily should), one can imagine a
> resource_registry feature which works like constraints do for parameters,
> e.g:
>
> parameters:
>   my_param:
>     type: string
>     constraints:
>       - allowed_values: [dog, cat]
>
> We could do likewise in the environment:
>
> resource_registry:
>   OS::TripleO::ControllerConfig: puppet/controller-config.yaml
>   ...
>   constraints:
>     OS::TripleO::ControllerConfig:
>       - allowed_values:
>         - puppet/controller-config.yaml
>         - foo/other-config.yaml
>
> These constraints would be enforced at stack validation time such that the
> environment would be rejected if the optional constraints were not met.
I like this approach.
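Just to make sure I'm reading it right: with such constraints in place, an
environment that mapped the type to something outside the allowed list,
e.g. (template path here is just an example):

  resource_registry:
    OS::TripleO::ControllerConfig: rhel-registration.yaml

...would be rejected at validation time, since rhel-registration.yaml isn't
in the allowed_values for OS::TripleO::ControllerConfig.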
Originally, I was thinking it might be cleaner to encode the
relationship in the opposite direction. Something like this in
puppet/controller-config.yaml:
implements:
  OS::TripleO::ControllerConfig
But then you leave it up to the external tools (a UI, etc.) to know
how to discover these implementing templates. If they're explicitly
enumerated as in your example, that helps UIs/APIs more easily
present these choices. Maybe it could work both ways.
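For example (sketching only, neither of these exists today), a template
could advertise what it implements:

  # puppet/controller-config.yaml
  implements:
    - OS::TripleO::ControllerConfig

while the environment's constraints still whitelist the acceptable
templates, so a UI could discover candidates from the templates and
cross-check them against the environment.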
Regardless, I think this does need to be solved in Heat. We've got
these features in Heat now that are enabling all this flexibility, but
what we're finding is that when you try to use things like the
resource registry at scale, it becomes difficult to manage. So, we end
up writing a new tool, API, or whatever to do that management. If Heat
doesn't solve it, it's likely that Tuskar could evolve into your
"Resource registry index service". I'm not saying that's what Heat
needs to do now, but anything it can enable so that consumers don't
feel they need to write an API encoding a bunch of additional logic
would be a good thing IMO.
Historically, TripleO has leaned heavily on new features in Heat, and
where things weren't available, new tooling was written to address
those needs. Then Heat matures and ends up solving a slightly
different problem in a slightly different way, and we're back to having
this same conversation about how to move forward with the TripleO
tooling built on top of Heat (merge.py, now tuskar, etc).
> That leaves the problem of encoding "this is the resource which selects the
> CinderBackends", which I currently think can be adequately expressed via
> resource namespacing/naming?
I don't think I'm following how the constraints example wouldn't solve
this part as well...
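i.e., if the constraints section can be keyed on any registry entry,
something like (hypothetical syntax following your example above, with
made-up template paths):

  constraints:
    OS::TripleO::CinderBackend:
      - allowed_values:
        - cinder-netapp-config.yaml
        - cinder-other-config.yaml

...both names the resource that selects the backend and enumerates the
valid implementations for it.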
>
>> == Responsibility ==
>>
>> Where should that be implemented? That's a good question.
>>
>> The idea of resolving resource type uses against candidate template
>> parameter lists could fall under the model Steve Hardy is proposing of
>> having Heat do it (he suggested the validate call, but this may be leading
>> us more towards template inspection sorts of APIs supported by Heat).
>>
>> It is also possibly an addition to HOT, to somehow convey an interface so
>> that we can more easily programmatically look at a series of templates and
>> understand how they play together. We used to be able to use the
>> resource_registry to understand those relationships, but that's not going to
>> work if we're trying to find substitutions into the registry.
>>
>> Alternatively, if Heat/HOT has no interest in any of this, this is something
>> that Tuskar (or a Tuskar-like substitute) will need to solve going forward.
>
> I think this problem of template composition is a general one, and I'm keen
> to at least partially solve it via some (hopefully relatively simple and
> incremental) additions to heat.
>
> Clearly there's still plenty of scope for "application builder" type API's
> on top of any new interfaces, and I'm not trying to solve that problem,
> but by exposing some richer interfaces around both template
> inspection/validation and composition, hopefully we make such tasks
> easier.
Yea, I think this is in line with my thinking as well. Heat doesn't have
to do it *all*; there are always going to be rich modeling and UX flows
that will need separate API's. But +1 to Heat making that as
easy as possible, especially when it's just enabling its own features
to be more easily used.
>> = Consolidated Parameter List =
>>
> Basically, this boils down to a heat validation pass which exposes any
> parameters not defined by the parent template, which I think should be
> possible via the recursive validation approach I outlined in my other mail.
> Do we have sufficient consensus to raise a spec for that and potentially
> work up a PoC patch (I can do this)?
FWIW, I liked what you were proposing in the other thread. In thinking
about the deployment flow in the Tuskar-UI, I think it would enable
exposing and setting the nested stack parameters easily (you choose
various resources as displayed in a widget, click a reload/refresh
button, and new parameters are exposed).
What might also be neat is if something like heatclient then had
support to automatically generate stub yaml environment files based on
the output of the template-validate. So it could spit out a yaml file
that had a parameter_defaults: section with all the expected
parameters and their default values, so the user could then just
edit that stub to complete the required inputs.
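Something like (output here is purely illustrative, parameter names made
up):

  parameter_defaults:
    ControllerCount: 1
    NtpServer: ''
    CinderBackendConfig: ''

...where anything without a sensible default is left blank, so it's obvious
to the user what still needs filling in before deploying.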
>> = Saving Undeployed Stack Configuration =
>> == Responsibility ==
>>
>> From what I understand, Heat doesn't have any interest in storing plan
>> parameters in this fashion. Though that comes from a while ago, so it's
>> possible the direction has changed.
>>
>> Otherwise, this one likely still falls in Tuskar. It's possible it's done
>> through client-side calls directly to some sort of storage, but I really
>> don't like the idea of having that much logic tied to a client (more
>> specifically, tied into a Python client in the event that shops looking to
>> integrate aren't using Python).
>
> This is true, Heat doesn't want to store stack definitions other than for
> live stacks, backups during update, and snapshots.
>
> As outlined above, I think there are many (better) ways to store a series
> of text files than inside heat, which is one reason why we've resisted
> these sorts of "catalog" requirements in the past.
>
> I don't really think it's tied to a client, if we can shape things such
> that it's just pushing some collection of data to e.g. swift after following
> a well-defined series of pre-deployment validation steps.
I agree that there is some aspect to this that is just based around
yaml files and git/swift/whatever. I don't think that we necessarily
need an API for that either.
What TripleO does have, though, is an expected set of complex steps
that need to be executed in the correct order. That sounds like workflow
to me. And when you think about integration with external tooling
trying to drive a TripleO deployment, you want to avoid reimplementing
that workflow every time. So if you want to avoid reimplementing that
logic (and I think we do), you do need either a deployment API or a
generic workflow API that can handle a deployment.
--
-- James Slagle