[openstack-dev] [Heat] A concrete proposal for Heat Providers

Thomas Spatzier thomas.spatzier at de.ibm.com
Mon May 6 19:25:40 UTC 2013


Zane Bitter <zbitter at redhat.com> wrote on 06.05.2013 16:38:43:

> From: Zane Bitter <zbitter at redhat.com>
> To: openstack-dev at lists.openstack.org,
> Date: 06.05.2013 16:40
> Subject: Re: [openstack-dev] [Heat] A concrete proposal for Heat Providers
>
> On 02/05/13 19:39, Tripp, Travis S wrote:
> > I agree with Clint.  This essentially boils down to being able to
> describe my application(s) and how to deploy them separately from
> the resources where I want to deploy them. Then based on the target
> environment, I have a way to deploy my app to different resources
> without having to modify / copy paste my app model everywhere.  In
> TOSCA terms, this is what the "relationship" concept provides. For
> example, I can design my application in one template and
> infrastructure(s) in another template. Then I essentially can have
> different deployments where I use the relationship to establish a
> source and a target for these relationships (App part A is associated
> to Infra part X). I just spoke with Thomas Spatzier and I think he
> is going to provide a simplified JSON or YAML representation of this
> concept.
>
> This use case makes complete sense to me.
>
> Here's the part I'm struggling with:
>
> > For example, I can design my application in one template and
> infrastructure(s) in another template.
>
> So... this exists, right?
>
> We call the first part "configuration management" (Puppet, Chef) and the

Yes and no. I think Chef/Puppet solve parts of the problem. If you have
self-contained artifacts for deploying a piece of software, they work fine.
It starts to get complicated when you have to pass parameters between
different components, potentially on different hosts, and when you have to
keep timing dependencies in mind. This is where orchestration starts, IMO.
This relates to a point you raise further below, so more thoughts there.

> second part "cloud orchestration" (Heat, CloudFormation).
>
> The question here is how do we manage the interface between those two
> parts. In doing so, there are two things I firmly believe we need to
> avoid:
>
> 1) Writing our own configuration management
>   - This is not a good use of the OpenStack project's resources. There is
>     no particular reason to think we could do it any better than existing
>     solutions. Even if we could the result does not belong in OpenStack,
>     since it would be just as desirable in other contexts.

Agreed: if problems have been solved already, let's use what is there. But
let's keep the restrictions of existing solutions in mind and see if we can
fill the gaps.

>
> 2) Depending on one particular configuration management tool
>   - Neither is there any particular reason to think that the existing
>     solutions are the best that will ever exist, nor that one can ever
>     be universal. If we lock OpenStack in to one particular tool, we
>     also lock out any future innovation in the field and all users for
>     whom that tool does not work.

Couldn't agree more. All such tools are very good and useful when used for
what they do well. At the point where timing dependencies, parameter
passing etc. come in, we should see what an orchestration layer can do to
fill the gaps. So we need a way to invoke existing things like Chef (i.e.
pass parameters, wait for results ...), look at the similarities of such
solutions to define a kind of pluggable interface, and define how the data
flow works.
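
Just to sketch the direction (this is made-up pseudo-YAML, not an existing
Heat or TOSCA syntax, and all names are invented for illustration): from
the engine's point of view, each component would declare the parameters it
needs and the results it produces; the engine passes the inputs to the tool
binding (Chef, Puppet, a bash script ...) and waits for the outputs before
it proceeds:

  components:
    database:
      automation: puppet           # pluggable binding
      outputs: [host, port]        # results the engine waits for
    app_server:
      automation: chef             # different binding, same interface
      inputs:                      # parameters the engine passes in
        db_host: database.host     # taken from the database's outputs
        db_port: database.port
      outputs: [url]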

>
>
> One thing that would be very useful to me - and maybe the folks with
> TOSCA experience would be well-placed to help out here - would be to
> describe what Heat would actually _do_ to deploy an application, with a
> focus on data flow, rather than just looking at what the template would
> look like.

I can try to come up with a concrete example along the lines of the DSL
samples to sketch this. For now, maybe just a few sentences to outline it.

In TOSCA, we have a common interface for parameters in declarative
processing: Node Type (or call them component) properties. This is what the
orchestration engine understands. The automation (e.g. scripts) for a
component type is aware of those properties. A TOSCA orchestrator has
bindings for several kinds of automation (e.g. specific script languages)
to pass properties in the right fashion (e.g. as env variables for a bash
script). Something a few folks have been thinking about is a kind of
generic resource type in Heat that supports different automation frameworks
and implements parameter passing for the respective framework (e.g. a
"ChefManagedResource").

Relationships in TOSCA are used for two main purposes: (1) for deriving
processing order, and (2) for defining data flow. For (2), the convention
is that the properties of the "source" and "target" of a relation are
available on the other side. So you can basically reference data using
symbolic names (e.g. target.my_property) without knowing the exact ID of
the target, which allows composability.
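
A minimal sketch of that (again just illustrative pseudo-YAML, not actual
TOSCA or Heat syntax): the relationship tells the engine to process "db"
before "web_app" (processing order), and "web_app" refers to a property of
the relationship's target via a symbolic name, without knowing the target's
concrete ID (data flow):

  components:
    web_app:
      properties:
        # resolved against the target of the relationship below
        db_endpoint: target.my_property
    db:
      properties:
        my_property: 192.0.2.10    # e.g. the database's address
  relationships:
    - type: connectsTo
      source: web_app
      target: db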

... I'll try to come up with something more concrete.

>
> Maybe take Travis's excellent diagram as a starting point. Where would
> App Part A and App Part B be defined? Where would the deployments be
> defined? Which actors could define them and their constituent parts? How
> would Heat combine the data? What calls would Heat end up making as a
> result?
>
> Subject to the two constraints above, I am very supportive of this
> concept. But I still don't feel like I understand how it would work in
> practice.
>
> cheers,
> Zane.
>



