[openstack-dev] [Heat] A concrete proposal for Heat Providers
Zane Bitter
zbitter at redhat.com
Tue May 7 15:30:34 UTC 2013
On 06/05/13 21:25, Thomas Spatzier wrote:
> Zane Bitter <zbitter at redhat.com> wrote on 06.05.2013 16:38:43:
>> So... this exists, right?
>>
>> We call the first part "configuration management" (Puppet, Chef) and the
>
> Yes and no. I think Chef/Puppet solve parts of the problem. If you have
> self-contained artifacts for deploying a piece of software, they work fine.
> It starts to get complicated when you have to pass parameters between
> different components, potentially on different hosts, and when you have to
> keep timing dependencies in mind. This is where orchestration starts
> IMO. ... relates to a point you raise further below, so more thoughts
> below.
Cool, I think we agree on the scope then. Passing parameters between
components and maintaining timing relationships clearly fall on the
orchestration side of the divide.
So it's just a question of defining that interface in a clean way while
remaining agnostic about what's on the other side. I think we all agree
the current interface (essentially a UserData section with wait
condition URLs and parameters spatchcocked in, plus metadata) is pretty
awful :)
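To make "clean" a bit more concrete, here's a rough sketch in Python of the
sort of boundary I have in mind (purely illustrative, none of these names
exist in Heat today): orchestration hands a component its input parameters
and waits for a completion signal carrying outputs, and everything in
between is the configuration tool's business.

    # Illustrative only: the engine knows nothing about Chef, Puppet, bash
    # or whatever else implements apply(); it just passes parameters in and
    # collects outputs for dependent components.
    import abc


    class SoftwareConfig(abc.ABC):
        """Whatever lives on the far side of the orchestration boundary."""

        @abc.abstractmethod
        def apply(self, inputs):
            """Apply the configuration; return outputs for dependents."""


    def deploy(component, inputs):
        # Block until the component signals completion, then hand its
        # outputs on to whatever depends on it.
        return component.apply(inputs)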
>
>> second part "cloud orchestration" (Heat, CloudFormation).
>>
>> The question here is how do we manage the interface between those two
>> parts. In doing so, there are two things I firmly believe we need to
> avoid:
>>
>> 1) Writing our own configuration management
>> - This is not a good use of the OpenStack project's resources. There is
>> no particular reason to think we could do it any better than existing
>> solutions. Even if we could the result does not belong in OpenStack,
>> since it would be just as desirable in other contexts.
>
> Agree, if problems have been solved already, let's use what is there. But
> let's keep in mind the restrictions of existing solutions and let's see if
> we can fill the gaps.
>
>>
>> 2) Depending on one particular configuration management tool
>> - Neither is there any particular reason to think that the existing
>> solutions are the best that will ever exist, nor that one can ever
>> be universal. If we lock OpenStack in to one particular tool, we
>> also lock out any future innovation in the field and all users for
>> whom that tool does not work.
>
> Couldn't agree more. All such tools are very good and useful when used for
> what they do well. At the point where handling timing dependencies,
> parameter passing etc. comes in, we should see what an orchestration
> engine can do to fill the gaps. So we need a way to invoke existing things
> like Chef (i.e. pass parameters, wait for results ...), look at the
> similarities of such solutions to define a kind of pluggable interface,
> and define how the data flow works.
+1
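To put a slightly more concrete face on the "invoke existing things like
Chef" idea, a minimal sketch (the wrapper function below is made up; only
the chef-solo -j usage is real) might be: write the parameters out as a
JSON attributes file, run the tool, and block until it reports success or
failure, which then becomes a succeeded or failed resource on our side.

    # Illustrative sketch of the engine's side of a Chef invocation:
    # pass parameters in, wait for results.
    import json
    import subprocess
    import tempfile


    def run_chef_solo(run_list, attributes):
        # Parameters go in via a JSON attributes file...
        with tempfile.NamedTemporaryFile('w', suffix='.json',
                                         delete=False) as f:
            json.dump(dict(attributes, run_list=run_list), f)
            attrs_path = f.name
        # ...and we wait for the run to finish; a non-zero exit code would
        # map to a failed wait condition / resource in Heat terms.
        return subprocess.call(['chef-solo', '-j', attrs_path]) == 0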
>>
>>
>> One thing that would be very useful to me - and maybe the folks with
>> TOSCA experience would be well-placed to help out here - would be to
>> describe what Heat would actually _do_ to deploy an application, with a
>> focus on data flow, rather than just looking at what the template would
>> look like.
>
> I can try to come up with a concrete example along the lines of the DSL
> samples to sketch this. For now, maybe just a few sentences to outline
> it.
>
> In TOSCA, we have a common interface for parameters in declarative
> processing - those are Node Type (or call it component) properties. This
> is what the orchestration engine understands. Automations (e.g. scripts)
> for a component type are aware of those properties. A TOSCA orchestrator
> has bindings for several automations (e.g. specific script languages) to
> pass properties in the right fashion (e.g. as env variables for a bash
> script). Something a few folks have been thinking of is a kind of generic
> resource type in Heat that supports different automation frameworks and
> implements parameter passing for the respective framework (e.g. a
> "ChefManagedResource").
>
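Just to check I'm reading the binding part right, a toy version in Python
(illustrative only, not a proposal for actual code) would be something like
mapping each declared property into the form the automation expects, e.g.
environment variables for a bash script:

    import os
    import subprocess


    def run_script_with_properties(script_path, properties):
        # The "binding" for shell-script automations: each component
        # property becomes an environment variable the script can read.
        env = dict(os.environ)
        for name, value in properties.items():
            env[name.upper()] = str(value)
        return subprocess.call(['/bin/bash', script_path], env=env)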
> Relationships in TOSCA are used for two main purposes: (1) for deriving
> processing order, and (2) for defining data flow. For (2), the convention
> is that the properties of the "source" and "target" of a relation are
> available on the other side. So you can basically reference data using
> symbolic names (e.g. target.my_property) without knowing the exact ID of
> the target (which allows composability).
>
> ... I'll try to come up with something more concrete.
Sounds great, thanks :)
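In the meantime, here's roughly how I picture the data-flow part in Python
terms (again purely illustrative, with made-up structures): a relationship
exposes the properties of the component on the other end under a symbolic
name, so nothing needs to hard-code the target's ID.

    def resolve(reference, relationship, components):
        # Resolve a symbolic reference like "target.my_property" against
        # whichever components sit on either end of the relationship.
        side, prop = reference.split('.', 1)
        component_id = relationship[side]  # 'source' or 'target'
        return components[component_id]['properties'][prop]


    components = {
        'db': {'properties': {'my_property': 'db.example.internal'}},
        'web': {'properties': {}},
    }
    relationship = {'source': 'web', 'target': 'db'}
    # The web component reaches the db's data without knowing its ID:
    print(resolve('target.my_property', relationship, components))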
cheers,
Zane.