[openstack-dev] [Heat] A concrete proposal for Heat Providers

Thomas Spatzier thomas.spatzier at de.ibm.com
Fri May 10 14:17:51 UTC 2013


Hi Clint,

sorry for taking some time, but let me finally sketch a proposal that could
cover the use case (with a couple of details still to be figured out, of
course).
Referring back to Travis' diagram, an abstract description of the problem
from my perspective would be: there are software / app layer components
comp_a, comp_b etc. that are related, and we want the flexibility to deploy
them either on one server or on multiple servers (in general: on different
infrastructure layouts) without having to duplicate or re-code the
app-level template for each deployment option.

An important point is: we can talk about app-layer tiers which do not
necessarily have a 1:1 mapping to infrastructure tiers. You can, for
example, have 2 tiers in your app architecture and still run the 2 tiers on
one server.

That said, you can have a 2-tier app with comp_a in tier 1 and comp_b in
tier 2, which requires a "2 tier hosting environment" (see also the
attached diagram).
There can then be multiple nested templates for such a two-tier hosting
env: one which maps both tiers onto the same server, and one which maps
each app tier onto a separate server.

Those nested templates (or just templates - you could use them stand-alone)
are providers for "2 tier hosting environment". Now if we pick up the
"Environments" idea from Adrian's DSL wiki page, we could define an
environment TEST where we say to use "TinyDeployment" as provider for the 2
tier hosting env, and in PROD we say to use "LargeDeployment" (see
diagram). So when the orchestration engine is asked to find a realization
for 2 tier hosting env, it selects it depending on the environment you
deploy to.
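To make the environment idea concrete, here is a minimal sketch in Python (not actual Heat code) of how an engine could select a provider template per environment; the environment names and the lookup function are illustrative assumptions, not part of any existing API:

```python
# Illustrative only: each environment maps abstract component types
# to the provider template registered for them in that environment.
ENVIRONMENTS = {
    "TEST": {"my::custom::2_tier_env": "TinyDeployment"},
    "PROD": {"my::custom::2_tier_env": "LargeDeployment"},
}

def resolve_provider(environment: str, abstract_type: str) -> str:
    """Return the provider template registered for abstract_type,
    depending on the environment the user deploys to."""
    try:
        return ENVIRONMENTS[environment][abstract_type]
    except KeyError:
        raise LookupError(
            "no provider for %r in environment %r"
            % (abstract_type, environment))
```

So the same app-layer template resolves to TinyDeployment in TEST and to LargeDeployment in PROD without being touched.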

Here are some snippets to outline what the templates could look like:

The app-layer model would be:

# application layer template
# all the header info, inputs, ...

components:
  # some software component
  comp_a:
    type: os::heat::softwareconfig::chef_solo
    # all the details of comp_a
    requires:
      # c1 means tier1 slot of hosting env
      hosted_on: hosting_env.c1
      # comp_a needs to connect to comp_b
      connects_to: comp_b

  # some other software component
  comp_b:
    type: os::heat::softwareconfig::chef_solo
    # all the details of comp_b
    requires:
      # c2 means tier2 slot of hosting env
      hosted_on: hosting_env.c2

  # representation/placeholder for hosting env
  # where details are provided in separate template
  hosting_env:
    # assumes that there is a provider for my::custom::2_tier_env
    # which can be a nested template
    type: my::custom::2_tier_env


Note the "hosting_env" component, which just states that it is of a
specific type. The orchestration engine will have to look up a provider for
that type at deployment time. It will also have to figure out what .c1 and
.c2 mean (they are like slots that the 2_tier_env component offers, with
the details living in the nested templates).
Basically, when the orchestrator resolves the abstract component
"hosting_env" to one of the two infrastructure templates, it merges two
graphs into one based on clear contracts and then runs the merged graph
through normal orchestration.
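The graph merge can be sketched roughly like this in Python (again illustrative only, with the dict layout being my own simplification of the templates above): each "hosting_env.cN" slot reference in the app graph is rewritten to the concrete component named by the provider's "provides" contract, and the provider's components are folded into the same graph:

```python
# App-layer graph: relationships reference slots of the abstract
# hosting_env component (simplified from the template snippet above).
app_graph = {
    "comp_a": {"hosted_on": "hosting_env.c1", "connects_to": "comp_b"},
    "comp_b": {"hosted_on": "hosting_env.c2"},
}

# TinyDeployment provider: both slots are backed by one server.
tiny_deployment = {
    "provides": {"c1": "server", "c2": "server"},
    "components": {"server": {"type": "os::nova::compute"}},
}

def merge(app, provider, placeholder="hosting_env"):
    """Return one flat graph: app components with their slot references
    resolved through the provider's 'provides' contract, plus the
    provider's own components."""
    prefix = placeholder + "."
    merged = {}
    for name, relations in app.items():
        resolved = {}
        for rel, target in relations.items():
            if target.startswith(prefix):
                slot = target[len(prefix):]
                target = provider["provides"][slot]
            resolved[rel] = target
        merged[name] = resolved
    merged.update(provider["components"])
    return merged
```

With TinyDeployment both comp_a and comp_b end up hosted on the single "server" component; swapping in LargeDeployment would change only the "provides" mapping, not the app graph.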

The TinyDeployment template could look like this:

# TinyDeployment 2 tier hosting env template
# all the header info ...

implements:
  tiny_env: # this is an arbitrary label
    type: my::custom::2_tier_env # this is what is actually implemented
    provides:
      c1:
        type: os::nova::compute
        provided_by: server
      c2:
        type: os::nova::compute
        provided_by: server

components:
  server:
    type: os::nova::compute


This defines one server, declares that the template implements the
2_tier_env component, and exposes two compute capabilities, which are both
realized by the single server.
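One detail to be figured out is validating the contract: a quick Python sketch (my own illustration, not proposed engine code) of the kind of check an engine could run on the implements section, i.e. every provided_by must name a component declared in the same template, with a matching type:

```python
# TinyDeployment expressed as a plain dict, mirroring the YAML above.
tiny_template = {
    "implements": {
        "tiny_env": {
            "type": "my::custom::2_tier_env",
            "provides": {
                "c1": {"type": "os::nova::compute", "provided_by": "server"},
                "c2": {"type": "os::nova::compute", "provided_by": "server"},
            },
        }
    },
    "components": {"server": {"type": "os::nova::compute"}},
}

def validate_implements(template):
    """Check that each exposed slot is backed by a declared component
    of the declared type."""
    comps = template["components"]
    for impl in template["implements"].values():
        for spec in impl["provides"].values():
            target = spec["provided_by"]
            if target not in comps:
                return False
            if comps[target]["type"] != spec["type"]:
                return False
    return True
```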

The LargeDeployment template would be:

# LargeDeployment 2 tier hosting env template
# all the header info ...

implements:
  large_env: # this is an arbitrary label
    type: my::custom::2_tier_env # this is what is actually implemented
    provides:
      c1:
        type: os::nova::compute
        provided_by: server1
      c2:
        type: os::nova::compute
        provided_by: server2

components:
  server1:
    type: os::nova::compute

  server2:
    type: os::nova::compute


I.e. the same external interface, but the two compute capabilities map to
separate servers.
We can drive this further (details not included here): if the internal
components have properties we want to customize from the outside, we could
have an inputs section like in the draft I posted to gerrit (
https://review.openstack.org/#/c/28598), which maps input parameters to
properties of the components. That is, the inputs section corresponds to
the properties of the abstract "2_tier_env" component type.
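The inputs-to-properties mapping could be sketched like this in Python (the input names and the (component, property) pairs are made up for illustration, not taken from the gerrit draft):

```python
# Illustrative mapping: input parameter -> (component, property).
inputs_mapping = {
    "flavor": ("server", "flavor"),
    "image": ("server", "image"),
}

def apply_inputs(components, mapping, values):
    """Copy each supplied input value onto the component property it
    maps to, so callers customize internals without knowing them."""
    for input_name, value in values.items():
        comp, prop = mapping[input_name]
        components[comp].setdefault("properties", {})[prop] = value
    return components

components = {"server": {"type": "os::nova::compute"}}
apply_inputs(components, inputs_mapping, {"flavor": "m1.small"})
```

The caller only sees the abstract component's properties; whether they land on one server or two is the provider template's business.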

Hope this makes sense and conveys the idea.
I think this also fits very well with Zane's nested stack proposal.

Regards,
Thomas
(See attached file: app_layers_and_infrastructure_layers.pdf)

Clint Byrum <clint at fewbar.com> wrote on 06.05.2013 22:55:10:

> From: Clint Byrum <clint at fewbar.com>
> To: <openstack-dev at lists.openstack.org>,
> Date: 06.05.2013 22:56
> Subject: Re: [openstack-dev] [Heat] A concrete proposal for Heat Providers
>
> On 2013-05-06 07:38, Zane Bitter wrote:
> > On 02/05/13 19:39, Tripp, Travis S wrote:
> >> I agree with Clint.  This essentially boils down to being able to
> >> describe my application(s) and how to deploy them separately from the
> >> resources where I want to deploy them. Then based on the target
> >> environment, I have a way to deploy my app to different resources
> >> without having to modify / copy paste my app model everywhere.  In
> >> TOSCA terms, this is what the "relationship" concept provides. For
> >> example, I can design my application in one template and
> >> infrastructure(s) in another template. Then I essentially can have
> >> different deployments where I use the relationship to establish a
> >> source and a target for these relationship (App part A is associated
> >> to Infra part X). I just spoke with Thomas Spatzier and I think he is
> >> going to provide a simplified JSON or YAML representation of this
> >> concept.
> >
> > This use case makes complete sense to me.
> >
> > Here's the part I'm struggling with:
> >
> >> For example, I can design my application in one template and
> >> infrastructure(s) in another template.
> >
> > So... this exists, right?
> >
> > We call the first part "configuration management" (Puppet, Chef) and
> > the second part "cloud orchestration" (Heat, CloudFormation).
> >
>
> As Thomas said, this is not really the case I'm talking about.
>
> Notice here:
>
> https://github.com/openstack-ops/templates/blob/master/heat.yaml
>
> All the things to deploy the Heat engine on one machine, and the API's
> all together in a scaling group.
>
> https://github.com/openstack-ops/templates/blob/master/keystone.yaml
>
> All the things to deploy Keystone on an instance group.
>
> You'll notice a complete lack of configuration management there. We use
> Metadata to drive os-config-applier and os-refresh-config, which are the
> TripleO minimalistic "config management" equivalents.
>
> What I'd like to do is run keystone and heat-api's on a single instance
> group which encompass all of my stateless API services. To do that, I
> have to duplicate *most* of the templates I've already written, only
> with the metadata merged.
>
> So, I submit that there need to be a way to specify an abstract
> resource which has the Metadata/Parameters/Relationships needed to
> deploy said resource, but does not have the machine placement details.
> Those would then be included in the instance groups, something like:
>
> IncludeMetadata:
>    - HeatAPIMetadata
>    - KeystoneMetadata
>
> I believe what Thomas is saying is that TOSCA already has sophisticated
> ways to express this concept.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: app_layers_and_infrastructure_layers.pdf
Type: application/pdf
Size: 107002 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20130510/96ad4b37/attachment.pdf>

