[openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

Steve Baker sbaker at redhat.com
Sun Apr 6 21:27:54 UTC 2014


On 05/04/14 04:47, Tomas Sedovic wrote:
> Hi All,
>
> I was wondering if the time has come to document what exactly we are
> doing with tripleo-heat-templates and merge.py[1], figure out what needs
> to happen to move away from it, and raise the necessary blueprints on
> the Heat and TripleO sides.
>
> (merge.py is a script we use to build the final TripleO Heat templates
> from smaller chunks)
>
> There probably isn't an immediate need for us to drop merge.py, but its
> existence indicates either deficiencies within Heat or our unfamiliarity
> with some of Heat's features (possibly both).
>
> I worry that the longer we stay with merge.py the harder it will be to
> move forward. We're still adding new features and fixing bugs in it (at
> a slow pace but still).
>
> Below is my understanding of the main merge.py functionality and a rough
> plan of what I think might be a good direction to move in. It is almost
> certainly incomplete -- please do poke holes in this. I'm hoping we'll
> get to a point where everyone's clear on what exactly merge.py does and
> why. We can then document that and raise the appropriate blueprints.
>
>
> ## merge.py features ##
>
>
> 1. Merging parameters and resources
>
> Any uniquely-named parameters and resources from multiple templates are
> put together into the final template.
>
> If a resource of the same name appears in multiple templates, an error
> is raised, unless it's of a whitelisted type (Nova server, launch
> configuration, etc.), in which case the definitions are merged into a
> single resource.
>
> For example: merge.py overcloud-source.yaml swift-source.yaml
>
> The final template has all the parameters from both. Moreover, these two
> definitions of notCompute0Config will be joined together:
>
> #### overcloud-source.yaml ####
>
>   notCompute0Config:
>     Type: AWS::AutoScaling::LaunchConfiguration
>     Properties:
>       ImageId: '0'
>       InstanceType: '0'
>     Metadata:
>       admin-password: {Ref: AdminPassword}
>       admin-token: {Ref: AdminToken}
>       bootstack:
>         public_interface_ip:
>           Ref: NeutronPublicInterfaceIP
>
>
> #### swift-source.yaml ####
>
>   notCompute0Config:
>     Type: AWS::AutoScaling::LaunchConfiguration
>     Metadata:
>       swift:
>         devices:
>           ...
>         hash: {Ref: SwiftHashSuffix}
>         service-password: {Ref: SwiftPassword}
>
>
> The final template will contain:
>
>   notCompute0Config:
>     Type: AWS::AutoScaling::LaunchConfiguration
>     Properties:
>       ImageId: '0'
>       InstanceType: '0'
>     Metadata:
>       admin-password: {Ref: AdminPassword}
>       admin-token: {Ref: AdminToken}
>       bootstack:
>         public_interface_ip:
>           Ref: NeutronPublicInterfaceIP
>       swift:
>         devices:
>           ...
>         hash: {Ref: SwiftHashSuffix}
>         service-password: {Ref: SwiftPassword}
>
>
> We use this to keep the templates more manageable (instead of having one
> huge file) and also to be able to pick the components we want: instead
> of `undercloud-bm-source.yaml` we can pick `undercloud-vm-source` (which
> uses the VirtualPowerManager driver) or `ironic-vm-source`.
>
>
>
> 2. FileInclude
>
> If you have a pseudo resource with the type `FileInclude`, merge.py will
> look at the specified Path and SubKey and splice the resulting dictionary
> in at that point (substituting any Parameters given):
>
> #### overcloud-source.yaml ####
>
>   NovaCompute0Config:
>     Type: FileInclude
>     Path: nova-compute-instance.yaml
>     SubKey: Resources.NovaCompute0Config
>     Parameters:
>       NeutronNetworkType: "gre"
>       NeutronEnableTunnelling: "True"
>
>
> #### nova-compute-instance.yaml ####
>
>   NovaCompute0Config:
>     Type: AWS::AutoScaling::LaunchConfiguration
>     Properties:
>       InstanceType: '0'
>       ImageId: '0'
>     Metadata:
>       keystone:
>         host: {Ref: KeystoneHost}
>       neutron:
>         host: {Ref: NeutronHost}
>         tenant_network_type: {Ref: NeutronNetworkType}
>         network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
>         bridge_mappings: {Ref: NeutronBridgeMappings}
>         enable_tunneling: {Ref: NeutronEnableTunnelling}
>         physical_bridge: {Ref: NeutronPhysicalBridge}
>         public_interface: {Ref: NeutronPublicInterface}
>         service-password:
>           Ref: NeutronPassword
>       admin-password: {Ref: AdminPassword}
>
> The result:
>
>   NovaCompute0Config:
>     Type: AWS::AutoScaling::LaunchConfiguration
>     Properties:
>       InstanceType: '0'
>       ImageId: '0'
>     Metadata:
>       keystone:
>         host: {Ref: KeystoneHost}
>       neutron:
>         host: {Ref: NeutronHost}
>         tenant_network_type: "gre"
>         network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
>         bridge_mappings: {Ref: NeutronBridgeMappings}
>         enable_tunneling: "True"
>         physical_bridge: {Ref: NeutronPhysicalBridge}
>         public_interface: {Ref: NeutronPublicInterface}
>         service-password:
>           Ref: NeutronPassword
>       admin-password: {Ref: AdminPassword}
>
> Note the `NeutronNetworkType` and `NeutronEnableTunnelling` parameter
> substitutions.
>
> This is useful when you want to pick only bits and pieces of an existing
> template. In the example above, `nova-compute-instance.yaml` is a
> standalone template you can launch on its own. But when launching the
> overcloud, you don't want to merge nova-compute-instance wholesale. All
> you want is the NovaCompute0Config resource plus a few others.
>
> I admit I'm not entirely clear on how/why we need this, though.
> FileInclude is being used just for the overcloud nova compute resources
> (server, config, waitcondition, etc.).
>
> The undercloud as well as the additional overcloud resources (swift,
> block storage) seem to get by without FileInclude.
>
>
> 3. OpenStack::Role metadata key
>
> I'm not sure what this does or why we would need it. The Ironic
> templates used it, but that was removed because the templates were broken.
>
> Right now it's used in `tuskar-source.yaml` and `undercloud-source.yaml`
> only.
>
>
> 4. OpenStack::ImageBuilder::Elements metadata key
>
> Again, this seems to receive custom handling by merge.py, but I'm
> unclear as to why.
>
>
> 5. Scaling
>
> We can mark resources in a Heat template as scalable by giving them the
> '0' suffix. We can then use the `--scale` parameter to make copies of these:
>
>   SwiftStorage0CompletionHandle:
>     Type: AWS::CloudFormation::WaitConditionHandle
>
>   SwiftStorage0:
>     Type: OS::Nova::Server
>     Properties:
>       image:
>         {Ref: SwiftStorageImage}
>       flavor: {Ref: OvercloudSwiftStorageFlavor}
>       key_name: {Ref: KeyName}
>     ...
>
> $ merge.py swift-storage-source.yaml --scale SwiftStorage=2
>
> result:
>
>   SwiftStorage0CompletionHandle:
>     Type: AWS::CloudFormation::WaitConditionHandle
>
>   SwiftStorage0:
>     Type: OS::Nova::Server
>     Properties:
>       image:
>         {Ref: SwiftStorageImage}
>       flavor: {Ref: OvercloudSwiftStorageFlavor}
>       key_name: {Ref: KeyName}
>     ...
>
>   SwiftStorage1CompletionHandle:
>     Type: AWS::CloudFormation::WaitConditionHandle
>
>   SwiftStorage1:
>     Type: OS::Nova::Server
>     Properties:
>       image:
>         {Ref: SwiftStorageImage}
>       flavor: {Ref: OvercloudSwiftStorageFlavor}
>       key_name: {Ref: KeyName}
>     ...
>
> This seems rather close to what OS::Heat::ResourceGroup[2] does. Can we
> just switch to using that instead? I seem to remember reading something
> about the order of deleted resources being wrong for our purposes.
>
> Is that correct? What are the specifics?
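>
> For reference, here is a rough (untested) sketch of what a ResourceGroup
> version of the above might look like -- `count` and `resource_def` are
> from the ResourceGroup docs, the rest is illustrative:
>
>   SwiftStorage:
>     Type: OS::Heat::ResourceGroup
>     Properties:
>       count: 2
>       resource_def:
>         type: OS::Nova::Server
>         properties:
>           image: {Ref: SwiftStorageImage}
>           flavor: {Ref: OvercloudSwiftStorageFlavor}
>           key_name: {Ref: KeyName}
>
> Presumably the wait condition handle would have to move into a nested
> (provider) template so that it gets scaled together with the server.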
>
>
> 6. Merge::Map helper function
>
> Returns the list of values from a dictionary given as a parameter.
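>
> For illustration (the resource names here are made up), something like:
>
>   Merge::Map:
>     NovaCompute0: {Ref: NovaCompute0}
>     NovaCompute1: {Ref: NovaCompute1}
>
> gets replaced in the merged template by the list of the dictionary's
> values:
>
>   - {Ref: NovaCompute0}
>   - {Ref: NovaCompute1}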
>
> I'm assuming this would be great to have, but isn't blocking us from
> moving to native Heat templates.
>
> Clint sent a proposal for this to the Heat developers but it went largely
> unanswered. Perhaps we could revisit it?
>
>
> 7-... Whatever I've missed!
>
> (here's where you get to point out my vast ignorance)
>
>
>
> ## a vision of a world without merge.py ##
>
> It seems to me that provider resources ought to be able to handle the
> composability side of things (while still allowing us to use them as
> standalone templates) and that resource groups should handle the scaling.
>
> We could keep roughly the same structure: a separate template for each
> OpenStack service (compute, block storage, object storage, ironic, nova
> baremetal). We would then use Heat environments[3] to treat each of these
> templates as a custom resource (e.g. OS::TripleO::Nova,
> OS::TripleO::Swift, etc.).
>
> Our overcloud and undercloud templates would then reference all of these,
> each wrapped in a ResourceGroup or a more sophisticated scaling mechanism.
>
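> As a rough sketch (the names here are illustrative, not a concrete
> proposal), the environment would map our custom types to the component
> templates:
>
>   #### overcloud-env.yaml ####
>
>   resource_registry:
>     OS::TripleO::Nova: nova-compute-instance.yaml
>     OS::TripleO::Swift: swift-storage-source.yaml
>
> and the overcloud template would then scale them with something like:
>
>   SwiftStorage:
>     Type: OS::Heat::ResourceGroup
>     Properties:
>       count: {Ref: SwiftStorageScale}
>       resource_def:
>         type: OS::TripleO::Swift
>         properties:
>           ...
>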
>
>
> ## tangential but nice to have stuff ##
>
> * move to HOT
>   - It's where any new syntax extensions will happen
>
> * move to Software Config
>   - Clint has a WIP patch (rough sketch below):
>   - https://review.openstack.org/#/c/81666/
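>
> My (possibly naive) understanding is that software-config based templates
> would build on OS::Heat::SoftwareConfig and OS::Heat::SoftwareDeployment,
> roughly like this (HOT syntax, names illustrative):
>
>   swift_config:
>     type: OS::Heat::SoftwareConfig
>     properties:
>       group: script
>       config: |
>         #!/bin/bash
>         # set up swift here
>
>   swift_deployment:
>     type: OS::Heat::SoftwareDeployment
>     properties:
>       config: {get_resource: swift_config}
>       server: {get_resource: swift_storage_server}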
>
>
> What have I missed and what are your thoughts?
> Tomas
>
>
> [1]:
> https://github.com/openstack/tripleo-heat-templates/blob/master/tripleo_heat_merge/merge.py
> [2]:
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup
> [3]:
> http://docs.openstack.org/developer/heat/template_guide/environment.html
>
The maintenance burden of merge.py can be gradually reduced if features
in it are deleted as they stop being needed. At some point in this
process merge.py will need to accept HOT templates, and the smaller
merge.py is by then, the lower the risk of breakage during that changeover.

How about this for the task order?
1. remove OpenStack::ImageBuilder::Elements support from merge.py
2. move to software-config based templates
3. remove the following from merge.py
   3.1. merging params and resources
   3.2. FileInclude
   3.3. OpenStack::Role
4. port tripleo templates and merge.py to HOT
5. use some HOT replacement for Merge::Map, delete Merge::Map from tripleo
6. move to resource providers/scaling groups for scaling
7. rm -f merge.py



