[openstack-dev] [Heat] A concrete proposal for Heat Providers

Zane Bitter zbitter at redhat.com
Fri Apr 26 17:41:25 UTC 2013


On 26/04/13 09:39, Thomas Spatzier wrote:
>> Of course that's exactly what we're all about on this project :). So my
>> proposal is to allow users to define their own resource types using a
>> Heat template. Heat would know to use this template instead of a
>> built-in type from a "Provider" member in the Resource definition that
>> contains a URL to the template. (I'm appropriating the name "Provider"
>> from Rackspace's DSL proposal for now because inventing new names for
>> things that already exist is a sucker's game.)
>
> Yes, this is exactly a use case we also have in mind in TOSCA. That's the
> concept of a NodeType with its NodeTypeImplementation - or just call it
> provider to stick to the current DSL proposal, which is totally fine with
> me. In any case it is something that allows the user to create and use a
> custom implementation of any kind of service (my special version of MySQL,
> my special Tomcat release ...).

Great, that was one of my big questions: does this map cleanly to what 
TOSCA does? Sounds like yes :)
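
To make that concrete, the usage I have in mind would look something
like this (the type name, URL and property values are all invented for
illustration; the exact syntax is far from decided):

    "Resources": {
        "AppServer": {
            "Type": "AWS::EC2::Instance",
            "Provider": "http://example.com/templates/my_tomcat.template",
            "Properties": {
                "ImageId": "F17-x86_64",
                "InstanceType": "m1.small"
            }
        }
    }

The template at that URL would be just an ordinary Heat template; its
Parameters would become the properties of the resource, and its Outputs
would become the attributes you can read with Fn::GetAtt.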

> All such kinds of providers really play on top of the base IaaS level (VM,
> OS, net, storage), so they should not require injection of magic python
> code into the orchestration engine itself.
> Core infrastructure like compute, network and storage probably falls
> into a different category, since it would require tweaking
> OpenStack itself.
>
>>
>> These are the tasks that come to mind that would be required to
>> implement this (each of these bullet points could become a blueprint):
>>
>> * Create a Custom resource type that is based on a nested stack but,
>> unlike the AWS::CloudFormation::Stack type, has properties and
>> attributes inferred from the parameters and outputs (respectively) of
>> the template provided.
>
> If nested stack is the way to go, and if that allows us to reuse a lot of
> the current Heat code, perfect.
> What I am asking myself - and maybe you can help answer this based on your
> Heat insight - is: stacks seem to result at some point (in some cases?) in
> the deployment of VMs.

In most cases, yes. I mean, there are only two interesting things you can 
do in the cloud: store data or compute. Everything else is just glue. 
There are a few use cases that consist of setting up just storage (e.g. 
RDS or S3 in AWS terms) plus glue, but most will include both storage and 
compute. And while there are a number of ways to store data, all 
computing is based on Nova servers, so most templates are going to 
include one or more of those.

> So if I use multiple nested stacks, with each one
> deploying a couple of VMs, will I end up with the sum of VMs that all the
> stacks create? Or will it be possible to, for example, place Tomcat
> defined in one Stack on the same VM as MySQL defined in another Stack? I
> think it should be possible to have means for defining collocation or
> anti-collocation constraints.

Heat as it exists now manages resources that map more or less directly 
to objects you can manipulate with the OpenStack APIs. Nova Servers, 
Swift Buckets, Quantum Networks, &c. The software that runs on an 
instance is not modelled in the template; it's just data that is passed 
to Nova when a server is created.
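
For example, today the software configuration appears only as opaque
data inside the server definition - something along these lines:

    "WebServer": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": "F17-x86_64",
            "InstanceType": "m1.small",
            "UserData": { "Fn::Base64": "#!/bin/bash\nyum -y install mysql-server\n" }
        }
    }

Heat just hands the UserData through to Nova; nothing in the model knows
or cares that it happens to install MySQL.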

I'm struggling to understand why you would want Tomcat from one stack 
co-located with MySQL from another stack (which implies a completely 
different application)... isn't the popularity of virtualisation 
entirely due to people _not_ wanting to have to do that? Can you 
elaborate on the use case a little more here?

>> * Allow JSON values for parameters.
>> * Introduce the concept of _stack_ Metadata, and provide a way to access
>> it in a template (pseudo-parameter?).
>> * Modify resource instantiation to create a Custom resource whenever a
>> resource has a non-empty "Provider" attribute.
>> * Introduce a schema for attributes (i.e. allowed arguments to
>> Fn::GetAttr) [desirable anyway for autogenerating documentation]
>> * Add an API to get a generic template version of any built-in resource
>> (with all properties/outputs defined) that can be easily customised to
>> make a new provider template.
>>
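
To illustrate a couple of those (the syntax here is pure speculation on
my part): a JSON-typed parameter and a stack metadata pseudo-parameter
might end up looking something like:

    "Parameters": {
        "DeploymentConfig": { "Type": "Json" }
    }

and

    { "Ref": "Heat::StackMetadata" }

where both the "Json" type name and the "Heat::StackMetadata"
pseudo-parameter name are made up for the sake of the example.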
>> A possible avenue for increasing flexibility:
>> * Add support for more template functions that manipulate the template
>> directly:
>>     - Array/dictionary (uh, object) lookup
>>     - more string functions?
>>     - maybe conditionals?
>>     - definitely NOT loops/map.
>>
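
For the array/dictionary lookup I'm imagining something along the lines
of the existing Fn::Select, extended to work on JSON objects as well as
lists - e.g. pulling a key out of the JSON-typed parameter from the
earlier example:

    { "Fn::Select": [ "db_host", { "Ref": "DeploymentConfig" } ] }

(Again, treat this as a sketch, not a spec.)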
>> What might all of this give us?
>>    + Users are no longer dependent on the operator to provide the
>> resource type they need (perhaps for cross-cloud compatibility), but can
>> supply their own.
>>    + Users can effectively subclass built-in types. For example, you
>> could create a Puppet Instance template that configures an instance as a
>> Puppet slave, then selectively use that in place of a regular instance
>> and just pass the metadata to specialise it.
>>    + Users can share their provider templates: as soon as a second person
>> is trying to configure a puppet slave in a template we're wasting an
>> opportunity - this will enable work like that to be shared.
>>    + This is infinitely flexible at the platform layer - anybody (Puppet,
>> Chef, OpenShift, &c.) can publish an Instance provider template
>>
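
On the provider side, the Puppet example might look roughly like an
ordinary template whose Parameters mirror the instance properties and
whose Outputs mirror the attributes (heavily elided; every name here is
invented for illustration):

    {
        "Parameters": {
            "ImageId": { "Type": "String" },
            "InstanceType": { "Type": "String" }
        },
        "Resources": {
            "TheInstance": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": { "Ref": "ImageId" },
                    "InstanceType": { "Ref": "InstanceType" },
                    "UserData": { "Fn::Base64": "#!/bin/bash\n# install/configure the puppet agent here\n" }
                }
            }
        },
        "Outputs": {
            "PublicIp": { "Value": { "Fn::GetAtt": [ "TheInstance", "PublicIp" ] } }
        }
    }

Anyone could then point the "Provider" field of a regular instance at
that template and get a Puppet slave instead.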
>> How else might we improve this? Well, having to load a template from a
>> URL is definitely a limitation - albeit not a new one (the
>> AWS::CloudFormation::Stack resource type already has this limitation).
>> Perhaps we should let the user post multiple, named, implementations of
>> a resource type and reference them by name (instead of URL) in the
>> "Provider" field.
>
> I know, some people like to stick to text files only, but I still wanted to
> bring up the idea of a package (basically a zip file) again, like we have in
> TOSCA or CAMP, where you can bundle multiple files together and deploy them
> as a unit.
> Also imagine non-text-only content, e.g. if you want to provide a tarball
> with some middleware installable that your provider needs. Having this all
> packaged up would allow for something like this:
> 1) the package contains all the descriptions (yaml files); those can point
> to binaries (e.g. tarballs) inside the package by relative URLs, meaning
> that at runtime when the stack is instantiated, those binaries should be
> deployed on VMs
> 2) import the package into a "provider catalog", and while doing this, place
> all binary files in some repository (e.g. on swift), which gives you new
> URLs to those files
> 3) tweak the URLs in the yaml files to use the new pointers (e.g. into
> swift)
> 4) at runtime, when instantiating an implementation, pull the files from
> the repo (e.g. swift) and deploy them.

The downside of zip files is that you can't (usefully) check them in to 
version control. But I can definitely imagine some sort of feature where 
the front end (i.e. python-heatclient) would resolve local paths for you 
and automatically upload all of the required files.
>
>>
>> * Modify the resource instantiation to search among multiple named
>> definitions (either Custom or Plugin) for a resource type, according to
>> the "Provider" name.
>> * Add an API for posting multiple named implementations of a resource
>> type.
>>
>>    + The user can modify the default Provider for a resource if they so
>> desire (but this is tenant-wide... or perhaps that's where Environments
>> come in).
>
> This is an interesting point: what is the default provider? And would users
> be able to override it with their implementation? I would assume no. E.g.
> if one user/tenant chooses to have their own providers, they should
> probably call out explicitly that they want to use them, and not impact
> all other users.

The default provider would be the Plugin if it exists. My thought was 
that the user would be able to override it - although there would need 
to be some way to force it to use the plugin to prevent recursion. That 
would only be on a per-user (or at most per-tenant) basis though. And 
potentially scoped to one particular "environment", if we introduced 
that concept as well.
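
In template terms the resolution might end up looking like this (all
hypothetical syntax): a resource that names a Provider uses the matching
named implementation, and one that doesn't falls back to the built-in
plugin:

    "AppServer": {
        "Type": "AWS::EC2::Instance",
        "Provider": "my_puppet_instance"
    }

with "my_puppet_instance" having been posted previously via the
implementations API, and perhaps some reserved name like
"Provider": "Plugin" to force the built-in and break the recursion. But
that's getting ahead of ourselves.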

cheers,
Zane.


