[openstack-dev] [Heat] A concrete proposal for Heat Providers

Thomas Spatzier thomas.spatzier at de.ibm.com
Fri Apr 26 07:39:03 UTC 2013


Hi Zane,

a clear (D) from me :-)
I added some comments/thoughts inline below.

Regards,
Thomas

Zane Bitter <zbitter at redhat.com> wrote on 25.04.2013 21:35:51:

> From: Zane Bitter <zbitter at redhat.com>
> To: OpenStack Development Mailing List
> <openstack-dev at lists.openstack.org>,
> Date: 25.04.2013 21:36
> Subject: [openstack-dev] [Heat] A concrete proposal for Heat Providers
>
> Greetings Heaters,
> I'm hearing a lot about (and struggling to keep track of) multiple
> people working on competing proposals for how a Heat template language
> should look if we designed it from scratch and, while that's valuable in
> helping to figure out the primitives we need, I'd also like to approach
> it from the other direction and start figuring out what path we need to
> get on to bring the new feature direction that our users want to
> fruition. We all agreed at the Summit that we need to pursue these new
> features in an incremental manner, and we should not forget *why*:
>
>     A complex system that works is invariably found to have evolved
>     from a simple system that worked.
>                       -- http://en.wikipedia.org/wiki/Gall's_law
>
> What follows is a concrete proposal for how we could implement one of
> the requested features, starting from use cases, through the predicted
> code changes required and looping back around to user benefits. It
> certainly does not purport to implement every feature requested by
> users. It is likely wrong in the particulars, certainly in ways that
> will not be discovered at least until we have implemented it. But it is
> a concrete proposal. It begins with an excellent summary of the use case
> from Adrian...
>
>
> On 13/04/13 00:36, Adrian Otto wrote:
>  > Goal: Deploy my app on my chosen OpenStack based public cloud, using
> Heat for Dev/Test. For Production, deploy to my private OpenStack cloud.
>  >
>  > Scenario: My app depends on a MySQL database. My public cloud
> provider has a hosted "mysql" service that is offered through a Provider
> plug-in. It's there automatically because my cloud hosting company put
> it there.  I deploy, and finish my testing on the public cloud. I want
> to go to production now.
>  >
>  > Solution: The Provider gives you a way to abstract the different
> cloud implementations. I establish an equivalent Provider on my private
> OpenStack cloud using RedDwarf. I set up a Provider that offers "mysql"
> in my private cloud. Now the same setup works on both clouds, even
> though the API for my local "mysql" service may actually differ from the
> database provisioning API in the public cloud. Now I deploy on my
> "production" Environment in my private cloud, and it works!
>
> So, the first and most important thing to note is that this is exactly
> how Heat works _now_.
>
> There is a resource type called AWS::RDS::DBInstance and the operator of
> your cloud can choose a plugin to implement that interface. The plugin
> shipping with Heat actually just spins up a somewhat hacky Heat stack
> running MySQL... in future we hope to ship a RedDwarf Lite plugin, and
> of course any cloud provider with their own DBaaS could easily write a
> plugin to interface with that. (Note that when we do see a RedDwarf Lite
> plugin we'll probably also see a variant called something like
> OS::RedDwarf::DBInstance with more OpenStack-specific properties &c.)
>
> So that's how Heat works today. How can we make this better? Well, one
> thing obviously sucks: your cloud operator, and not you, gets to decide
> which plugin is used. That sorta makes sense when it's an interface to
> an XaaS thing in their cloud, but if it's just a Nova instance running
> MySQL and you don't like the version your operator has gone with, you
> are SOL. You can try running your own Heat engine, but you're probably
> going to have to really hack at it first because whenever anything
> in-guest has to talk back to you, the endpoint is obtained from the same
> Keystone catalog that you're using to talk to the other services. And no
> cloud operator in the world - not even your friendly local IT department
> - is going to let users upload Python code to run in-memory in their
> orchestration engine along with all of the other users' code.
>
> If only there were some sort of language for defining OpenStack services
> that could be safely executed by users...
>
> Of course that's exactly what we're all about on this project :). So my
> proposal is to allow users to define their own resource types using a
> Heat template. Heat would know to use this template instead of a
> built-in type from a "Provider" member in the Resource definition that
> contains a URL to the template. (I'm appropriating the name "Provider"
> from Rackspace's DSL proposal for now because inventing new names for
> things that already exist is a sucker's game.)

Yes, this is exactly a use case we also have in mind in TOSCA: the concept
of a NodeType with its NodeTypeImplementation - or just call it a Provider
to stick with the current DSL proposal, which is totally fine with me. In
any case, it is something that allows the user to create and use a custom
implementation of any kind of service (my special version of MySQL, my
special Tomcat release ...).

All such providers really sit on top of the base IaaS level (VM, OS,
network, storage), so they should not require injecting magic Python code
into the orchestration engine itself.
Core infrastructure such as compute, network and storage probably falls
into a different category, since it would require tweaking OpenStack
itself.
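Just to check my understanding, a resource referencing a Provider template might look roughly like this - purely illustrative on my part; the resource name, the URL and the exact syntax are made up, only the "Provider" member and the AWS::RDS::DBInstance type are from your proposal:

```json
{
  "Resources" : {
    "AppDatabase" : {
      "Type" : "AWS::RDS::DBInstance",
      "Provider" : "http://example.com/templates/my_mysql.template",
      "Properties" : {
        "DBName" : "myapp"
      }
    }
  }
}
```

i.e. same resource interface as today, but the implementation comes from the user-supplied template instead of the operator's plugin.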

>
> These are the tasks that come to mind that would be required to
> implement this (each of these bullet points could become a blueprint):
>
> * Create a Custom resource type that is based on a nested stack but,
> unlike the AWS::CloudFormation::Stack type, has properties and
> attributes inferred from the parameters and outputs (respectively) of
> the template provided.

If nested stacks are the way to go, and if that lets us reuse a lot of the
current Heat code, perfect.
One thing I am asking myself - and maybe you can help answer based on your
Heat insight: stacks seem to result, at some point (in some cases?), in the
deployment of VMs. So if I use multiple nested stacks, each deploying a
couple of VMs, will I end up with the sum of all the VMs those stacks
create? Or will it be possible to, for example, place the Tomcat defined in
one stack on the same VM as the MySQL defined in another stack? I think
there should be a means of defining collocation or anti-collocation
constraints.

> * Allow JSON values for parameters.
> * Introduce the concept of _stack_ Metadata, and provide a way to access
> it in a template (pseudo-parameter?).
> * Modify resource instantiation to create a Custom resource whenever a
> resource has a non-empty "Provider" attribute.
> * Introduce a schema for attributes (i.e. allowed arguments to
> Fn::GetAttr) [desirable anyway for autogenerating documentation]
> * Add an API to get a generic template version of any built-in resource
> (with all properties/outputs defined) that can be easily customised to
> make a new provider template.
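If I picture the first bullet (inferring properties and attributes from the provider template) as code, it could be as simple as the following rough sketch - the class and method names are my invention, not actual Heat code:

```python
# Rough sketch only: derive a Custom resource's property schema and
# attribute set from a provider template. Names are illustrative,
# not real Heat internals.

class CustomResource:
    def __init__(self, provider_template):
        self.template = provider_template
        # Template parameters become the resource's properties schema.
        self.properties_schema = {
            name: {"Type": spec.get("Type", "String"),
                   "Default": spec.get("Default")}
            for name, spec in provider_template.get("Parameters", {}).items()
        }
        # Template outputs become the attributes usable via Fn::GetAtt.
        self.attributes = set(provider_template.get("Outputs", {}).keys())

    def validate_properties(self, props):
        # Reject properties the provider template does not declare.
        unknown = set(props) - set(self.properties_schema)
        if unknown:
            raise ValueError("Unknown properties: %s" % sorted(unknown))


template = {
    "Parameters": {"DBName": {"Type": "String", "Default": "wordpress"}},
    "Outputs": {"Endpoint": {"Value": "..."}},
}
res = CustomResource(template)
```

so the nested stack's interface falls out of the template itself, with no extra schema declaration needed.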
>
> A possible avenue for increasing flexibility:
> * Add support for more template functions that manipulate the template
> directly:
>    - Array/dictionary (uh, object) lookup
>    - more string functions?
>    - maybe conditionals?
>    - definitely NOT loops/map.
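For the array/dictionary lookup idea, I imagine something in the spirit of the existing Fn::Select, extended to dictionaries; a tiny sketch (the function name and exact semantics are my assumption):

```python
# Sketch of a possible lookup template function, in the spirit of
# Fn::Select; the name "fn_lookup" and dict support are assumptions.

def fn_lookup(collection, key):
    """Look up an element in a list (by index) or a dict (by key)."""
    if isinstance(collection, list):
        return collection[int(key)]
    return collection[key]


selected = fn_lookup(["small", "medium", "large"], 1)
mapped = fn_lookup({"dev": "m1.small", "prod": "m1.large"}, "prod")
```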
>
> What might all of this give us?
>   + Users are no longer dependent on the operator to provide the
> resource type they need (perhaps for cross-cloud compatibility), but can
> supply their own.
>   + Users can effectively subclass built-in types. For example, you
> could create a Puppet Instance template that configures an instance as a
> Puppet slave, then selectively use that in place of a regular instance
> and just pass the metadata to specialise it.
>   + Users can share their provider templates: as soon as a second person
> is trying to configure a puppet slave in a template we're wasting an
> opportunity - this will enable work like that to be shared.
>   + This is infinitely flexible at the platform layer - anybody (Puppet,
> Chef, OpenShift, &c.) can publish an Instance provider template
>
> How else might we improve this? Well, having to load a template from a
> URL is definitely a limitation - albeit not a new one (the
> AWS::CloudFormation::Stack resource type already has this limitation).
> Perhaps we should let the user post multiple, named, implementations of
> a resource type and reference them by name (instead of URL) in the
> "Provider" field.

I know some people like to stick to text files only, but I still wanted to
bring up the idea of a package (basically a zip file) again, like we have
in TOSCA or CAMP, where you can bundle multiple files together and deploy
them as a unit.
Also imagine non-text-only content, e.g. if you want to provide a tarball
with some middleware installable that your provider needs. Having it all
packaged up would allow for something like this:
1) The package contains all the descriptions (YAML files); those can point
to binaries (e.g. tarballs) inside the package by relative URLs, meaning
that at runtime, when the stack is instantiated, those binaries get
deployed on a VM.
2) Import the package into a "provider catalog" and, while doing so, place
all binary files in some repository (e.g. swift), which gives you new URLs
for those files.
3) Tweak the URLs in the YAML files to use the new pointers (e.g. into
swift).
4) At runtime, when instantiating an implementation, pull the files from
the repo (e.g. swift) and deploy them.
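To illustrate step 3 with a toy example - the template structure, the path and the swift URL below are all made up for illustration:

```python
# Toy sketch of step 3: after import, rewrite relative artifact
# references in a template to the repository (e.g. swift) URLs
# assigned at import time. Structure and URLs are invented.

def rewrite_artifact_urls(value, uploaded):
    """Replace relative package paths with repository URLs.

    uploaded maps relative path -> URL assigned at import time.
    """
    if isinstance(value, str):
        return uploaded.get(value, value)
    if isinstance(value, dict):
        return {k: rewrite_artifact_urls(v, uploaded) for k, v in value.items()}
    if isinstance(value, list):
        return [rewrite_artifact_urls(v, uploaded) for v in value]
    return value


template = {"Resources": {"MW": {"Properties": {
    "installable": "artifacts/middleware.tar.gz"}}}}
uploaded = {"artifacts/middleware.tar.gz":
            "http://swift.example.com/v1/acct/pkg/middleware.tar.gz"}
new_template = rewrite_artifact_urls(template, uploaded)
```

After that, the catalog copy of the template is self-contained and the original zip is no longer needed at runtime.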

>
> * Modify the resource instantiation to search among multiple named
> definitions (either Custom or Plugin) for a resource type, according to
> the "Provider" name.
> * Add an API for posting multiple named implementations of a resource type.
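The lookup described in these two bullets might boil down to something like the following sketch - the registry structure and all names are invented, not actual Heat code:

```python
# Sketch: resolve a resource type, preferring a named Provider
# definition (template or plugin) registered for the tenant, and
# falling back to the built-in plugin. Everything here is invented.

BUILTIN_PLUGINS = {"AWS::RDS::DBInstance": "heat.engine.rds_plugin"}

def resolve_resource_type(type_name, provider_name, registry):
    """registry maps (type_name, provider_name) -> implementation."""
    if provider_name:
        try:
            return registry[(type_name, provider_name)]
        except KeyError:
            raise LookupError("No provider %r for %s"
                              % (provider_name, type_name))
    return BUILTIN_PLUGINS[type_name]


registry = {("AWS::RDS::DBInstance", "reddwarf"): "templates/reddwarf.yaml"}
impl = resolve_resource_type("AWS::RDS::DBInstance", "reddwarf", registry)
```

i.e. an empty "Provider" field keeps today's behaviour, so existing templates are unaffected.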
>
>   + The user can modify the default Provider for a resource if they so
> desire (but this is tenant-wide... or perhaps that's where Environments
> come in).

This is an interesting point: what is the default Provider, and would users
be able to override it with their own implementation? I would assume not:
if a user/tenant chooses to have their own providers, they should probably
have to call out explicitly that they want to use them, rather than impact
all other users.

>   + Provider templates can be uploaded directly to Heat instead of e.g.
> to Swift.
>   + Operators can reuse the mechanism to provide versioned Plugins
> and/or multiple implementations of Resource types.
>
>
> So, in summary, this plan appears to provide real, identified value to
> users; relies for the most part on existing technology in Heat (nested
> stacks are actually pretty cool, if I may say so); includes a series of
> changes required to implement the feature; has zero impact on existing
> users; does not IMO decrease maintainability of the Heat code base; and
> is entirely achievable before the Havana feature freeze, which is only
> ~4 months from now.
>
> That said, I am not wedded to any of the particular details of this
> proposal - though I do regard it as a real proposal, not just a straw
> man. If anybody has suggestions for where I've got the requirements,
> concepts or implementation ideas wrong then rip in. But I'd love to hear
> either a discussion at the level of these concrete details or a
> competing proposal at a similar level.
>
>
> For easier collation, please categorise your response as follows:
>   (A) I'm appalled at the mere suggestion
>   (B) This just prevents us solving the real problem (please specify)
>   (C) Meh
>   (D) This looks kind of interesting
>   (E) OMG!! UNICORNS!!!
>
> ;)
>
> cheers,
> Zane.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



