[openstack-dev] [Heat] A concrete proposal for Heat Providers

Adrian Otto adrian.otto at rackspace.com
Tue Apr 30 14:33:08 UTC 2013

There are ways to handle revision control for a zip. For example, you can simply expand it, check the folder in, and re-zip it when you are ready to move it somewhere. That way you can see changes to the individual files.

Another way is to reference a repo folder directly, which eliminates the need for the zip entirely for setups that have network access to the repo. The zip file can be used in those cases where no suitable network is present.
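The expand/commit/re-zip workflow could be sketched like this (a minimal illustration with made-up file names and contents; the stdlib zipfile module stands in for whatever packaging tool you actually use):

```python
import pathlib
import zipfile

# Illustrative bundle layout; in practice this would be the expanded
# template folder that lives under version control.
bundle = pathlib.Path("bundle")
bundle.mkdir(exist_ok=True)
(bundle / "template.yaml").write_text("resources: {}\n")

# Re-zip the tracked folder only when you need to move it somewhere.
with zipfile.ZipFile("bundle.zip", "w") as zf:
    for path in sorted(bundle.rglob("*")):
        zf.write(path, path.relative_to(bundle.parent))

# On the receiving side, expand it back into individual, diffable files.
with zipfile.ZipFile("bundle.zip") as zf:
    names = zf.namelist()
print(names)  # ['bundle/template.yaml']
```

Since the zip is derived from the checked-in folder, it never needs to be committed itself.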


-----Original message-----
From: Zane Bitter <zbitter at redhat.com>
To: Alex Heneveld <alex.heneveld at cloudsoftcorp.com>
Cc: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
Sent: Mon, Apr 29, 2013 18:42:34 GMT+00:00
Subject: Re: [openstack-dev] [Heat] A concrete proposal for Heat Providers

On 26/04/13 13:24, Alex Heneveld wrote:
> D+ here too.  seems unanimous.  (a bad mark at school but here a good
> thing!)
> this captures the single biggest thing i think most of us care about:
> composition / more flexible re-use.
> and does it in a manageable incremental way.
> in the interest of separating issues, these are two other areas which
> are mostly independent but i think also very important:
> * intuitive DSL -- something easier to write and easier to read. two
> offenders which it seems not hard to fix are long type names eg
> "OS::Nova::Server" (for which we could introduce supertypes/aliases eg
> "server"?),

Aliases are easy to implement, and it's also easy to make a translation
tool to reverse them, so if this would move the needle on adoption then
it sounds like a good thing. Presumably we'd want to wait until we have
native resource types implemented for everything, since there's no point
aliasing to AWS resource types.
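As a rough sketch of what such an alias table and its reverse translation might look like (the short name "server" is taken from Alex's suggestion above; nothing here is an agreed Heat feature):

```python
# Hypothetical alias table; "OS::Nova::Server" is a real native type,
# the short alias is illustrative only.
ALIASES = {
    "server": "OS::Nova::Server",
}
REVERSE = {full: short for short, full in ALIASES.items()}

def expand(resource_type: str) -> str:
    """Turn a short alias into its full native resource type."""
    return ALIASES.get(resource_type, resource_type)

def shorten(resource_type: str) -> str:
    """The reverse translation tool: map full names back to aliases."""
    return REVERSE.get(resource_type, resource_type)
```

Because the mapping is one-to-one, a template could be translated in either direction without losing information, which is what makes the reverse tool cheap.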

> and embedded scripts/mappings (which a bundle could alleviate eg ZIP as
> per Thomas's suggestion)

Yeah, the embedded script markup is horrible at the moment - it's
basically write-only code. There are a number of ideas being kicked
around in these two blueprints:


And I think that the YAML format plus whichever of those we end up
implementing will go a long way towards making the scripts actually
readable again.

I understand the appeal of the zip archive - you can mix different file
formats easily - but the downside is that the template becomes basically
a black box: you can't, e.g., check it in to version control and do diffs.

Not that things are perfect at the moment - nested stacks already make
it difficult to guarantee that you can e.g. launch an identical stack
again at a later date, unless you have a lot of very disciplined
practices around how you use them. Provider stacks would make that even
worse. I'm not sure yet what the best answer is.

Maybe we should just scrap our API and replace it with a Git repo that
updates your stack every time you do git push. I'm only half joking.

> * modelling relationships -- if we can support typed relationships
> between components (resources)
> then we have a leg-up both for auto-wiring and for acting on wait
> conditions at a finer granularity.  this
> makes the descriptions smaller and more portable, makes deployment
> faster, and allows more use cases.
> (i've experimented with modelling this as requirements and fulfillments,
> which seems parsimonious and
> natural, but that's part of what we need to figure out!)

I'm still trying to get my head around this part, and in particular to
what extent it overlaps with existing configuration management tools
(i.e. Puppet/Chef). Concrete use cases are definitely the most helpful
thing for trying to figure that out though, so thanks!

>      as an example, here are two use cases i believe we would struggle
> with (people would have to roll
> their own co-ordination; but please correct me if i'm wrong):
> ** some of the hadoop distros need to know the ip's of a quorum of
> servers in order to seed the
> config for those servers; and

As Steve said, it seems like this might be possible even now. It would
be interesting to dig into this a bit deeper and either confirm that or
refine our understanding of the problem.

> ** some databases require special connectors installed at the client
> (e.g. install php_mysql at the
> wordpress node iff the database is mysql -- a conclusion from TOSCA is
> that if this logic lives in a
> relationship/requirement type PhpDatabaseRequirement then we can reuse
> it, as opposed to
> baking in java+php+ruby+etc support into all database resources, or
> mysql+db2+oracle+etc
> support into all appserver resources)  [hope this is clear, please ping
> me if not]

This makes sense, but it seems to me pretty independent of the
underlying infrastructure. Can you make a case for why this kind of
thing should be baked in to OpenStack, rather than using tools like
Puppet and Chef?

I'm going to take a stab at it... say that different cloud providers
offer different DBaaS services, perhaps some based on MySQL and others
on... I don't know... something different. Then, in order to spin up
your stack on any cloud you have to configure your app to talk to the
right DBaaS, and you don't want to modify your template for each one.

Am I close? It sounds interesting, but also doesn't seem like a high
priority (not as high a priority as actually getting a DBaaS into
OpenStack, for example).
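Alex's php_mysql example above could be sketched roughly as a reusable requirement type (all names here are hypothetical, loosely following the TOSCA-style idea in the quote; the postgresql package name is an assumed analogue):

```python
# Hypothetical sketch: the client-side setup logic lives in one reusable
# relationship/requirement type, instead of baking java+php+ruby support
# into every database resource, or mysql+db2+oracle support into every
# appserver resource.
class PhpDatabaseRequirement:
    """Picks the PHP connector package for whatever database fulfils it."""

    CONNECTORS = {
        "mysql": "php_mysql",       # e.g. install php_mysql on the wordpress node
        "postgresql": "php_pgsql",  # assumed analogous package name
    }

    def connector_for(self, database_kind: str) -> str:
        # The fulfilling database declares its kind; the requirement type
        # decides what the client node needs installed.
        return self.CONNECTORS[database_kind]
```

A Java or Ruby client would get its own requirement type with its own connector table, so N client kinds times M databases stays roughly N + M pieces of logic rather than N x M.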

> but we need to iterate on these and meanwhile we need to make progress!
> so:
> +1 to the small focussed separate blueprints Zane proposes

Cool, thanks for the input, it's very helpful.


OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org