[openstack-dev] [Heat] Future Vision for Heat

Steven Hardy shardy at redhat.com
Tue Apr 23 18:20:30 UTC 2013

On Tue, Apr 23, 2013 at 10:50:06AM +0200, Thomas Spatzier wrote:
> Steven,
> > Excerpt from
> > From: Steven Hardy <shardy at redhat.com>
> > To: OpenStack Development Mailing List
> > <openstack-dev at lists.openstack.org>,
> > Date: 23.04.2013 09:29
> > Subject: Re: [openstack-dev] [Heat] Future Vision for Heat
> >
> > I agree, the model interpreter probably won't end up in the API (at least
> > in the short/medium term, because it will necessarily be coupled with the
> > model processor, aka "parser" in our previous discussions)
> I agree that the Model Interpreter shouldn't be part of the API layer.
> Actually, before I read your mail, I had already started to draw an
> alternative architecture diagram as a base for discussion. I updated the
> wiki page, so if you refresh, you should see my version with some notes
> on the changes I made.

So your updated diagram is definitely getting closer to what I envisage;
a couple of comments:

- I don't see us having to read monitoring data from the AMQP bus; my
  expectation is that we define an alarm and associated metric(s) in
  ceilometer, then simply wait for a callback (a web-hook request to our
  ReST API which notifies us of an alarm state change) - see the sketch
  after this list

- The workflow service layer is not on our immediate roadmap (although I
  support the idea), as I'm not sure who will implement it - I'll raise a
  BP to track the idea, but I'm not sure this is realistic as a near-term
  goal unless someone indicates they're interested in working on it
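
To make the first point concrete, here's roughly what I mean by "define
an alarm and wait for a callback". This is purely illustrative - the
client interface, field names and webhook URL below are assumptions on
my part, not tested code against the current ceilometer API:

    # Sketch only: define an alarm in ceilometer whose action is a
    # pre-signed Heat webhook URL, then do nothing until ceilometer
    # calls us back on an alarm state change
    from ceilometerclient import client

    cclient = client.get_client(2,
                                os_username='heat',
                                os_password='secret',
                                os_tenant_name='service',
                                os_auth_url='http://keystone:5000/v2.0')

    # Hypothetical pre-signed URL served by the Heat ReST API; a POST
    # to it signals the relevant policy/resource in the stack
    webhook_url = 'http://heat-api:8004/v1/signal/example-signed-url'

    cclient.alarms.create(
        name='cpu-high',
        meter_name='cpu_util',
        comparison_operator='gt',
        threshold=80.0,
        statistic='avg',
        period=60,
        evaluation_periods=3,
        alarm_actions=[webhook_url])  # ceilometer calls us back here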

> > The way I see it working (short/medium term) is:
> >
> > - Stack template (either CFN or HOT) is passed to either the ReST or CFN
> >   API
> > - Template is passed to the engine via RPC call
> > - Template is processed by the "model interpreter", which mangles e.g.
> >   CFN to the internal format (HOT), or does nothing if it's already in
> >   HOT format
> > - Processed template is passed to the "Model Processor", aka "parser",
> >   which renders the template into our internal objects, ready to
> >   orchestrate
> I am asking myself if there is a model interpreter at all in the Heat
> Engine, or if the Engine just parses (processes) the model into its
> internal objects. Since CFN is tightly built in at the moment and I thought
> you said it would be hard to decouple it short term, I was assuming that
> the Model Processor for now would have to understand both Heat Template and
> CFN. I tried to mark this with the little CFN box in the diagram.

Ok, so most of this is true: we have a thing called the "parser", which
transforms the data-structure representing the template into a "Stack"
object, which contains the various "Resource" objects and related data.
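
Heavily simplified, the shape of what the parser produces is something
like this (a sketch, not the actual heat.engine.parser code):

    # Simplified sketch of the objects the parser builds -- not the
    # real heat.engine.parser code, just the shape of it
    class Resource(object):
        def __init__(self, name, definition):
            self.name = name
            self.type = definition['Type']
            self.properties = definition.get('Properties', {})

    class Stack(object):
        def __init__(self, name, template):
            self.name = name
            # One Resource object per entry in the template's
            # Resources section
            self.resources = dict(
                (rname, Resource(rname, rdef))
                for rname, rdef in template['Resources'].items())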

My aim is to avoid the engine parser/Model-Processor having to understand
two formats, since I think this would be complex, error-prone, and
difficult to maintain.

So I see us doing the following:

- Add concepts identified as missing to our internal data model
- Have a thin "interpreter" which performs a one-time rendering/translation
  from the incoming template format to the one supported by the parser.
  Initially this will transform DSL/HOT->HeatCFN+ExtraStuff; then, when
  this works, we can flip the translation incrementally so the parser
  eventually becomes tightly coupled to DSL/HOT and the translation becomes
  CFN->HOT (at which point the translation could be done somewhere other
  than the heat engine)
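
In rough pseudo-python, the idea is something like this (all names here
are invented for illustration - none of this is actual Heat code):

    # The point is that translation happens exactly once, up front,
    # so the parser only ever sees its native format
    import json
    import yaml

    def is_hot(tmpl):
        # Hypothetical format check; the real HOT marker is still
        # being defined
        return 'heat_template_version' in tmpl

    def hot_to_cfn(tmpl):
        # Minimal, invented mapping of HOT sections onto the CFN-style
        # structure (plus extensions) the current parser understands
        return {'Description': tmpl.get('description', ''),
                'Parameters': tmpl.get('parameters', {}),
                'Resources': tmpl.get('resources', {}),
                'Outputs': tmpl.get('outputs', {})}

    def interpret(template_body):
        # One-time translation to the parser's native format
        try:
            tmpl = json.loads(template_body)      # CFN templates are JSON
        except ValueError:
            tmpl = yaml.safe_load(template_body)  # HOT will be YAML
        if is_hot(tmpl):
            return hot_to_cfn(tmpl)
        return tmpl  # already native; hand straight to the parser

Later, flipping the translation just means making HOT the native format
and moving the equivalent of hot_to_cfn out as a CFN->HOT step.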

> > Long term (when the internal HOT format is stable), it might make
> > sense to push the "model interpreter" up to the API level, probably
> > via a template-interpreter-plugin model, but I see this as something
> > we shouldn't care about during the first-pass, where the priority
> > must be capturing/defining the new template language and keeping our
> > existing functionality working.
> That seems to make sense. Or I could also imagine having the interpreter
> layer (actually I see it more as a translation layer) as a layer below
> the API layer, basically by pulling the grayed-out box (out of scope and
> add-on for now) in my diagram into the core. That layer would do
> translation only and leave interpretation completely to the Heat
> Engine's Model Processor.

This is sort of the translation plugin model I mentioned earlier, but as
previously discussed, I think it's best we consider this out-of-scope for
Heat until it becomes obvious there are people (i.e. new Heat core members
or regular contributors) willing to implement and maintain the translator.
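
For the record, if someone did pick it up, I'd imagine something along
these lines - an entirely hypothetical sketch, none of these names exist
in Heat today:

    # Hypothetical translator-plugin registry at the API layer
    TRANSLATORS = {}

    def register_translator(fmt):
        def wrapper(func):
            TRANSLATORS[fmt] = func
            return func
        return wrapper

    @register_translator('cfn')
    def cfn_to_hot(tmpl):
        # Trivial placeholder mapping, for illustration only
        return {'description': tmpl.get('Description', ''),
                'resources': tmpl.get('Resources', {})}

    def translate(fmt, tmpl):
        if fmt not in TRANSLATORS:
            raise ValueError('no translator for format %r' % fmt)
        return TRANSLATORS[fmt](tmpl)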

