[openstack-dev] [Mistral][Heat] Feedback on the Mistral DSL
Zane Bitter
zbitter at redhat.com
Wed May 7 15:29:32 UTC 2014
Hi Mistral folks,
Congrats on getting the 0.0.2 release out. I had a look at Renat's
screencast and the examples, and I wanted to share some feedback based
on my experience with Heat. Y'all will have to judge for yourselves to
what extent this experience is applicable to Mistral. (Assume that
everything I know about it was covered in the screencast and you won't
be far wrong.)
The first thing that struck me looking at
https://github.com/stackforge/mistral-extra/tree/master/examples/create_vm
is that I have to teach Mistral how to talk to Nova. I can't overstate
how surprising this is as a user, because Mistral is supposed to want to
become a part of OpenStack. It should know how to talk to Nova! There is
actually an existing DSL for interacting with OpenStack[1], and here's
what the equivalent operation looks like:
os server create $server_name --image $image_id --flavor $flavor_id \
    --nic net-id=$network_id
Note that this is approximately exactly 96.875% shorter (or 3200%
shorter, if you're in advertising).
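And the CLI line isn't hiding much, either: the underlying
python-novaclient call that a built-in action could make is roughly a
one-liner too. A minimal sketch, with the credential plumbing assumed
for illustration (this is not real Mistral code):

    # Rough sketch of what a built-in "create server" action could do
    # internally; the credential handling here is assumed.
    from novaclient.v1_1 import client

    def create_server(auth, name, image_id, flavor_id, network_id):
        nova = client.Client(auth['username'], auth['password'],
                             auth['tenant_name'], auth['auth_url'])
        return nova.servers.create(name, image_id, flavor_id,
                                   nics=[{'net-id': network_id}])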
This define-everything-up-front approach reminds me a bit of TOSCA, in the way that it requires you
to define every node type before you use it. (Even TOSCA is moving away
from this by developing a Simple Profile that includes the most common
ones in the box - an approach I assume/hope you're considering also.)
The stated reason for this is that they want TOSCA templates to run on
any cloud regardless of its underlying features (rather than take a
lowest-common-denominator approach, as other attempts at hybrid clouds
have done). Contrast that with Heat, which is unapologetically an
orchestration system *for OpenStack*.
I note from the screencast that Mistral's stated mission is to:
Provide a mechanism to define and execute
tasks and workflows *in OpenStack clouds*
(My emphasis.) IMO the design doesn't reflect the mission. You need to
decide whether you are trying to build the OpenStack workflow DSL or the
workflow DSL to end all workflow DSLs.
That problem could be solved by including built-in definitions for core
OpenStack services in a similar way to std.* (i.e. take the TOSCA Simple
Profile approach), but I'm actually not sure that goes far enough. The
lesson of Heat is that we do best when we orchestrate *only* OpenStack APIs.
For example, when we started working on Heat, there was no autoscaling
in OpenStack so we implemented it ourselves inside Heat. Two years
later, there's still no autoscaling in OpenStack other than what we
implemented, and we've been struggling for a year to try to split Heat's
implementation out into a separate API so that everyone can use it.
I feel a similar way about things like std.email.
OpenStack is missing something equivalent to SNS, where a message on a
queue can trigger an email or another type of notification, and a lot of
projects are going to eventually need something like that. It would be
really unfortunate if all of them went out and invented it
independently. It's much better to implement such things as their own
building blocks that can be combined in complex ways rather
than adding that complexity to a bunch of services.
Such a notification service could even be extended to do std.http-like
ReST calls, although personally the whole idea of OpenStack services
calling out to arbitrary HTTP APIs makes me extremely uncomfortable.
Much better IMO to just post messages to queues and let the receiver
(long) poll for them.
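To make that concrete, here's roughly what the posting side looks like
against the Marconi v1 HTTP API. This is a sketch only; the endpoint,
queue name and token handling are assumptions:

    # Post a notification to a Marconi queue (v1 HTTP API). The
    # endpoint, queue name and token below are placeholders.
    import json
    import uuid
    import requests

    MARCONI = 'http://marconi.example.com:8888/v1'
    HEADERS = {'Client-ID': str(uuid.uuid4()),
               'X-Auth-Token': 'TOKEN',   # from Keystone, elided here
               'Content-Type': 'application/json'}

    def notify(queue, payload, ttl=300):
        url = '%s/queues/%s/messages' % (MARCONI, queue)
        body = json.dumps([{'ttl': ttl, 'body': payload}])
        requests.post(url, headers=HEADERS, data=body).raise_for_status()

    notify('notifications', {'event': 'something.happened'})

Whatever is subscribed to that queue - an email gateway, a webhook
caller, another workflow - decides what to do with the message.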
So I would favour a DSL that is *much* simpler, and replaces all of
std.* with functions that call OpenStack APIs, and only OpenStack APIs,
including the API for posting messages to Marconi queues, which would be
the method of communication to the outside world. (If the latter part
sounds a bit like SWF, it's for a good reason, but the fact that it
would allow direct access to all of the OpenStack APIs before
resorting to an SDK makes it much more powerful, as well as providing a
solid justification for why this should be part of OpenStack.)
The ideal way to get support for all of the possible OpenStack APIs
would be to do it by introspection on python-openstackclient. That means
you'd only have to do the work once and it would stay up to date. This
would avoid the problem we have in Heat, where we have to implement each
resource type separately. (This is the source of a great deal of Heat's
value to users - the existence of tested resource plugins - but also the
thing that stops us from iterating the code quicker.)
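To illustrate what I mean by introspection: openstackclient builds its
command set from setuptools entry points (via cliff), so the commands
and their implementations are discoverable at runtime. A sketch, where
the 'openstack.' group prefix is an assumption on my part:

    # Enumerate the commands python-openstackclient registers as
    # setuptools entry points; cliff loads its CLI from these.
    import pkg_resources

    def discover_commands():
        commands = {}
        for dist in pkg_resources.working_set:
            for group, entries in dist.get_entry_map().items():
                if not group.startswith('openstack.'):
                    continue    # group naming is an assumption here
                for name, entry_point in entries.items():
                    # entry_point.load() gives the command class, from
                    # which the accepted arguments could be derived.
                    commands[(group, name)] = entry_point
        return commands

    for (group, name), ep in sorted(discover_commands().items()):
        print('%s: %s -> %s' % (group, name, ep.module_name))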
I'm also unsure that it's a good idea for things like timers to be set
up inside the DSL. I would prefer that the DSL just define workflows and
export entry points to them. Then have various ways to trigger them:
from the API manually, from a message to a Marconi queue, from a timer,
&c. The latter two you'd set up through the Mistral API. If a user
wanted a single document that set up one or more workflows and their
triggers, a Heat template would do that job.
I can see that your goal is to make a system that works with any
existing application without changes. I think this is not as important
as you think; the lesson of AWS is that developers will happily write
their applications to use your service if you make it simple enough for
them to understand. In a year's time, will anybody think twice about
spinning up a container to poll a message queue and proxy the messages
into ReST calls, if that's what they need to do to interface with some
legacy/outside code?
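That adapter really is a small amount of code. A minimal sketch, again
assuming the Marconi v1 claim API and an imaginary legacy endpoint:

    # Claim messages from a Marconi queue and proxy each one to a
    # legacy HTTP API. Endpoints, queue name and auth are placeholders.
    import json
    import time
    import uuid
    import requests

    MARCONI = 'http://marconi.example.com:8888/v1'
    LEGACY_API = 'http://legacy.example.com/api/events'
    HEADERS = {'Client-ID': str(uuid.uuid4()),
               'X-Auth-Token': 'TOKEN',   # from Keystone, elided here
               'Content-Type': 'application/json'}

    def proxy_forever(queue='workflow-events'):
        claim_url = '%s/queues/%s/claims?limit=5' % (MARCONI, queue)
        while True:
            resp = requests.post(claim_url, headers=HEADERS,
                                 data=json.dumps({'ttl': 300, 'grace': 60}))
            if resp.status_code == 204:    # nothing queued right now
                time.sleep(5)
                continue
            for message in resp.json():
                requests.post(LEGACY_API, data=json.dumps(message['body']),
                              headers={'Content-Type': 'application/json'})
                # A real adapter would also delete the claimed message.

    proxy_forever()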
As I said at the beginning, you are the experts and you'll have to
decide for yourselves how much of this feedback is relevant to Mistral.
You certainly know a bunch of things that I don't about
workflow-as-a-service. (I am, of course, interested in being
re-educated!) But I hope that some of our experience on the Heat project
might be helpful to you.
cheers,
Zane.
[1] http://docs.openstack.org/developer/python-openstackclient/