[openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

Zane Bitter zbitter at redhat.com
Mon Mar 21 18:48:47 UTC 2016

Late to the party, but this comparison seems misleading to me...

On 26/01/16 04:46, Steven Hardy wrote:
> It's one more thing, which is already maintained and has an active
> community, vs yet-another-bespoke-special-to-tripleo-thing.  IMHO we have
> *way* too many tripleo specific things already.
> However, let's look at the "python knowledge" thing in a bit more detail.
> Let's say, as an operator, I want to wire in an HTTP call to an internal asset
> management system.  The requirement is to log an HTTP call with some
> content every time an overcloud is deployed or updated.  (This sort of
> requirement is *very* common in enterprise environments IME.)
> In the mistral case[1], you'd simply add two lines to your TripleO
> deployment workflow yaml[2].  The modification would look something like:
>
>    http_task:
>       action: std.http url='assets.foo.com' <some arguments>
> Now, consider the bespoke API case.  You have to do some or all of the
> following:
> - Find the python code which handles deployment and implements the workflow
> - Pull and fork the code base, resolve any differences between the upstream
>    version and whatever packaged version you're running
> - Figure out how to either hack in your HTTP calls via a python library, or
>    build a new plugin mechanism to enable out-of-tree deployment hooks
> - Figure out a bunch of complex stuff to write unit tests, battle for
>    weeks/months to get your code accepted upstream (or, maintain the fork
>    forever and deal with rebasing, packaging, and the fact that your entire
>    API is no longer supported by your vendor because you hacked on it)
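For concreteness, the two-line change Steven describes would sit inside a
Mistral workflow definition something like the following (a sketch only; the
workflow and task names here are illustrative, not the actual tripleo-common
workflow):

```yaml
version: '2.0'

deploy_overcloud:
  tasks:
    deploy:
      action: tripleo.deploy_plan      # illustrative existing task
      on-success: http_task
    http_task:                         # <-- the two added lines
      action: std.http url='assets.foo.com' <some arguments>
```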

If I were doing it I would write a piece of WSGI middleware - a highly 
standardised thing, not specific to TripleO or even OpenStack, that a 
non-python-ninja could easily figure out from StackOverflow - then 
deploy it on the undercloud machine and add it into the paste pipeline.

   import requests
   from oslo_middleware import base as wsgi  # or whichever Middleware base the service uses

   class AssetControl(wsgi.Middleware):
       def process_request(self, req):
           # Log every incoming API request to the asset management system
           requests.get('http://assets.foo.com', data={'some': 'arguments'})

It's true that the 'deploy it on the machine' step is probably more 
complicated than the 'upload a new workflow' one. OTOH most sysadmins 
are *really* good at installing stuff on a machine, and there is a HUGE 
advantage in not ever having to merge your forked workflow definitions.
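(The paste step itself is small, for what it's worth - roughly the following,
assuming the middleware module is importable and exposes the usual paste
filter_factory; the section and app names here are made up:)

```ini
[filter:asset_control]
paste.filter_factory = asset_control:AssetControl.factory

[pipeline:main]
pipeline = asset_control authtoken apiapp
```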

> Which of these is most accessible to a traditional non-python-ninja
> sysadmin?

Given the above, I would genuinely have to say the second. WSGI and 
Requests are *very* well documented *everywhere*.

Though the biggest difference, I suspect, comes when you have to 
incorporate some logic in there. Say you want to log the request to a 
different server when the user's manager's oldest pet's middle name 
begins with 'Q' or something. (I would venture to speculate that this 
kind of requirement is, ahem, not all that uncommon in enterprise 
environments either ;) In Python this is pretty trivial and you always 
have StackOverflow to help when you get stuck; if you're having to 
implement it in some obscure DSL that knows nothing about your 
application then you could be in for a world of hurt.
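For example, a rule like that stays trivial in plain Python (a sketch; the
nested-dict user record and the server URLs are made up for illustration):

```python
def pick_log_server(user):
    # Illustrative business rule: route logging to a different server when
    # the user's manager's oldest pet's middle name starts with 'Q'.
    oldest_pet = max(user['manager']['pets'], key=lambda pet: pet['age'])
    if oldest_pet['middle_name'].startswith('Q'):
        return 'https://assets-q.foo.com'
    return 'https://assets.foo.com'

user = {'manager': {'pets': [{'age': 3, 'middle_name': 'Rex'},
                             {'age': 9, 'middle_name': 'Quincy'}]}}
print(pick_log_server(user))  # → https://assets-q.foo.com
```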
