[openstack-dev] [marconi] Reconsidering the unified API model

Janczuk, Tomasz tomasz.janczuk at hp.com
Mon Jun 9 21:19:53 UTC 2014


I could not agree more with the need to re-think Marconi’s current approach to scenario breadth and implementation extensibility/flexibility. The broader the HTTP API surface area, the more limited the implementation choices, and the harder the performance trade-offs become. Marconi’s current HTTP API has a large surface area that aspires to serve too many purposes, and that seriously limits implementation choices. For example, one cannot fully map Marconi’s HTTP APIs onto an AMQP messaging model (I tried last week to write a RabbitMQ plug-in for Marconi, with miserable results).
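
To make the mismatch concrete, here is a rough, untested Python sketch (using the requests library) of the feed-style calls I could not translate into AMQP operations. The host, queue name, and exact response shape are placeholders of mine; the paths and the Client-ID header follow the v1 API roughly as I remember it.

# Illustrative sketch only; host, queue name, and response shapes are placeholders.
import uuid
import requests

BASE = 'http://localhost:8888/v1/queues/demo'
HEADERS = {'Client-ID': str(uuid.uuid4())}

# 1. List messages without consuming them, paging with a marker. AMQP has no
#    non-destructive "browse from position X" operation, so a RabbitMQ driver
#    has nothing natural to translate this into.
listing = requests.get(BASE + '/messages',
                       params={'limit': 10, 'echo': 'true'},
                       headers=HEADERS)
for msg in listing.json().get('messages', []):
    print(msg['href'], msg['body'])

# 2. Fetch a single message by id, again without consuming it. AMQP brokers
#    hand out messages in broker-chosen order and expose no random access
#    by message id.
first_href = listing.json()['messages'][0]['href']
print(requests.get('http://localhost:8888' + first_href, headers=HEADERS).json())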

I strongly believe Marconi would benefit from a very small HTTP API surface that targets queue-based messaging semantics. Queue-based messaging is a well understood and accepted messaging model with a lot of proven prior art and customer demand, from SQS to Azure Storage Queues to IronMQ. While other messaging patterns certainly exist, they are niche compared to the basic queue-based publish/consume pattern. If Marconi aspires to support non-queue messaging patterns, it should be done in an optional way (with a “MAY” in the HTTP API spec, which corresponds to option A below), or as a separate project (option B). Regardless of the choice, the key to success is in keeping the “MUST” HTTP API endpoints of Marconi limited in scope to strict queue-based messaging semantics.
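
Concretely, the “MUST” surface I have in mind boils down to three operations: post (publish), claim (consume), and delete (ack). Below is an illustrative, untested Python/requests sketch; the host, queue name, payload, and response shapes are placeholders of mine, and the paths simply reuse the current v1 layout for familiarity. These three calls map cleanly onto virtually every queueing backend, AMQP brokers included.

# Illustrative sketch only; host, queue name, payloads, and response shapes are placeholders.
import uuid
import requests

BASE = 'http://localhost:8888/v1/queues/work'
HEADERS = {'Client-ID': str(uuid.uuid4())}

# Producer: enqueue a task.
requests.post(BASE + '/messages', headers=HEADERS,
              json=[{'ttl': 300, 'body': {'task': 'resize', 'image': 'abc123'}}])

# Consumer: claim a batch of messages. The claim hides them from other
# consumers for its TTL, roughly what an unacked AMQP delivery gives you.
claim = requests.post(BASE + '/claims', headers=HEADERS,
                      params={'limit': 5},
                      json={'ttl': 60, 'grace': 60})

# Consumer: delete each message once it has been processed (roughly basic.ack).
for msg in claim.json():
    requests.delete('http://localhost:8888' + msg['href'], headers=HEADERS)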

I would be very interested in helping to flesh out such a minimalistic HTTP surface area.

Thanks,
Tomasz Janczuk
@tjanczuk
HP

From: Kurt Griffiths <kurt.griffiths at rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Mon, 9 Jun 2014 19:31:03 +0000
To: OpenStack Dev <openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [marconi] Reconsidering the unified API model

Folks, this may be a bit of a bombshell, but I think we have been dancing around the issue for a while now and we need to address it head on. Let me start with some background.

Back when we started designing the Marconi API, we knew that we wanted to support several messaging patterns. We could do that using a unified queue resource, combining both task distribution and feed semantics. Or we could create disjoint resources in the API, or even create two separate services altogether, one each for the two semantic groups.

The decision was made to go with a unified API for these reasons:

  *   It would afford hybrid patterns, such as auditing or diagnosing a task distribution queue (see the sketch after this list)
  *   Once you implement guaranteed delivery for a message feed over HTTP, implementing task distribution is a relatively straightforward addition. If you want both types of semantics, you don’t necessarily gain anything by implementing them separately.
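
For example, here is an illustrative, untested Python/requests sketch of that hybrid pattern (host, queue name, parameters, and response shapes are placeholders of mine, with paths patterned after the v1 API): a worker drains the queue with task-distribution semantics, while an auditor pages through the very same queue resource with feed semantics, without consuming anything.

# Illustrative sketch only; host, queue name, and response shapes are placeholders.
import uuid
import requests

BASE = 'http://localhost:8888/v1/queues/tasks'

def worker():
    # Task-distribution semantics: claim, process, delete.
    headers = {'Client-ID': str(uuid.uuid4())}
    claim = requests.post(BASE + '/claims', headers=headers,
                          json={'ttl': 60, 'grace': 60})
    for msg in claim.json():
        print('processing', msg['body'])
        requests.delete('http://localhost:8888' + msg['href'], headers=headers)

def auditor():
    # Feed semantics: page through the same queue to diagnose a backlog,
    # leaving every message in place for the workers.
    headers = {'Client-ID': str(uuid.uuid4())}
    listing = requests.get(BASE + '/messages', headers=headers,
                           params={'limit': 10})
    for msg in listing.json().get('messages', []):
        print('observed', msg['age'], msg['body'])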

Lately we have been talking about writing drivers for traditional message brokers that will not be able to support the message feeds part of the API. I’ve started to think that having a huge part of the API that may or may not “work”, depending on how Marconi is deployed, is not a good story for users, esp. in light of the push to make different clouds more interoperable.

Therefore, I think we have a very big decision to make here as a team and a community. I see three options right now. I’ve listed several pros and cons for each (by no means an exhaustive list), as well as some counterpoints, based on past discussions.

Option A. Allow drivers to only implement part of the API

For:

  *   Allows for a wider variety of backends. (counter: may create subtle differences in behavior between deployments)
  *   May provide opportunities for tuning deployments for specific workloads

Against:

  *   Makes it hard for users to create applications that work across multiple clouds, since critical functionality may or may not be available in a given deployment. (counter: how many users need cross-cloud compatibility? Can they degrade gracefully?)

Option B. Split the service in two. Different APIs, different services. One would provide message feeds, while the other would be something akin to Amazon’s SQS.

For:

  *   Same as Option A, plus creates a clean line of functionality for deployment (deploy one service or the other, or both, with clear expectations of what messaging patterns are supported in any case).

Against:

  *   Removes support for hybrid messaging patterns (counter: how useful are such patterns in the first place?)
  *   Operators now have two services to deploy and support, rather than just one (counter: can scale them independently, perhaps leading to gains in efficiency)

Option C. Require every backend to support the entirety of the API as it now stands.

For:

  *   Least disruptive in terms of the current API design and implementation
  *   Affords a wider variety of messaging patterns (counter: YAGNI?)
  *   Reuses code in drivers and API between feed and task distribution operations (counter: there may be ways to continue sharing some code if the API is split)

Against:

  *   Requires operators to deploy a NoSQL cluster (counter: many operators are comfortable with NoSQL today)
  *   Currently requires MongoDB, which is AGPL (counter: a Redis driver is under development)
  *   A unified API is hard to tune for performance (counter: Redis driver should be able to handle high-throughput use cases, TBD)

I’d love to get everyone’s thoughts on these options; let's brainstorm for a bit, then we can home in on the option that makes the most sense. We may need to do some POCs or experiments to get enough information to make a good decision.

@kgriffs
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


