[openstack-dev] [marconi] Reconsidering the unified API model
Flavio Percoco
flavio at redhat.com
Tue Jun 10 08:23:57 UTC 2014
On 09/06/14 19:31 +0000, Kurt Griffiths wrote:
>Folks, this may be a bit of a bombshell, but I think we have been dancing
>around the issue for a while now and we need to address it head on. Let me
>start with some background.
>
>Back when we started designing the Marconi API, we knew that we wanted to
>support several messaging patterns. We could do that using a unified queue
>resource, combining both task distribution and feed semantics. Or we could
>create disjoint resources in the API, or even create two separate services
>altogether, one each for the two semantic groups.
>
>The decision was made to go with a unified API for these reasons:
>
> • It would afford hybrid patterns, such as auditing or diagnosing a task
> distribution queue
> • Once you implement guaranteed delivery for a message feed over HTTP,
> implementing task distribution is a relatively straightforward addition. If
> you want both types of semantics, you don’t necessarily gain anything by
> implementing them separately.
>
>Lately we have been talking about writing drivers for traditional message
>brokers that will not be able to support the message feeds part of the API.
>I’ve started to think that having a huge part of the API that may or may not
>“work”, depending on how Marconi is deployed, is not a good story for users,
>esp. in light of the push to make different clouds more interoperable.
>
>Therefore, I think we have a very big decision to make here as a team and a
>community. I see three options right now. I’ve listed several—but by no means
>conclusive—pros and cons for each, as well as some counterpoints, based on past
>discussions.
>
>Option A. Allow drivers to only implement part of the API
>
>For:
>
> • Allows for a wider variety of backends. (counter: may create subtle
> differences in behavior between deployments)
> • May provide opportunities for tuning deployments for specific workloads
> • Simplifies client implementation and API
>Against:
>
> • Makes it hard for users to create applications that work across multiple
> clouds, since critical functionality may or may not be available in a given
> deployment. (counter: how many users need cross-cloud compatibility? Can
> they degrade gracefully?)
This is definitely unfortunate, but I believe it's a fair trade-off;
the same happens in other services that support different drivers.
We said we'd come up with a set of features that we consider core
to Marconi, and that we'd evaluate everything based on that set.
Victoria has been doing a great job of identifying which endpoints
can and cannot be supported by AMQP brokers. I believe having that
list is key before we make any decision here.
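To make that trade-off concrete, here is a rough sketch of what
partial support could look like on the driver side (the class and
method names are made up for illustration; this is not Marconi's
actual driver interface):

    class DataDriver(object):
        """Hypothetical sketch only -- not Marconi's actual interface."""

        # Optional features this driver opts into; anything not listed
        # is unsupported and the transport layer can report it as such.
        CAPABILITIES = frozenset()

        # Core endpoints: every driver must override these.

        def post_messages(self, queue, messages):
            raise NotImplementedError

        def claim_messages(self, queue, limit):
            raise NotImplementedError

        # Optional endpoints: feed semantics.

        def list_messages(self, queue, marker=None):
            raise NotImplementedError


    class AMQPDriver(DataDriver):
        """A broker-backed driver: task distribution, but no feeds."""

        CAPABILITIES = frozenset(['claims'])

        def post_messages(self, queue, messages):
            pass  # publish to the broker

        def claim_messages(self, queue, limit):
            pass  # consume with per-message acknowledgement

The transport layer could then translate NotImplementedError into an
appropriate HTTP error, and CAPABILITIES could feed the
discoverability mechanism mentioned below.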
>
>Option B. Split the service in two. Different APIs, different services. One
>would be message feeds, while the other would be something akin to Amazon’s
>SQS.
>
>For:
>
> • Same as Option A, plus creates a clean line of functionality for deployment
> (deploy one service or the other, or both, with clear expectations of what
> messaging patterns are supported in any case).
>
>Against:
>
> • Removes support for hybrid messaging patterns (counter: how useful are such
> patterns in the first place?)
> • Operators now have two services to deploy and support, rather than just one
> (counter: can scale them independently, perhaps leading to gains in
> efficiency)
>
Strong -1 for having two separate services. IMHO, this would just
complicate things from an admin / user perspective.
>
>Option C. Require every backend to support the entirety of the API as it now
>stands.
>
>For:
>
> • Least disruptive in terms of the current API design and implementation
> • Affords a wider variety of messaging patterns (counter: YAGNI?)
> • Reuses code in drivers and API between feed and task distribution
> operations (counter: there may be ways to continue sharing some code if the
> API is split)
>
>Against:
>
> • Requires operators to deploy a NoSQL cluster (counter: many operators are
> comfortable with NoSQL today)
> • Currently requires MongoDB, which is AGPL (counter: a Redis driver is under
> development)
> • A unified API is hard to tune for performance (counter: Redis driver should
> be able to handle high-throughput use cases, TBD)
>
A and C are both reasonable solutions. I personally prefer A, with a
well-defined set of optional features. In addition, we have discussed
feature discoverability, which would allow users to know which
features a deployment supports. This makes developing applications on
top of Marconi a bit harder, though.
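As a rough sketch of what that discoverability could look like from
the client side (the GET /v1/capabilities endpoint below is
hypothetical, made up just to illustrate the idea):

    import requests

    def supports(base_url, token, feature):
        """Return True if the deployment advertises the given feature.

        Assumes a hypothetical GET /v1/capabilities endpoint returning
        something like {"capabilities": ["claims", "feeds"]}.
        """
        resp = requests.get(base_url + '/v1/capabilities',
                            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        return feature in resp.json().get('capabilities', [])

An application could then degrade gracefully - say, skip an audit
pass when 'feeds' isn't advertised - instead of failing at runtime on
an unsupported endpoint.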
That said, I believe what needs to be done is to rethink some of the
existing endpoints in order to embrace existing technologies /
protocols. With the addition of flavors, we'll have the same issue.
For instance, a user with two queues on different storage drivers - one
on mongodb and one on, say, rabbitmq - will likely have to develop
against the driver with fewer supported features, unless the
application is segmented. In other words, the user won't be able to
dynamically create queue instances and re-use the same code.
Our API, as-is, is quite dependent on a store-and-forward message
delivery model and on database-like storage. I don't think this is
entirely wrong, but I do believe it could be simplified in order to
support other drivers without splitting it.
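To make the segmentation problem concrete, here is a small sketch
(all of the client calls are hypothetical, invented for illustration):

    def drain(client, queue):
        # With flavors, capabilities become a per-queue property, so
        # "portable" code ends up branching per queue (or targeting
        # only the least capable flavor).
        caps = client.queue_capabilities(queue)  # hypothetical call
        if 'feeds' in caps:
            # e.g. a mongodb-backed flavor: page through the feed
            for msg in client.list_messages(queue):
                print(msg)
        else:
            # e.g. a rabbitmq-backed flavor: claims only
            for msg in client.claim_messages(queue, limit=10):
                print(msg)
                client.ack(msg)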
>I’d love to get everyone’s thoughts on these options; let's brainstorm for a
>bit, then we can home in on the option that makes the most sense. We may need
>to do some POCs or experiments to get enough information to make a good
>decision.
I think Victoria's work on the AMQP 1.0 driver is important for this
decision. Let's work with her on what she's achieved and what won't
be supported. We should come back here with that subset of features
considered core.
Flavio
--
@flaper87
Flavio Percoco