[Openstack] Queue Service

Eric Day eday at oddments.org
Mon Feb 14 17:51:42 UTC 2011


Hi everyone,

When looking at other services to include as part of OpenStack, the
first that comes to mind for many is a queue. A queue service can
not only be a useful public cloud service in its own right, but can
also serve as a building block for other services. I've been leading an
effort to research and gather requirements for a queue service and
I'd like to share the current state and get community feedback. I
expect real development to begin very soon, and would also like to
identify developers who will have time to dedicate to this project.

I'd like to note this is not an official OpenStack project yet. The
intention is that once we have community support and a simple
implementation, we will submit the project to the OpenStack Project
Oversight Committee for approval.

The reason we are starting our own project instead of using an
existing one comes down to simplicity, modularity, and scale. Very
few (if any) existing queue systems were built with multi-tenant
cloud use cases in mind, and very few offer a simple and extensible
REST API. It would be possible to build a service on top of AMQP,
but AMQP brings complexity and a protocol that is not optimized for
high latency and intermittent connectivity.

The primary goals of the queue service are:

* Simple - Think simple REST-based queues for most use cases. Easy
  to access and use from any language. It should not require much
  setup, if any, before you can start pushing messages into it (a
  rough usage sketch follows this list).

* Modular API - Initially we'll focus on a simple REST API, but
  REST will not be the only first-class API. It should be possible
  to add other protocols (AMQP, protocol buffers, Gearman, etc.)
  for other use cases. Note that the internal service API will not
  always provide a 1-1 mapping with the external API, so some
  advanced protocol features may be unavailable (see the second
  sketch after this list for how the layering might look).

* Fast - Since this will act as a building block for other services
  that may drive heavy throughput, performance will be a focus. This
  mostly comes down to the implementation language and how clients
  and workers interact with the broker to reduce network chatter.

* Multi-tenant - Support multiple accounts for the service, and since
  this will also be a public service for some deployments, protect
  against potentially malicious users.

* Persistent - Allow messages to optionally be persistent. For
  protocols that can support it, this can be an optional flag set
  when the message is submitted. The persistent storage should also
  be modular so we can test various data stores and accommodate
  different deployment options.

* Zones and locality awareness - As we've been discussing in other
  threads, locality in cloud services is an important feature. When
  dealing with where messages should be processed, we need to have
  location awareness to process data where it exists to reduce network
  overhead and processing time.
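
To make the "simple", "multi-tenant", and "persistent" goals a bit
more concrete, here is a rough sketch of what client usage might
look like. Everything in it is hypothetical and only for
illustration: the endpoint layout, the X-Auth-Token header, and the
persistent flag are placeholders, not a proposed API.

    import requests

    BASE = "http://queue.example.com/v1"       # hypothetical endpoint
    HEADERS = {"X-Auth-Token": "some-token"}   # hypothetical auth header

    # Push a message into a per-account queue, optionally marking it
    # persistent (the flag name is a placeholder).
    requests.post(
        BASE + "/acct-123/builds/messages",
        params={"persistent": "true"},
        data=b"run unit tests for commit abc123",
        headers=HEADERS,
    )

    # A worker pulls the next message from the same queue.
    resp = requests.get(
        BASE + "/acct-123/builds/messages",
        params={"limit": 1},
        headers=HEADERS,
    )
    print(resp.status_code, resp.content)

The point is only that a client should be able to do this from any
language with nothing more than an HTTP library and an account.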
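
For the modular API and pluggable storage points, I picture the
layering roughly like the following. Again, this is only a sketch
with made-up class and method names, not a design: protocol
front-ends (REST, AMQP, etc.) would be thin translators onto a
small internal service API, which in turn talks to a swappable
storage driver.

    import abc

    class Storage(abc.ABC):
        """Pluggable message store; a disk- or SQL-backed driver would
        implement the same interface to support persistent messages."""

        @abc.abstractmethod
        def append(self, account, queue, body, persistent=False): ...

        @abc.abstractmethod
        def take(self, account, queue, limit=1): ...

    class MemoryStorage(Storage):
        """Trivial non-persistent store, e.g. for tests."""

        def __init__(self):
            self._data = {}

        def append(self, account, queue, body, persistent=False):
            self._data.setdefault((account, queue), []).append(body)

        def take(self, account, queue, limit=1):
            msgs = self._data.get((account, queue), [])
            taken = msgs[:limit]
            self._data[(account, queue)] = msgs[limit:]
            return taken

    class QueueService:
        """Internal service API; each external protocol maps onto
        methods like these, possibly only a subset of them."""

        def __init__(self, storage):
            self.storage = storage

        def push(self, account, queue, body, persistent=False):
            self.storage.append(account, queue, body, persistent=persistent)

        def pull(self, account, queue, limit=1):
            return self.storage.take(account, queue, limit=limit)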

Before diving into implementation details, I would like to hear
what folks have to say about the initial requirements above. Once
there is general agreement, I'll send out further topics for
discussion on implementation.

I'm looking forward to your feedback. Thanks!

-Eric



