[openstack-dev] Oslo messaging API design

Mark McLoughlin markmc at redhat.com
Wed Jun 5 11:28:43 UTC 2013


On Fri, 2013-05-31 at 16:15 +0100, Mark McLoughlin wrote:
> On Mon, 2013-05-13 at 11:27 -0400, Doug Hellmann wrote:
> > 
> > 
> > 
> > On Mon, May 13, 2013 at 11:02 AM, Mark McLoughlin <markmc at redhat.com> wrote:
> >         Hey Doug,
> >         
> >         On Mon, 2013-05-13 at 10:43 -0400, Doug Hellmann wrote:
> >         >
> >         >
> >         >
> >         > On Sat, May 11, 2013 at 1:07 PM, Mark McLoughlin <markmc at redhat.com> wrote:
> >         >         On Mon, 2013-04-29 at 11:12 +0100, Mark McLoughlin wrote:
> >         >         > Hey
> >         >         >
> >         >         > I've been working on gathering requirements and design ideas for a
> >         >         > re-design of Oslo's RPC and notifications API. The goals are:
> >         >         >
> >         >         >   1) A simple and easy to understand RPC abstraction which enables
> >         >         >      all of the intra project communication patterns we need for
> >         >         >      OpenStack
> >         >         >
> >         >         >   2) It should be possible to implement the RPC abstraction using a
> >         >         >      variety of messaging transports, not just AMQP or AMQP-like
> >         >         >
> >         >         >   3) It should be a stable API with plenty of room for backwards
> >         >         >      compatible evolution in the future so that we can release it as a
> >         >         >      standalone library
> >         >         >
> >         >         > Here's what I have so far:
> >         >         >
> >         >         >   https://wiki.openstack.org/wiki/Oslo/Messaging
> >         >
> >         >
> >         >         Just a quick status update. We're using this etherpad to coordinate:
> >         >
> >         >           https://etherpad.openstack.org/HavanaOsloMessaging
> >         >
> >         >         and this branch:
> >         >
> >         >           https://github.com/markmc/oslo-incubator/commits/messaging
> >         >
> >         >         At this point, we've got a pretty solid API design, a working fake
> >         >         driver and some simple test code.
> >         >
> >         >
> >         > Have you given any thought to how the MessageHandlingServer can listen
> >         > on more than one target? That's an important use case for ceilometer,
> >         > which I didn't address in my earlier changes.
> >         >
> >         >
> >         > Do we need to support different transports (and drivers), or just
> >         > multiple targets?
> >         
> >         
> >         I guess I was thinking you'd just create multiple servers, but you
> >         probably really want a single executor and dispatcher pair with multiple
> >         listeners.
> >         
> >         Would something like this work?
> >         
> >             def start(self):
> >                 if self._executor is not None:
> >                     return
> >                 self._executor = self._executor_cls(self.conf, self.dispatcher)
> >                 self.listen(self.transport, self.target)
> >                 self._executor.start()
> >         
> >             def listen(self, transport, target):
> >                 self._executor.add_listener(transport._listen(target))
> > 
> > 
> > I am worried there might be a case where the executor will have a hard
> > time adding a listener after it has been started. How would the
> > blocking executor do that, for example?
> 
> Very interesting question, and after spending a good deal of time on
> it, the only conclusion I've really come to is that we need to spend
> more time thinking about it. Oh, and I need to go investigate tulip
> like you originally said ... :)

Well, that was quite an adventure.

You might have seen a blog post I wrote on the general topic of async
I/O and tulip:

  http://blogs.gnome.org/markmc/2013/06/04/async-io-and-python/

And now I've tried to play with an implementation for a tulip based
executor:

  https://github.com/markmc/oslo-incubator/blob/8509b8b/openstack/common/messaging/_executors/impl_tulip.py

In terms of the capabilities of listeners, I'm thinking there are three
possibilities:

  1) listeners who can poll for messages asynchronously - i.e. can
     expose a method to poll for messages which returns a coroutine
     object (listener would have a poll_async() method)

  2) listeners who can poll for messages in a non-blocking fashion and
     give us a select()able read file descriptor (listener would have
     a poll() method which accepts a timeout kwarg)

  3) listeners who must block to poll for a message

We could require that all listeners expose an interface for (1) and hide
the fact they can do (2) or (3), or we can just make the executor handle
all three modes.

In my mockup, the process_async(), process_selectable() and
process_blocking() methods correspond to these three modes. It seems
fairly plausible to me, but I could be missing something huge since I
haven't even tried to run the code :)
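
To make that a bit more concrete, here's roughly how I'd expect the
three modes to look in a tulip executor. This is just a sketch, assuming
tulip keeps the PEP 3156 style event loop API, and poll_async(), poll()
and fileno() on the listener are placeholder names rather than the real
driver interface:

  import tulip

  class SketchTulipExecutor(object):

      def __init__(self, listener, callback):
          self.listener = listener
          self.callback = callback
          self.loop = tulip.get_event_loop()

      @tulip.coroutine
      def process_async(self):
          # Mode 1: the listener hands us a coroutine object to wait on
          while True:
              msg = yield from self.listener.poll_async()
              yield from self.callback(msg)

      def _readable(self, fut):
          # Remove the reader first so this callback can't fire twice
          # for the same future
          self.loop.remove_reader(self.listener.fileno())
          if not fut.done():
              fut.set_result(None)

      @tulip.coroutine
      def process_selectable(self):
          # Mode 2: wait for the listener's fd to become readable, then
          # do a zero-timeout (non-blocking) poll
          while True:
              ready = tulip.Future()
              self.loop.add_reader(self.listener.fileno(),
                                   self._readable, ready)
              yield from ready
              msg = self.listener.poll(timeout=0)
              if msg is not None:
                  yield from self.callback(msg)

      @tulip.coroutine
      def process_blocking(self):
          # Mode 3: the listener can only block, so push each poll() out
          # to the loop's thread pool and wait on the resulting future
          while True:
              msg = yield from self.loop.run_in_executor(
                  None, self.listener.poll)
              yield from self.callback(msg)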

But there's also the dispatch callback side to think of. In the tulip
executor code, I'm assuming the callback is async (i.e. it will return a
coroutine object or future), so it needs to be paired with a tulip
dispatcher which knows whether the target method is async and, if not,
executes it in a thread pool.

i.e. we'd have something like:

  @messaging.expose(async=True)
  def do_something(self):
      result = yield from self.remote_server.get_something_async()
      return self.db_api.store_async(result)

or:

  @messaging.expose()
  def do_something(self):
      result = self.remote_server.get_something()
      return self.db_api.store(result)

and in the latter case, the tulip-based dispatcher would know to use a
thread pool.
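
i.e. the tulip-aware dispatcher would do something roughly like this
(again only a sketch - the _exposed_async attribute is just a stand-in
for however expose() ends up recording its metadata):

  import tulip

  class SketchTulipDispatcher(object):

      def __init__(self, endpoint):
          self.endpoint = endpoint

      @tulip.coroutine
      def dispatch(self, method, kwargs):
          target = getattr(self.endpoint, method)
          if getattr(target, '_exposed_async', False):
              # expose(async=True): the method returns a coroutine
              # object we can wait on directly
              return (yield from target(**kwargs))
          # Plain blocking method - run it in the loop's thread pool so
          # we don't stall the event loop
          loop = tulip.get_event_loop()
          return (yield from loop.run_in_executor(
              None, lambda: target(**kwargs)))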

That implies that you need to pair a tulip-aware executor with a
tulip-aware dispatcher. We'll probably need to do something similar for
eventlet too.

Maybe the dispatcher should return the metadata from the expose()
decorator along with a partial which encapsulates the invocation of the
target method, and it's the executor that actually executes it? The
problem right now is that the dispatcher finds the target and
immediately decides how to execute it, but we want executor-specific
logic about how to execute it.
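
i.e. something like this, maybe (the names are made up, just to
illustrate the split):

  import functools

  class SketchDispatcher(object):

      def __init__(self, endpoint):
          self.endpoint = endpoint

      def prepare(self, method, kwargs):
          # Return the expose() metadata plus a partial wrapping the
          # call, and leave the "how" of executing it to the executor
          target = getattr(self.endpoint, method)
          is_async = getattr(target, '_exposed_async', False)
          return is_async, functools.partial(target, **kwargs)

The tulip executor would then yield from the partial if is_async and
push it into a thread pool otherwise, while the blocking executor would
just call it directly.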

I think we're getting close to an answer :)

Oh, your original question was about multiple listeners - that still
seems perfectly doable to me. In the tulip executor, we'd just add a new
message processing task for that listener.
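
i.e. something like (method and attribute names invented for the sake
of the sketch):

  def add_listener(self, listener):
      # Each listener just gets its own message processing task on the
      # same event loop, all feeding the same dispatcher
      self.listeners.append(listener)
      if self.started:
          self.tasks.append(tulip.Task(self._process(listener)))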

Cheers,
Mark.
