[openstack-dev] Oslo messaging API design

Mark McLoughlin markmc at redhat.com
Fri May 31 15:15:07 UTC 2013


On Mon, 2013-05-13 at 11:27 -0400, Doug Hellmann wrote:
> 
> On Mon, May 13, 2013 at 11:02 AM, Mark McLoughlin <markmc at redhat.com> wrote:
>         Hey Doug,
>         
>         On Mon, 2013-05-13 at 10:43 -0400, Doug Hellmann wrote:
>         >
>         > On Sat, May 11, 2013 at 1:07 PM, Mark McLoughlin <markmc at redhat.com> wrote:
>         >         On Mon, 2013-04-29 at 11:12 +0100, Mark McLoughlin wrote:
>         >         > Hey
>         >         >
>         >         > I've been working on gathering requirements and design ideas for a
>         >         > re-design of Oslo's RPC and notifications API. The goals are:
>         >         >
>         >         >   1) A simple and easy to understand RPC abstraction which enables
>         >         >      all of the intra project communication patterns we need for
>         >         >      OpenStack
>         >         >
>         >         >   2) It should be possible to implement the RPC abstraction using a
>         >         >      variety of messaging transports, not just AMQP or AMQP-like
>         >         >
>         >         >   3) It should be a stable API with plenty of room for backwards
>         >         >      compatible evolution in the future so that we can release it as a
>         >         >      standalone library
>         >         >
>         >         > Here's what I have so far:
>         >         >
>         >         >   https://wiki.openstack.org/wiki/Oslo/Messaging
>         >
>         >
>         >         Just a quick status update. We're using this etherpad to coordinate:
>         >
>         >           https://etherpad.openstack.org/HavanaOsloMessaging
>         >
>         >         and this branch:
>         >
>         >           https://github.com/markmc/oslo-incubator/commits/messaging
>         >
>         >         At this point, we've got a pretty solid API design, a working fake
>         >         driver and some simple test code.
>         >
>         >
>         > Have you given any thought to how the MessageHandlingServer can listen
>         > on more than one target? That's an important use case for ceilometer,
>         > which I didn't address in my earlier changes.
>         >
>         >
>         > Do we need to support different transports (and drivers), or just
>         > multiple targets?
>         
>         
>         I guess I was thinking you'd just create multiple servers, but you
>         probably really want a single executor and dispatcher pair with multiple
>         listeners.
>         
>         Would something like this work?
>         
>             def start(self):
>                 if self._executor is not None:
>                     return  # already started
>                 self._executor = self._executor_cls(self.conf, self.dispatcher)
>                 # listen on the server's own target to begin with ...
>                 self.listen(self.transport, self.target)
>                 self._executor.start()
>         
>             def listen(self, transport, target):
>                 # ... and allow more listeners to be added later
>                 self._executor.add_listener(transport._listen(target))
> 
> 
> I am worried there might be a case where the executor will have a hard
> time adding a listener after it has been started. How would the
> blocking executor do that, for example?

Very interesting question, and after spending a good deal of time on it,
the only conclusion I've really come to is that we need to spend more
time thinking about it. Oh, and I need to go investigate tulip like you
originally said ... :)

I'm not sure the problem is so much how to add a listener after the
executor has been started. It's more a question of how best to go about
supporting multiple listeners at all.

Firstly, what are the semantics of the blocking executor? That start()
will block and messages will be dispatched in the thread which called
start()? Right?
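
In other words, something like this, roughly (just a sketch -
listener.poll() and dispatcher.dispatch() are my assumptions about the
API shape, not anything final):

    def start(self):
        self._running = True
        # dispatch inline, in whatever thread called start()
        while self._running:
            message = self.listener.poll()
            self.dispatcher.dispatch(message)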

Or do we go further, and say that the blocking executor never creates
any (native) threads? Is that an implementation detail, or is it
important semantics for some classes of applications?

I would have thought that a "no threads" semantic would be a useful
thing to support, but looking at e.g. the python-qpid library I see that
it creates threads without giving the caller any option in the matter.

But, I digress ...

If the blocking executor's semantics are purely that start() blocks and
messages are dispatched in the calling thread, then we can implement
support for multiple listeners by spawning off threads which call
listener.poll() and put messages on a queue for the calling start()
thread to dispatch. Adding another listener is then simply a matter of
spawning off a new polling thread.
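
Roughly this (again just a sketch, using the same assumed
poll()/dispatch() interface):

    import Queue
    import threading

    class BlockingExecutor(object):

        def __init__(self, conf, dispatcher):
            self.conf = conf
            self.dispatcher = dispatcher
            self._incoming = Queue.Queue()

        def add_listener(self, listener):
            # one polling thread per listener, all feeding one queue
            poller = threading.Thread(target=self._poll, args=(listener,))
            poller.daemon = True
            poller.start()

        def _poll(self, listener):
            while True:
                self._incoming.put(listener.poll())

        def start(self):
            # messages are still dispatched in the thread which
            # called start()
            while True:
                self.dispatcher.dispatch(self._incoming.get())

Note that add_listener() could be called before or after start() - the
new polling thread just starts feeding the same queue.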

And a further digression ...

I would have liked it if we could have relied on listeners exposing a
select()able file handle, so that we could implement the blocking
executor very simply using select(). I was all happy because I got this
working with kombu:

  https://gist.github.com/markmc/5685616

but I'm 95% certain there's no selectable file handle exposed by
python-qpid :(
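
For transports which do expose a file handle, the executor loop would
be about as simple as it gets - e.g. this sketch, assuming listeners
grow a fileno() method so select() can wait on them directly:

    import select

    def start(self):
        while True:
            # select() accepts any object with a fileno() method
            readable, _, _ = select.select(self._listeners, [], [])
            for listener in readable:
                self.dispatcher.dispatch(listener.poll())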

Perhaps we could add a simple abstraction for transports that don't
support this - i.e. create a thread to poll for messages and add them to
a selectable queue. We just require that listeners are selectable and
move the responsibility for polling in a thread out of the executor and
into the transport driver.
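
The usual trick for making a queue selectable is to pair it with a
pipe - e.g. something along these lines on the driver side (a sketch
only; SelectableListener is a made-up name):

    import os
    import Queue
    import threading

    class SelectableListener(object):
        """Polls the transport in a thread; exposes a selectable fd."""

        def __init__(self, listener):
            self._queue = Queue.Queue()
            self._rfd, self._wfd = os.pipe()
            poller = threading.Thread(target=self._poll, args=(listener,))
            poller.daemon = True
            poller.start()

        def _poll(self, listener):
            while True:
                self._queue.put(listener.poll())
                os.write(self._wfd, 'x')  # wake up the select()ing executor

        def fileno(self):
            return self._rfd

        def poll(self):
            os.read(self._rfd, 1)  # consume one wakeup byte
            return self._queue.get()

The executor's select() loop above would then work unchanged, never
needing to know which drivers poll in a thread behind the scenes.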

But, with all that said, we're using eventlet right now and making
listeners selectable just isn't interesting in that context. I'd be
tempted to just say the blocking executor doesn't support multiple
listeners right now.

Hmm.

Cheers,
Mark.



