[openstack-dev] [OSLO][RPC] AMQP / ZeroMQ control_exchange vs port numbers

Doug Hellmann doug.hellmann at dreamhost.com
Mon Apr 29 16:12:47 UTC 2013


On Mon, Apr 29, 2013 at 12:04 PM, Mark McLoughlin <markmc at redhat.com> wrote:

> On Mon, 2013-04-29 at 11:44 -0400, Doug Hellmann wrote:
> >
> >
> >
> > On Mon, Apr 29, 2013 at 11:25 AM, Mark McLoughlin <markmc at redhat.com>
> wrote:
> >         On Mon, 2013-04-29 at 10:43 -0400, Doug Hellmann wrote:
> >         >
> >         >
> >         >
> >         > On Mon, Apr 29, 2013 at 7:00 AM, Mark McLoughlin <
> markmc at redhat.com> wrote:
> >         >         On Fri, 2013-04-26 at 15:18 -0400, Doug Hellmann wrote:
> >         >
> >         >         > We've gone around a few times with ideas for having
> better driver-parity in
> >         >         > the rpc library, so maybe the best thing to do is
> start by making sure we
> >         >         > have all of the requirements lined up. Here's a list
> of some that I came up
> >         >         > with based on existing features and my understanding
> of the shortcomings
> >         >         > (numbered for reference, but in no particular order):
> >         >
> >         >
> >         >         Thanks for doing this. We definitely need to be
> stepping back and
> >         >         thinking about this at a high level. I've attempted to
> step a little
> >         >         further back in my writeup:
> >         >
> >         >           https://wiki.openstack.org/wiki/Oslo/Messaging
> >         >
> >         >
> >         > A few comments/questions on that:
> >
> >
> >         All good questions, thanks Doug.
> >
> >         > In the client, it seems like the rpc.Target() should be passed
> to the
> >         > RPCClient constructor, rather than specified as a class
> attribute.
> >
> >
> >         I was seeing the class attribute as being the defaults for method
> >         invocations ... to make those defaults be per instance, you
> could do
> >         something like:
> >
> >           class BaseAPIClient(rpc.RPCClient):
> >
> >               def __init__(self, driver,
> >                            topic='blaa', version='1.0',
> namespace='baseapi'):
> >                   self.target = rpc.Target(topic=topic,
> >                                            version=version,
> >                                            namespace=namespace)
> >
> >         ... but maybe the point here is that we never expect this class
> to be
> >         used with a different topic, version or namespace.
> >
> >
> > Right, I see all of those settings as part of defining where the
> > messages sent by the client should be going. I like the idea of
> > encapsulating them in a Topic which is either passed to the base class
> > or a required property (if BaseAPIClient is an abstract base class,
> > for example).
>
> So, this Target term is supposed to be exactly "where the messages
> should be going" ... i.e. (exchange, topic, host, fanout, namespace,
> version) are the routing attributes that the client application is aware
> of ... not just the topic.
>

Sorry, I meant Target and typed Topic. We agree on this, I think.


>
> >         >  The target parameters should include the host (or "peer" to
> >         use
> >         > Eric's terminology), shouldn't it?
> >
> >
> >         Yes, absolutely:
> >
> >               def get_backdoor_port(self, context, host):
> >                   self.call(self.target(version='1.1', host=host),
> >                             'get_backdoor_port', context)
> >
> >         The host would be added to the default target parameters at
> >         method
> >         invocation time.
> >
> >
> > Ah, no, that's not what I meant. I meant that the host should be
> > specified when the client is constructed. Maybe that makes it too
> > inconvenient to use given our existing patterns?
>
> Yeah, we (well, I'm thinking of Nova) typically have one instance of
> this client proxy class which will invoke methods by default on e.g.
>
>   (exchange='nova', topic='compute', namespace='baseapi', version='1.1')
>
> but individual methods can take a host parameter and invoke the method
> on that host specifically.
>
> I guess you could have choose to have a client object per destination
> host so long as the client constructor accepted a host parameter.
>
> >         Note, though, this is about a client specifying one of a pool
> >         of
> >         listening servers ... it's certainly not a peer of a client,
> >         and we're
> >         not talking about peers at the transport level either.
> >
> >
> > Yeah, we still need to figure out what to call that. "Server" is
> > heavily overloaded, so Eric suggested "peer" as the name of the remote
> > thing we're talking to.
>
> Oh, a peer at transport level, you mean? I'm not sure it makes sense to
> talk about it generically ... I mean, the only thing that's really
> important to describe at the transport level with AMQP transport drivers
> is the broker.
>

On the side sending the message we need to know the host where the broker
is and the host where the server we really want to talk to is (the
OpenStack service that is going to receive the call and respond to it).
Instead of calling both of those "host" Eric suggested that we call the
latter "peer". I don't care what we call it, but "host" isn't sufficiently
descriptive, IMO.


>
> >         > On the server side, I like separating the dispatcher from
> >         the server,
> >         > but I wonder if the server needs to know about the
> >         dispatcher at all?
> >         > Why not just give the server a single callable, which might
> >         be a
> >         > Dispatcher instance? It isn't clear why the dispatcher has
> >         start() and
> >         > stop() methods, either, but maybe that has something to do
> >         with this
> >         > design.
> >
> >
> >         Yeah, exactly.
> >
> >         In BlockingDispatcher, start() would read messages and
> >         dispatch them to
> >         the API objects we have listening on this topic and stop()
> >         would cause
> >         that dispatch loop to exit.
> >
> >         In EventletDispatch, start() would spawn off a greenthread to
> >         do the
> >         dispatching and stop() would cancel the thread.
> >
> >         That said, this part of the proposal doesn't feel very solid
> >         yet.
> >
> >
> >
> > Listening in a loop feels like a responsibility of the Service, not
> > the dispatcher. The Dispatcher is both routing messages and
> > (potentially) starting tasks now. Would it be cleaner to add another
> > class that knows how to start and stop tasks, and let the dispatcher
> > just use that? It would avoid permutations like EventletRPCDispatcher
> > and EventletNotificationDispatcher. We would just have an
> > RPCDispatcher that knows how to look up a method from a message and a
> > NotificationDispatcher that looks up a method from a notification
> > event type. They would both use an EventletTaskManager,
> > BlockingTaskManager, or even implementations based on threads or
> > multiprocessing.
>
> Sounds reasonable to me ... want to replace my proposal with that in the
> wiki?
>

Sure.


>
> > I don't see anything in the current dispatcher that looks like it is
> > starting an eventlet task, though. Is that happening somewhere else,
> > or do the individual methods deal with it themselves?
>
> It's the consume_in_thread() method on the connection.
>

It looks like that is dealing with the reads. How do you envision moving
that up the stack, and out of the driver?

Doug


>
> Cheers,
> Mark.
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20130429/a4909c79/attachment.html>


More information about the OpenStack-dev mailing list