[openstack-dev] [OSLO][RPC] AMQP / ZeroMQ control_exchange vs port numbers

Doug Hellmann doug.hellmann at dreamhost.com
Thu Apr 25 19:50:12 UTC 2013


On Thu, Apr 25, 2013 at 11:03 AM, Julien Danjou <julien at danjou.info> wrote:

> On Wed, Apr 24 2013, Eric Windisch wrote:
>
> > It gets tricky because ZeroMQ doesn't have "a connection", although I
> > suppose it could have different connection profiles (a different port
> > number, for instance).
>
> Well, a "connection" could be something specific to the implementation.
> - RabbitMQ (and qpid, I guess) would be host/port/control-exchange
> - ZMQ would be host/port
>
> And each project would use a separate connection to communicate.
>
> > We have been discussing specifying a message destination as part of
> > call/cast instead of topic as presently sent.
> >
> > The new API might be something akin to:
> >
> > dest = rpc.make_destination('compute', host='compute1', cell=CellState)
> > rpc.cast(context, dest, msg)
> >
> > # This would replace rpc.cast(context, topic, msg), which would
> > # check the type and set topic = dest if not
> > # isinstance(dest, RpcDestination)
>
> That sounds like a good move and handles transition nicely.
> I'm not sure about the make_destination() prototype however -- but that
> doesn't worry me for now.
>

What about making all of those arguments except topic part of the
constructor for a Connection, and just instantiating more than one of them
to talk to different services?
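
A rough sketch of what I have in mind (all names here are illustrative,
not a proposal for the final API):

    # Hypothetical sketch: transport details live on the connection
    # object instead of in global config, so talking to a different
    # service is just a second instance.
    class Connection(object):
        def __init__(self, host, port, exchange=None, matchmaker=None):
            self.host = host              # broker for AMQP; matchmaker for ZMQ
            self.port = port
            self.exchange = exchange      # meaningful to AMQP drivers only
            self.matchmaker = matchmaker  # meaningful to the ZMQ driver only

        def cast(self, context, topic, msg):
            pass  # driver-specific dispatch goes here

    nova = Connection('broker1', 5672, exchange='nova')
    cinder = Connection('broker1', 5672, exchange='cinder')

Then topic stays a per-call argument and everything else becomes
per-connection state.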


>
> > Presently, we have various ways of specifying the location of the queues:
> >  * rabbit_host + rabbit_port + control_exchange (plus auth)
> >  * qpid_host + qpid_port + control_exchange (plus auth)
> >
> >
> >  * zmq_port + (matchmaker || consumer_host)
> >  * cell (which maps to some subset of the above)
> >
> > This makes figuring out the proper abstraction for make_destination()
> > a bit complex if we want to do anything besides topic and host.
>
> I see that.
>

It seems like the host and port options for rabbit and qpid map to the same
values for the matchmaker, don't they? The fact that the actual message
doesn't go to the matchmaker is an implementation detail of ZMQ that it can
hide.
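
If that's right, a single driver-neutral pair of options could feed
whichever backend is loaded, along these lines (invented option names,
reusing the Connection sketch above):

    # Hypothetical: one host/port pair in config, interpreted per driver.
    # AMQP drivers dial the broker at that address; the ZMQ driver dials
    # the matchmaker there, and the peer-to-peer message delivery that
    # follows stays hidden from callers.
    conn = Connection(host=CONF.transport_host,
                      port=CONF.transport_port)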


>
> > The ZeroMQ example above is a bit strange, but I wasn't sure how else
> > to express it. Basically, we need to lookup some topics in the
> > matchmaker, but if we're sending messages to a host directly, we can
> > skip that. The code presently knows to send messages to a host by
> > delimiting on a period, but it turns out that projects are suddenly
> > using periods in topics to delimit all sorts of other things.
>
> This is something that could probably be improved in a new version of
> the API I presume.
>

Right, it seems like the host portion needs to be passed as a separate
argument and the AMQP drivers can combine it with the topic, instead of
going the other way around.
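
To make the direction concrete, something like this (sketch only;
today's cast() takes the pre-joined topic string, and the host keyword
is invented here):

    # Hypothetical driver-side helper: AMQP drivers combine the two
    # pieces internally instead of the caller pre-joining "topic.host".
    def amqp_routing_key(topic, host=None):
        return topic if host is None else '%s.%s' % (topic, host)

    # ...so the caller keeps them separate, and the ZMQ driver never
    # has to guess whether a period in the topic marks a host:
    rpc.cast(context, 'compute', msg, host='compute1')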


>
> > For ZeroMQ, a control_exchange is effectively a zmq_port *AND* its
> > associated matchmaker, which is pluggable. Plugins usually will have
> > their own host/port/auth. The MatchMaker basically maintains a mapping
> > of topics to consuming peers. Technically, we could lookup all topics
> > here, even "compute.host" topics, but we can circumvent a global
> > lookup by sending messages directly to "host".
>
> Understood (because well explained, thanks :)
>

How does the ZMQ driver know where the host is without asking the
matchmaker? Is that "host" value assumed to be resolvable via DNS?


>
> > Perhaps an rpc-abstraction-level concept of a "cell" or "project" is
> > needed? That would be an identifier encompassing all the connection
> > details (i.e. host/control_exchange, zmq_port/matchmaker), however
> > those may be defined for the driver being used.
>
> That sounds like a good idea to me. That's what I called "connection" in
> my previous mail actually. This may be just an abstract class
> implemented by each driver in its own way.
>

What defines a "cell" for RPC? Just the alternate host and port for rabbit,
qpid, or matchmaker? If so, then the caller just needs to create a new
Connection with the right settings, and we don't need to have the concept
of "cell" leak into the RPC library API at all. It's there now because
we're using a global connection object, but sometimes want to talk to
something different. If we stop having a global, the awkwardness will go
away and the API can be simpler.
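
Concretely, the caller-side version of that would just be (sketch,
using the same illustrative Connection as above):

    # No global connection and no "cell" type in the RPC API -- a child
    # cell is just a second Connection built from its transport settings.
    local = Connection('rabbit-top', 5672, exchange='nova')
    child = Connection('rabbit-cell2', 5672, exchange='nova')

    local.cast(context, 'compute', msg)
    child.cast(context, 'compute', msg)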

Doug


>
> --
> Julien Danjou
> # Free Software hacker # freelance consultant
> # http://julien.danjou.info
>