[openstack-dev] [OSLO][RPC] AMQP / ZeroMQ control_exchange vs port numbers

Doug Hellmann doug.hellmann at dreamhost.com
Thu Apr 25 21:45:18 UTC 2013

On Thu, Apr 25, 2013 at 5:10 PM, Eric Windisch <eric at cloudscaling.com> wrote:

> >
> > I'd rather not mix and match. If the host value is just a way to limit
> the recipient of the message going to topic (or to have the sender choose
> instead of message broker), then the fact that we define a separate queue
> for that is an implementation detail of the driver, just like the fact that
> the ZMQ driver doesn't have a broker, but is always communicating
> point-to-point.
>
> I'm saying that the topic is an arbitrary identifier for messages set by
> the application. It can include the host if it wants to, it shouldn't
> matter. Presently, it *does* matter, although users of the API don't seem
> to know this.

OK, I think we more or less agree. I don't care what the topic value is, as
long as it isn't seen as the way to route a message to a single host.

> > OK, so host is intended to be the sort of thing you could pass as part
> of an address when creating a socket. If we make the host argument to
> ConnectionFactory.__call__ optional then the topic values can be consistent
> (i.e., not include the host) and the caller can still control who receives
> the message by constructing the Connection appropriately. That way the
> semantics of the call are controlled by the right arguments. The topic says
> "the message is about this" and the host limits the "audience" that gets
> the message.
> To confirm: what you're suggesting is that if we know we're sending a
> message to a host, we construct a connection to that host -- although that
> doesn't necessarily map 1:1 to either AMQP or ZeroMQ connections? If so,
> that seems like a possible solution, as long as we can get users to use it
> correctly.

Yes, that's the idea. If you're doing point-to-point, you need to know it
and construct your connection appropriately. I believe that will be
possible in all of our current use cases, though I'm not sure. Can anyone
point out a case where it doesn't work?

> > cf = make_connection_factory(cfg.CONF, cfg.CONF.rpc.driver_name)
> > c = cf(host)
> > response = c.call(topic, message)
> In this model, every message from a scheduler to a compute node will look
> like:
> > c = cf(compute_host)
> > response = c.call('compute', {})

The design we discussed at the summit was to have a get_destination_for()
call that knew how to build an argument to be passed to every rpc call.
After thinking about it more, it seems better to cache as much of that as
possible in the Connection instance, and just use multiple instances if we
need them. There's no reason a Connection instance couldn't be reused to
send multiple messages to the same host. Maybe the cache is built into the
ConnectionFactory itself, in fact, although I'm not sure we want to assume
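
As a concrete sketch of that caching idea (hypothetical names throughout;
the real driver loading and transport logic are elided, and the Connection
interface here is only assumed from the discussion above):

```python
class Connection(object):
    """Hypothetical per-destination connection; driver details elided."""
    def __init__(self, host=None):
        self.host = host  # None means "any host"; the broker round-robins

    def call(self, topic, message):
        # A real driver would route via AMQP or ZeroMQ here.
        return (self.host, topic, message)


class ConnectionFactory(object):
    """Caches one Connection per destination so callers can reuse them."""
    def __init__(self):
        self._cache = {}

    def __call__(self, host=None):
        # Reuse the cached Connection for this host, creating it on demand.
        if host not in self._cache:
            self._cache[host] = Connection(host)
        return self._cache[host]
```

With this, `cf('myfavoritehost')` returns the same Connection instance every
time, so repeated point-to-point calls don't pay a setup cost.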

> How would round-robin messages look where the host is not known in
> advance? Would it use a global namespace, as at present? Unless you intend
> to make factories for sending to topics, too...

This ties in with the topic stuff from above.

AMQP talks to a host by using a special topic that only that host is
listening to. That's a leaky abstraction, and we should get rid of it.

If you want the message to go to "any" host, then don't pass a host to
cf(). The rabbit driver would then use the topic passed to call() and the
broker would handle the round-robin. If you want it to go to a specific
host, pass that value to cf(). The rabbit driver would then combine the
topic with the compute_host value to make the custom topic (e.g.,
'compute.myfavoritehost') that only that host is listening for.

def call(self, host, message):
    true_topic = self.default_topic + '.' + host if host else self.default_topic
    # call kombu to send the message to true_topic
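
Spelled out as a runnable sketch (the class and method names here are
illustrative only, not the actual kombu-based driver code):

```python
class FakeRabbitDriver(object):
    """Illustrative: derives the routing topic for a message."""
    def __init__(self, default_topic):
        self.default_topic = default_topic

    def routing_topic(self, host=None):
        # No host: use the shared topic and let the broker round-robin
        # among all consumers of that queue.
        # Host given: narrow to the queue only that host listens on.
        if host:
            return self.default_topic + '.' + host
        return self.default_topic
```

So `routing_topic()` yields 'compute' while `routing_topic('myfavoritehost')`
yields 'compute.myfavoritehost', matching the behavior described above.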

For ZMQ, the Connection class is going to have to know how to do
round-robin itself (just as it does now). But when the ConnectionFactory
does get a host, it can take whatever shortcuts it knows to communicate
with only that host.
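
For illustration, client-side round-robin over known peers could look
something like this (a hypothetical sketch, not the actual ZMQ driver):

```python
import itertools


class RoundRobinConnection(object):
    """Hypothetical: picks the next peer itself when no host is given."""
    def __init__(self, hosts):
        # Rotate through the known peers in order, forever.
        self._hosts = itertools.cycle(hosts)

    def next_host(self, host=None):
        # An explicit host short-circuits the rotation (point-to-point).
        if host:
            return host
        return next(self._hosts)
```

The factory's host argument then simply bypasses the rotation, which is the
"shortcut" mentioned above.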
