[openstack-dev] [OSLO][RPC] AMQP / ZeroMQ control_exchange vs port numbers

Eric Windisch eric at cloudscaling.com
Fri Apr 26 00:25:13 UTC 2013


Doug,  

Cells require sending to a separate broker - at least where a broker is supported. Again, what that means for ZeroMQ isn't quite clear. At a minimum, it likely means sending data to a different matchmaker, but that would also be true of exchanges. (I'd argue that with the AMQP protocols, specifying the broker that owns an exchange is also necessary any time an exchange is specified.)

The host/port settings terminology as it pertains to drivers, mixed with the use of "host" to mean a peer, is confusing. Perhaps we should just standardize on 'peer' when discussing the eventual destination of messages?

Regards,
Eric Windisch


On Thursday, April 25, 2013 at 8:08 PM, Doug Hellmann wrote:

>  
>  
>  
> On Thu, Apr 25, 2013 at 4:39 PM, Doug Hellmann <doug.hellmann at dreamhost.com (mailto:doug.hellmann at dreamhost.com)> wrote:
> >  
> >  
> >  
> > On Thu, Apr 25, 2013 at 4:06 PM, Eric Windisch <eric at cloudscaling.com (mailto:eric at cloudscaling.com)> wrote:
> > > >  
> > > > What about making all of those arguments except topic part of the constructor for a Connection, and just instantiating more than one of them to talk to different services?
> > > If we can figure out what they are, or how to make this per-driver and store them in a reasonable way - I think we might be on the right path.
> > > >  
> > > >  
> > > >  
> > > > It seems like the host and port options for rabbit and qpid map to the same values for the matchmaker, don't they? The fact that the actual message doesn't go to the matchmaker is an implementation detail of ZMQ that it can hide.
> > > The matchmaker is pluggable. Some matchmakers use a flat file or local database, not a network service (although a network service may, practically, become a requirement in the long run?). There may not be a single IP- or DNS-based endpoint for this; it may be multiple hosts, or a multicast address for communications, etc. I think it is premature to suggest that all the backends will have a host/port/user/password associated, and that other configuration fields won't be necessary.
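> > >
> > > To illustrate with a sketch (the class and method names here are hypothetical, not the actual oslo interface): the pluggable surface is basically "topic in, peers out", and nothing about it implies a single host/port/user/password:
> > >
> > > import json
> > >
> > > class MatchMakerBase(object):
> > >     def queues(self, topic):
> > >         # return the list of peers consuming this topic
> > >         raise NotImplementedError()
> > >
> > > class FlatfileMatchMaker(MatchMakerBase):
> > >     # flat-file backend: no network service, and no
> > >     # host/port/user/password to configure at all
> > >     def __init__(self, ringfile):
> > >         with open(ringfile) as f:
> > >             self.ring = json.load(f)  # e.g. {"compute": ["node1", "node2"]}
> > >
> > >     def queues(self, topic):
> > >         return self.ring.get(topic, [])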
> >  
> >  
> > OK, I was thinking of something like this:
> >  
> > # in openstack.common.rpc  
> >  
> > def make_connection_factory(config, rpc_driver_name):
> >     driver = load_driver(rpc_driver_name)
> >     conn_factory = driver.make_connection_factory(config)
> >     return conn_factory
>  
>  
>  
> After thinking about this a little more I realized that this doesn't solve the problem of allowing one service to talk to another on a separate "exchange." If that's still something we want to do, we need another argument passed to the ConnectionFactory to represent that exchange explicitly (I know we need another name, but until we come up with one I'll stick with "exchange"). The default can come from the config, but callers like ceilometer need a way to specify an alternative value.  
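>
> Something like this, maybe (a sketch; the "exchange" keyword argument is the hypothetical addition, building on load_driver from the earlier sketch):
>
> def make_connection_factory(config, rpc_driver_name, exchange=None):
>     # exchange defaults to the config's control_exchange; callers like
>     # ceilometer can pass an alternative value explicitly.
>     driver = load_driver(rpc_driver_name)
>     return driver.make_connection_factory(config, exchange=exchange)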
>  
> Do we care if the exchange for a service is not on the same host? Do we need to allow users to provide different host/port settings for every exchange?
>  
> Doug
>  
> >  
> > # in the driver  
> >  
> > class ConnectionFactory:
> >     def __init__(self, config):
> >         # know how to find useful configuration settings for the driver,
> >         # like the amqp server or the location of the matchmaker.
> >         self.config = config
> >
> >     def __call__(self, host):
> >         # driver-specific logic for combining "host" with the values
> >         # pulled in __init__ to have enough connection parameters
> >         return Connection(those_parameters)
> >  
> > # in the app
> >  
> > cf = make_connection_factory(cfg.CONF, cfg.CONF.rpc.driver_name)
> > c = cf(host)
> > response = c.call(topic, message)
> >  
> > >  
> > > Also, I think that, at least optionally, the matchmaker might become tied to the AMQP drivers. That might be a good thing, at least in leveling the field. The more we talk about message encryption (and, in some models, even signing), the more we might find having the matchmaker advantageous. TBD.
> > > > Right, it seems like the host portion needs to be passed as a separate argument and the AMQP drivers can combine it with the topic, instead of going the other way around.
> > >  
> > > I'm suggesting we let "service.host" be a valid topic, but *also* pass the host variable into the call/cast so that drivers can do what they will with it. It won't be *necessary* for the topic to be called "service.host", but it won't be illegal.
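> > >
> > > Concretely, something like this (a sketch of the assumed signature, not the current API):
> > >
> > > def cast(conf, context, topic, msg, host=None):
> > >     # AMQP drivers may combine these into "topic.host" internally;
> > >     # the ZeroMQ driver can use host to address the peer directly.
> > >     ...
> > >
> > > cast(conf, ctxt, 'compute', msg, host='node1')  # limited to node1
> > > cast(conf, ctxt, 'compute.node1', msg)          # still legal, per above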
> >  
> >  
> > I'd rather not mix and match. If the host value is just a way to limit the recipient of a message going to a topic (or to have the sender choose instead of the message broker), then the fact that we define a separate queue for that is an implementation detail of the driver, just like the fact that the ZMQ driver doesn't have a broker but is always communicating point-to-point.  
> >  
> > >  
> > > > > > For ZeroMQ, a control_exchange is effectively a zmq_port *AND* its
> > > > > > associated matchmaker, which is pluggable. Plugins usually will have
> > > > > > their own host/port/auth. The MatchMaker basically maintains a mapping
> > > > > > of topics to consuming peers. Technically, we could lookup all topics
> > > > > > here, even "compute.host" topics, but we can circumvent a global
> > > > > > lookup by sending messages directly to "host".
> > > > >  
> > > > >  
> > > > >  
> > > > > Understood (because well explained, thanks :)
> > > >  
> > > > How does the ZMQ driver know where the host is without asking the matchmaker? Is that "host" value assumed to be resolvable via DNS?
> > > Either it infers it from the call/cast command (presently by inspecting the topic - which is no longer a valid source of authority), or, if there is no host to be inferred, it looks it up in the matchmaker.
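> > >
> > > Roughly this, in pseudo-code (the matchmaker method name is hypothetical):
> > >
> > > def _resolve_peers(matchmaker, topic, host=None):
> > >     if host:
> > >         return [host]                # direct addressing, no global lookup
> > >     return matchmaker.queues(topic)  # fall back to the matchmaker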
> > >  
> > > The host itself is whatever is passed as CONF.rpc_zmq_host, which MUST match whatever Nova's CONF.host is set to. This defaults to the FQDN, such that DNS resolution is required. In *practice*, we override both of these variables to the system's IP address (for the interface on which messaging should happen).
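> > >
> > > i.e., something like this in the option definitions (a sketch from memory; the default shown is illustrative):
> > >
> > > import socket
> > > from oslo.config import cfg
> > >
> > > CONF = cfg.CONF
> > > CONF.register_opts([
> > >     cfg.StrOpt('rpc_zmq_host', default=socket.gethostname(),
> > >                help='Name of this node; MUST match CONF.host. '
> > >                     'In practice, often overridden to an IP address.'),
> > > ])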
> >  
> > OK, so host is intended to be the sort of thing you could pass as part of an address when creating a socket. If we make the host argument to ConnectionFactory.__call__ optional then the topic values can be consistent (i.e., not include the host) and the caller can still control who receives the message by constructing the Connection appropriately. That way the semantics of the call are controlled by the right arguments. The topic says "the message is about this" and the host limits the "audience" that gets the message.  
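> >
> > In code, the proposal would look like this (sketch, assuming host becomes optional):
> >
> > cf = make_connection_factory(cfg.CONF, cfg.CONF.rpc.driver_name)
> > anyone = cf()                # no host: any consumer of the topic
> > only_n1 = cf(host='node1')   # host given: audience limited to node1
> > only_n1.call('compute', message)  # topic stays "compute", not "compute.node1"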
> >  
> > > >  
> > > >  
> > > > > > Perhaps an rpc-abstraction level concept of a "cell" or "project" is
> > > > > > needed? That would be an identifier encompassing all the connection
> > > > > > details (i.e. host/control_exchange, zmq_port/matchmaker), however
> > > > > > such may be defined for the driver being used?
> > > > >  
> > > > >  
> > > > >  
> > > > > That sounds like a good idea to me. That's what I called "connection" in
> > > > > my previous mail actually. This may be just an abstract class
> > > > > implemented by each driver in its own way.
> > > >  
> > > >  
> > > >  
> > > > What defines a "cell" for RPC? Just the alternate host and port for rabbit, qpid, or matchmaker? If so, then the caller just needs to create a new Connection with the right settings, and we don't need to have the concept of "cell" leak into the RPC library API at all. It's there now because we're using a global connection object, but sometimes want to talk to something different. If we stop having a global, the awkwardness will go away and the API can be simpler.
> > > You have this basically correct. Part of the problem is that we don't have that abstraction currently, and the parameters passed from the cell code for host/etc. are a poor match for the MatchMaker requirements… and are confused with the unclear separation of zmq_port and control_exchange for ZeroMQ -- and with the question of whether ZeroMQ messages should, or should not, be brokered through to child cells (which may be necessary for some network security purposes, but unnecessary for queue scaling).
> > >  
> > > What I'd like to have, ideally, is the cell configuration essentially reconfiguring the ZeroMQ settings as necessary, per the deployer's preference. A brokered-to-cell solution wouldn't be available, at least not presently, but such a design would leave room for it as a future feature, should anyone request it. (I also don't want to bikeshed or overdesign too much around what people *might* do or want down the road.)
> >  
> > Given the fact that we have completely different calls right now, it seems like the caller must already know when it is talking to a cell. Instantiating a separate Connection for those cases should be fine, as long as we can find the matchmaker for that cell using the configuration data passed to make_connection_factory(). Maybe we need a cell_name argument to that call?  
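> >
> > e.g. (sketch; cell_name is the hypothetical new argument):
> >
> > def make_connection_factory(config, rpc_driver_name, cell_name=None):
> >     driver = load_driver(rpc_driver_name)
> >     # cell_name would select an alternate broker/matchmaker section
> >     # of the configuration; None means the local (default) cell.
> >     return driver.make_connection_factory(config, cell_name=cell_name)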
> >  
> > Doug
> >  
> > >  
> > > Regards,
> > > Eric Windisch
> >  
>  