[openstack-dev] [OSLO] Comments/Questions on Messaging Wiki

Mark McLoughlin markmc at redhat.com
Tue Jul 16 20:57:14 UTC 2013


On Tue, 2013-07-16 at 16:34 -0400, William Henry wrote:
> 
> ----- Original Message -----
> > 
> > 
> > ----- Original Message -----
> > > Hi William,
> > > 
> > > I think Doug has done a good job of answering all these, but here's
> > > another set of answers to make sure there's no confusion :)
> > > 
> > > On Fri, 2013-07-12 at 17:40 -0400, William Henry wrote:
> > > > Hi all,
> > > > 
> > > > I've been reading through the Messaging Wiki and have some comments.
> > > 
> > > The docs generated from the code are now up on:
> > > 
> > >   http://docs.openstack.org/developer/oslo.messaging/
> > > 
> > > There should be some useful clarifying stuff in there too. Indeed some
> > > of the thinking has moved on a bit since the wiki page was written.
> > > 
> > > >  Not criticisms, just comments and questions.
> > > > I have found this to be a very useful document. Thanks.
> > > > 
> > > > 1. "There are multiple backend transport drivers which implement the
> > > > API semantics using different messaging systems - e.g. RabbitMQ, Qpid,
> > > > ZeroMQ. While both sides of a connection must use the same transport
> > > > driver configured in the same way, the API avoids exposing details of
> > > > transports so that code written using one transport should work with
> > > > any other transport."
> > > > 
> > > > The good news for AMQP 1.0 users is that technically "both sides of
> > > > the connection" do not have to use the same transport driver. In pre-AMQP
> > > > 1.0 days this was the case. But today interoperability between AMQP
> > > > 1.0 implementations has been demonstrated.
> > > 
> > > Yeah, the point was more that, for example, you need to use the zmq
> > > driver on both sides.
> > > 
> > > I could imagine us having multiple "amqp 1.0" interoperable drivers. I
> > > don't know what the use case would be for using one of those drivers on
> > > one side and another on the other side, but there's no reason why it
> > > should be impossible.
> > > 
> > > > 2. I notice under the RPC concepts section that you mention Exchanges
> > > > as a container in which topics are scoped. Is this exchange a pre-AMQP
> > > > 1.0 artifact or just a general term for oslo.messaging that is loosely
> > > > based on the pre-AMQP 1.0 artifact called an Exchange? i.e. are you
> > > > assuming that messaging implementations have something called an
> > > > exchange? Or do you mean that messaging implementations can scope a
> > > > topic and in oslo we call that scoping an exchange?
> > > 
> > > Yeah, it really is only loosely related to the AMQP concept.
> > > 
> > > It's purely a namespace thing. You could e.g. have two Nova deployments
> > > with exactly the same messaging transport (and e.g. sending messages
> > > over the same broker, using the same topic names, etc.) and you could
> > > keep them separated from one another by using a different exchange name
> > > for each.
> > > 
> > > The reason we've stuck with the name "exchange" is that we have a
> > > "control_exchange" configuration variable (defaulting to e.g. 'nova')
> > > that serves roughly this purpose now and we want to continue using it
> > > rather than renaming it to something else.
> > > 
> > > Which raises a point about all of this - we need to be able to
> > > interoperate with existing OpenStack deployments using the current RPC
> > > code. So, we really don't have the luxury of changing on-the-wire
> > > formats, basic messaging semantics, configuration settings, etc.
> > > 
> > > oslo.messaging is mostly about cleaning up the Python API that services
> > > use to issue/receive RPCs and send notifications.
> > > 
> > > > 3. Some messaging nomenclature: The way the wiki describes RPC "
> > > > Invoke Method on One of Multiple Servers " is more like a queue than a
> > > > topic. In messaging a queue is something that multiple consumers can
> > > > attach to and one of them gets and services a message/request. A topic
> > > > is where 1+ consumers are "connected" and each receives the message
> > > > and each can service it as it sees fit. In pre-AMQP 1.0 terms what
> > > > this seems to describe is a direct exchange. And a direct exchange can
> > > > have multiple consumers listening to a queue on that exchange.
> > > > (Remember that fanout is just a generalization of topic in that all
> > > > consumers get all fanout messages - there are no sub-topics etc.)
> > > > 
> > > > In AMQP 1.0 the addressing doesn't care or know about exchanges but it
> > > > can support this queue type behavior on an address or topic type
> > > > behavior on an address.
> > > > 
> > > > I know this isn't about AMQP specifically but therefore this is even
> > > > more important. Topics are pub/sub with multiple consumers/services
> > > > responding to a single message. With queues, the next available
> > > > consumer gets the next message.
> > > > 
> > > > (BTW I've seen this kind of confusion also in early versions of
> > > > MCollective in Puppet.)
> > > > 
> > > > It might be better to change some of the references to "topic" to
> > > > "address". This would solve the problem. i.e. a use case where one of
> > > > many servers listening on an address services a message/request. And
> > > > later all of servers listening on an address service a
> > > > message/request. Addressing also solves the one-to-one as the address
> > > > is specific to the server (and the others don't have to receive and
> > > > reject the message).
> > > 
> > > It sounds to me like the qpid-proton based transport driver could easily
> > > map the semantics we expect from topic/fanout to AMQP 1.0 addresses.
> > > 
> > > The 'topic' nomenclature is pretty baked into the various services doing
> > > RPC and notifications, especially in the naming of configuration
> > > options.
> > > 
> > > The basic semantics are that a nova compute service listens on the 'compute'
> > > topic on the 'nova' exchange and a client can cause a method to be
> > > invoked on the service with either of the following targets:
> > > 
> > >   Target(exchange='nova', topic='compute')
> > >   Target(exchange='nova', topic='compute', server='compute1')
> > >   Target(exchange='nova', topic='compute', fanout=True)
> > > 
> > > In the first case, any compute service will do. In the second, you want
> > > to invoke the method on a particular compute service. In the third
> > > case, you want to invoke it on all compute services.
> > 
> > This really helps understand some of what I've read. Thanks.
> > 
> > It seems that exchange is really just a high-level qualifier of a namespace
> > for the most part.

Yep.
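
As a rough sketch with the draft oslo.messaging API (the 'nova-a'/'nova-b'
exchange names and the endpoint class are just made up for illustration, and
the exact spelling may still shift), that namespacing looks something like:

  from oslo.config import cfg
  from oslo import messaging

  class ComputeEndpoint(object):
      # Placeholder endpoint exposing the methods the service handles
      def ping(self, ctxt):
          return 'pong'

  transport = messaging.get_transport(cfg.CONF)
  endpoints = [ComputeEndpoint()]

  # Deployment A's compute service
  target_a = messaging.Target(exchange='nova-a', topic='compute',
                              server='compute1')
  server_a = messaging.get_rpc_server(transport, target_a, endpoints)

  # Deployment B's compute service: same broker, same topic and server
  # names, but a different exchange keeps its messages separate from A's
  target_b = messaging.Target(exchange='nova-b', topic='compute',
                              server='compute1')
  server_b = messaging.get_rpc_server(transport, target_b, endpoints)

  server_a.start()
  server_b.start()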

> > Q: if in the above last Target, fanout was false (fanout=False), would that
> > mean that you are expecting queue-type behavior in that instance? i.e. I
> > want only one consumer, I don't care which one, but only one consumer to
> > service this request? So that syntax would change the semantics from pub/sub
> > topic (i.e. all subscribers to the topic get it) to a queue semantic (the
> > first consumer to acquire the message causes it to dequeue and not be
> > available to others)?
> 
> Actually, rereading this I think it does, i.e. the first example is essentially fanout=False.

Right, exactly.
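
Spelling the three cases out on the client side with the API as it stands
(again just a sketch against the draft API; 'do_something' and the ctxt dict
are made up for illustration):

  from oslo.config import cfg
  from oslo import messaging

  transport = messaging.get_transport(cfg.CONF)
  target = messaging.Target(exchange='nova', topic='compute')
  client = messaging.RPCClient(transport, target)
  ctxt = {}  # whatever request context dict the service passes around

  # No server, no fanout: queue semantics - any one compute service
  # consumes the message
  client.cast(ctxt, 'do_something', arg='value')

  # server='compute1': only that particular service consumes the message
  client.prepare(server='compute1').cast(ctxt, 'do_something', arg='value')

  # fanout=True: every compute service listening on the topic gets a copy
  client.prepare(fanout=True).cast(ctxt, 'do_something', arg='value')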

And this doesn't make sense:

  Target(exchange='nova', topic='compute', server='compute1', fanout=True)

Cheers,
Mark.



