[openstack-dev] [OSLO][RPC] AMQP / ZeroMQ control_exchange vs port numbers

Doug Hellmann doug.hellmann at dreamhost.com
Mon Apr 29 14:55:31 UTC 2013


On Mon, Apr 29, 2013 at 10:23 AM, Mark McLoughlin <markmc at redhat.com> wrote:

> On Mon, 2013-04-29 at 09:39 -0400, Doug Hellmann wrote:
> >
> >
> >
> > On Mon, Apr 29, 2013 at 7:00 AM, Mark McLoughlin <markmc at redhat.com> wrote:
> >         On Fri, 2013-04-26 at 15:18 -0400, Doug Hellmann wrote:
> >
> >         > We've gone around a few times with ideas for having better
> >         > driver-parity in the rpc library, so maybe the best thing to do
> >         > is start by making sure we have all of the requirements lined
> >         > up. Here's a list of some that I came up with based on existing
> >         > features and my understanding of the shortcomings (numbered for
> >         > reference, but in no particular order):
> >
> >
> >         Thanks for doing this. We definitely need to be stepping back and
> >         thinking about this at a high level. I've attempted to step a
> >         little further back in my writeup:
> >
> >           https://wiki.openstack.org/wiki/Oslo/Messaging
> >
> >         > 1. Application code using RPC connections must not be required
> >         > to know to pass different arguments to establish the connection
> >         > (i.e., the app shouldn't have to care if it is using Rabbit or
> >         > ZMQ).
> >
> >
> >         Yes.
> >
> >         > 2. An application must be able to direct a message to a
> >         > specific peer (i.e., call() with a host specified).
> >
> >
> >         s/direct a message to/invoke a method on/
> >
> >
> > RPC and notification aren't the only usage patterns for messaging. If
> > we're going to design a general purpose library, we should not limit
> > ourselves to RPC semantics. RPC can be built on top of messaging, but
> > it has been painful to try to make it work the other way around.
>
> I don't think the goal is to design a general purpose library, it's to
> clean up the APIs we already have to support our current usage patterns.
> The library can grow new APIs to support new usage patterns over time.
>
> I really don't want to design generic APIs in a vacuum when we have the
> much more pressing concern of our current usage patterns. I also don't
> want to unnecessarily restrict what messaging systems could be used to
> support our current patterns.
>

You've already, mostly, separated the transport details from the RPC
semantics. I think that's all I was asking for, but I'd rather we describe
it as message delivery than as "invoke a method on". The dispatcher that
turns a message into a method invocation is one server-side detail.

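Roughly what I mean, as a sketch (none of these names are the actual oslo
API; they're made up for illustration): the listener only hands over
messages, and dispatching a message to a method is a layer the server
chooses to put on top:

  # Hypothetical sketch -- not the oslo API. The transport side only
  # delivers messages; mapping a message to a method call is a separate,
  # server-side dispatcher.

  class FakeListener(object):
      """Stands in for a transport driver; just yields raw messages."""

      def __init__(self, messages):
          self._messages = messages

      def poll(self):
          for msg in self._messages:
              yield msg

  class MethodDispatcher(object):
      """Maps an incoming message dict onto a method of an endpoint."""

      def __init__(self, endpoint):
          self._endpoint = endpoint

      def dispatch(self, message):
          method = getattr(self._endpoint, message['method'])
          return method(message.get('context'), **message.get('args', {}))

  class MeteringEndpoint(object):
      def record_metering_data(self, context, data=None):
          print('recorded %r' % (data,))

  listener = FakeListener([{'method': 'record_metering_data',
                            'args': {'data': {'counter': 'cpu'}}}])
  dispatcher = MethodDispatcher(MeteringEndpoint())
  for message in listener.poll():
      dispatcher.dispatch(message)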

>
> >         But, yes.
> >
> >         > 3. An application must be able to direct a message to a pool
> >         > of peers (i.e., cast()).
> >
> >
> >         ... and for it to be delivered to one of those peers, yes.
> >
> >         > 4. An application must be able to direct a message to a
> >         > specific peer (or unknown? or both?) using a different rpc
> >         > endpoint than the originating app's default (i.e., calling
> >         > into cells).
> >
> >
> >         I've tried to separate the notion of a transport from the
> >         target. The properties of the target are what's known to the
> >         application code; the properties of the transport are target
> >         specific and come from configuration.
> >
> >         So, I'd say "an application must be able to invoke a method on a
> >         specific target using a supplied transport configuration".
> >
> >
> > Something has to know how to map the target to the configuration. What
> > does that, and how much does that code know about the transport?
>
> Ok, on the client side:
>
>   https://wiki.openstack.org/wiki/Oslo/Messaging#Client_Side_API
>
> for the simple case, you'd have something like:
>
>   rpc_driver = rpc.get_transport_driver()
>
>   base_rpcapi = BaseAPIClient(rpc_driver)
>
>   base_rpcapi.ping(context, 'foo')
>
> for the more complex case, you'd have:
>
>   class MeteringAPIClient(rpc.RPCClient):
>
>       target = rpc.Target(exchange='ceilometer',
>                           topic='metering',
>                           version='1.0')
>
>       def __init__(self, driver):
>           # FIXME: need some way to override with exchange from URL
>

Ceilometer knows which exchanges to listen on based on its plugins and
configuration (we know to look for the glance notification config option
when the glance-related plugins are activated, for example).

Cells will know because they will have some configuration setting(s) per
cell they want to talk to in the database.

Are those the only cases we have now? Are they the only cases we anticipate?

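For the glance case, the shape I have in mind is roughly the following
(illustrative only -- the option and method names are made up, not the
actual Ceilometer plugin code): the plugin itself declares which
control_exchange option it needs, so only it has to know about exchanges:

  # Illustrative only; not the actual Ceilometer plugin interface.
  from oslo.config import cfg

  OPTS = [
      cfg.StrOpt('glance_control_exchange',
                 default='glance',
                 help='Exchange name for Glance notifications.'),
  ]
  cfg.CONF.register_opts(OPTS)

  class GlanceNotificationsPlugin(object):
      """Hypothetical plugin that reports the exchange/topics it consumes."""

      @staticmethod
      def get_exchange_topics(conf):
          # Only the plugin knows it cares about the glance exchange.
          return [(conf.glance_control_exchange, ['notifications.info'])]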

>   ---
>
>   rpc_driver = rpc.get_transport_driver(url='kombu://broker//ceilometer')
>
>   metering_rpcapi = MeteringAPIClient(rpc_driver)
>   metering_rpcapi.record_metering_data(...)
>
> The annoying bit here is that the application code should know what the
> default exchange is, but there is a use case for it to be overridden by
> configuration.
>

If the application gets the URL to pass to get_transport_driver() from a
config file or the database, does it even need to know there is such a
thing as an "exchange" any more?

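If it did come from the URL, the parsing could live entirely in the driver
-- something like this sketch (a hypothetical helper, not part of any
proposed API):

  # Hypothetical sketch: the exchange rides along in the transport URL, so
  # only the driver parses it out; application code never sees "exchange".
  try:
      from urllib.parse import urlparse   # Python 3
  except ImportError:
      from urlparse import urlparse       # Python 2

  def parse_transport_url(url):
      """Split a URL like kombu://broker//ceilometer into driver config."""
      parsed = urlparse(url)
      # parsed.path is '//ceilometer'; the last segment names the exchange
      exchange = parsed.path.rsplit('/', 1)[-1] or None
      return {'driver': parsed.scheme,    # 'kombu'
              'host': parsed.hostname,    # 'broker'
              'exchange': exchange}       # 'ceilometer'

  print(parse_transport_url('kombu://broker//ceilometer'))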

>
> >         > 5. An application must be able to listen for messages without
> >         > interfering with others receiving those same messages (i.e.,
> >         > join_consumer_pool()).
> >
> >
> >         For notifications, yes - and we should have an API for consuming
> >         notifications.
> >
> >         But for RPC (i.e. create_worker()), I don't really see it. See
> >         here:
> >
> >
> >         https://wiki.openstack.org/wiki/Oslo/Messaging#Ceilometer_Metering_Messages
> >
> >         Should ceilometer be using notifications instead of
> >         record_metering_data()
> >
> >
> > Probably.
>
> Hmm, ok :)
>

See the other message where I explained that in a little more detail. :-)

Doug


>
> Cheers,
> Mark.
>
>
>