[openstack-dev] [OSLO][RPC] AMQP / ZeroMQ control_exchange vs port numbers

Doug Hellmann doug.hellmann at dreamhost.com
Mon Apr 29 15:52:10 UTC 2013


On Mon, Apr 29, 2013 at 11:45 AM, Mark McLoughlin <markmc at redhat.com> wrote:

> On Mon, 2013-04-29 at 10:55 -0400, Doug Hellmann wrote:
> >
> >
> >
> > On Mon, Apr 29, 2013 at 10:23 AM, Mark McLoughlin <markmc at redhat.com> wrote:
> >         On Mon, 2013-04-29 at 09:39 -0400, Doug Hellmann wrote:
> >         >
> >         >
> >         >
> >         > On Mon, Apr 29, 2013 at 7:00 AM, Mark McLoughlin <markmc at redhat.com> wrote:
> >         >         On Fri, 2013-04-26 at 15:18 -0400, Doug Hellmann wrote:
> >         >
> >         >         > We've gone around a few times with ideas for having better driver-parity in
> >         >         > the rpc library, so maybe the best thing to do is start by making sure we
> >         >         > have all of the requirements lined up. Here's a list of some that I came up
> >         >         > with based on existing features and my understanding of the shortcomings
> >         >         > (numbered for reference, but in no particular order):
> >         >
> >         >
> >         >         Thanks for doing this. We definitely need to be stepping back and
> >         >         thinking about this at a high level. I've attempted to step a little
> >         >         further back in my writeup:
> >         >
> >         >           https://wiki.openstack.org/wiki/Oslo/Messaging
> >         >
> >         >         > 1. Application code using RPC connections must not be required to know to
> >         >         > pass different arguments to establish the connection (i.e., the app
> >         >         > shouldn't have to care if it is using Rabbit or ZMQ).
> >         >
> >         >
> >         >         Yes.
> >         >
> >         >         > 2. An application must be able to direct a message to a specific peer
> >         >         > (i.e., call() with a host specified).
> >         >
> >         >
> >         >         s/direct a message to/invoke a method on/
> >         >
> >         >
> >         > RPC and notification aren't the only usage patterns for messaging. If
> >         > we're going to design a general purpose library, we should not limit
> >         > ourselves to RPC semantics. RPC can be built on top of messaging, but
> >         > it has been painful to try to make it work the other way around.
> >
> >
> >         I don't think the goal is to design a general purpose library, it's to
> >         clean up the APIs we already have to support our current usage patterns.
> >         The library can grow new APIs to support new usage patterns over time.
> >
> >         I really don't want to design generic APIs in a vacuum when we have the
> >         much more pressing concern of our current usage patterns. I also don't
> >         want to unnecessarily restrict what messaging systems could be used to
> >         support our current patterns.
> >
> >
> > You've already, mostly, separated the transport stuff from the RPC
> > semantics. I think that's all I was asking for, but I want the way we
> > describe it to not say "invoke a method on" but just stick with
> > message delivery. The dispatcher for method invocation is one
> > server-side detail.
>
> I think about it more in terms of the payload - I don't want users of
> the API ever to see this as "sending a message with 'method' and 'args'
> parameters" ... it should be "invoke a method with the given name and
> args".
>

OK, I see that perspective.


>
> If we want to support messaging with alternative or free-form payloads,
> then we can do that later - right now, we don't have a use case for it.
>
> >         >         But, yes.
> >         >
> >         >         > 3. An application must be able to direct a message to a pool of peers
> >         >         > (i.e., cast()).
> >         >
> >         >
> >         >         ... and for it to be delivered to one of those peers, yes.
> >         >
> >         >         > 4. An application must be able to direct a message to a specific peer (or
> >         >         > unknown? or both?) using a different rpc endpoint than the originating
> >         >         > app's default (i.e., calling into cells).
> >         >
> >         >
> >         >         I've tried to separate the notion of a transport from the target. The
> >         >         properties of the target are what's known to the application code; the
> >         >         properties of the transport are target specific and come from
> >         >         configuration.
> >         >
> >         >         So, I'd say "an application must be able to invoke a method on a
> >         >         specific target using a supplied transport configuration".
> >         >
> >         >
> >         > Something has to know how to map the target to the configuration. What
> >         > does that, and how much does that code know about the transport?
> >
> >
> >         Ok, on the client side:
> >
> >           https://wiki.openstack.org/wiki/Oslo/Messaging#Client_Side_API
> >
> >         for the simple case, you'd have something like:
> >
> >           rpc_driver = rpc.get_transport_driver()
> >
> >           base_rpcapi = BaseAPIClient(rpc_driver)
> >
> >           base_rpcapi.ping(context, 'foo')
> >
> >         for the more complex case, you'd have:
> >
> >           class MeteringAPIClient(rpc.RPCClient):
> >
> >               target = rpc.Target(exchange='ceilometer',
> >                                   topic='metering',
> >                                   version='1.0')
> >
> >               def __init__(self, driver):
> >                   # FIXME: need some way to override with exchange from URL
> >
> >
> > Ceilometer knows which exchanges to listen on based on its plugins and
> > configuration (we know to look for the glance notification config
> > option when the glance-related plugins are activated, for example).
>
> Right, and this is relevant to the notifications consumption API.
>
> > Cells will know because they will have some configuration setting(s)
> > per cell they want to talk to in the database.
>
> Yes, and this is why I want to support it in the transport URL.
>

Right. What I'd like to do is move entirely to URLs, with a
backwards-compatibility layer that builds the default URL from the config
settings when no URL value is set. So if the user sets rpc_broker_url, we
use that. If they don't, we combine the values of the existing driver
options into a URL and use that instead.
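
Roughly something like this, as an untested sketch (rpc_broker_url is the
new option from above; the rabbit_* names just stand in for whichever
legacy kombu driver options we end up mapping, and the exact URL layout is
still to be decided):

    def get_default_url(conf):
        # The new-style URL wins if it is set.
        if conf.rpc_broker_url:
            return conf.rpc_broker_url
        # Otherwise synthesize a URL from the legacy driver options so
        # existing config files keep working unchanged.
        return 'kombu://%s:%s@%s:%s/%s' % (conf.rabbit_userid,
                                           conf.rabbit_password,
                                           conf.rabbit_host,
                                           conf.rabbit_port,
                                           conf.rabbit_virtual_host)

    rpc_driver = rpc.get_transport_driver(url=get_default_url(conf))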


>
> > Are those the only cases we have now? Are they the only cases we
> > anticipate?
>
> Well, we also have the control_exchange config variable. So we need to
> default to control_exchange, but allow the transport driver URL to
> override it?
>
> The MeteringAPIClient example I had in mind above was the code for a
> Nova notifications plugin - the code runs in Nova, so control_exchange
> will be 'nova' but we know the default should be 'ceilometer' even if it
> gets overridden by the transport URL.
>

I thought the ceilometer notifier would instantiate its own RPCClient,
using its own configuration option to set up the transport. The plugin
won't even look at nova's "global" settings for RPC.
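
That is, something along these lines (a sketch only -- metering_transport_url
is a made-up option name for illustration, and MeteringAPIClient is the class
from your example above):

    # Inside the ceilometer notification plugin running in the nova
    # process: build a driver from the plugin's own option rather than
    # nova's control_exchange / broker settings.
    # (metering_transport_url is hypothetical, for illustration only.)
    rpc_driver = rpc.get_transport_driver(url=conf.metering_transport_url)

    metering_rpcapi = MeteringAPIClient(rpc_driver)
    metering_rpcapi.record_metering_data(context, data)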


>
> >           ---
> >
> >           rpc_driver = rpc.get_transport_driver(url='kombu://broker//ceilometer')
> >
> >           metering_rpcapi = MeteringAPIClient(rpc_driver)
> >           metering_rpcapi.record_metering_data(...)
> >
> >         The annoying bit here is the application code should know what the
> >         default exchange is, but there is a use case for it to be overridden by
> >         the configuration.
> >
> >
> > If the application gets the URL to pass to get_transport_driver() from
> > a config file or the database, does it even need to know there is such
> > a thing as an "exchange" any more?
>
> It needs to know what the default is - i.e. Nova needs to set it to
> 'nova' process-wide and Ceilometer's notification plugin needs it to be
> 'ceilometer'
>

See above.

Doug


>
> Cheers,
> Mark.
>
>
>