[openstack-dev] [OSLO][RPC] AMQP / ZeroMQ control_exchange vs port numbers
markmc at redhat.com
Mon Apr 29 16:16:55 UTC 2013
On Mon, 2013-04-29 at 11:52 -0400, Doug Hellmann wrote:
> On Mon, Apr 29, 2013 at 11:45 AM, Mark McLoughlin <markmc at redhat.com> wrote:
> On Mon, 2013-04-29 at 10:55 -0400, Doug Hellmann wrote:
> > On Mon, Apr 29, 2013 at 10:23 AM, Mark McLoughlin <markmc at redhat.com> wrote:
> > On Mon, 2013-04-29 at 09:39 -0400, Doug Hellmann wrote:
> > > > 4. An application must be able to direct a message to a specific peer (or
> > > > unknown? or both?) using a different rpc endpoint than the originating
> > > > app's default (i.e., calling into cells).
> > >
> > >
> > > I've tried to separate the notion of a transport from the target. The
> > > properties of the target is what's known to the application code, the
> > > properties of the transport are target specific and come from
> > > configuration.
> > >
> > > So, I'd say "an application must be able to invoke a method on a
> > > specific target using a supplied transport configuration".
> > >
> > >
> > > Something has to know how to map the target to the configuration. What
> > > does that, and how much does that code know about the transport?
> > Ok, on the client side:
> > https://wiki.openstack.org/wiki/Oslo/Messaging#Client_Side_API
> > for the simple case, you'd have something like:
> >
> >   rpc_driver = rpc.get_transport_driver()
> >   base_rpcapi = BaseAPIClient(rpc_driver)
> >   base_rpcapi.ping(context, 'foo')
> >
> > for the more complex case, you'd have:
> >
> >   class MeteringAPIClient(rpc.RPCClient):
> >
> >       target = rpc.Target(exchange='ceilometer',
> >                           topic='metering',
> >                           version='1.0')
> >
> >       def __init__(self, driver):
> >           # FIXME: need some way to override with exchange from URL
> > Ceilometer knows which exchanges to listen on based on its plugins and
> > configuration (we know to look for the glance notification config
> > option when the glance-related plugins are activated, for example).
> Right, and this is relevant to the notifications consumption API.
> > Cells will know because they will have some configuration setting(s)
> > per cell they want to talk to in the database.
> Yes, and this is why I want to support it in the transport URL.
> Right. What I'd like to do is move entirely to URLs, with a
> backwards-compatibility layer for getting the default URL from the
> config settings if there is no URL value set. So if the user sets
> rpc_broker_url, we use that by default. If they don't, we combine the
> values of the existing driver options to make a URL and then use that.
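
A minimal sketch of that backwards-compatibility layer, assuming hypothetical option names (rpc_broker_url and the qpid_* keys here are illustrative, not the actual oslo.config schema):

```python
from urllib.parse import quote


def get_transport_url(conf):
    """Return the transport URL, falling back to legacy driver options.

    A sketch only: 'rpc_broker_url' and the qpid_* option names are
    assumptions for illustration.
    """
    if conf.get('rpc_broker_url'):
        # The user set a full URL; use it as-is.
        return conf['rpc_broker_url']
    # Otherwise, combine the existing driver options into a URL.
    user = quote(conf.get('qpid_username', ''))
    password = quote(conf.get('qpid_password', ''))
    auth = '%s:%s@' % (user, password) if user else ''
    return 'qpid://%s%s:%s/' % (auth,
                                conf.get('qpid_hostname', 'localhost'),
                                conf.get('qpid_port', 5672))


print(get_transport_url({'qpid_hostname': 'broker.example.com'}))
# qpid://broker.example.com:5672/
```

If the user sets the full URL it wins outright; otherwise the legacy per-driver options are stitched together, which matches the default-URL behaviour described above.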
Yeah, but I'm not sure I'd ever remove this layer - it is very
convenient to tell people they just need to set qpid_hostname in
nova.conf for the most straightforward scenario.
I can imagine us having e.g.
# Full transport URL, the default for qpid is below but you shouldn't need to change this
Where this gets tricky is the clustered broker setup - right now, you
just need to set qpid_hosts to a list of hostnames. To make that work
with URLs, we'd need cfg to have a crazy interpolation scheme where
referencing foo returns [foo:blaa, foo:foobar]
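
One way to sidestep that interpolation problem would be to let the URL itself carry the host list, e.g. a comma-separated authority. A sketch of that convention (not an existing oslo API):

```python
def parse_hosts(url):
    """Parse a transport URL whose authority is a comma-separated
    host list, e.g. 'qpid://host1:5672,host2:5672/'.

    One possible convention for clustered brokers, sketched for
    illustration only.
    """
    scheme, rest = url.split('://', 1)
    authority = rest.split('/', 1)[0]
    hosts = []
    for part in authority.split(','):
        host, _, port = part.partition(':')
        # Default to the standard AMQP port when none is given.
        hosts.append((host, int(port) if port else 5672))
    return scheme, hosts


print(parse_hosts('qpid://broker-a:5672,broker-b:5673/'))
# ('qpid', [('broker-a', 5672), ('broker-b', 5673)])
```

With this, a clustered setup needs no cfg interpolation at all - the operator just lists every broker in one URL.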
> > Are those the only cases we have now? Are they the only cases we
> > anticipate?
> Well, we also have the control_exchange config variable. So we need to
> default to control_exchange, but allow the transport driver URL to
> override it?
> The MeteringAPIClient example I had in mind above was the code for a
> Nova notifications plugin - the code runs in Nova, so control_exchange
> will be 'nova' but we know the default should be 'ceilometer' even if it
> gets overridden by the transport URL.
> I thought the ceilometer notifier would instantiate its own RPCClient,
> using its own configuration option to set the transport stuff up. The
> plugin won't even look at nova's "global" settings for RPC.
Yes. But it's nice to know that 'ceilometer' is the default exchange, so
we can use that if the user doesn't specify it in the URL they supply
for connecting to ceilometer. We know that 'ceilometer' is a reasonable
default.
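The exchange-defaulting precedence being discussed (an exchange in the transport URL wins, then the target's own default, then the global control_exchange option) could be sketched as:

```python
def resolve_exchange(url_exchange, target_exchange, control_exchange):
    """Pick the exchange to use, in the precedence discussed above.

    A sketch for illustration; the real oslo.messaging Target and
    transport code may resolve this differently.
    """
    # 1. An exchange given explicitly in the transport URL wins.
    if url_exchange:
        return url_exchange
    # 2. Otherwise use the target's own default (e.g. 'ceilometer').
    if target_exchange:
        return target_exchange
    # 3. Fall back to the global control_exchange option (e.g. 'nova').
    return control_exchange


print(resolve_exchange(None, 'ceilometer', 'nova'))
# ceilometer
```

So a Nova notifications plugin talking to ceilometer gets the right exchange even though control_exchange is 'nova', while a user-supplied URL can still override it.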