[openstack-dev] [oslo.messaging]Optimize RPC performance by reusing callback queue

Ken Giusti kgiusti at gmail.com
Thu Jun 8 17:13:22 UTC 2017


Hi,

Keep in mind the rabbit driver creates a single reply queue per *transport* -
that is, one per call to oslo.messaging's
get_transport/get_rpc_transport/get_notification_transport.

If you have multiple RPCClients sharing the same transport, then all
clients issuing RPC calls over that transport will use the same reply queue
(and multiplex incoming replies using a unique id in the reply itself).
See
https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/_drivers/amqpdriver.py?h=stable/newton#n452
for all the details.
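The demultiplexing that code does can be sketched in a few lines. This is a hypothetical, self-contained illustration (the names ReplyWaiter/listen/reply_received are mine, not the actual oslo.messaging internals): each in-flight call registers a unique msg_id, and the single consumer of the shared reply queue routes each incoming reply to the matching waiter.

```python
# Minimal sketch of one shared reply queue serving many concurrent RPC
# calls: replies carry a unique msg_id, and a per-transport registry
# routes each reply to the caller that is waiting for it.
# (Hypothetical names - not the real oslo.messaging implementation.)
import queue
import threading
import uuid


class ReplyWaiter:
    """One instance per transport: demultiplexes replies by msg_id."""

    def __init__(self):
        self._waiters = {}          # msg_id -> per-call Queue
        self._lock = threading.Lock()

    def listen(self):
        """Register a new call; returns (msg_id, queue to wait on)."""
        msg_id = uuid.uuid4().hex
        q = queue.Queue()
        with self._lock:
            self._waiters[msg_id] = q
        return msg_id, q

    def reply_received(self, msg_id, reply):
        """Called by the single consumer of the shared reply queue."""
        with self._lock:
            q = self._waiters.pop(msg_id, None)
        if q is not None:
            q.put(reply)


waiter = ReplyWaiter()
mid1, q1 = waiter.listen()
mid2, q2 = waiter.listen()
# Replies may arrive in any order; each still reaches the right caller.
waiter.reply_received(mid2, "reply-2")
waiter.reply_received(mid1, "reply-1")
print(q1.get(timeout=1))  # reply-1
print(q2.get(timeout=1))  # reply-2
```

So the per-call queue the original mail worries about is in-process only; on the broker side there is just the one reply queue per transport.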

But the driver cannot share a reply queue across transports - and certainly
not across processes :)

-K



On Wed, Jun 7, 2017 at 10:29 PM, int32bit <krystism at gmail.com> wrote:

> Hi,
>
> Currently, I find our RPC client always needs to create a new callback
> queue for every call request in order to track which reply belongs to it,
> at least in Newton. That's pretty inefficient and leads to poor
> performance. I also find some RPC implementations that have no need to
> create a new queue: they track the request and response by a correlation
> id in the message header (RabbitMQ supports this well; I'm not sure
> whether it is part of the AMQP standard). The official RabbitMQ
> documentation provides a simple demo, see [1].
>
> So I am confused about why oslo.messaging doesn't use this approach
> to optimize RPC performance. Is there a deliberate reason, or am I
> missing some potential cases?
>
> Thanks for any reply and discussion!
>
>
> [1] https://www.rabbitmq.com/tutorials/tutorial-six-python.html
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
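For reference, the correlation-id pattern from the tutorial cited as [1] above can be sketched without a broker like this. The lists below stand in for RabbitMQ queues, purely for illustration; a real client would use pika, let the broker name the reply queue, and set correlation_id/reply_to as message properties.

```python
# Hedged sketch of the correlation_id/reply_to pattern from the RabbitMQ
# RPC tutorial, with plain lists standing in for broker queues so it runs
# standalone (the real thing would use pika against a live RabbitMQ).
import uuid

queues = {"rpc_queue": [], "reply_queue": []}   # queue name -> pending messages


def send_request(payload):
    """Client side: publish a request carrying correlation_id and reply_to."""
    corr_id = uuid.uuid4().hex
    queues["rpc_queue"].append(
        {"correlation_id": corr_id, "reply_to": "reply_queue", "body": payload})
    return corr_id


def serve_one():
    """Server side: consume one request and echo its correlation_id back."""
    req = queues["rpc_queue"].pop(0)
    queues[req["reply_to"]].append(
        {"correlation_id": req["correlation_id"], "body": req["body"] * 2})


def wait_reply(corr_id):
    """Client side: accept only the reply matching our correlation_id."""
    for i, msg in enumerate(queues["reply_queue"]):
        if msg["correlation_id"] == corr_id:
            return queues["reply_queue"].pop(i)["body"]
    return None


a = send_request(10)
b = send_request(20)
serve_one()
serve_one()
print(wait_reply(b))  # 40
print(wait_reply(a))  # 20
```

Note this is essentially the same msg_id multiplexing the rabbit driver already performs over its one reply queue per transport; the tutorial's pattern does not remove the need for a reply queue, it just reuses a single one.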


-- 
Ken Giusti  (kgiusti at gmail.com)

