[openstack-dev] [oslo.messaging]Optimize RPC performance by reusing callback queue
sileht at sileht.net
Thu Jun 8 08:06:26 UTC 2017
On Thu, Jun 08, 2017 at 10:29:16AM +0800, int32bit wrote:
>Currently, I find that our RPC client always needs to create a new callback
>queue for every call request, to track which reply belongs to it, at least in
>Newton. That's pretty inefficient and leads to poor performance. I also find
>that some RPC implementations don't need to create a new queue; they track the
>request and response by a correlation id in the message header (RabbitMQ
>supports this well, though I'm not sure whether it's part of the AMQP
>standard). The official RabbitMQ documentation provides a simple demo.
>So I am confused as to why our oslo.messaging doesn't use this approach
>to optimize RPC performance. Is there a reason for it, or am I missing
>some potential cases?
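For reference, the pattern the question describes can be sketched in plain Python, without a broker: a single shared reply path plus a correlation id per call, instead of one callback queue per call. This is a minimal illustration only — the class and function names below are invented, and a real RabbitMQ version would set the `correlation_id` and `reply_to` message properties instead of passing callbacks.

```python
import itertools
import queue
import threading

class RpcClient:
    """Sketch of correlation-id based RPC: all replies come back over
    one shared path and are routed to the right caller by id."""

    def __init__(self, server):
        self._server = server
        self._ids = itertools.count()
        self._pending = {}           # correlation_id -> Queue holding the reply
        self._lock = threading.Lock()

    def call(self, request):
        corr_id = str(next(self._ids))
        reply_box = queue.Queue()
        with self._lock:
            self._pending[corr_id] = reply_box
        # Tag the request with our correlation id; the server replies
        # through the single shared reply handler, not a per-call queue.
        self._server(request, corr_id, self._on_reply)
        return reply_box.get(timeout=5)

    def _on_reply(self, corr_id, reply):
        # Shared reply handler: dispatch by correlation id.
        with self._lock:
            box = self._pending.pop(corr_id, None)
        if box is not None:
            box.put(reply)

def echo_server(request, corr_id, reply_to):
    # Stand-in for an RPC server: uppercases the request and replies.
    reply_to(corr_id, request.upper())

client = RpcClient(echo_server)
print(client.call("ping"))   # prints "PING"
```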
I think it was designed like this from the beginning, unfortunately.
The main issue is not the feature itself. It's easy to implement; I wrote a PoC
some time ago. But some projects support what we call 'Rolling Upgrade'.
That means an older (N-1) application must be able to talk to a
newer one, and vice versa. So an RPC server has to know whether it should
send the reply to the old callback queue or to the new one (even an RPC
server from version N-1 should be able to do that). Likewise, a new RPC
client should be able to talk to an old RPC server.
So implementing this feature would take many cycles of patches to
write and babysit, with steps like these:
* version N+1: allow the RPC server and RPC client to read from/send to the
future queue topology, but continue to use the old topology by default.
* version N+2: switch to the new topology by default, but continue to
support talking to RPC clients/servers from the previous version.
* version N+3: remove the code for the old topology.
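The three-phase rollout above can be sketched as a small decision function. This is purely illustrative — `choose_topology`, the phase names, and the `peer_supports_new` flag are invented here, not oslo.messaging APIs:

```python
def choose_topology(phase, peer_supports_new):
    """Decide which queue topology to send on, per upgrade phase.

    phase: "N+1", "N+2", or "N+3" (the cycles described above).
    peer_supports_new: whether the peer understands the new topology.
    """
    if phase == "N+1":
        # Can read both topologies, but still sends on the old one by default.
        return "old"
    if phase == "N+2":
        # New topology by default; fall back for peers from the previous version.
        return "new" if peer_supports_new else "old"
    # N+3: code for the old topology has been removed entirely.
    return "new"
```

The key point the sketch makes is that phase N+2 servers must still detect and accommodate N+1 peers, which is exactly the compatibility logic that makes the change span multiple cycles.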
Any issue encountered by an application has a good chance of extending a
step to more than one cycle.
So in the end, this is not as easy as the feature alone, and the
issue has been known since at least 2015. oslo.messaging has basically no
very active contributors, so nobody is going to fix this kind of technical
debt (obviously, everybody is welcome to fix it).
mail: sileht at sileht.net