[openstack-dev] blueprint AMQP RPC single per process response queue performance improvement

Ray Pekowski pekowski at gmail.com
Thu Jan 17 06:34:32 UTC 2013


Jay,

Thanks for reviewing the code.  I've answered inline.

On Wed, Jan 16, 2013 at 11:45 PM, Jay Pipes <jaypipes at gmail.com> wrote:

> Interesting code. Ray, would you mind elaborating on a couple things?
>
> 1) Why have this be a configurable option if this method of response
> queue is much faster than the existing implementation?
>

I have tried hard to be backward compatible, but an RPC call made over the
single reply queue to a down-level callee would come back without the
reply_id that uniquely identifies which waiting caller thread to unblock.
As long as only one caller thread is blocked, it obviously must be the
intended one, but that breaks down with concurrent outstanding calls.

In a mixed-level OpenStack deployment, for example during a migration, the
feature can be disabled to preserve RPC compatibility.
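
To illustrate why the reply_id matters, here is a rough, self-contained
sketch of the demultiplexing idea (plain Python queues stand in for AMQP,
and all of the names -- SingleReplyQueueClient, broker, fake_service -- are
made up for the example, not taken from the patch):

import queue
import threading
import uuid

# Conceptual sketch only: all callers in a process share one reply queue,
# and each response carries the caller's msg_id so the consumer thread can
# hand it to the right blocked thread.
broker = {'service.topic': queue.Queue(), 'reply.process-1': queue.Queue()}

def fake_service():
    # Pretend RPC server: echoes args back, copying the caller's msg_id.
    while True:
        req = broker['service.topic'].get()
        broker[req['_reply_q']].put({'_msg_id': req['_msg_id'],
                                     'result': req['args']})

class SingleReplyQueueClient(object):
    def __init__(self, reply_q='reply.process-1'):
        self.reply_q = reply_q
        self._waiters = {}   # msg_id -> per-call queue
        threading.Thread(target=self._consume_replies, daemon=True).start()

    def call(self, topic, args, timeout=5):
        msg_id = uuid.uuid4().hex
        waiter = self._waiters[msg_id] = queue.Queue()
        try:
            broker[topic].put({'_msg_id': msg_id, '_reply_q': self.reply_q,
                               'args': args})
            return waiter.get(timeout=timeout)['result']
        finally:
            del self._waiters[msg_id]

    def _consume_replies(self):
        # One consumer drains the shared reply queue and dispatches each
        # response by msg_id; a callee that drops the id cannot be matched
        # to its caller once calls are concurrent.
        while True:
            reply = broker[self.reply_q].get()
            waiter = self._waiters.get(reply['_msg_id'])
            if waiter is not None:
                waiter.put(reply)

if __name__ == '__main__':
    threading.Thread(target=fake_service, daemon=True).start()
    client = SingleReplyQueueClient()
    print(client.call('service.topic', {'n': 42}))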

>
> 2) How are you benchmarking your performance improvements?
>

Funny you should ask.  You wrote an article or blog post on how to create a
dummy OpenStack service, so I imagine you would have done it the same way.
I can't find the link right now, and I didn't actually use your article; I
only came across it after I had done my work.  I created a bare-minimum
service that simply exposed a few RPC calls for test purposes, then wrote a
load driver to drive those RPC calls.  I replicated it to 9 services (a
number chosen somewhat arbitrarily, partly due to time constraints) on 10
VMs and added 3 RabbitMQ server VMs.  I then ran a series of tests, maxing
out throughput for each service and adding more services over time until
all load generators/services were running at their maximum rate.  I tested
with 1, 2 and 3 RabbitMQ servers, with and without mirroring.
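
For a sense of shape, the load driver was nothing fancy -- essentially N
threads issuing blocking RPC calls for a fixed time and reporting aggregate
calls per second.  A simplified sketch (made-up names, not the actual
harness):

import threading
import time

def drive(rpc_call, workers=10, duration=60):
    # Each worker hammers one blocking RPC call until time runs out;
    # the main thread reports aggregate calls per second.
    counts = [0] * workers
    stop = time.time() + duration

    def worker(i):
        while time.time() < stop:
            rpc_call()          # blocking RPC round trip
            counts[i] += 1

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    total = sum(counts)
    print('%d calls in %ds -> %.1f calls/sec'
          % (total, duration, total / float(duration)))

# Example: drive(lambda: client.call('service.topic', {'n': 1}), workers=20)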

It was a Dell internal study.  I will check to see if I can share the
results.


>
> Thanks!
> -jay
>