[openstack-dev] RabbitMQ Scaling

Ray Pekowski pekowski at gmail.com
Thu Nov 15 21:05:27 UTC 2012


Andrea, thanks for commenting on my results.

On Thu, Nov 15, 2012 at 9:32 AM, Rosa, Andrea (HP Cloud Services) <
andrea.rosa at hp.com> wrote:

> Hi
>
> > My volume of RPC calls is artificial.  It is a test program that just
> > does RPCs as fast as it can.  The performance was strikingly bad for
> > clustered RabbitMQ:
> > - A single 8-processor RabbitMQ server with no clustering achieved
> >   410 RPC calls/sec at 580% CPU utilization
> > - A cluster of two RabbitMQ servers achieved 68 RPC calls/sec at 140%
> >   CPU utilization on each
> > - A cluster of three RabbitMQ servers achieved 55 RPC calls/sec at 80%
> >   CPU utilization on each
>
> Can you give me more information about the cluster configuration?  In
> particular, I am interested to know how many DISK nodes and how many RAM
> nodes you have.
> The performance in the RPC scenario you described (a lot of RPC requests
> in a short period of time) could be seriously affected if all the nodes
> are configured as DISK nodes.
>

In the measurements I reported, all RabbitMQ nodes were DISK nodes.  I
failed to mention that I used a ramp-up methodology whereby I added one new
load-generating client (RPC caller) every 60 seconds, so concurrency built
up over time.  I also monitored CPU, disk I/O, network and memory
utilization during the runs.  In particular, disk I/O was less than 1% and
did not increase, confirming that the DISK nodes were not actually using
the disk.
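
For clarity, the harness worked along these lines.  This is only an
illustrative sketch, not my actual test program: it assumes the pika
client and an RPC echo server consuming from a hypothetical 'rpc_test'
queue, and it starts one new blocking caller thread every 60 seconds.

import threading
import time
import uuid

import pika  # assumed AMQP client library, purely for illustration


def rpc_caller(stop_event, counts, idx):
    """One load-generating client: blocking RPC calls in a tight loop."""
    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    ch = conn.channel()
    # One exclusive, server-named reply queue per client.
    reply_q = ch.queue_declare(queue='', exclusive=True).method.queue
    replies = {}

    def on_reply(_ch, _method, props, body):
        replies[props.correlation_id] = body

    ch.basic_consume(queue=reply_q, on_message_callback=on_reply,
                     auto_ack=True)
    while not stop_event.is_set():
        corr_id = uuid.uuid4().hex
        ch.basic_publish(exchange='', routing_key='rpc_test',
                         properties=pika.BasicProperties(
                             reply_to=reply_q, correlation_id=corr_id),
                         body='ping')
        while corr_id not in replies:       # block until the reply arrives
            conn.process_data_events(time_limit=0.1)
        counts[idx] += 1                    # one completed RPC call
    conn.close()


def main(ramp_seconds=60, max_clients=10, report_interval=10):
    stop = threading.Event()
    counts = [0] * max_clients
    threads = []
    for i in range(max_clients):            # ramp up: one new client/minute
        t = threading.Thread(target=rpc_caller, args=(stop, counts, i))
        t.start()
        threads.append(t)
        for _ in range(ramp_seconds // report_interval):
            before = sum(counts)
            time.sleep(report_interval)
            rate = (sum(counts) - before) / float(report_interval)
            print('%d client(s): %.1f RPC calls/sec' % (i + 1, rate))
    stop.set()
    for t in threads:
        t.join()


if __name__ == '__main__':
    main()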

In any case, I just reran the 3 node cluster measurement with one node as a
DISK and management database node and the other two as RAM nodes, and the
results were identical in all respects, namely RPC throughput and CPU
utilization, to the all-DISK-node results mentioned above.
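
For anyone who wants to reproduce that topology, the two RAM nodes can be
joined roughly as follows.  This is an illustrative sketch only: the
hostnames are made up, and older RabbitMQ releases spell this
"rabbitmqctl cluster" rather than "join_cluster".

# on each of the two RAM nodes (e.g. rabbit2 and rabbit3):
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@rabbit1   # rabbit1 is the DISK node
rabbitmqctl start_app

# verify the node types from any node:
rabbitmqctl cluster_status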

My feeling is that the dynamic creation of queues and exchanges is not a
good idea.
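
To make that concern concrete, here is the shape of the two patterns,
sketched with pika purely for illustration (my own simplification, not
the actual nova/kombu code): the first declares a brand-new reply queue
for every call, metadata the cluster has to propagate to every node; the
second declares one long-lived reply queue per process and matches
replies by correlation id.

import uuid

import pika  # assumed AMQP client library, purely for illustration

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()


def call_with_dynamic_queue(request):
    """One brand-new reply queue per call: declare/consume/delete every
    time, and the cluster must replicate that queue metadata."""
    msg_id = uuid.uuid4().hex
    ch.queue_declare(queue=msg_id, exclusive=True, auto_delete=True)
    ch.basic_publish(exchange='', routing_key='rpc_test',
                     properties=pika.BasicProperties(reply_to=msg_id),
                     body=request)
    for _method, _props, body in ch.consume(msg_id, auto_ack=True):
        ch.cancel()
        return body


# One long-lived, server-named reply queue for the whole process.
REPLY_Q = ch.queue_declare(queue='', exclusive=True).method.queue


def call_with_shared_queue(request):
    """Reuse the process-wide reply queue and match replies by
    correlation id, so steady-state calls do no queue management at all.
    (Replies with a stale correlation id are simply dropped here for
    brevity.)"""
    corr_id = uuid.uuid4().hex
    ch.basic_publish(exchange='', routing_key='rpc_test',
                     properties=pika.BasicProperties(
                         reply_to=REPLY_Q, correlation_id=corr_id),
                     body=request)
    for _method, props, body in ch.consume(REPLY_Q, auto_ack=True):
        if props.correlation_id == corr_id:
            ch.cancel()
            return body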

Ray

