[openstack-dev] RabbitMQ Scaling

Ray Pekowski pekowski at gmail.com
Wed Nov 14 00:22:02 UTC 2012


On Tue, Nov 13, 2012 at 1:06 PM, Russell Bryant <rbryant at redhat.com> wrote:

> On 11/13/2012 01:24 PM, Vishvananda Ishaya wrote:
> >
> > On Nov 13, 2012, at 10:14 AM, Ray Pekowski <pekowski at gmail.com
> > <mailto:pekowski at gmail.com>> wrote:
> >>
> >> My question is whether anyone thinks it is a worthwhile effort to
> >> investigate code changes to the Openstack RPC that would make the RPC
> >> responses flow on static exchanges and queues?
> >>
> >
> > Yes I think this is worthwhile. A few people have tried this out and
> > there doesn't seem to be any reason why this wouldn't work. This should
> > be done in oslo-incubator (formerly openstack-common)
>
> The dynamic creation is of *queues*, right?  Not of exchanges and queues
> as specified before?  It only seems valuable to have dynamic queues for
> responses, which I'm pretty sure is how it works.
>

I don't know for sure whether exchanges are being created or not, but here is
the output from a kombu log of an RPC, and you will see an exchange_declare on
both the client and server side for e8e85bf237944223994eecd283a582da.

[Kombu channel:1]
message_to_python(<amqplib.client_0_8.basic_message.Message object at
0x4be99d0>)
[Kombu connection:0x4a37d50] establishing connection...
[Kombu connection:0x4a37d50] connection established:
<kombu.transport.pyamqplib.Connection object at 0x4be1c10>
[Kombu connection:0x4a37d50] create channel
[Kombu channel:1] exchange_declare(nowait=False,
exchange=u'e8e85bf237944223994eecd283a582da', durable=False,
arguments=None, type='direct', auto_delete=True)
[Kombu channel:1] prepare_message('{"failure": null, "result": ["000000",
"000001", "000002", "000003", "000004", "000005", "000006", "000007",
"000008", "000009", "000010", "000011", "000012", "000013", "000014",
"000015", "000016", "000017", "000018", "000019"]}', priority=0,
headers={}, properties={'delivery_mode': 2},
content_type='application/json', content_encoding='utf-8')
[Kombu channel:1] basic_publish(<amqplib.client_0_8.basic_message.Message
object at 0x4cb1550>, mandatory=False,
routing_key=u'e8e85bf237944223994eecd283a582da', immediate=False,
exchange=u'e8e85bf237944223994eecd283a582da')
[Kombu channel:1] close()
[Kombu connection:0x4a37d50] create channel
[Kombu channel:1] exchange_declare(nowait=False,
exchange=u'e8e85bf237944223994eecd283a582da', durable=False,
arguments=None, type='direct', auto_delete=True)
[Kombu channel:1] prepare_message('{"failure": null, "result": null,
"ending": true}', priority=0, headers={}, properties={'delivery_mode': 2},
content_type='application/json', content_encoding='utf-8')
[Kombu channel:1] basic_publish(<amqplib.client_0_8.basic_message.Message
object at 0x4cb1e90>, mandatory=False,
routing_key=u'e8e85bf237944223994eecd283a582da', immediate=False,
exchange=u'e8e85bf237944223994eecd283a582da')
[Kombu channel:1] close()
[Kombu connection:0x4a37d50] create channel


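To make the static-reply-queue idea concrete, here is a rough, broker-free sketch of what is being proposed: instead of declaring a fresh direct exchange and queue for every call (the per-call exchange_declare you can see in the log above), all callers share one pre-declared reply queue and match responses to requests with a correlation id. This is illustrative pseudocode only, not the oslo-incubator code; the names (reply_queue, rpc_call, server_handle) are made up, and a stdlib queue.Queue stands in for the single static AMQP queue.

```python
import queue
import uuid

# One shared in-process queue stands in for a single static AMQP reply queue.
reply_queue = queue.Queue()

def server_handle(request):
    # The server echoes the caller's correlation id on the response,
    # mirroring the {"failure": ..., "result": ...} envelope in the log.
    reply_queue.put({'correlation_id': request['correlation_id'],
                     'failure': None,
                     'result': request['args']})

def rpc_call(args, timeout=5):
    corr_id = uuid.uuid4().hex
    server_handle({'correlation_id': corr_id, 'args': args})
    # Drain until our own response appears; replies meant for other
    # callers get requeued (never triggered in this single-caller demo).
    while True:
        msg = reply_queue.get(timeout=timeout)
        if msg['correlation_id'] == corr_id:
            return msg['result']
        reply_queue.put(msg)

print(rpc_call([1, 2, 3]))  # prints [1, 2, 3]
```

The point is that no declare traffic happens per call; the correlation id is also what would keep a reused queue safe, which bears on the concern below about leaving queues around and reusing them.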



>
> The use of temporary queues for responses is discussed in the AMQP spec
> (is at least in the 0-9-1 version I just pulled up), so what we're doing
> seems to be recommended in general.
>

From the above log data you can see amqplib.client_0_8.basic.  I think I read
somewhere that kombu negotiates the protocol down to AMQP 0-8.


> My concern with using static reply queues is the uncertainty that comes
> along with that.  When you create a temporary queue *only* for the
> purpose of receiving a response, you know that the only thing you're
> going to get is the response.  When you start leaving queues around and
> reusing them, it's not guaranteed anymore.  That would be my biggest
> concern.
>
> It may be doable in a way that's safe enough.  We just need to be *very*
> careful.
>
> I'm also curious about the volume of rpc calls you were doing in your
> testing, and how that may relate to the size of a deployment (or the
> size of a single cell in the upcoming cells feature).
>

My volume of RPC calls is artificial.  It is a test program that just does
RPCs as fast as it can.  The performance was strikingly bad for clustered
RabbitMQ.

   - A single 8-processor RabbitMQ server with no clustering achieved 410 RPC
   calls/sec at 580% CPU utilization
   - A cluster of two RabbitMQ servers achieved 68 RPC calls/sec at 140%
   CPU utilization on each
   - A cluster of three RabbitMQ servers achieved 55 RPC calls/sec at 80%
   CPU utilization on each

This is with 5 simulated services and 10 load-generating callers per
service, all driving at the highest rate they could.  Note that both of the
cluster cases had already maxed out at 2 services (20 load-generating
clients).

It is not realistic, but it does establish an upper bound.  The hard part
would be figuring out what level of RPC throughput to expect from some number
of compute nodes, tenants, VMs, projects or some other unit of size.  Do
you have any insights into the answer to that question?
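As a hypothetical back-of-envelope on that question (the 10-second per-node RPC interval below is an assumption invented for illustration, not a measured figure; real periodic call rates vary by deployment and configuration), the measured ceilings above translate into node counts like this:

```python
# Hypothetical sizing sketch: assume each compute node issues one RPC
# every `interval_s` seconds (an assumed figure, not measured).
def max_nodes(ceiling_calls_per_sec, interval_s):
    # Nodes supportable = broker ceiling (calls/sec) * seconds between
    # successive calls from a single node.
    return int(ceiling_calls_per_sec * interval_s)

# Using the ceilings measured above with an assumed 10 s interval:
print(max_nodes(410, 10))  # single server: prints 4100
print(max_nodes(55, 10))   # 3-node cluster: prints 550
```

Under that assumption, clustering for availability would cut the supportable node count by almost an order of magnitude, which is why pinning down the real per-node RPC rate matters.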


>
> --
> Russell Bryant
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Thanks,
Ray