[openstack-dev] RabbitMQ Scaling

Ray Pekowski pekowski at gmail.com
Mon Nov 26 19:00:59 UTC 2012


Chris,

Thanks for the input.

On Mon, Nov 26, 2012 at 11:36 AM, Chris Behrens <cbehrens at codestud.com> wrote:

> This is what cells does, and I was planning on porting this to
> openstack-common for general RPC stuff, too.. so this is great.


If you were already planning to do this, is my work duplicate effort, and
should I just wait for yours?  Or should we compare notes and pick some
merged version of our two solutions?  If so, the "cells" code and the
RPC code should be as similar as possible, right?


> However, I'd like to see the # of greenthreads for receiving responses be
> configurable…. definitely not locked to 1.
>

I'm open to discussing this, but it seems to me that if a service gets by
with a single thread for receiving all requests, the callers should be
able to get by with a single thread for receiving all responses.  Perhaps
there is something in your solution that could block the receiving thread,
or something I have not thought of in mine.  In any case, we do need more
discussion.  Is this mailing list the place for it, or is there some other
mechanism?
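To make the single-receiver idea concrete, here is a rough sketch (purely
illustrative, not the proposed patch: the ReplyWaiter name, the plain
threads and in-process queue standing in for greenthreads and the shared
AMQP reply queue, and all method names are mine):

# Illustrative sketch only: one receiving thread drains a shared reply
# queue and hands each reply to the caller waiting on its msg_id.
import threading
import queue


class ReplyWaiter(object):
    """Single receiver thread dispatching replies to waiters by msg_id."""

    def __init__(self):
        self._waiters = {}               # msg_id -> {'event', 'result'}
        self._lock = threading.Lock()
        self._replies = queue.Queue()    # stand-in for the shared reply queue
        threading.Thread(target=self._run, daemon=True).start()

    def register(self, msg_id):
        # Called by the RPC caller before it sends its request.
        entry = {'event': threading.Event(), 'result': None}
        with self._lock:
            self._waiters[msg_id] = entry
        return entry

    def incoming(self, msg_id, result):
        # Called by the transport layer when a reply message arrives.
        self._replies.put((msg_id, result))

    def wait(self, msg_id, timeout=None):
        entry = self._waiters[msg_id]
        entry['event'].wait(timeout)
        with self._lock:
            self._waiters.pop(msg_id, None)
        return entry['result']

    def _run(self):
        # The single receiving thread: it only ever blocks on the reply
        # queue itself, so one thread can serve every in-flight call.
        while True:
            msg_id, result = self._replies.get()
            with self._lock:
                entry = self._waiters.get(msg_id)
            if entry is not None:
                entry['result'] = result
                entry['event'].set()

The dispatch loop does nothing per reply beyond a dictionary lookup, which
is why a single receiver seems sufficient to me; making the count
configurable, as you suggest, would just mean starting more of these
threads against the same queue.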


>
> Might be a big enough change to how rpc.calls work to warrant a blueprint.
>

Sounds like a blueprint is in order.  Do you want me to open it?  And
would a single blueprint be OK?  I've heard it is common to open multiple
blueprints for a single general idea: for example, one for adding a reply
queue ID, one for returning the msg_id to the caller, and one for
offloading the callback to a single receiver (or small set of receivers).
Perhaps also one for an option to choose backward-compatible RPC, since
this new RPC model will clearly not be backward compatible with the
dynamic queue/exchange model.  I'm all for a single blueprint, but I just
want to follow the process.
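For reference, the caller side of the model those blueprints would cover
might look roughly like this (again only a sketch; send_request, call and
the field names _msg_id/_reply_q are made up here, and the real API would
come out of the blueprint discussion): each caller generates a msg_id,
includes it together with the process's fixed reply-queue ID in the
request, and then waits on the shared receiver instead of declaring a
dynamic per-call queue.

# Sketch of the caller side under the proposed model (illustrative names).
import uuid

waiter = ReplyWaiter()                         # shared receiver from the sketch above
REPLY_QUEUE = 'reply_%s' % uuid.uuid4().hex    # one queue per process, not per call


def call(send_request, topic, method, args, timeout=60):
    msg_id = uuid.uuid4().hex                  # returned to the caller for correlation
    waiter.register(msg_id)
    # The request carries both the msg_id and the fixed reply-queue ID, so
    # the server publishes its reply to an existing queue instead of
    # creating a dynamic queue/exchange for every call.
    send_request(topic, {'method': method,
                         'args': args,
                         '_msg_id': msg_id,
                         '_reply_q': REPLY_QUEUE})
    return waiter.wait(msg_id, timeout)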

>
> - Chris
>

Ray