[openstack-dev] [oslo][messaging] Further improvements and refactoring

Ihar Hrachyshka ihrachys at redhat.com
Fri Jun 13 13:06:50 UTC 2014

On 10/06/14 15:40, Alexei Kornienko wrote:
> On 06/10/2014 03:59 PM, Gordon Sim wrote:
>> On 06/10/2014 12:03 PM, Dina Belova wrote:
>>> Hello, stackers!
>>> Oslo.messaging is the future of how different OpenStack
>>> components communicate with each other, and I'd really love to
>>> start a discussion about how we can make this library even
>>> better than it is now, and how we can refactor it to make it
>>> more production-ready.
>>> As we all remember, oslo.messaging was initially created as a
>>> logical continuation of nova.rpc - a separate library with lots
>>> of supported transports, etc. That's why oslo.messaging
>>> inherited not only the advantages of how nova.rpc worked (and
>>> there were lots of them), but also some architectural decisions
>>> that now sometimes lead to performance issues (we ran into some
>>> of them during Ceilometer performance testing [1] in the
>>> Icehouse cycle).
>>> For instance, a simple test messaging server (with connection
>>> pool and eventlet) can process 700 messages per second. The
>>> same functionality implemented with a plain kombu driver
>>> (without connection pool and eventlet) processes ten times
>>> more: 7000-8000 messages per second.
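>>> To make the comparison concrete, a minimal plain-kombu consumer
>>> (no connection pool, no eventlet) looks roughly like the sketch
>>> below; the broker URL, exchange and queue names are made up for
>>> illustration:
>>>
>>>     import socket
>>>     import kombu
>>>
>>>     # Hypothetical broker URL and names, for illustration only.
>>>     connection = kombu.Connection('amqp://guest:guest@localhost:5672//')
>>>     exchange = kombu.Exchange('test_rpc', type='direct')
>>>     queue = kombu.Queue('test_rpc', exchange, routing_key='test_rpc')
>>>
>>>     def on_message(body, message):
>>>         # Process the payload here, then ack it.
>>>         message.ack()
>>>
>>>     with connection:
>>>         with connection.Consumer(queue, callbacks=[on_message]):
>>>             while True:
>>>                 try:
>>>                     # Blocks until a message arrives and
>>>                     # dispatches it to the callback above.
>>>                     connection.drain_events(timeout=1)
>>>                 except socket.timeout:
>>>                     pass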
>>> So we have the following suggestions for how we can make this
>>> better and quicker (and I'd really love to collect your
>>> feedback, folks):
>>> 1) Currently the main loop runs in the Executor class, and I
>>> think it would be much better to move it to the Server class,
>>> as that would simplify the relationship between the classes and
>>> leave the Executor with only one task: processing a message (in
>>> blocking or eventlet mode). Moreover, this will make further
>>> refactoring much easier (see the sketch after this list).
>>> 2) Some of the driver implementations (impl_rabbit and
>>> impl_qpid, for instance) are full of needless separate classes
>>> that could really be folded into other ones. There are already
>>> some changes making the whole structure simpler [2], and once
>>> the 1st issue is solved, Dispatcher and Listener can be
>>> refactored as well.
>>> 3) Separating the RPC functionality from the messaging
>>> functionality would make the code base cleaner and easier to
>>> reuse.
>>> 4) The connection pool can be refactored to reuse connections
>>> more efficiently.
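>>> To illustrate point 1, here is a rough sketch of the intended
>>> split (simplified pseudo-classes, not the actual oslo.messaging
>>> code; all names are invented):
>>>
>>>     class Executor(object):
>>>         """Only knows how to process a single message."""
>>>         def __init__(self, dispatcher):
>>>             self.dispatcher = dispatcher
>>>
>>>         def process(self, incoming):
>>>             # Blocking mode shown; an eventlet executor would
>>>             # spawn a greenthread here instead.
>>>             incoming.reply(self.dispatcher.dispatch(incoming))
>>>
>>>     class Server(object):
>>>         """Owns the main receive loop instead of the executor."""
>>>         def __init__(self, listener, executor):
>>>             self.listener = listener
>>>             self.executor = executor
>>>             self._running = False
>>>
>>>         def start(self):
>>>             self._running = True
>>>             while self._running:
>>>                 # The loop lives in the server now; the executor
>>>                 # is invoked once per message.
>>>                 incoming = self.listener.poll()
>>>                 if incoming is not None:
>>>                     self.executor.process(incoming)
>>>
>>>         def stop(self):
>>>             self._running = False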
>>> Folks, are you OK with such a plan? Alexei Kornienko has
>>> already started some of this work [2], but we really want to be
>>> sure that we have chosen the correct direction of development
>>> here.
>> For the impl_qpid driver, I think quite significant changes
>> would be needed to make it efficient. At present there are
>> several synchronous round trips for every RPC call made [1].
>> Notifications are not treated any differently from RPCs (and
>> sending a call is no different from sending a cast).
>> I agree the connection pooling is not efficient. For qpid at
>> least it creates too many connections for no real benefit [2].
>> I think this may be a result of trying to fit the same
>> high-level design to two entirely different underlying APIs.
>> For me at least, this also makes it hard to spot issues by
>> reading the code. The qpid-specific 'unit' tests for
>> oslo.messaging also fail for me every time an actual qpidd
>> broker is running (I haven't yet got to the bottom of that).
>> I'm personally not sure that the changes to impl_qpid you
>> linked to have much impact on efficiency, readability, or
>> safety of the code.
> Indeed, it was only meant to remove some of the unnecessary
> complexity from the code. We'll see more improvement after we
> implement points 1 and 2 from the original email (since they
> will allow us to proceed to further improvements).
>> I think a lot of work could be required to significantly
>> improve that driver, and I wonder if that effort would be
>> better spent on e.g. the AMQP 1.0 driver, which I believe will
>> perform much better and offer more choice in deployment.
> I agree with you on this. However, I'm not sure that we can make
> such a decision. If we focus only on the AMQP 1.0 driver, we
> should say so explicitly and deprecate the qpid driver
> completely. There is no point in keeping a driver that is not
> really functional.

The driver is functional. It may not be as efficient as the
alternatives, but that's not a valid reason to deprecate it.

>> --Gordon
>> [1] For both the request and the response, the sender is created
>> every time, which results in at least one round trip to the
>> broker. Again, for both the request and the response, the message
>> is then sent with a blocking send, meaning a further synchronous
>> round trip for each. So for an RPC call, instead of just one
>> round trip, there are at least four.
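>> To illustrate, with the qpid.messaging API those per-call round
>> trips can be avoided by creating the sender once and sending
>> asynchronously, along these lines (just a sketch; the address
>> and payload are made up):
>>
>>     from qpid.messaging import Connection, Message
>>
>>     connection = Connection('localhost:5672')
>>     connection.open()
>>     session = connection.session()
>>
>>     # Create the sender once and reuse it, instead of paying a
>>     # round trip to create it for every request and response.
>>     sender = session.sender('amq.topic/test')
>>
>>     for i in range(100):
>>         # sync=False avoids a blocking round trip per message.
>>         sender.send(Message(content={'n': i}), sync=False)
>>
>>     session.sync()  # one round trip to confirm delivery
>>     connection.close()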
>> [2] In my view, what matters more than per-connection throughput
>> for oslo.messaging is the scalability of the system as you add
>> many RPC clients and servers. Excessive creation of connections
>> by each process will have a negative impact on this. I don't
>> believe the current code gets anywhere close to the limits of
>> the underlying connection, and suspect it would be more
>> efficient and faster to multiplex different streams down the
>> same connection. I suspect this would be especially true when
>> using eventlet.
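>> For example, with kombu a single TCP connection can carry many
>> independent channels, so separate logical streams need not mean
>> separate connections (again just a sketch; all names invented):
>>
>>     import kombu
>>
>>     # One TCP connection shared by the whole process...
>>     connection = kombu.Connection('amqp://guest:guest@localhost:5672//')
>>     connection.connect()
>>
>>     # ...multiplexed into independent AMQP channels, one per
>>     # logical stream, instead of one connection per consumer.
>>     rpc_channel = connection.channel()
>>     notify_channel = connection.channel()
>>
>>     exchange = kombu.Exchange('demo', type='direct')
>>     rpc_queue = kombu.Queue('rpc', exchange, routing_key='rpc')
>>     notify_queue = kombu.Queue('notify', exchange,
>>                                routing_key='notify')
>>
>>     rpc_consumer = kombu.Consumer(
>>         rpc_channel, [rpc_queue],
>>         callbacks=[lambda body, msg: msg.ack()])
>>     notify_consumer = kombu.Consumer(
>>         notify_channel, [notify_queue],
>>         callbacks=[lambda body, msg: msg.ack()])
>>     rpc_consumer.consume()
>>     notify_consumer.consume()
>>
>>     # A single connection.drain_events() loop now serves both
>>     # streams over one socket.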