[openstack-dev] [oslo.messaging] Performance testing. Initial steps.

Flavio Percoco flavio at redhat.com
Wed Jan 28 09:39:13 UTC 2015

On 28/01/15 10:23 +0200, Denis Makogon wrote:
>On Tue, Jan 27, 2015 at 10:26 PM, Gordon Sim <gsim at redhat.com> wrote:
>    On 01/27/2015 06:31 PM, Doug Hellmann wrote:
>        On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:
>            I'd like to build a tool that would be able to profile messaging over
>            various deployments. This "tool" would give me the ability to compare
>            results of performance testing produced by native tools and an
>            oslo.messaging-based tool; eventually it would lead us into digging
>            into the code and trying to figure out where "bad things" are
>            happening (that's the actual place where we would need to profile
>            the messaging code). Correct me if I'm wrong.
>        It would be interesting to have recommendations for deploying rabbit
>        or qpid based on performance testing with oslo.messaging. It would also
>        be interesting to have recommendations for changes to the implementation
>        of oslo.messaging based on performance testing. I'm not sure you want to
>        do full-stack testing for the latter, though.
>        Either way, I think you would be able to start the testing without any
>        changes in oslo.messaging.
>    I agree. I think the first step is to define what to measure and then
>    construct an application using oslo.messaging that allows the data of
>    interest to be captured using different drivers and indeed different
>    configurations of a given driver.
>    I wrote a very simple test application to test one aspect that I felt was
>    important, namely the scalability of the RPC mechanism as you increase the
>    number of clients and servers involved. The code I used is
>    https://github.com/grs/ombt; it's probably stale at the moment, I only link
>    to it as an example of the approach.
>    Using that test code I was then able to compare performance in this one
>    aspect across drivers (the 'rabbit', 'qpid' and new amqp 1.0 based drivers
>    - I wanted to try zmq, but couldn't figure out how to get it working at the
>    time), and for different deployment options using a given driver (amqp 1.0
>    using qpidd or the qpid dispatch router, either standalone or with multiple
>    connected routers).
>    There are of course several other aspects that I think would be important
>    to explore: notifications, more specific variations in the RPC 'topology'
>    (i.e. number of clients on a given server, number of servers in a single
>    group, etc.), and a better tool (or set of tools) would allow all of these
>    to be explored.
>    From my experimentation, I believe the biggest differences in scalability
>    are going to come not from optimising the code in oslo.messaging so much
>    as from choosing different patterns for communication. Those choices may
>    be constrained by other aspects as well of course, notably the approach
>    to reliability.
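The scaling measurement Gordon describes can be sketched with a toy stand-in (pure stdlib, not oslo.messaging itself): each "client" thread times a request/echo round trip through a shared queue, so you can watch per-call latency as the client count grows. All names here are illustrative, not from ombt.

```python
import queue
import threading
import time

def server(requests):
    """Echo server: a toy stand-in for an RPC server endpoint."""
    while True:
        item = requests.get()
        if item is None:          # shutdown sentinel
            break
        reply_q, payload = item
        reply_q.put(payload)      # "return" the call result to the caller

def run_clients(n_clients, calls_per_client):
    """Time round trips for n_clients concurrent callers; returns all samples."""
    requests = queue.Queue()
    threading.Thread(target=server, args=(requests,), daemon=True).start()
    latencies = []
    lock = threading.Lock()

    def client():
        reply_q = queue.Queue()   # private reply queue, like a reply-to address
        for i in range(calls_per_client):
            start = time.perf_counter()
            requests.put((reply_q, i))
            reply_q.get()         # block until the "call" returns
            with lock:
                latencies.append(time.perf_counter() - start)

    threads = [threading.Thread(target=client) for _ in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    requests.put(None)            # stop the server thread
    return latencies
```

Running this for, say, n_clients in (1, 4, 16) and comparing the latency distributions gives the shape of the scaling curve; a real harness would drive oslo.messaging RPC calls instead of the in-process queues.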
>After a couple of internal discussions and hours of investigation, I think
>I've found the most applicable solution, one that will support the
>performance testing approach and eventually yield recommendations for
>messaging driver configuration and AMQP service deployment.
>The solution I've been talking about is already pretty well known across
>OpenStack components - Rally and its scenarios.
>Why would it be the best option? Rally scenarios would not touch the
>messaging core, and scenarios are gate-able.
>Even if we're talking about internal testing, scenarios are very useful here,
>since they are something that can be tuned/configured taking into account
>environment needs.
>Doug, Gordon, what do you think about bringing scenarios into messaging? 

I personally wouldn't mind having them but I'd like us to first
discuss what kind of scenarios we want to test.

I'm assuming these scenarios would be pure oslo.messaging scenarios
and they won't require any of the openstack services. Therefore, I
guess these scenarios would test things like performance with many
consumers, performance with several (a)synchronous calls, etc. What
performance means in this context will have to be discussed as well.

In addition to the above, it'd be really interesting if we could have
tests for things like reconnect delays, which I think is doable with
Rally. Am I right?
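As a starting point for that discussion, "performance" could be pinned down as throughput plus latency percentiles computed over the per-call samples. The helper below is only an illustration of the metrics I mean, not anything from oslo.messaging or Rally:

```python
import statistics

def summarize(latencies_s, wall_time_s):
    """Summarize per-call round-trip samples (seconds) from a test run."""
    ordered = sorted(latencies_s)

    def pct(p):
        # nearest-rank percentile over the sorted samples
        idx = min(len(ordered) - 1, int(p / 100.0 * len(ordered)))
        return ordered[idx]

    return {
        "calls": len(ordered),
        "throughput_cps": len(ordered) / wall_time_s,  # calls per second
        "mean_s": statistics.mean(ordered),
        "p50_s": pct(50),
        "p95_s": pct(95),
        "p99_s": pct(99),
    }

# Example: 100 samples of 1ms..100ms collected over a 2-second run
samples = [i / 1000.0 for i in range(1, 101)]
print(summarize(samples, 2.0))
```

Tail percentiles (p95/p99) matter here because reconnects and broker failovers show up as outliers that a mean alone would hide.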


>    __________________________________________________________________________
>    OpenStack Development Mailing List (not for usage questions)
>    Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>Kind regards,
>Denis M.


Flavio Percoco