[openstack-dev] [oslo.messaging] Performance testing. Initial steps.

Doug Hellmann doug at doughellmann.com
Thu Jan 15 18:56:06 UTC 2015

> On Jan 15, 2015, at 1:30 PM, Denis Makogon <dmakogon at mirantis.com> wrote:
> Good day to All,
> The question I’d like to raise here is not a simple one, so I’d like to involve as many readers as I can. I’d like to talk about oslo.messaging performance testing. As a community we’ve put a lot of effort into making the widely used oslo.messaging drivers as stable as possible. Stability is a good thing, but is it enough to say they “work well”? I’d say it’s not.
> Since oslo.messaging uses a driver-based messaging workflow, it makes sense to dig into each driver and collect all the required/possible performance metrics.
> First of all, we need to figure out how to perform the performance testing. The first thing that came to mind is to simulate high load on each of the drivers. That raises the question of how this can be accomplished within the available oslo.messaging tools - high load on any driver can be produced by an application that:
> 	• can spawn multiple emitters (RPC clients) and consumers (RPC servers);
> 	• can make the clients send a pre-defined number of messages of any length.

That makes sense.
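For the record, here is a rough sketch of what such a load generator could look like. A stdlib queue stands in for the actual transport here (a real test would plug in an oslo.messaging driver instead), and all names are made up for illustration:

```python
# Sketch of a load generator: N emitters push fixed-size messages,
# M consumers drain them, and we measure overall throughput.
# queue.Queue is a stand-in for the real messaging driver.
import queue
import threading
import time


def run_load_test(n_clients=4, n_servers=2, msgs_per_client=1000, msg_size=256):
    q = queue.Queue()            # stand-in for the driver's transport
    payload = "x" * msg_size     # message of pre-defined length
    consumed = []                # lengths of messages seen by consumers

    def emitter():
        for _ in range(msgs_per_client):
            q.put(payload)

    def consumer():
        while True:
            msg = q.get()
            if msg is None:      # poison pill: shut the consumer down
                return
            consumed.append(len(msg))

    start = time.perf_counter()
    servers = [threading.Thread(target=consumer) for _ in range(n_servers)]
    clients = [threading.Thread(target=emitter) for _ in range(n_clients)]
    for t in servers + clients:
        t.start()
    for t in clients:
        t.join()
    for _ in servers:            # one pill per consumer thread
        q.put(None)
    for t in servers:
        t.join()
    elapsed = time.perf_counter() - start

    total = n_clients * msgs_per_client
    return total, elapsed, total / elapsed   # messages, seconds, msg/s
```

Swapping the queue for a real driver would then let the same harness compare drivers under identical load.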

> Another question is why we need such a thing at all. Profiling and performance testing can improve the way our drivers are implemented. They can show us the actual “bottlenecks” in the messaging process in general. In some cases it makes sense to figure out where the problem lies - whether AMQP itself causes the messaging problems, or a certain driver that speaks to AMQP fails.
> The next thing I want to discuss is the architecture of the profiling/performance testing. As I see it, adding profiling code to each driver seems like a “good” way to do it. If there are any objections or better solutions, please bring them to light.

What sort of extra profiling code do you anticipate needing?
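If all we need is timing, one lightweight option is to wrap the driver entry points from outside rather than embedding profiling code in each driver. A sketch (the wrapped `send` function here is a hypothetical stand-in for a driver's send path):

```python
# Timing decorator that records per-call durations without touching
# the driver code itself: wrap the entry points at test setup time.
import functools
import time

TIMINGS = {}   # function name -> list of call durations in seconds


def timed(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            TIMINGS.setdefault(func.__name__, []).append(
                time.perf_counter() - start)
    return wrapper


@timed
def send(msg):
    # Stand-in for a driver's send path; a real test would decorate
    # the actual driver methods instead.
    time.sleep(0.001)
    return len(msg)
```

That keeps the drivers unmodified and makes the instrumentation easy to turn off in production.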

> Once we have a final design for profiling, we will need to figure out the profiling tools. After searching the web, I found a pretty interesting post on Python profiling [1]. After some investigation, it makes sense to discuss the following profiling options (apply one or both):
> 	• line-by-line timing and execution frequency with a profiler (there are Pros and Cons, but I would say per-line statistics are more than welcome at the initial performance testing stage);
> 	• memory/CPU consumption.
> Metrics. The most useful metric for us is time - any time-based metric - since it is very useful to know at which step and/or by which component a delay/timeout was caused. As said above, that would let us figure out whether AMQP or the driver fails to do what it was designed for.
> Before proposing a spec I’d like to gather any other requirements, use cases and restrictions for messaging performance testing. Also, if there are any success stories about boosting Python performance - feel free to share them.

The metrics to measure depend on the goal. Do we think the messaging code is using too much memory? Is it too slow? Or is there something else causing concern?
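If the concern is latency, then once per-call durations are being recorded they can be reduced to percentile summaries, which expose delays and timeout-prone tails much better than averages do. A small sketch of that reduction step (the nearest-rank percentile here is one simple choice, not the only one):

```python
# Reduce a list of per-call durations (seconds) to summary latency
# metrics: count, mean, median, 95th percentile, and worst case.
import statistics


def latency_summary(durations):
    ordered = sorted(durations)

    def pct(p):
        # Nearest-rank percentile, clamped to valid indices.
        idx = max(0, min(len(ordered) - 1,
                         round(p / 100 * len(ordered)) - 1))
        return ordered[idx]

    return {
        "count": len(ordered),
        "mean": statistics.mean(ordered),
        "p50": pct(50),
        "p95": pct(95),
        "max": ordered[-1],
    }
```

Comparing p95/max across drivers under the same load would point at whether the broker or a particular driver is introducing the delays.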

> [1] http://www.huyng.com/posts/python-performance-analysis/
> Kind regards,
> Denis Makogon
> IRC: denis_makogon
> dmakogon at mirantis.com
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

More information about the OpenStack-dev mailing list