[openstack-dev] [oslo.messaging] Performance testing. Initial steps.

Denis Makogon dmakogon at mirantis.com
Tue Jan 27 15:56:00 UTC 2015


On Thu, Jan 15, 2015 at 8:56 PM, Doug Hellmann <doug at doughellmann.com>
wrote:

>
> > On Jan 15, 2015, at 1:30 PM, Denis Makogon <dmakogon at mirantis.com>
> wrote:
> >
> > Good day to All,
> >
> > The question that I’d like to raise here is not a simple one, so I’d
> like to involve as many readers as I can. I’d like to talk about
> oslo.messaging performance testing. As a community we’ve put a lot of
> effort into making the widely used oslo.messaging drivers as stable as
> possible. Stability is a good thing, but is it enough to say “works
> well”? I’d say that it’s not.
> > Since oslo.messaging uses a driver-based messaging workflow, it makes
> sense to dig into each driver and collect all the required/possible
> performance metrics.
> > First of all, it makes sense to figure out how to perform the
> performance testing; the first idea that came to my mind is to simulate
> high load on each of the corresponding drivers. Here comes the question
> of how this can be accomplished within the available oslo.messaging
> tools - high load on any driver can be produced by an application that:
> >       • can spawn multiple emitters (RPC clients) and consumers (RPC
> servers);
> >       • can force clients to send a pre-defined number of messages
> of any length.
>
> That makes sense.
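The emitter/consumer setup described above could be sketched as follows. This is a minimal stand-in, assuming plain threads and a `queue.Queue` in place of real oslo.messaging RPC clients and servers (which would need a running broker); the names `run_emitter`, `run_consumer`, and `simulate_load` are illustrative, not oslo.messaging API:

```python
# Stand-in load generator: N emitters push a pre-defined number of
# fixed-length messages to M consumers over a shared queue.
import queue
import threading

def run_emitter(q, n_messages, msg_len):
    """Send n_messages payloads of msg_len bytes each."""
    payload = b"x" * msg_len
    for _ in range(n_messages):
        q.put(payload)

def run_consumer(q, received):
    """Drain the queue until the poison pill (None) arrives."""
    while True:
        msg = q.get()
        if msg is None:
            break
        received.append(len(msg))

def simulate_load(n_emitters=4, n_consumers=2, n_messages=100, msg_len=1024):
    q = queue.Queue()
    received = []
    consumers = [threading.Thread(target=run_consumer, args=(q, received))
                 for _ in range(n_consumers)]
    emitters = [threading.Thread(target=run_emitter,
                                 args=(q, n_messages, msg_len))
                for _ in range(n_emitters)]
    for t in consumers + emitters:
        t.start()
    for t in emitters:
        t.join()
    for _ in consumers:      # one poison pill per consumer
        q.put(None)
    for t in consumers:
        t.join()
    return len(received)     # total messages delivered
```

The same harness shape (parameterized emitter count, message count, and message length) is what a real driver-level tool would need, with the queue replaced by an actual transport.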
>
> > Another question is why we need such a thing. Profiling and
> performance testing can improve the way our drivers are implemented.
> They can show us the actual “bottlenecks” in the messaging process in
> general. In some cases it makes sense to figure out where the problem
> lies - whether AMQP itself causes the messaging problems, or the driver
> that speaks to AMQP fails.
> > The next thing I want to discuss is the architecture of the
> profiling/performance testing. As I see it, a “good” way would be to
> add profiling code to each driver. If there are any objections or a
> better solution, please bring them to light.
>
> What sort of extra profiling code do you anticipate needing?
>
>
As far as I can foresee (taking [1] into account), a couple of
decorators, possibly one that handles the metering process. The biggest
part of the code will be the high-load tool that will become a part of
oslo.messaging. Another open question is adding the corresponding
dependencies to the project.
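A sketch of the kind of metering decorator I have in mind, assuming time-based metrics are what we collect first; the `TIMINGS` store and `record_timing` name are made up for illustration, not an existing oslo API:

```python
# Decorator that records wall-clock time per call into a shared store.
import functools
import time

TIMINGS = {}  # function name -> list of elapsed seconds

def record_timing(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return func(*args, **kwargs)
        finally:
            TIMINGS.setdefault(func.__name__, []).append(
                time.monotonic() - start)
    return wrapper

@record_timing
def send(msg):
    # stand-in for a driver's send path
    return len(msg)
```

Wrapping the driver entry points this way keeps the profiling code out of the drivers' own logic, which is one way to address the dependency concern.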


> > Once we have a final design for profiling, we will need to figure
> out the tools for it. After searching the web, I found a pretty
> interesting topic related to Python profiling [1]. After some
> investigation, it makes sense to discuss the following profiling
> options (apply one or both):
> >       • line-by-line timing and execution frequency with a profiler
> (there are possible Pros and Cons, but I would say the per-line
> statistics are more than welcome at the initial performance testing
> steps);
> >       • memory/CPU consumption.
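The per-line statistics mentioned above would come from a tool like line_profiler (discussed in [1]); as a zero-dependency starting point, the stdlib cProfile already gives per-function call counts and cumulative times, which is often enough to locate a hot path before reaching for per-line tools:

```python
# Coarse-grained profiling with the stdlib: profile a function and
# render the top entries of the stats report to a string.
import cProfile
import io
import pstats

def hot_path(n):
    # stand-in for a driver code path we suspect is slow
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
hot_path(100000)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
```

For memory, memory_profiler (also covered in [1]) offers a similar decorator-based, per-line view.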
> > Metrics. The most useful metric for us is time - any time-based
> metric - since it is very useful to know at which step and/or by whom
> a delay or timeout was caused; as said above, that would let us figure
> out whether AMQP or the driver fails to do what it was designed for.
> > Before proposing a spec I’d like to gather any other requirements,
> use cases and restrictions for messaging performance testing. Also, if
> there are any success stories in boosting Python performance, feel
> free to share them.
>
> The metrics to measure depend on the goal. Do we think the messaging code
> is using too much memory? Is it too slow? Or is there something else
> causing concern?
>
It does make sense to have profiling for cases when one is trying to
scale up a cluster, and it would be good to be able to tell whether the
scaled AMQP service has its best configuration (I guess here the
question of doing performance testing with well-known tools would come
up). The most interesting question is how much a messaging driver
decreases (or leaves untouched) the throughput between an RPC client
and server. These metering results can then be compared against those
of the tools designed for performance testing. That's why having
profiling/performance testing based on a high-load technique would be a
good step forward.
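The throughput comparison suggested above could be harnessed roughly like this: time how many calls per second a trivial in-process "RPC" loop sustains as a baseline, then point the same harness at a real driver and compare the two numbers. `measure_throughput` and `echo_server` are illustrative names for this sketch:

```python
# Throughput harness: drive a callable with fixed-size payloads and
# report calls per second.
import time

def echo_server(payload):
    # stand-in for an RPC server endpoint; a real run would dispatch
    # through a messaging driver instead
    return payload

def measure_throughput(call, n_calls=10000, payload=b"x" * 512):
    start = time.monotonic()
    for _ in range(n_calls):
        call(payload)
    elapsed = time.monotonic() - start
    return n_calls / elapsed  # messages per second

baseline = measure_throughput(echo_server)
```

The gap between the baseline number and the driver-backed number is exactly the per-driver overhead the thread is asking about.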


> >
> >
> >
> > [1] http://www.huyng.com/posts/python-performance-analysis/
> >
> > Kind regards,
> > Denis Makogon
> > IRC: denis_makogon
> > dmakogon at mirantis.com
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Kind regards,
Denis M.

