[Openstack-operators] Performance Team summit session results
Dina Belova
dbelova at mirantis.com
Thu Oct 29 14:30:03 UTC 2015
Hey folks!
On Tuesday we had a great summit session about the performance team kick-off,
and yesterday there was a great LDT session as well; I’m really glad to see
how important the OpenStack performance topic is for all of us. A 40-minute
session surely was not enough to analyse everyone’s feedback and the
bottlenecks people usually see, so I’ll try to summarise what was discussed
and the next steps in this email.
The performance team kick-off session (
https://etherpad.openstack.org/p/mitaka-cross-project-performance-team-kick-off)
can be briefly summarised with the following points:
- IBM, Intel, HP, Mirantis, Rackspace, Red Hat, Yahoo! and others took part
in the session
- Various tools are currently used for OpenStack benchmarking and
profiling:
- Rally (IBM, HP, Mirantis, Yahoo!)
- Shaker (Mirantis; its functionality is being merged into Rally right now)
- Gatling (Rackspace)
- Zipkin (Yahoo!)
- JMeter (Yandex)
- and others…
- Various issues have been seen while operating OpenStack clouds
(the full list can be found here -
https://etherpad.openstack.org/p/openstack-performance-issues). The most
frequently mentioned issues were the following:
- performance of DB-related layers (the DB itself and oslo.db) - there are
about 7 DB abstraction layers in Nova; the performance of Nova conductor was
mentioned several times
- performance of MQ-related layers (MQ itself and oslo.messaging)
- Different companies are using different standards for performance
benchmarking (both control plane and data plane testing)
- Based on the comments, the most desired outputs from the team will be:
- agree on a “performance testing standard”, including answers to the
following questions:
- what tools need to be used for OpenStack performance
benchmarking?
- what benchmarking metrics need to be covered? what would we like
to compare?
- what scenarios need to be covered?
- how can we compare performance of different cloud deployments?
- what performance deployment patterns can be used for various
workloads?
- share test plans and perform benchmarking tests (see the example sketch
below)
- create methodologies and documentation about best OpenStack
deployment and performance testing practices
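To make the “share test plans” point a bit more concrete, here is a minimal
sketch of what such a shared scenario could look like. It assumes the classic
Rally JSON task format and the NovaServers.boot_and_delete_server scenario;
the flavor and image names are just placeholders, not an agreed standard:

    # A minimal sketch (an assumption, not an agreed standard): generate a
    # Rally task file that boots and deletes a Nova server 20 times, 2 in
    # parallel. Scenario name, flavor and image are illustrative placeholders.
    import json

    task = {
        "NovaServers.boot_and_delete_server": [
            {
                # arguments passed to the scenario itself
                "args": {
                    "flavor": {"name": "m1.tiny"},
                    "image": {"name": "cirros"},
                },
                # how the load is generated: 20 iterations, concurrency 2
                "runner": {
                    "type": "constant",
                    "times": 20,
                    "concurrency": 2,
                },
                # temporary tenants/users created for the run
                "context": {
                    "users": {
                        "tenants": 2,
                        "users_per_tenant": 1,
                    },
                },
            }
        ]
    }

    with open("boot_and_delete.json", "w") as f:
        json.dump(task, f, indent=4)

The resulting file could then be run with something like “rally task start
boot_and_delete.json” against a registered deployment, and the duration and
failure reports it produces are exactly the kind of numbers we would want to
be able to compare between different cloud deployments.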
We’re going to cover all these topics further. First of all, an IRC channel
for the discussions has been created: *#openstack-performance*. We’re going
to have a weekly meeting on that channel to track current progress; the
doodle for voting on the timeslot can be found here:
http://doodle.com/poll/wv6qt8eqtc3mdkuz#table
(I was brave enough not to include timeslots that overlap with some of my
really hard-to-move activities :))
Let’s use next week as the voting period and have the first IRC meeting in
our channel the week after next. We can start our further discussions by
defining the terms “performance” and “performance testing” and by analysing
the benchmarking tools.
Cheers,
Dina