<div dir="ltr"><div>Hey folks!</div><div><br></div><div>On Tuesday we had a great summit session about the performance team kick-off, and yesterday’s LDT session was great as well; I’m really glad to see how important the OpenStack performance topic is for all of us. A 40-minute session was surely not enough to analyse everyone’s feedback and the bottlenecks people usually see, so I’ll try to sum up what has been discussed and the next steps in this email.</div><div><br></div><div>The performance team kick-off session (<a href="https://etherpad.openstack.org/p/mitaka-cross-project-performance-team-kick-off">https://etherpad.openstack.org/p/mitaka-cross-project-performance-team-kick-off</a>) can be briefly described with the following points:</div><div><br></div><div><ul><li>IBM, Intel, HP, Mirantis, Rackspace, Red Hat, Yahoo! and others took part in the session<br></li><li>Various tools are currently used for OpenStack benchmarking and profiling:<br></li><ul><li>Rally (IBM, HP, Mirantis, Yahoo!) - see the P.S. below for a minimal task example</li><li>Shaker (Mirantis; its functionality is being merged into Rally right now)</li><li>Gatling (Rackspace)</li><li>Zipkin (Yahoo!)</li><li>JMeter (Yandex)</li><li>and others…</li></ul><li>Various issues have been observed while operating OpenStack clouds (the full list can be found here - <a href="https://etherpad.openstack.org/p/openstack-performance-issues">https://etherpad.openstack.org/p/openstack-performance-issues</a>). The most mentioned issues were the following:<br></li><ul><li>performance of DB-related layers (the DB itself and oslo.db) - there are about 7 DB abstraction layers in Nova; the performance of the Nova conductor was mentioned several times</li><li>performance of MQ-related layers (the MQ itself and oslo.messaging)</li></ul><li>Different companies use different standards for performance benchmarking (covering both control plane and data plane testing)<br></li><li>Judging by the comments, the most desired outputs from the team will be:<br></li><ul><li>agree on a “performance testing standard”, including answers to the following questions:</li><ul><li>what tools need to be used for OpenStack performance benchmarking?</li><li>what benchmarking metrics need to be covered? what would we like to compare?</li><li>what scenarios need to be covered?</li><li>how can we compare the performance of different cloud deployments?</li><li>what performance deployment patterns can be used for various workloads?</li></ul><li>share test plans and perform benchmarking tests</li><li>create methodologies and documentation about OpenStack deployment and performance testing best practices</li></ul></ul></div><div><br></div><div>We’re going to cover all these topics further. First of all, an IRC channel for the discussions has been created: <b>#openstack-performance</b>. We’re going to have a weekly meeting on that channel to track current progress; a Doodle poll for choosing the meeting time can be found here: <a href="http://doodle.com/poll/wv6qt8eqtc3mdkuz#table">http://doodle.com/poll/wv6qt8eqtc3mdkuz#table</a></div><div> (I was brave enough not to include timeslots that overlap with some of my really hard-to-move activities :))</div><div><br></div><div>Let’s use next week for voting, and have the first IRC meeting in our channel the week after next. We can start our further discussions by defining the terms “performance” and “performance testing” and by analysing the benchmarking tools.</div><div><br></div><div>Cheers,</div><div>Dina</div>
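<div><br></div><div>P.S. For anyone who hasn’t tried Rally yet, here is roughly what a minimal task file looks like. This is just a sketch: the scenario is one of Rally’s standard samples, and the flavor/image names and the times/concurrency/SLA values are placeholders you’d adjust for your own cloud (JSON doesn’t allow comments, so: "runner" controls how many iterations run and how many in parallel, "context" creates temporary tenants/users for the run, and "sla" makes the task fail if any iteration fails).</div><div><pre>
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "m1.tiny"},
                "image": {"name": "cirros-0.3.4-x86_64-uec"}
            },
            "runner": {"type": "constant", "times": 10, "concurrency": 2},
            "context": {"users": {"tenants": 1, "users_per_tenant": 1}},
            "sla": {"failure_rate": {"max": 0}}
        }
    ]
}
</pre></div><div>Something like “rally task start task.json” (using whatever filename you saved it under) runs it and reports per-iteration durations and failures - the kind of numbers we could standardise on when comparing deployments.</div>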
</div>