[openstack-dev] [keystone][all] Incorporating performance feedback into the review process

Lance Bragstad lbragstad at gmail.com
Fri Jun 3 19:35:44 UTC 2016


Hey all,

I have been curious about the impact of providing performance feedback as
part of the review process. From what I understand, keystone used to have a
performance job that would run against proposed patches (I've only heard
about it, so someone else will have to keep me honest about its timeframe),
but it sounds like it wasn't valued.

I think revisiting this topic is valuable, but it raises a series of
questions.

Initially, it probably only makes sense to test against a reasonable set of
defaults. What do we want those defaults to be? Should they be determined
by DevStack, openstack-ansible, or something else?
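
For illustration only, the kind of defaults I have in mind would look like
a stock devstack-style keystone.conf (this is an example of the knobs that
matter for performance, not a proposed baseline):

[token]
provider = fernet

[fernet_tokens]
key_repository = /etc/keystone/fernet-keys/

[cache]
enabled = true
backend = dogpile.cache.memcached
memcache_servers = localhost:11211

[database]
connection = mysql+pymysql://keystone:<password>@127.0.0.1/keystone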

What do the performance test criteria look like, and where do they live? Do
they just consist of running tempest?
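
To make that concrete, here is a minimal sketch of what such a job could
measure. It assumes a devstack keystone at the usual local endpoint and
throwaway admin credentials; the endpoint, credentials, and thresholds are
all assumptions for discussion, not an existing job:

# Strawman timing loop: issue N tokens against a devstack keystone and
# print a small JSON summary that a job could archive per patch set.
import json
import time

import requests

KEYSTONE_URL = "http://localhost/identity/v3"  # assumed devstack endpoint
AUTH_BODY = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {"name": "Default"},
                    "password": "secret",  # assumed throwaway credentials
                }
            }
        }
    }
}


def issue_tokens(count=100):
    """Time `count` POST /v3/auth/tokens calls and return the latencies."""
    latencies = []
    for _ in range(count):
        start = time.time()
        resp = requests.post(KEYSTONE_URL + "/auth/tokens", json=AUTH_BODY)
        resp.raise_for_status()
        latencies.append(time.time() - start)
    return latencies


if __name__ == "__main__":
    results = sorted(issue_tokens())
    print(json.dumps({
        "requests": len(results),
        "mean": sum(results) / len(results),
        "p95": results[int(len(results) * 0.95)],
    }, indent=2))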

From a contributor's and reviewer's perspective, it would be nice to be able
to compare performance results across patch sets. I understand that keeping
every performance result for every patch for an extended period of time is
unrealistic. Maybe we could take a daily performance snapshot against master
and use that to map performance patterns over time?
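
As a strawman for that comparison step, assuming each run publishes a small
JSON summary like the one above and the nightly master snapshot is archived
somewhere retrievable (again, all assumptions, not existing infrastructure):

# Strawman comparison: load the nightly master summary and the patch run
# summary, and flag a regression if mean latency grew past a threshold.
import json
import sys

THRESHOLD = 1.10  # assumption: complain if the patch is >10% slower


def compare(master_summary, patch_summary):
    with open(master_summary) as f:
        master = json.load(f)
    with open(patch_summary) as f:
        patch = json.load(f)
    ratio = patch["mean"] / master["mean"]
    print("mean token-issue latency, patch/master: %.2f" % ratio)
    return ratio <= THRESHOLD


if __name__ == "__main__":
    sys.exit(0 if compare(sys.argv[1], sys.argv[2]) else 1)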

Have any other projects implemented a similar workflow?

I'm open to suggestions and discussion because I can't imagine there aren't
other folks out there interested in this kind of pre-merge data.

Thanks!

Lance