[openstack-dev] [keystone][all] Incorporating performance feedback into the review process

Lance Bragstad lbragstad at gmail.com
Fri Jun 10 21:11:07 UTC 2016


Ok - here is what I have so far [0], and I admit there is still a bunch of
work to do [1]. I encourage folks to poke through the code and suggest
improvements via GitHub Issues. I've never really stood up third-party
testing before, so this is completely new to me and I'm open to feedback
and to working together. I'm trying to document everything I want to do in
GitHub Issues [2]. Currently this is opt-in only and the job will not vote
on a patch. Here is an example of what it looks like in use [3].
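
For anyone curious about the non-voting piece: the job only leaves a review
comment on the patch. A rough sketch of doing that against Gerrit's REST API
looks something like this (the host, credentials, and message below are
placeholders, not what the PoC actually does):

    import requests

    GERRIT = 'https://review.openstack.org'
    # A hypothetical third-party CI account; some Gerrit setups want
    # HTTP digest auth instead of basic auth here.
    AUTH = ('keystone-performance-ci', 'http-password')

    def post_results(change_number, message):
        # Omitting "labels" from the ReviewInput means no vote is cast.
        url = '%s/a/changes/%s/revisions/current/review' % (GERRIT,
                                                            change_number)
        resp = requests.post(url, json={'message': message}, auth=AUTH)
        resp.raise_for_status()

    post_results(326246, 'keystone-performance results:\n ...')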

This is very much still in the PoC stage. I'm happy to review pull requests
and manage deployments to the testing infrastructure for others who want to
make this better or leverage it to do their own performance testing locally.

Thanks,

Lance
irc: lbragstad

[0] https://github.com/lbragstad/keystone-performance
[1] https://github.com/lbragstad/keystone-performance/issues
[2]
https://github.com/lbragstad/keystone-performance/issues?utf8=%E2%9C%93&q=is%3Aissue
[3] https://review.openstack.org/#/c/326246/

On Mon, Jun 6, 2016 at 12:45 PM, Clint Byrum <clint at fewbar.com> wrote:

> Excerpts from Brant Knudson's message of 2016-06-03 15:16:20 -0500:
> > On Fri, Jun 3, 2016 at 2:35 PM, Lance Bragstad <lbragstad at gmail.com>
> > wrote:
> >
> > > Hey all,
> > >
> > > I have been curious about the impact of providing performance feedback
> > > as part of the review process. From what I understand, keystone used to
> > > have a performance job that would run against proposed patches (I've
> > > only heard about it so someone else will have to keep me honest about
> > > its timeframe), but it sounds like it wasn't valued.
> > >
> > >
> > We had a job running rally for a year (I think) that nobody ever looked
> > at so we decided it was a waste and stopped running it.
> >
> > > I think revisiting this topic is valuable, but it raises a series of
> > > questions.
> > >
> > > Initially it probably only makes sense to test a reasonable set of
> > > defaults. What do we want these defaults to be? Should they be
> > > determined by DevStack, openstack-ansible, or something else?
> > >
> > >
> > A performance test is going to depend on the environment (the machines,
> > disks, network, etc.), the existing data (tokens, revocations, users,
> > etc.), and the config (fernet, uuid, caching, etc.). If these aren't
> > consistent between runs then the results are not going to be usable.
> > (This is the problem with running rally on infra hardware.) If the data
> > isn't realistic (1000s of tokens, etc.) then the results are going to be
> > at best not useful or at worst misleading.
> >
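
As an illustration of the config dimension, these are the kinds of
keystone.conf settings a job would have to hold constant between runs
(the values below are only an example, not what any job uses today):

    [token]
    provider = fernet
    caching = true

    [cache]
    enabled = true
    backend = dogpile.cache.memcached
    memcache_servers = localhost:11211
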
>
> That's why I started the counter-inspection spec:
>
>
> http://specs.openstack.org/openstack/qa-specs/specs/devstack/counter-inspection.html
>
> It just tries to count operations, and graph those. I've, unfortunately,
> been pulled off to other things of late, but I do intend to loop back
> and hit this hard over the next few months to try and get those graphs.
>
> What we'd get initially is just graphs of how many messages we push
> through RabbitMQ, and how many rows/queries/transactions we push through
> mysql. We may also want to add counters like how many API requests
> happen, and how many retries happen inside the code itself.
>
> There's a _TON_ we can do now to ensure that we know what the trends are
> when something gets "slow", so we can look for a gradual "death by 1000
> papercuts" trend or a hockey stick that can be tied to a particular
> commit.
>
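
To make the counting idea concrete, a minimal sketch of emitting such
counters with the python statsd client and a SQLAlchemy hook could look
like this (the metric names and endpoints are placeholders; this is not
what the counter-inspection spec prescribes):

    import statsd
    from sqlalchemy import create_engine, event

    stats = statsd.StatsClient('localhost', 8125, prefix='keystone')
    engine = create_engine('mysql+pymysql://keystone:secret@localhost/keystone')

    @event.listens_for(engine, 'before_cursor_execute')
    def count_query(conn, cursor, statement, parameters, context,
                    executemany):
        # One counter per rough statement type (select, insert, update, ...);
        # statsd/graphite take care of aggregating them over time.
        stats.incr('sql.%s' % statement.split(None, 1)[0].lower())
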
> > > What do the performance test criteria look like, and where do they
> > > live? Do they just consist of running tempest?
> > >
> > >
> > I don't think tempest is going to give us the performance numbers we're
> > looking for. I've seen a few scripts, and have my own, for testing the
> > performance of token validation, token creation, user creation, etc.,
> > which I think will do the exact tests we want, and we can get the results
> > formatted however we like.
> >
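
For reference, the kind of script Brant mentions can be as small as timing
the raw v3 API calls; a rough sketch (the endpoint and credentials are
assumptions, and this is not anyone's actual script):

    import time

    import requests

    KEYSTONE = 'http://localhost:5000/v3'  # e.g. a devstack keystone
    BODY = {'auth': {
        'identity': {'methods': ['password'],
                     'password': {'user': {'name': 'admin',
                                           'domain': {'id': 'default'},
                                           'password': 'secret'}}},
        'scope': {'project': {'name': 'admin',
                              'domain': {'id': 'default'}}}}}

    def issue():
        start = time.time()
        resp = requests.post(KEYSTONE + '/auth/tokens', json=BODY)
        resp.raise_for_status()
        return resp.headers['X-Subject-Token'], time.time() - start

    def validate(admin_token, subject_token):
        start = time.time()
        resp = requests.get(KEYSTONE + '/auth/tokens',
                            headers={'X-Auth-Token': admin_token,
                                     'X-Subject-Token': subject_token})
        resp.raise_for_status()
        return time.time() - start

    admin_token, _ = issue()
    create_times, validate_times = [], []
    for _ in range(100):
        token, elapsed = issue()
        create_times.append(elapsed)
        validate_times.append(validate(admin_token, token))
    print('create avg: %.3fs  validate avg: %.3fs'
          % (sum(create_times) / len(create_times),
             sum(validate_times) / len(validate_times)))
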
>
> Agreed that tempest will only give a limited view. Ideally one would
> also test things like "after we've booted 1000 vms, do we end up reading
> 1000 more rows, or 1000 * 1000 more rows?"
>
> > > From a contributor and reviewer perspective, it would be nice to have
> > > the ability to compare performance results across patch sets. I
> > > understand that keeping all performance results for every patch for an
> > > extended period of time is unrealistic. Maybe we take a daily
> > > performance snapshot against master and use that to map performance
> > > patterns over time?
> > >
> > >
> > Where are you planning to store the results?
> >
>
> Infra has a graphite/statsd cluster which is made for collecting metrics
> on tests. It might need to be expanded a bit, but it should be
> relatively cheap to do so given the benefit of having some of these
> numbers.
>
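
If the numbers do land in graphite, one possible shape is a timer series
keyed by change and patch set, so results can be compared across patch
sets or graphed over time against master (the host and naming below are
only an illustration):

    import statsd

    # e.g. keystone.perf.326246.1.token_create in graphite
    stats = statsd.StatsClient('graphite.example.org', 8125,
                               prefix='keystone.perf.326246.1')
    stats.timing('token_create', 42.7)    # milliseconds
    stats.timing('token_validate', 12.3)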