[openstack-dev] [keystone][all] Incorporating performance feedback into the review process

Morgan Fainberg morgan.fainberg at gmail.com
Fri Jun 3 22:44:23 UTC 2016


On Jun 3, 2016 13:16, "Brant Knudson" <blk at acm.org> wrote:
>
>
>
> On Fri, Jun 3, 2016 at 2:35 PM, Lance Bragstad <lbragstad at gmail.com>
> wrote:
>>
>> Hey all,
>>
>> I have been curious about the impact of providing performance feedback
>> as part of the review process. From what I understand, keystone used to
>> have a performance job that would run against proposed patches (I've
>> only heard about it, so someone else will have to keep me honest about
>> its timeframe), but it sounds like it wasn't valued.
>>
>
> We had a job running rally for a year (I think) that nobody ever looked
> at, so we decided it was a waste and stopped running it.
>
>>
>> I think revisiting this topic is valuable, but it raises a series of
>> questions.
>>
>> Initially it probably only makes sense to test a reasonable set of
>> defaults. What do we want these defaults to be? Should they be
>> determined by DevStack, openstack-ansible, or something else?
>>
>
> A performance test is going to depend on the environment (the machines,
> disks, network, etc.), the existing data (tokens, revocations, users,
> etc.), and the config (fernet, uuid, caching, etc.). If these aren't
> consistent between runs then the results are not going to be usable.
> (This is the problem with running rally on infra hardware.) If the data
> isn't realistic (1000s of tokens, etc.) then the results are going to be
> at best not useful or at worst misleading.
>
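
On the realistic-data point, here is a rough, purely hypothetical
sketch of seeding a deployment with a few thousand tokens before a run.
It assumes a devstack-style keystone at http://localhost:5000 with
admin/secret credentials, so adjust for your environment:

    # Seed the backend with tokens so the validation/revocation paths
    # see realistic data instead of an empty database.
    import requests

    KEYSTONE = "http://localhost:5000/v3"
    AUTH_BODY = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {"name": "admin",
                                      "password": "secret",
                                      "domain": {"id": "default"}}},
            },
            "scope": {"project": {"name": "admin",
                                  "domain": {"id": "default"}}},
        }
    }

    def seed_tokens(count=5000):
        session = requests.Session()
        for _ in range(count):
            resp = session.post(KEYSTONE + "/auth/tokens", json=AUTH_BODY)
            resp.raise_for_status()

    if __name__ == "__main__":
        seed_tokens()
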
>> What do the performance test criteria look like, and where do they
>> live? Does it just consist of running tempest?
>>
>
> I don't think tempest is going to give us the performance numbers we're
> looking for. I've seen a few scripts, and have my own, for testing the
> performance of token validation, token creation, user creation, etc.,
> which I think will do the exact tests we want, and we can get the
> results formatted however we like.
>
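
To make the kind of script Brant describes concrete, here is a minimal,
hypothetical sketch that times token creation and validation against
keystone's v3 API (again assuming a local endpoint and admin/secret
credentials; the numbers it prints are only meaningful on dedicated,
consistent hardware):

    import time
    import requests

    KEYSTONE = "http://localhost:5000/v3"
    AUTH_BODY = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {"name": "admin",
                                      "password": "secret",
                                      "domain": {"id": "default"}}},
            },
            "scope": {"project": {"name": "admin",
                                  "domain": {"id": "default"}}},
        }
    }

    def issue_token(session):
        # POST /v3/auth/tokens returns the new token in X-Subject-Token.
        resp = session.post(KEYSTONE + "/auth/tokens", json=AUTH_BODY)
        resp.raise_for_status()
        return resp.headers["X-Subject-Token"]

    def bench(iterations=200):
        session = requests.Session()
        admin_token = issue_token(session)
        create, validate = [], []
        for _ in range(iterations):
            start = time.time()
            token = issue_token(session)
            create.append(time.time() - start)

            # GET /v3/auth/tokens validates the X-Subject-Token header.
            start = time.time()
            resp = session.get(KEYSTONE + "/auth/tokens",
                               headers={"X-Auth-Token": admin_token,
                                        "X-Subject-Token": token})
            resp.raise_for_status()
            validate.append(time.time() - start)

        for name, samples in (("create", create), ("validate", validate)):
            samples.sort()
            print("%s: median %.1f ms, p95 %.1f ms" % (
                name, samples[len(samples) // 2] * 1000,
                samples[int(len(samples) * 0.95)] * 1000))

    if __name__ == "__main__":
        bench()
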
>> From a contributor and reviewer perspective, it would be nice to have
>> the ability to compare performance results across patch sets. I
>> understand that keeping all performance results for every patch for an
>> extended period of time is unrealistic. Maybe we take a daily
>> performance snapshot against master and use that to map performance
>> patterns over time?
>>
>
> Where are you planning to store the results?
>
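
On storage, even something very simple would help. Here is a
hypothetical sketch of the daily-snapshot idea: append each run's
numbers to a JSON history file (the file name and schema are invented)
and report the delta against the previous entry:

    import json
    import os
    from datetime import date

    SNAPSHOT_FILE = "keystone-perf-history.json"

    def record_snapshot(results, commit):
        # results: e.g. {"token_create_ms": 42.1, "token_validate_ms": 18.7}
        history = []
        if os.path.exists(SNAPSHOT_FILE):
            with open(SNAPSHOT_FILE) as f:
                history = json.load(f)

        if history:
            previous = history[-1]["results"]
            for key, value in sorted(results.items()):
                if previous.get(key):
                    delta = 100.0 * (value - previous[key]) / previous[key]
                    print("%s: %.1f ms (%+.1f%% vs previous snapshot)"
                          % (key, value, delta))

        history.append({"date": date.today().isoformat(),
                        "commit": commit,
                        "results": results})
        with open(SNAPSHOT_FILE, "w") as f:
            json.dump(history, f, indent=2)
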
>>
>> Have any other projects implemented a similar workflow?
>>
>> I'm open to suggestions and discussions because I can't imagine there
>> aren't other folks out there interested in this type of pre-merge data
>> point.
>>
>>
>> Thanks!
>>
>> Lance
>>
>
> Since the performance numbers are going to be very dependent on the
> machines, I think the only way this is going to work is if somebody's
> willing to set up dedicated hardware to run the tests on. If you're
> doing that, then set it up to mimic how you deploy keystone, deploy the
> patch under test, run the performance tests, and report the results.
> I'd be fine with something like this commenting on keystone changes.
> None of this has to involve openstack infra. Gerrit has a REST API to
> get the current patches.
>
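
For the Gerrit piece, polling for open keystone changes outside of
infra is straightforward. A hypothetical sketch using the Gerrit REST
API (note the XSSI-protection prefix Gerrit puts in front of its JSON
responses):

    import json
    import requests

    GERRIT = "https://review.openstack.org"

    def open_keystone_changes():
        resp = requests.get(
            GERRIT + "/changes/",
            params={"q": "project:openstack/keystone status:open",
                    "o": "CURRENT_REVISION"})
        resp.raise_for_status()
        # Gerrit prepends ")]}'" to JSON responses; drop that first line.
        body = resp.text.split("\n", 1)[1]
        return json.loads(body)

    if __name__ == "__main__":
        for change in open_keystone_changes():
            print(change["_number"], change["subject"])

A driver along these lines could check out each current revision on the
dedicated box, run the benchmarks, and report the numbers back as a
review comment, as you suggest.
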
> Everyone that's got performance requirements should do the same. Maybe
> I can get the group I'm in to try it sometime.
>
> - Brant
>

You have outlined everything I was asking for from rally as a useful
metric, but simply getting the resources was a problem.

Unfortunately, I have not seen anyone willing to offer these dedicated
resources and/or to report the delta over time or per patchset.

There is only so much we can do without consistent, reliably identical
test environments.

I would be very happy to see this type of testing consistently reported,
especially if it mimics real workloads as well as synthetic ones like
rally or what Matt and Dolph use.

--Morgan