[openstack-dev] [keystone][all] Incorporating performance feedback into the review process
Lance Bragstad
lbragstad at gmail.com
Fri Jun 3 20:57:32 UTC 2016
Dedicated and isolated infrastructure is a must if we want consistent
performance numbers. If we can come up with a reasonable plan, I'd be happy
to ask for resources. Even with dedicated infrastructure, we would still
have to keep in mind that the results are a data point from a single
provider that hopefully highlights a general trend in performance.
Here is a list of focus points as I see them so far:
- Dedicated hardware is required in order to achieve reasonably
consistent results
- Tight loop micro benchmarks (a rough sketch follows this list)
- Tests highlighting the performance cases we care about
- The ability to determine a sane control
- The ability to test proposed patches, compare them to the control,
and leave comments on reviews
- Reproducible setup and test runner so that others can run these
against a dedicated performance environment
- Daily snapshots of performance published publicly (nice to have)
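To make the "tight loop micro benchmark" bullet concrete, here is a rough
sketch of the kind of script I have in mind. The endpoint, user, and
password are placeholders, not a real deployment:

    # Rough sketch: time keystone token issuance in a tight loop.
    # AUTH_URL and the credentials below are placeholders.
    import time

    import requests

    AUTH_URL = "http://localhost:5000/v3/auth/tokens"
    BODY = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "admin",
                        "domain": {"id": "default"},
                        "password": "secret",
                    }
                },
            }
        }
    }

    timings = []
    for _ in range(1000):
        start = time.time()
        requests.post(AUTH_URL, json=BODY).raise_for_status()
        timings.append(time.time() - start)

    timings.sort()
    print("p50 %.3fs  p90 %.3fs  p99 %.3fs" % (
        timings[len(timings) // 2],
        timings[int(len(timings) * 0.9)],
        timings[int(len(timings) * 0.99)],
    ))

Something this small is easy to hold constant across runs, which matters
more than the sophistication of the tool.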
On Fri, Jun 3, 2016 at 3:16 PM, Brant Knudson <blk at acm.org> wrote:
>
>
> On Fri, Jun 3, 2016 at 2:35 PM, Lance Bragstad <lbragstad at gmail.com>
> wrote:
>
>> Hey all,
>>
>> I have been curious about the impact of providing performance feedback as
>> part of the review process. From what I understand, keystone used to have a
>> performance job that would run against proposed patches (I've only heard
>> about it so someone else will have to keep me honest about its timeframe),
>> but it sounds like it wasn't valued.
>>
>>
> We had a job running rally for a year (I think) that nobody ever looked at
> so we decided it was a waste and stopped running it.
>
>
>> I think revisiting this topic is valuable, but it raises a series of
>> questions.
>>
>> Initially it probably only makes sense to test a reasonable set of
>> defaults. What do we want these defaults to be? Should they be determined
>> by DevStack, openstack-ansible, or something else?
>>
>>
> A performance test is going to depend on the environment (the machines,
> disks, network, etc), the existing data (tokens, revocations, users, etc.),
> and the config (fernet, uuid, caching, etc.). If these aren't consistent
> between runs then the results are not going to be usable. (This is the
> problem with running rally on infra hardware.) If the data isn't realistic
> (1000s of tokens, etc.) then the results are going to be at best not useful
> or at worst misleading.
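Agreed. Just to pin down the config axis, these are the keystone.conf
settings that most affect token benchmarks and would need to be identical
across runs (values here are examples only, not a recommendation):

    # Example keystone.conf fragment; values are illustrative.
    [token]
    provider = fernet                   # vs. uuid: very different profiles

    [cache]
    enabled = true                      # caching changes validation cost a lot
    backend = dogpile.cache.memcached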
>
>> What do the performance test criteria look like, and where do they live?
>> Does it just consist of running tempest?
>>
>>
> I don't think tempest is going to give us the performance numbers we're
> looking for. I've seen a few scripts, and have my own, for testing the
> performance of token validation, token creation, user creation, etc., which
> I think will do exactly the tests we want, and we can get the results
> formatted however we like.
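For token validation specifically, the core of a script like that can be
tiny. A sketch, with the endpoint and both tokens as placeholders you would
get from a prior auth call:

    # Sketch: time token validation (GET /v3/auth/tokens) in a loop.
    import time

    import requests

    URL = "http://localhost:5000/v3/auth/tokens"
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",       # token allowed to validate others
        "X-Subject-Token": "SUBJECT_TOKEN",  # token being validated
    }

    start = time.time()
    for _ in range(1000):
        requests.get(URL, headers=HEADERS).raise_for_status()
    print("%.1f validations/sec" % (1000 / (time.time() - start)))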
>
>> From a contributor and reviewer perspective, it would be nice to have the
>> ability to compare performance results across patch sets. I understand that
>> keeping all performance results for every patch for an extended period of
>> time is unrealistic. Maybe we take a daily performance snapshot against
>> master and use that to map performance patterns over time?
>>
>>
> Where are you planning to store the results?
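That's still open. One lightweight possibility, purely as a sketch: keep one
small record per daily run against master, keyed by date and commit, which
would be cheap to retain indefinitely (all values below are invented):

    # Hypothetical shape of one daily snapshot record.
    snapshot = {
        "date": "2016-06-03",
        "commit": "deadbeef",            # SHA of master that day
        "config": {"provider": "fernet", "cache": True},
        "results_ms": {                  # milliseconds, p50 over the run
            "token_create": 42.1,
            "token_validate": 18.7,
        },
    }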
>
>
>> Have any other projects implemented a similar workflow?
>>
>> I'm open to suggestions and discussion, because I can't imagine there
>> aren't other folks out there interested in this kind of pre-merge data
>> point.
>>
>>
>> Thanks!
>>
>> Lance
>>
>>
> Since the performance numbers are going to be very dependent on the
> machines, I think the only way this is going to work is if somebody's
> willing to set up dedicated hardware to run the tests on. If you're doing
> that then set it up to mimic how you deploy keystone, deploy the patch
> under test, run the performance tests, and report the results. I'd be fine
> with something like this commenting on keystone changes. None of this has
> to involve openstack infra. Gerrit has a REST API to get the current
> patches.
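For reference, listing the open keystone changes through that REST API is
only a few lines. One wrinkle: Gerrit prefixes its JSON responses with
")]}'" to defeat cross-site script inclusion, so the first line has to be
stripped before parsing:

    # Sketch: poll Gerrit for open keystone changes.
    import json

    import requests

    resp = requests.get(
        "https://review.openstack.org/changes/",
        params={"q": "project:openstack/keystone status:open"},
    )
    # Drop Gerrit's ")]}'" anti-XSSI prefix line before parsing.
    changes = json.loads(resp.text.split("\n", 1)[1])
    for change in changes:
        print(change["_number"], change["subject"])

Leaving a comment back on the review would additionally need credentials
for Gerrit's authenticated endpoints, but the polling side really is this
simple.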
>
> Everyone that's got performance requirements should do the same. Maybe I
> can get the group I'm in to try it sometime.
>
> - Brant
>
>