[openstack-dev] [all][code quality] Voting coverage job (-1 if coverage gets worse after patch)

Jay Pipes jaypipes at gmail.com
Mon Apr 20 13:30:42 UTC 2015


On 04/20/2015 07:13 AM, Sean Dague wrote:
> On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
>> Hi stackers,
>>
>> Code coverage is one of the most important metrics of overall code
>> quality, especially in Python. It's important to ensure that code is
>> fully covered by well-written unit tests.
>>
>> One nice tool for this is the coverage job.
>>
>> In Rally we run it against every patch, which gives us detailed
>> information about coverage before merging:
>> http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
>>
>> This helped the Rally core team automate checking that new/changed
>> code is covered by unit tests, and we raised unit test coverage from
>> ~78% to almost 91%.
>>
>> But it surfaced a few issues:
>> 1) >9k nitpicks - core reviewers have to leave a -1 whenever something
>> is not covered by unit tests
>> 2) core team scaling issues - core team members spend a lot of time
>> just checking that all code is covered by unit tests and leaving
>> messages like "this should be covered by a unit test"
>> 3) an unfriendly community - it's not nice to get a -1 on your code
>> from somebody who is just asking you to write unit tests
>> 4) coverage regressions - sometimes we accidentally accept patches
>> that reduce coverage
>>
>> To resolve these issues I improved the coverage job in the Rally
>> project: it now compares the coverage of master vs. master + patch.
>> If the new coverage is lower than master's, the job is marked -1.
>>
>> Here is the patch for job enhancement:
>> https://review.openstack.org/#/c/174645/
>>
>> Here is the coverage job in action:
>> patch: https://review.openstack.org/#/c/174677/
>> job message:
>> http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
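
The core logic of such a job boils down to just a few lines of Python.
The following is only an illustrative sketch, not the actual
implementation (see the review linked above for that); the checkout
paths and the choice of test runner here are assumptions:

# Illustrative sketch only -- the real job is in the review linked
# above. Runs the test suite under coverage in two checkouts (the
# paths are hypothetical) and fails if the patched tree's total is
# lower than master's.

import subprocess
import sys

def total_coverage(tree):
    """Run the unit tests in `tree` under coverage; return the total %."""
    subprocess.check_call(["coverage", "run", "-m", "pytest"], cwd=tree)
    report = subprocess.check_output(["coverage", "report"], cwd=tree)
    # The last line of `coverage report` looks like: "TOTAL 1234 110 91%"
    return float(report.decode().strip().splitlines()[-1]
                 .split()[-1].rstrip("%"))

master = total_coverage("/tmp/master")             # hypothetical paths
patched = total_coverage("/tmp/master-plus-patch")

print("master: %.2f%%  patched: %.2f%%" % (master, patched))
if patched < master:
    print("Coverage dropped; failing the job (-1).")
    sys.exit(1)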
>
> Honestly, I'm suspicious of approaches like this. While 90% coverage is
> clearly better than 0% coverage, it's not clear that 91% coverage is
> always better than 90% coverage. Especially when you talk about larger
> and more complex code bases.

Well, I think there are very few cases where *less* coverage is better.

> I actually firmly feel that #3 is wrong. I think it's a lot better to
> have an early conversation with people about unit tests that need to be
> written when they don't submit any. Because I think it's a lot less
> friendly to have someone iterate 10 patches to figure out how to pass a
> bot's idea of good tests, and then have a reviewer say it's wrong.

This I completely agree with. Asking for unit tests is a common thing, 
and if done early in the review process, is not an "unfriendly" thing. 
It's just a matter of fact. And if the comment comes with some example 
unit test code -- or a pointer to a unit test that could be used as an 
example -- even better. It helps the submitter grow.
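
Such a pointer can be as small as this (the module and function names
below are made up purely for illustration):

# Hypothetical example a reviewer might paste; the module and the
# merge_dicts() function are invented for illustration only.
import unittest

from rally.common import utils  # hypothetical import


class TestMergeDicts(unittest.TestCase):
    def test_override_wins(self):
        # Values from the second dict should take precedence.
        self.assertEqual({"a": 2, "b": 3},
                         utils.merge_dicts({"a": 1, "b": 3}, {"a": 2}))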

> Honestly, coverage for projects seems to me to be more of an analog
> measure that would be good to graph over time, to make sure it's not
> regressing. Given the Brownian motion of individual patches, I don't
> think it's a great idea to gate on every single patch. I personally
> would be -1 on adding this to projects that I have +2 on.

I certainly am not opposed to introducing coverage regression jobs that 
produce a history of coverage. I would not personally support them being 
a blocking/gate job, though.
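
Such a job could stay non-voting and simply append each run's total to
a history file for graphing. A minimal sketch of that idea (the history
file location and the metric extraction are assumptions, not an
existing job):

# Minimal sketch of a non-voting coverage-trend recorder; the history
# file location is an assumption, not an existing OpenStack job.
import datetime
import subprocess

HISTORY = "coverage-history.csv"  # hypothetical location

report = subprocess.check_output(["coverage", "report"]).decode()
total = report.strip().splitlines()[-1].split()[-1].rstrip("%")

with open(HISTORY, "a") as f:
    f.write("%s,%s\n" % (datetime.datetime.utcnow().isoformat(), total))

print("Recorded %s%% total coverage; job always succeeds." % total)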

Best,
-jay


