[openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

Boris Pavlovic boris at pavlovic.me
Tue Apr 21 00:44:37 UTC 2015


Ian,



If you were thinking instead to provide coverage *tools* that were easy for
> developers to use,


Hm, it seems you missed the point. This "gate job" can be run locally just
like the unit tests, via "tox -e cover". That will point you to the uncovered
lines that your patch introduces.
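
To make this concrete, here is a minimal, purely illustrative sketch (not
the actual gate job) of how the still-uncovered lines in the files a patch
touches could be pulled out of the coverage data that "tox -e cover"
produces. The coverage.py calls are standard, but the file list and path
are placeholders you would normally fill from "git diff":

    # Assumes a prior "tox -e cover" run left a .coverage data file behind.
    import coverage

    def uncovered_lines(changed_files):
        cov = coverage.Coverage()
        cov.load()
        report = {}
        for path in changed_files:
            # analysis2 returns (filename, statements, excluded, missing, formatted)
            _, _, _, missing, _ = cov.analysis2(path)
            if missing:
                report[path] = missing
        return report

    print(uncovered_lines(["mymodule/api.py"]))  # placeholder path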

  As a dev, I would not be terribly interested in finding that I've
> improved overall test coverage from 90.1% to 90.2%


That is not the goal of the job I am adding. The job checks that your patch
doesn't introduce code that is not covered by unit tests (that is all).
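
Purely for illustration, the voting rule described above amounts to
something like the sketch below, where the two percentages would come from
coverage runs on the base branch and on the patched tree (the names and the
zero-tolerance threshold are assumptions, not the real job's code):

    # Hypothetical sketch of the -1 rule: fail if total coverage drops.
    ALLOWED_DROP = 0.0  # any decrease is treated as a regression

    def vote(cover_before, cover_after):
        """Return +1 if coverage did not get worse, -1 otherwise."""
        if cover_after + ALLOWED_DROP < cover_before:
            return -1
        return 1

    assert vote(90.1, 90.2) == 1   # coverage improved -> pass
    assert vote(90.1, 89.9) == -1  # new uncovered code -> fail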


but I might be *very* interested to know that I got 100% decision (or even
> boolean) coverage on the specific lines of the feature I just added by
> running just the unit tests that exercise it.


And that is exactly what "tox -e cover" does, and what the job that runs
"tox -e cover" in the gate does.
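
(For readers following the statement-vs-decision coverage point below: here
is a small, self-contained example, with made-up names, of why exercising
only the True branch gives 100% statement coverage but not decision
coverage.)

    import unittest

    def apply_flag(flag, value):
        if flag:
            value += 1
        return value

    class TestApplyFlag(unittest.TestCase):
        def test_flag_true(self):
            # On its own this already gives 100% statement coverage.
            self.assertEqual(apply_flag(True, 1), 2)

        def test_flag_false(self):
            # Also needed for decision (branch) coverage: the False path.
            self.assertEqual(apply_flag(False, 1), 1)

    if __name__ == "__main__":
        unittest.main()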

Best regards,
Boris Pavlovic


On Tue, Apr 21, 2015 at 3:28 AM, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:

> On 20 April 2015 at 07:40, Boris Pavlovic <boris at pavlovic.me> wrote:
>
>> Dan,
>>
>> IMHO, most of the test coverage we have for nova's neutronapi is more
>>> than useless. It's so synthetic that it provides no regression
>>> protection, and often requires significantly more work than the change
>>> that is actually being added. It's a huge maintenance burden with very
>>> little value, IMHO. Good tests for that code would be very valuable of
>>> course, but what is there now is not.
>>> I think there are cases where going from 90 to 91% means adding a ton of
>>> extra spaghetti just to satisfy a bot, which actually adds nothing but
>>> bloat to maintain.
>>
>>
>> Let's not mix the bad unit tests in Nova with the principle that code
>> should be fully covered by well-written unit tests.
>> This big task can be split into 2 smaller tasks:
>> 1) A bot that will check that we cover new code with tests and don't
>> introduce regressions
>>
>
> http://en.wikipedia.org/wiki/Code_coverage
>
> You appear to be talking about statement coverage, which is one of the
> weaker coverage metrics.
>
>     if a:
>         thing
>
> gets 100% statement coverage if a is true, so I don't need to test the case
> where a is false (which decision coverage would, at a minimum, require).
>
> I wonder if the focus is wrong.  Maybe helping devs is better than making
> more gate jobs, for starters; and maybe overall coverage is not a great
> metric when you're changing 100 lines in 100,000.  If you were thinking
> instead to provide coverage *tools* that were easy for developers to use,
> that would be a different question.  As a dev, I would not be terribly
> interested in finding that I've improved overall test coverage from 90.1%
> to 90.2%, but I might be *very* interested to know that I got 100% decision
> (or even boolean) coverage on the specific lines of the feature I just
> added by running just the unit tests that exercise it.
> --
> Ian.
>