[openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)
ijw.ubuntu at cack.org.uk
Tue Apr 21 00:28:04 UTC 2015
On 20 April 2015 at 07:40, Boris Pavlovic <boris at pavlovic.me> wrote:
>> IMHO, most of the test coverage we have for nova's neutronapi is more
>> than useless. It's so synthetic that it provides no regression
>> protection, and often requires significantly more work than the change
>> that is actually being added. It's a huge maintenance burden with very
>> little value, IMHO. Good tests for that code would be very valuable of
>> course, but what is there now is not.
>> I think there are cases where going from 90 to 91% mean adding a ton of
>> extra spaghetti just to satisfy a bot, which actually adds nothing but
>> bloat to maintain.
> Let's not mix the bad unit tests in Nova with the fact that code should be
> fully covered by well written unit tests.
> This big task can be split into 2 smaller tasks:
> 1) Bot that will check that we are covering new code by tests and don't
> introduce regressions
You appear to be talking about statement coverage, which is one of the
weaker coverage metrics.
A snippet like `if a: do_x()` gets 100% statement coverage from a single
test where a is true, so I don't need to test when a is false (covering
both outcomes would be, at a minimum, decision coverage).
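To make that concrete, here is a minimal sketch (the function and names are
made up for illustration): one happy-path test executes every statement, but
the false outcome of the decision is never exercised.

```python
def apply_discount(price, is_member):
    # A lone if with no else: every statement runs when is_member is True.
    if is_member:
        price = price * 0.9
    return price

# One test with is_member=True yields 100% statement coverage:
assert apply_discount(100, True) == 90.0

# ...but the False outcome of the decision was never tested.
# Decision coverage would also require something like:
assert apply_discount(100, False) == 100
```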
I wonder if the focus is wrong. Maybe helping devs is better than making
more gate jobs, for starters; and maybe overall coverage is not a great
metric when you're changing 100 lines in 100,000. If, instead, you were
thinking of providing coverage *tools* that are easy for developers to use,
that would be a different question. As a dev, I would not be terribly
interested in finding that I've improved overall test coverage from 90.1%
to 90.2%, but I might be *very* interested to know that I got 100% decision
(or even boolean) coverage on the specific lines of the feature I just
added by running just the unit tests that exercise it.
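As a rough sketch of what such a tool could look like (entirely hypothetical,
standard library only): a tracer that records which lines of a given function
a test actually ran, so a developer can see at a glance which branches of
their new code the unit tests missed.

```python
import sys

def lines_run(func, *args):
    """Return the set of line offsets (relative to the def line)
    of `func` that execute during one call."""
    code = func.__code__
    executed = set()

    def tracer(frame, event, arg):
        # Record only line events belonging to the function under test.
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def clamp(x, lo, hi):   # offset 0
    if x < lo:          # offset 1
        return lo       # offset 2
    if x > hi:          # offset 3
        return hi       # offset 4
    return x            # offset 5

# One test hits only the x < lo branch; offsets 3-5 never ran.
ran = lines_run(clamp, -5, 0, 10)
print(sorted(ran))
```

A real tool (coverage.py's branch mode, for instance) tracks arcs between
lines rather than lines alone, but even this crude per-line view answers the
question a patch author actually has: did my tests reach my code?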