[openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)
boris at pavlovic.me
Tue Apr 21 00:44:37 UTC 2015
> If you were thinking instead to provide coverage *tools* that were easy for
> developers to use,
Hm, it seems like you missed the point. This "gate job" can be run locally,
just like the unit tests, with "tox -e cover". That will point you to the
uncovered lines introduced by your patch.
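To make it concrete, here is a minimal sketch (not the actual job's code) of
what "point you to the uncovered lines introduced by your patch" could look
like. It assumes two already-parsed inputs: the lines each file gained in the
patch (e.g. from "git diff") and the lines the coverage run reports as missed;
the function name and the example paths are made up for illustration.

```python
def uncovered_new_lines(changed, missed):
    """Map file path -> sorted lines that are both new in the patch and untested.

    changed/missed: dicts of file path -> set of 1-based line numbers.
    """
    report = {}
    for path, lines in changed.items():
        # Lines the patch touched that the test run never executed.
        gaps = lines & missed.get(path, set())
        if gaps:
            report[path] = sorted(gaps)
    return report

# Example: the patch added lines 10-12 and 40; the coverage run missed 11, 40, 99.
changed = {"nova/network/api.py": {10, 11, 12, 40}}
missed = {"nova/network/api.py": {11, 40, 99}}
assert uncovered_new_lines(changed, missed) == {"nova/network/api.py": [11, 40]}
```

Line 99 is excluded from the report because it was already uncovered before the
patch; only the regressions the patch itself introduces are flagged.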
> As a dev, I would not be terribly interested in finding that I've
> improved overall test coverage from 90.1% to 90.2%
That is not the goal of the job I'm adding. The job checks that your patch
doesn't introduce code that is not covered by unit tests (that is all).
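The voting rule itself is simple. The sketch below is a hypothetical
illustration of the check, not the job's actual implementation: compare the
coverage percentage before and after the patch and vote -1 only when it got
worse (the function names and the tolerance parameter are assumptions).

```python
def coverage_pct(covered, total):
    """Coverage as a percentage; an empty tree counts as fully covered."""
    return 100.0 if total == 0 else 100.0 * covered / total

def vote(before, after, tolerance=0.0):
    """Return -1 if the patch lowered coverage beyond tolerance, else +1.

    before/after are (covered_lines, total_lines) tuples from the two runs.
    """
    old = coverage_pct(*before)
    new = coverage_pct(*after)
    return -1 if new < old - tolerance else 1

# A patch adding 100 lines with only 50 of them tested drags 90% down: -1.
assert vote((900, 1000), (950, 1100)) == -1
# A patch adding 100 lines with 90 of them tested keeps coverage at 90%: +1.
assert vote((900, 1000), (990, 1100)) == 1
```

Note the check is relative, so a project that starts at 60% is not forced to
reach 100% in one patch; it only has to not regress.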
> but I might be *very* interested to know that I got 100% decision (or even
> boolean) coverage on the specific lines of the feature I just added by
> running just the unit tests that exercise it.
And this is exactly what "tox -e cover" does; the job simply runs "tox -e
cover" and compares the results.
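The statement-vs-decision distinction Ian raises is easy to see in a tiny
example (the function and its behavior are made up for illustration): one test
that takes the true branch executes every statement, yet the false branch is
never checked at all.

```python
def apply_discount(price, is_member):
    # Members get 10% off; the false branch just returns the price unchanged.
    if is_member:
        price = price * 0.9
    return price

# This single test gives 100% *statement* coverage: every line runs.
assert apply_discount(100, True) == 90.0

# Decision coverage additionally requires exercising the false outcome,
# which would catch bugs that only manifest for non-members.
assert apply_discount(100, False) == 100
```

A statement-coverage gate would be satisfied by the first assertion alone;
decision coverage demands both, which is Ian's point about the metric's
relative strength.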
On Tue, Apr 21, 2015 at 3:28 AM, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
> On 20 April 2015 at 07:40, Boris Pavlovic <boris at pavlovic.me> wrote:
>> IMHO, most of the test coverage we have for nova's neutronapi is more
>>> than useless. It's so synthetic that it provides no regression
>>> protection, and often requires significantly more work than the change
>>> that is actually being added. It's a huge maintenance burden with very
>>> little value, IMHO. Good tests for that code would be very valuable of
>>> course, but what is there now is not.
>>> I think there are cases where going from 90 to 91% mean adding a ton of
>>> extra spaghetti just to satisfy a bot, which actually adds nothing but
>>> bloat to maintain.
>> Let's not mix up the bad unit tests in Nova with the principle that code
>> should be fully covered by well-written unit tests.
>> This big task can be split into 2 smaller tasks:
>> 1) A bot that checks that new code is covered by tests and that we don't
>> introduce regressions
> You appear to be talking about statement coverage, which is one of the
> weaker coverage metrics.
> if a:
> gets 100% statement coverage if a is true, so I never need to test the case
> when a is false (catching that would require, at a minimum, decision
> coverage).
> I wonder if the focus is wrong. Maybe helping devs is better than making
> more gate jobs, for starters; and maybe overall coverage is not a great
> metric when you're changing 100 lines in 100,000. If you were thinking
> instead to provide coverage *tools* that were easy for developers to use,
> that would be a different question. As a dev, I would not be terribly
> interested in finding that I've improved overall test coverage from 90.1%
> to 90.2%, but I might be *very* interested to know that I got 100% decision
> (or even boolean) coverage on the specific lines of the feature I just
> added by running just the unit tests that exercise it.
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe