[openstack-dev] [all][code quality] Voting coverage job (-1 if coverage gets worse after patch)

Boris Pavlovic boris at pavlovic.me
Mon Apr 20 14:40:52 UTC 2015


Dan,

> IMHO, most of the test coverage we have for nova's neutronapi is more
> than useless. It's so synthetic that it provides no regression
> protection, and often requires significantly more work than the change
> that is actually being added. It's a huge maintenance burden with very
> little value, IMHO. Good tests for that code would be very valuable of
> course, but what is there now is not.
> I think there are cases where going from 90 to 91% means adding a ton of
> extra spaghetti just to satisfy a bot, which actually adds nothing but
> bloat to maintain.


Let's not conflate Nova's badly written unit tests with the principle that
code should be fully covered by well-written unit tests.
This big task can be split into two smaller ones:
1) A bot that checks that new code is covered by tests and that overall
coverage does not regress (a sketch of such a check is below).
2) Core and other reviewers ensure that the tests are well written.
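
To make (1) concrete, here is a minimal sketch of what such a bot could
run. This is an assumption-laden illustration, not an existing infra job:
the coverage-baseline.json artifact and the pytest invocation are
hypothetical; only the coverage.py command line is real.

    #!/usr/bin/env python
    # Hypothetical coverage-regression check: compare the patched tree's
    # total coverage against a baseline recorded for the target branch
    # and exit non-zero if it drops. The baseline file is an assumption.
    import json
    import subprocess
    import sys

    BASELINE_FILE = "coverage-baseline.json"

    def total_coverage():
        # Run the test suite under coverage.py, then parse the TOTAL line
        # of `coverage report`, which ends in a percentage like "95%".
        subprocess.check_call(["coverage", "run", "-m", "pytest", "-q"])
        report = subprocess.check_output(["coverage", "report"],
                                         universal_newlines=True)
        return float(report.strip().splitlines()[-1].split()[-1].rstrip("%"))

    def main():
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)["total_percent"]
        current = total_coverage()
        print("baseline: %.2f%%  current: %.2f%%" % (baseline, current))
        if current < baseline:
            print("coverage regressed; this is where the bot would vote -1")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())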

P.S. Unit tests are the first criterion of code quality: if code is hard
to cover with unit tests, it is badly written and should be refactored.
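
To illustrate the P.S. with a toy example (NovaClient is a stand-in name
here, not a real import): code that constructs its own dependencies forces
tests to monkey-patch, while injected dependencies can be stubbed directly.

    # Hard to test: the function builds its own client, so a test must
    # monkey-patch the (hypothetical) NovaClient class.
    def flavor_name_hard(flavor_id):
        client = NovaClient()  # hidden dependency, illustrative only
        return client.flavors.get(flavor_id)["name"]

    # Easy to test: the client is passed in, so a test hands it a stub.
    def flavor_name(client, flavor_id):
        return client.flavors.get(flavor_id)["name"]

    class StubFlavors(object):
        def get(self, flavor_id):
            return {"id": flavor_id, "name": "m1.tiny"}

    class StubClient(object):
        flavors = StubFlavors()

    assert flavor_name(StubClient(), "42") == "m1.tiny"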

Best regards,
Boris Pavlovic

On Mon, Apr 20, 2015 at 5:26 PM, Dan Smith <dms at danplanet.com> wrote:

> > Well, I think there are very few cases where *less* coverage is better.
>
> IMHO, most of the test coverage we have for nova's neutronapi is more
> than useless. It's so synthetic that it provides no regression
> protection, and often requires significantly more work than the change
> that is actually being added. It's a huge maintenance burden with very
> little value, IMHO. Good tests for that code would be very valuable of
> course, but what is there now is not.
>
> I think there are cases where going from 90 to 91% means adding a ton of
> extra spaghetti just to satisfy a bot, which actually adds nothing but
> bloat to maintain.
>
> > This I completely agree with. Asking for unit tests is a common thing,
> > and if done early in the review process, is not a "non-friendly" thing.
> > It's just matter of fact. And if the comment is given with some example
> > unit test code -- or a pointer to a unit test that could be used as an
> > example -- even better. It grows the submitter.
>
> +1
>
> > I certainly am not opposed to introducing coverage regression jobs that
> > produce a history of coverage. I would not personally support them being
> > a blocking/gate job, though.
>
> As Gordon said elsewhere in this thread, I'm not even sure I want to see
> it reporting as PASS/FAIL. It sounds like this would end up being like
> our pylint job, which was utterly confused by a lot of things and
> started to be something that wasn't even reliable enough to use as an
> advisory test.
>
> Recording the data for long-term analysis would be excellent though.
> It'd be nice to see that we increased coverage over a cycle.
>
> --Dan
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>