[openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)
Dan Smith
dms at danplanet.com
Mon Apr 20 14:26:15 UTC 2015
> Well, I think there are very few cases where *less* coverage is better.
IMHO, most of the test coverage we have for nova's neutronapi is more
than useless. It's so synthetic that it provides no regression
protection, and often requires significantly more work than the change
that is actually being added. It's a huge maintenance burden with very
little value, IMHO. Good tests for that code would be very valuable of
course, but what is there now is not.
I think there are cases where going from 90% to 91% means adding a ton
of extra spaghetti just to satisfy a bot, which actually adds nothing
but bloat to maintain.
> This I completely agree with. Asking for unit tests is a common thing,
> and if done early in the review process, is not a "non-friendly" thing.
> It's just a matter of fact. And if the comment is given with some example
> unit test code -- or a pointer to a unit test that could be used as an
> example -- even better. It grows the submitter.
+1
> I certainly am not opposed to introducing coverage regression jobs that
> produce a history of coverage. I would not personally support them being
> a blocking/gate job, though.
As Gordon said elsewhere in this thread, I'm not even sure I want to see
it reporting as PASS/FAIL. It sounds like this would end up being like
our pylint job, which was confused by so many things that it eventually
wasn't reliable enough to use even as an advisory test.
Recording the data for long-term analysis would be excellent though.
It'd be nice to see that we increased coverage over a cycle.
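To make the advisory idea concrete, here is a minimal sketch of what a
non-voting coverage check could report: compare the base run's coverage
percentage with the patched run's and emit a message, but never a
failure. The function name and message format are purely illustrative,
not from any real Zuul/Jenkins job definition.

```python
def coverage_delta(base_pct: float, patch_pct: float) -> str:
    """Return an advisory (never blocking) message about the coverage change.

    base_pct  -- total coverage percentage before the patch
    patch_pct -- total coverage percentage with the patch applied
    """
    delta = patch_pct - base_pct
    if delta < 0:
        # Advisory only: report the regression, don't vote -1.
        return ("ADVISORY: coverage dropped %.1f%% (%.1f%% -> %.1f%%)"
                % (-delta, base_pct, patch_pct))
    return ("coverage ok (%.1f%% -> %.1f%%, +%.1f%%)"
            % (base_pct, patch_pct, delta))


if __name__ == "__main__":
    print(coverage_delta(90.0, 89.2))
    print(coverage_delta(90.0, 91.0))
```

Logging these numbers per merge, rather than gating on them, would give
exactly the long-term trend data mentioned above.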
--Dan