[openstack-dev] Code review study

Clint Byrum clint at fewbar.com
Tue Aug 20 18:21:06 UTC 2013


Excerpts from Mark McLoughlin's message of 2013-08-20 03:26:01 -0700:
> On Thu, 2013-08-15 at 14:12 +1200, Robert Collins wrote:
> > This may interest data-driven types here.
> > 
> > https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
> > 
> > Note specifically the citation of 200-400 lines as the knee of the review
> > effectiveness curve: that's lower than I thought - I thought 200 was
> > clearly fine - but no.
> 
> The full study is here:
> 
> http://support.smartbear.com/resources/cc/book/code-review-cisco-case-study.pdf
> 
> This is an important subject and I'm glad folks are studying it, but I'm
> sceptical about whether the "Defect density vs LOC" is going to help us
> come up with better guidelines than we have already.
> 
> Obviously, a metric like LOC hides some serious subtleties. Not all
> changes are of equal complexity. We see massive refactoring patches
> (like s/assertEquals/assertEqual/) that are actually safer than gnarly,
> single-line, head-scratcher bug-fixes. The only way the report addresses
> that issue with the underlying data is by eliding >10k LOC patches.
> 

I'm not so sure it is obvious what these subtleties are; if they were
obvious, they would not be subtleties, they would be glaring issues.

I agree that LOC changed is an imperfect measure, but so are the hacking
rules. They have nevertheless allowed us to stop spending time on these
things. We whole-heartedly embrace the occasional imperfection by
deferring to something automation can measure, and thus free up limited
reviewer and developer time for activities better suited to it.

I'd like to see automation enforce change size. And just as with hacking,
it would not be workable without a switch like "#noqa" that one can put
in the commit message to say "hey automation, this is a trivial change".
That is also a cue for reviewers that the change, while big, should be
trivial.
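To make that concrete, here is a rough sketch of what such a gate-side
check might look like. The 400-line limit and the "Trivial-Change: yes"
commit message flag are made up for illustration; this is not existing
tooling, just the shape of the idea:

#!/usr/bin/env python
"""Sketch of a gate-side change-size check with a commit-message override."""

import re
import subprocess
import sys

# Assumed threshold, loosely based on the 200-400 line knee in the study.
MAX_CHANGED_LINES = 400


def changed_lines(rev="HEAD"):
    """Count added plus removed lines in the given commit."""
    out = subprocess.check_output(
        ["git", "show", "--numstat", "--format=", rev]).decode("utf-8")
    total = 0
    for line in out.splitlines():
        if not line.strip():
            continue
        added, removed = line.split("\t")[:2]
        if added == "-":
            # Binary file: git prints "-" instead of a line count.
            continue
        total += int(added) + int(removed)
    return total


def flagged_trivial(rev="HEAD"):
    """Look for an explicit 'this is mechanical' override in the message."""
    msg = subprocess.check_output(
        ["git", "log", "-1", "--format=%B", rev]).decode("utf-8")
    return re.search(r"^Trivial-Change:\s*yes\s*$", msg,
                     re.IGNORECASE | re.MULTILINE) is not None


def main():
    size = changed_lines()
    if size > MAX_CHANGED_LINES and not flagged_trivial():
        print("Change touches %d lines (limit %d); add 'Trivial-Change: yes' "
              "to the commit message if this is a mechanical change."
              % (size, MAX_CHANGED_LINES))
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())

A reviewer would then see the flag in the commit message and know the
author is claiming the change is mechanical, while the gate stays out of
the way for everything under the limit.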

> The "one logical change per commit" is a more effective guideline than
> any LOC based guideline:
> 
> https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes
> 
> IMHO, the number of distinct logical changes in a patch has a more
> predictable impact on review effectiveness than the LOC metric.

Indeed, but automating a check for that may be very difficult. I have
seen tools like PMD[1] that try hard to measure the complexity of code
and the risk of a change. Those might be interesting to experiment with,
but I'm not sure they are reliable enough to put in the OpenStack gate.

[1] http://pmd.sourceforge.net/
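
For the curious, the kind of signal those tools compute is easy enough
to approximate. The sketch below (standard library only, purely
illustrative, no thresholds proposed) prints a crude McCabe-style
cyclomatic complexity number for each function in a Python file; the
hard part is not computing such a number but deciding what value of it
should be allowed to block a patch:

import ast
import sys

# Node types that introduce an extra branch/path through the code.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.With, ast.BoolOp)


def cyclomatic_complexity(func_node):
    """Very rough McCabe-style count: 1 + number of branching nodes."""
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(func_node))


def report(path):
    """Print a complexity figure for every function in the file."""
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            print("%s:%d %s complexity=%d"
                  % (path, node.lineno, node.name,
                     cyclomatic_complexity(node)))


if __name__ == "__main__":
    for filename in sys.argv[1:]:
        report(filename)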


