[openstack-dev] Code review study

Mark McLoughlin markmc at redhat.com
Tue Aug 20 10:26:01 UTC 2013


On Thu, 2013-08-15 at 14:12 +1200, Robert Collins wrote:
> This may interest data-driven types here.
> 
> https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
> 
> Note specifically the citation of 200-400 lines as the knee of the review
> effectiveness curve: that's lower than I thought - I thought 200 was
> clearly fine - but no.

The full study is here:

http://support.smartbear.com/resources/cc/book/code-review-cisco-case-study.pdf

This is an important subject and I'm glad folks are studying it, but I'm
sceptical that the "defect density vs. LOC" analysis is going to help us
come up with better guidelines than we already have.

Obviously, a metric like LOC hides some serious subtleties. Not all
changes are of equal complexity: we see massive refactoring patches
(like s/assertEquals/assertEqual/) that are actually safer than gnarly,
single-line, head-scratcher bug-fixes. The only way the report addresses
that issue in the underlying data is by excluding patches over 10k LOC.
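
For what it's worth, a sweep like that is usually generated mechanically
rather than typed by hand; a rough sketch, assuming GNU sed and a git
checkout:

    # List only the files that contain the deprecated alias, then
    # rewrite every call site in place.
    git grep -lF 'assertEquals(' -- '*.py' \
        | xargs sed -i 's/assertEquals(/assertEqual(/g'

Since the whole diff comes from one substitution, a reviewer can verify
the command instead of reading every hunk.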

The "one logical change per commit" is a more effective guideline than
any LOC based guideline:

https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes

IMHO, the number of distinct logical changes in a patch has a more
predictable impact on review effectiveness than the LOC metric.
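
Splitting an entangled patch along those lines is cheap with stock git;
a sketch (the commit messages are just placeholders):

    # Stage only the hunks that belong to the first logical change,
    # commit it, then repeat for the next change.
    git add -p
    git commit -m "Fix quota check off-by-one"
    git add -p
    git commit -m "Clean up deprecated assertEquals calls"

Each resulting commit can then be reviewed, approved or reverted on its
own.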

Cheers,
Mark.



