[openstack-dev] Code review study

Joe Gordon joe.gordon0 at gmail.com
Mon Aug 26 14:01:18 UTC 2013


On Tue, Aug 20, 2013 at 2:21 PM, Clint Byrum <clint at fewbar.com> wrote:

> Excerpts from Mark McLoughlin's message of 2013-08-20 03:26:01 -0700:
> > On Thu, 2013-08-15 at 14:12 +1200, Robert Collins wrote:
> > > This may interest data-driven types here.
> > >
> > >
> > > https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
> > >
> > > Note specifically the citation of 200-400 lines as the knee of the
> > > review effectiveness curve: that's lower than I thought - I thought
> > > 200 was clearly fine - but no.
> >
> > The full study is here:
> >
> >
> > http://support.smartbear.com/resources/cc/book/code-review-cisco-case-study.pdf
> >
> > This is an important subject and I'm glad folks are studying it, but I'm
> > sceptical about whether the "Defect density vs LOC" is going to help us
> > come up with better guidelines than we have already.
> >
> > Obviously, a metric like LOC hides some serious subtleties. Not all
> > changes are of equal complexity. We see massive refactoring patches
> > (like s/assertEquals/assertEqual/) that are actually safer than gnarly,
> > single-line, head-scratcher bug-fixes. The only way the report addresses
> > that issue with the underlying data is by eliding >10k LOC patches.
> >
>
> I'm not so sure that it is obvious what these subtleties are; if they
> were obvious, they would not be subtleties, they would be glaring issues.
>
> I agree that LOC changed is an imperfect measure. However, so are the
> hacking rules. Nonetheless, they have allowed us to stop spending time
> on these things. We whole-heartedly accept the occasional imperfection
> by deferring to something automation can measure, and thus free up
> limited reviewer/developer time for activities better suited to it.
>
> I'd like to see automation enforce change size. And just like with
> hacking, it would not be possible without a switch like "#noqa" that
> one can put in the commit message that says "hey automation, this is
> a trivial change".  That is something a reviewer can also see as a cue
> that this change, while big, should be trivial.
>

++

I put this into a bug: https://bugs.launchpad.net/hacking/+bug/1216928, and
added a related bug about making hacking git-checks run as post-commit
hooks (https://bugs.launchpad.net/hacking/+bug/1216925).
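For illustration, here is a minimal sketch of what such a change-size gate could look like. The 400-line threshold (the knee of the review-effectiveness curve from the study above) and the "TrivialFix" override marker are assumptions made up for this example, not an agreed convention:

```python
# Sketch of a change-size gate with a #noqa-style escape hatch.
# MAX_LINES and OVERRIDE_MARKER are illustrative assumptions.

MAX_LINES = 400                 # knee of the review-effectiveness curve
OVERRIDE_MARKER = "TrivialFix"  # hypothetical commit-message override


def lines_changed(diff_text):
    """Count added/removed lines in a unified diff (excluding headers)."""
    return sum(
        1
        for line in diff_text.splitlines()
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))
    )


def check_change_size(diff_text, commit_message):
    """Return (ok, reason) for a proposed change."""
    if OVERRIDE_MARKER in commit_message:
        return True, "override marker present"
    n = lines_changed(diff_text)
    if n > MAX_LINES:
        return False, "%d lines changed exceeds %d" % (n, MAX_LINES)
    return True, "%d lines changed" % n
```

A reviewer would see the marker in the commit message as a cue that the change, while large, is meant to be mechanical.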


>
> > The "one logical change per commit" is a more effective guideline than
> > any LOC based guideline:
> >
> >
> > https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes
> >
> > IMHO, the number of distinct logical changes in a patch has a more
> > predictable impact on review effectiveness than the LOC metric.
>
> Indeed. However, automating a check for that may be very difficult. I
> have seen tools like PMD[1] that try very hard to measure the complexity
> of code and the risk of a change; those might be interesting to look at,
> but I'm not sure they are reliable enough to put in the OpenStack gate.
>
> [1] http://pmd.sourceforge.net/
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
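As a rough illustration of the kind of metric such tools compute, here is a sketch that approximates cyclomatic complexity by counting branch points per function with Python's ast module. This is an assumption-laden toy, not how PMD actually works; real tools are far more thorough:

```python
# Toy approximation of cyclomatic complexity: 1 + number of branch
# points per function. Real analyzers (PMD, radon, etc.) do much more.
import ast

# Node types treated as branch points in this sketch.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.ExceptHandler)


def cyclomatic_complexity(source):
    """Map each top-level function name to 1 + its branch-point count."""
    tree = ast.parse(source)
    result = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, BRANCH_NODES)
                           for n in ast.walk(node))
            result[node.name] = 1 + branches
    return result
```

Even this crude count hints at the problem Clint raises: the numbers are suggestive, but probably not reliable enough to gate on.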