[openstack-dev] Please stop reviewing code while asking questions

Robert Collins robertc at robertcollins.net
Sat Apr 25 03:59:53 UTC 2015


On 25 April 2015 at 01:13, Zane Bitter <zbitter at redhat.com> wrote:
> On 24/04/15 07:21, Amrith Kumar wrote:
>>
>> Julien,
>>
>> We had a similar discussion within Trove several months ago and agreed
>> on a convention that a question alone should not warrant a -1 unless,
>> as you indicate, there's a strong reason to believe that the code is
>> wrong and the question is leading.
>
>
> +1. I'm kind of shocked that this even needed to be discussed, but well done
> :)
>
>> We discussed this at a mid-cycle and agreed to put our conventions in
>> CONTRIBUTING.rst[1].
>>
>> We had a hypothesis (never conclusively proved) about why +0 was
>> rarely used: since Stackalytics didn't count +0s, reviewers had an
>> increased propensity to -1 something instead. It would be wonderful if
>> we could try the experiment of giving credit for 0s and seeing whether
>> it changes behavior.
>
>
> IIRC the problem here is the Gerrit API - it doesn't count +0 as a
> 'review', so +0s just don't show up in any automated tools. (This isn't
> easily solved either, even assuming you're willing to modify Gerrit.)

The comments are definitely accessible over the new JSON API that
gertty uses, so we can solve this now, however fugly it might look in
reviewstats.
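
For illustration, a rough sketch of the sort of thing I mean. The
/changes/{id}/detail endpoint and the )]}' anti-XSSI prefix are stock
Gerrit REST behaviour; the host, the change number, and the "no
Code-Review label in the message summary" heuristic are placeholders of
mine, not anything reviewstats does today:

import json

import requests

GERRIT = "https://review.openstack.org"  # placeholder host
XSSI_PREFIX = ")]}'"  # Gerrit prepends this to JSON to defeat XSSI


def count_zero_reviews(change_id):
    """Count review messages on a change that carry no Code-Review vote."""
    resp = requests.get("%s/changes/%s/detail" % (GERRIT, change_id))
    resp.raise_for_status()
    # Strip the anti-XSSI prefix before parsing the JSON body.
    detail = json.loads(resp.text[len(XSSI_PREFIX):])
    zeros = 0
    for msg in detail.get("messages", []):
        text = msg.get("message", "")
        # Heuristic (my assumption): a comment-only "+0" review
        # announces a patch set but records no Code-Review label in
        # its summary line.
        if text.startswith("Patch Set") and "Code-Review" not in text:
            zeros += 1
    return zeros


print(count_zero_reviews(177170))  # hypothetical change number

The heuristic would obviously need tuning against real Gerrit message
formats, but the point stands: the data is all reachable.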


> Individualised closed-loop metrics *always* drive bad behaviour,
> because they're necessarily only a sample of the behaviour we care
> about, and to the extent that the sample is representative of the
> whole, it can only remain so in the open-loop case - that is, while
> nobody is being measured against it. So we can, and should, tweak
> metrics to reduce bad behaviour and encourage good behaviour, but we
> shouldn't kid ourselves that we can eliminate unintended consequences -
> we can only exchange them for _different_ unintended consequences.
>
> This is an open community, so we can't (and shouldn't want to) prevent
> people from publishing stats. The best case is that we use them only to
> inform us how we're doing in the aggregate, and discourage companies in
> particular from attaching individual incentives to them, which only
> encourages gaming the metrics. Members of the TC, at least (I don't
> know that there was ever an official edict or anything), have expressed
> this in the past, and I think it's one of those things that requires
> vigilance and periodic reminders.

We could publish a document of common bad metrics and why we as a
community reject them. That might give folks inside contributing
companies some additional support in convincing metric-users not to
rely on some of the metrics out there.

-Rob


-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


