[openstack-dev] [TripleO] Review metrics - what do we want to measure?

Ben Nemec openstack at nemebean.com
Wed Sep 10 14:20:59 UTC 2014


On 09/10/2014 03:57 AM, Steven Hardy wrote:
> On Thu, Sep 04, 2014 at 01:54:20PM +0000, Jeremy Stanley wrote:
>> On 2014-09-04 11:01:55 +0100 (+0100), Derek Higgins wrote:
>> [...]
>>> How would people feel about turning [auto-abandon] back on?
>>
>> A lot of reviewers (myself among them) feel auto-abandon was a
>> cold and emotionless way to provide feedback on a change, especially
>> on high-change-volume projects where core reviewers may at times get
>> sucked into triaging other problems for long enough that the
>> auto-abandoner kills lots of legitimate changes (possibly from
>> new contributors who will be even more put off by this than by the
>> silence itself, and walk away for good with the impression that
>> we really aren't a welcoming development community at all).
> 
> I can understand this argument, and perhaps auto-abandon with a longer
> period - say a month for the submitter to address comments and reset the
> clock - would be a reasonable compromise?
> 
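FWIW, Gerrit's query language already gives us the "reset the clock"
semantics: the age: operator only matches changes with no activity (new
patch set or comment) for the given period, so new activity on the change
resets it.  Assuming a month, the expiry candidates would be roughly
(operator syntax varies by Gerrit version, so treat this as approximate):

  status:open label:Code-Review<=-1 age:4w
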
> The problem we have now is a constantly expanding queue of zombie
> reviews, where the submitter got some negative feedback and, for whatever
> reason, has chosen not to address it.
> 
> For example, in my "Incoming reviews" queue on the gerrit dashboard, I've
> got reviews I (or someone) -1'd over four months ago with zero response
> from the submitter. What value is there in these reviews cluttering up
> the dashboard for every reviewer?
> 
> To make matters worse, Jenkins comes along periodically and rechecks, or
> figures out the patch fails to merge, and bumps the zombie back up to the
> top of the queue - obviously I don't realise that it's not a human
> comment I need to read until I've expended the effort of clicking on the
> review and reading it :(
> 
> From a reviewer perspective this is untenable, and it means the review
> dashboard is basically unusable without complex queries to weed out the
> active reviews from the zombies.
> 
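Until then, a dashboard query roughly like this keeps the zombies out of
sight (exact operator support depends on the Gerrit version, so consider
it an approximation):

  reviewer:self status:open -label:Code-Review<=-1

i.e. open changes I'm reviewing that have no unaddressed negative score.
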
>>> Can it be done on a per project basis?
>>
>> It can, by running your own... but again it seems far better for
>> core reviewers to decide if a change has potential or needs to be
>> abandoned - that way there's an accountable human making that
>> deliberate choice rather than the review team hiding behind an
>> automated process so that no one is to blame for hurt feelings
>> besides the infra operators who are enforcing this draconian measure
>> for you.
> 
> With all the threads about core-reviewer overload, I think it's
> unreasonable to expect us to scrub the review queue making daily
> judgements on whether a patch should be abandoned. I'd also argue that a
> human abandoning another human's patch is far more demotivating to
> contributors than a clearly documented automated process: address
> negative review comments within a set period or your review expires.
> 
> If you think about mailing list patch workflow, it's similar - you post
> your patch, get review feedback by email, then post new revisions with
> fixes. If you fail to post new revisions, your patch falls out the
> bottom of people's inboxes and doesn't get merged.
> 
>>
>>> To make the whole process a little friendlier, we could increase
>>> the time frame from one week to two.
>>
>> <snark>How about just automatically abandon any new change as soon
>> as it's published, and if the contributor really feels it's
>> important they'll unabandon it.</snark>
> 
> I think that's a pretty unfair comment - all reviewers and most
> contributors are working really hard. All we're asking is that
> contributors work with reviewers to get their patch into shape within a
> reasonable time. :(
> 
> As someone who's drinking from the firehose every day with a seemingly
> insurmountable review queue, I'd rather we worked towards processes which
> help us manage the workload in a sustainable way, rather than leaving the
> review queue the zombie-soup-of-despair it currently is.
> 
> All we need is two things:
> 
> 1. A way to automatically escalate reviews which have had no feedback at
> all from core reviewers within a reasonable time (say a week or two).
> 
> 2. A way to automatically remove reviews from the queue which have
> negative feedback from a core reviewer with no resolution within a
> reasonable time (say a month or several weeks, so it's not perceived as
> contributor-hostile).

A suggestion Robert made during that discussion was to have auto-WIP
instead of auto-abandon.  That should be less hostile to contributors
and would still make it easy to filter out the reviews that aren't ready
to merge anyway.  For me personally, and for the sake of tracking the
stats, that would be sufficient to address the problem.
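
As a strawman, the auto-WIP job could be not much more than the
following (an untested sketch against the Gerrit 2.8 ssh interface; the
Workflow label where -1 means work-in-progress, the four-week window,
and the tripleo project regex are all assumptions to be bikeshedded):

  #!/bin/bash
  # Strawman auto-WIP: find open tripleo changes whose negative review
  # feedback has gone unaddressed for four weeks and mark them WIP
  # (Workflow=-1) rather than abandoning them.
  GERRIT="ssh -p 29418 review.openstack.org gerrit"

  # The inner double quotes survive the remote shell, keeping each
  # argument intact (the query contains '<', the message contains spaces).
  QUERY='"status:open project:^openstack/tripleo-.* label:Code-Review<=-1 age:4w"'

  # age:4w only matches changes untouched for four weeks, so any new
  # patch set or comment resets the clock.
  for change in $($GERRIT query --format=JSON --current-patch-set "$QUERY" |
      jq -r 'select(.currentPatchSet) |
             "\(.number),\(.currentPatchSet.number)"'); do
    $GERRIT review --label Workflow=-1 \
        --message '"Auto-WIP: negative feedback unaddressed for 4 weeks"' \
        "$change"
  done

Steve's item (1) could reuse the same skeleton with a query along the
lines of status:open -is:reviewed age:2w, posting an escalation comment
instead of setting Workflow=-1.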

> 
> Note (2) still allows "core reviewers to decide if a change has
> potential"; it just becomes opt-in, i.e. we have to address the issues
> which prevent us giving it positive feedback, either by directly working
> with the contributor, or by delegating the work to an active contributor
> if the original patch author has decided not to continue work on the
> patch.
> 
> Ultimately, it's not just about reviews - who's going to maintain all
> these features if the author is not committed enough to post even one
> patch a month while getting it through the review process? Oh wait,
> that'll be another job for the core team, won't it?
> 
> Steve
> 