[openstack-dev] [TRIPLEO] tripleo-core update october
Ben Nemec
openstack at nemebean.com
Tue Oct 8 14:41:35 UTC 2013
On 2013-10-08 05:03, Robert Collins wrote:
> On 8 October 2013 22:44, Martyn Taylor <mtaylor at redhat.com> wrote:
>> On 07/10/13 20:03, Robert Collins wrote:
>
>
>> Whilst I can see that deciding on who is Core is a difficult task, I
>> do feel that creating a competitive environment based on the number
>> of reviews will be detrimental to the project.
>
> I'm not sure how it's competitive: I'd be delighted if every
> contributor was also a -core reviewer. I'm not setting a cap on the
> number of reviewers, nor do I think we need to think about setting
> one (at this point anyhow).
>
>> I do feel this is going to result in quantity over quality.
>> Personally, I'd like to see every commit properly reviewed and tested
>> before getting a vote, and I don't think these stats promote that.
>
> I think that's a valid concern. However, Nova has been running a (very
> slightly less mechanical) form of this for well over a year, and they
> are not drowning in -core reviewers. Yes, reviewing is hard, and folk
> should take it seriously.
>
> Do you have an alternative mechanism to propose? The key things for me
> are:
> - folk who are idling are recognised as such and gc'd around about
> the time their growing staleness will become an issue with review
> correctness
> - folk who have been putting in consistent reading of code + changes
> are given the additional responsibility of -core around about the time
> that they will know enough about what's going on to review effectively.
This discussion has come up in other projects too (not surprisingly),
so I thought I would mention some of the criteria being used there.
The first, and simplest, is from Dolph Mathews:
'Ultimately, "core contributor" to me simply means that this person's
downvotes on code reviews are consistently well thought out and
meaningful, such that an upvote by the same person shows a lot of
confidence in the patch.'
I personally like this definition because, while it requires a certain
volume of review work (which benefits the project), it also takes into
account the quality of those reviews. Obviously both are important.
Note that the +/- and disagreements columns in Russell's stats are
intended to help with determining review quality. Nothing can replace
the judgment of the current cores, of course, but if someone has been
+1'ing 95% of their reviews, it's probably a sign that they aren't
doing quality reviews. Likewise if they're -1'ing everything but
constantly disagreeing with cores.
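As a rough illustration of what those columns boil down to (this is not
Russell's actual reviewstats code, and the input format here is just an
assumption for the example), the calculation is essentially:

# Minimal sketch of the kind of signals the +/- and disagreement
# columns capture. NOT the actual reviewstats tool; the input format
# (a list of (vote, core_vote) pairs per reviewer) is an assumption
# made for this example.

def review_signals(reviews):
    """Return (+1 ratio, disagreement ratio) for one reviewer.

    reviews: iterable of (vote, core_vote) pairs, where vote is the
    reviewer's +1/-1 and core_vote is the eventual core +2/-2 on the
    same patch (or None if no core vote landed).
    """
    total = plus = disagree = 0
    for vote, core_vote in reviews:
        total += 1
        if vote > 0:
            plus += 1
        # "Disagreement" here: the reviewer and a core voted in
        # opposite directions on the same patch.
        if core_vote is not None and (vote > 0) != (core_vote > 0):
            disagree += 1
    if total == 0:
        return 0.0, 0.0
    return plus / float(total), disagree / float(total)

# Hypothetical data: a reviewer who +1's almost everything.
sample = [(1, 2), (1, 2), (1, -2), (1, 2), (-1, 2)]
plus_ratio, disagree_ratio = review_signals(sample)
print("+1 ratio: %.0f%%, disagreement ratio: %.0f%%"
      % (plus_ratio * 100, disagree_ratio * 100))

With that hypothetical sample it reports an 80% +1 ratio and a 40%
disagreement ratio, which is roughly the shape of red flag those
columns are meant to surface.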
An expanded version of that definition can be found in this post to
the list:
http://lists.openstack.org/pipermail/openstack-dev/2013-June/009876.html
To me, it is along the same lines as what Dolph said, just a bit more
specific about how "quality" should be demonstrated and measured.
Hope this is helpful.
-Ben