[openstack-dev] [Hyper-V] Havana status

Joe Gordon joe.gordon0 at gmail.com
Fri Oct 11 02:31:40 UTC 2013


On Thu, Oct 10, 2013 at 5:43 PM, Tim Smith <tsmith at gridcentric.com> wrote:

> On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant <rbryant at redhat.com> wrote:
>
>
>> Please understand that I only want to help here.  Perhaps a good way for
>> you to get more review attention is to get more karma in the dev community
>> by helping review other patches.  It looks like you don't really review
>> anything outside of your own stuff, or patches that touch Hyper-V.  In
>> the absence of significant interest in Hyper-V from others, the only way
>> to get more attention is by increasing your karma.
>>
>
> NB: I don't have any vested interest in this discussion except that I want
> to make sure OpenStack stays "Open", i.e. inclusive. I believe the concept
> of "reviewer karma", while seemingly sensible, is actually subtly counter
> to the goals of openness, innovation, and vendor neutrality, and would also
> lead to overall lower commit quality.
>
>
The way I see it, there are a few parts to 'karma', including:

* The ratio of reviewers to open patches is way off. In nova there are only
21 reviewers who have done on average two reviews a day for the past 30
days [1], and there are 226 open reviews, 125 of which are still waiting for
a reviewer [2].  So one part of karma is helping out the team as a whole with
the review workload (and the more insightful the review, the better).  If
we have more reviewers, more patches get looked at faster.
* The more I see someone being active, through reviews or through patches,
the more I trust their +1s/-1s and their patches.


While there are some potentially negative sides to karma, I don't see how
the properties above, which to me are the major elements of karma, can be
considered negative.


[1] http://www.russellbryant.net/openstack-stats/nova-reviewers-30.txt
[2] http://www.russellbryant.net/openstack-stats/nova-openreviews.html
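(As an aside, anyone who wants to sanity-check the open-review count rather
than wait for the reviewstats reports in [1] and [2] can hit Gerrit's REST
API directly.  This is only a rough sketch: it assumes the standard /changes/
endpoint on review.openstack.org, looks at a single page of results, and does
none of the filtering reviewstats does.)

    import json
    import requests

    # Gerrit prefixes its JSON responses with ")]}'" to guard against
    # XSSI, so drop the first line before parsing.
    URL = ("https://review.openstack.org/changes/"
           "?q=status:open+project:openstack/nova&n=500")

    raw = requests.get(URL).text
    changes = json.loads(raw.split("\n", 1)[1])
    print("open nova reviews (first page only): %d" % len(changes))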



> Brian Kernighan famously wrote: "Debugging is twice as hard as writing the
> code in the first place." A corollary is that constructing a mental model
> of code is hard; perhaps harder than writing the code in the first place.
> It follows that reviewing code is not an easy task, especially if one has
> not been intimately involved in the original development of the code under
> review. In fact, if a reviewer is not intimately familiar with the code
> under review, and therefore only able to perform the functions of human
> compiler and style-checker (functions which can be and typically are
> performed by automatic tools), the rigor of their review is at best
> less-than-ideal, and at worst purely symbolic.
>
>
FWIW, we have automatic style-checking.
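(To expand on that a bit: every patch goes through a pep8/flake8 style job in
the gate before a human ever looks at it.  The same check can be reproduced
programmatically; a minimal sketch, assuming the pep8 package is installed
and leaving aside the project-specific "hacking" rules the real gate job adds:)

    import pep8

    # Run the PEP 8 checker over a file and report the number of
    # violations, roughly what the gate's style job does (minus the
    # OpenStack-specific hacking checks).
    style = pep8.StyleGuide()
    report = style.check_files(["nova/compute/manager.py"])
    print("pep8 violations found: %d" % report.total_errors)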



> It is logical, then, that a reviewer should review changes to code that
> he/she is familiar with. Attempts to gamify the implicit review
> prioritization system through a "karma" scheme are sadly doomed to fail, as
> contributors hoping to get their patches reviewed will have no option but
> to "build karma" by reviewing patches in code they are unfamiliar with,
> leading to a higher number of low-quality reviews.
>
> So, if a cross-functional "karma" system won't accomplish the desired
> result (high-quality reviews of commits across all functional units), what
> will it accomplish (besides overall lower commit quality)?
>
> Because the "karma" system inherently favors entrenched (read: heavily
> deployed) code, it forms a slippery slope leading to a mediocre
> "one-size-fits-all" stack, where contributors of new technologies,
> approaches, and hardware/software drivers will see their contributions die
> on the vine due to lack of core reviewer attention. If the driver team for
> a widely deployed hypervisor (outside of the OpenStack space - they can't
> really be expected to have wide OpenStack deployment without a mature
> driver) is having difficulty with reviews due to an implicit "karma"
> deficit, imagine the challenges that will be faced by the future
> SDN/SDS/SDx innovators of the world hoping to find a platform for their
> innovation in OpenStack.
>
> Again, I don't have any vested interest in this discussion, except that I
> believe the concept of "reviewer karma" to be counter to both software
> quality and openness. In this particular case it would seem that the
> simplest solution to this problem would be to give one of the Hyper-V team
> members core reviewer status, but perhaps there are consequences to that
> which elude me.
>

> Regards,
> Tim
>
>
>
>> https://review.openstack.org/#/q/reviewer:3185+project:openstack/nova,n,z
>>
>> --
>> Russell Bryant
>>