[openstack-dev] Bad review patterns
openstack at sheep.art.pl
Thu Nov 7 09:54:24 UTC 2013
On 06/11/13 21:23, Eugene Nikanorov wrote:
> I would like to disagree with some of the points.
> First of all, a '-1' mark may have a wrong perception, especially among
> new contributors.
> -1 doesn't mean reviewers don't want your code (which is -2), it means
> they are either not sure it is good enough, or they see how you could
> make it better.
>> Not sure if that works
> If you as a reviewer have a gut feeling ('not sure') that it may not
> work - put a -1.
> Let the submitter explain it to you. You can change your score later
> without requiring the submitter to change the patch.
> Also, there are plenty of cases when the code could not be tested
> because it requires specific environment to run.
> I think it's ok to put -1 in case of uncertainty. The only thing you
> should care about is that such a -1 is not 'finished': you don't
> require the submitter to change anything, so you need to check whether
> your concerns were addressed and potentially change the score.
In theory, a -1 can work as a "gee, I'm not sure" response, but you have
to be careful and consider what effects it has in practice. On top of
that, it's very easy to give an "I'm not sure" -1, but it doesn't really
add much value to the patch. That's why I use that pattern to remind
myself that when I'm reviewing somebody's code, I'm wearing a subject
matter expert's hat, and that it's my job to put in the effort to figure
things out. If I don't do that, I will look like a victim of the
Dunning-Kruger effect to the author of the patch.
Moreover, the author gets punished for writing smart code. Or at least
they feel like that, which amounts to the same thing. Back then I had
some very technically talented and knowledgeable members in my team,
who didn't enjoy politics, or arguing, or having to explain themselves
in words. What they love best is to build something and show that it
works -- you can't possibly argue with a working prototype, can you?
But demanding that they prove to you in words that a certain piece of
code is in fact valid removes them from the area where they shine, and
forces them to do a job that they don't enjoy and are not good at.
After all, they have already built it, and it works, so why don't you
just look at how it works? Why do they have to explain it to you?
Of course, they won't tell you this. (Actually, I was very lucky that
they did, after work, after a couple of beers). They will just make a
mental note: "That guy doesn't understand list comprehensions, avoid
using list comprehensions so that you don't have to explain it to him".
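To make the anecdote concrete, here is a purely illustrative sketch (the
function names and data shape are invented, not taken from any actual
patch) of the kind of one-liner involved, next to the explicit loop a
wary reviewer might prefer -- both build the same mapping:

```python
def active_ports_by_host(ports):
    # Comprehension version: one line, but denser to read at review time.
    return {p["host"]: p["id"] for p in ports if p["status"] == "ACTIVE"}


def active_ports_by_host_loop(ports):
    # Equivalent explicit loop: longer, but nothing to "explain in words".
    result = {}
    for p in ports:
        if p["status"] == "ACTIVE":
            result[p["host"]] = p["id"]
    return result
```

Both are valid; the point is only that asking the author to justify the
first form in prose pushes them toward the second, whether or not that
actually improves the code.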
Finally, if you don't understand it, and don't have access to the
specialized environment required to test it, why are you reviewing this?
What do you hope to contribute to the patch like that? Are you sure your
questions are actually helping the project, and not just satisfying your
curiosity?
Remember that all the bad patterns I listed are not strict "I never do
something like that" rules for me. They are more like "if I catch myself
doing something like this, I should stop and reconsider". I'm
sure there are some justified situations when you actually *should* ask
for clarification -- possibly also ask to put a comment explaining it in
the code, because if it confused you, it may confuse others in the
future, and also maybe write a unit test that tests it.
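As a sketch of that last suggestion (the function and its docstring-style
comment are invented for illustration): keep the terse code, but document
it and pin its behaviour with a small test, so the next confused reader
has both an explanation and a safety net:

```python
def dedupe_preserving_order(items):
    # dict.fromkeys keeps insertion order (Python 3.7+), so this both
    # removes duplicates and preserves the first occurrence of each item.
    return list(dict.fromkeys(items))


def test_keeps_first_occurrence_order():
    # The unit test the reviewer asked for: it pins down exactly the
    # behaviour that made the one-liner confusing in the first place.
    assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]
```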
> On point #2:
>> Let something repeat a couple of times just to be sure it actually is
>> a pattern.
> I think code repetition is not desirable in most of the cases and only
> occasionally repeating code gets merged. That happens especially when
> someone didn't put reusable code in a common place, so others are
> unaware of its existence. Instead of doing refactoring later on (who
> will do this?), just rely on the reviewer's opinion and check if
> generalization could be done.
> And this can grow as a snowball (oh, he has copy-pasted his stuff, why
> shouldn't I do the same?) so the earlier it is caught - the better.
So, as I already replied to Robert Collins, obvious mistakes like
blatant copy-pasting of code should immediately be pointed out, as
should places where an existing utility function could be used.
As to how to deal with the other cases, and who will do the refactoring
-- I view this as a great opportunity to either do some cleanup myself
(it really feels good to do it!), or to create some "low-hanging fruit"
bugs for the newcomers, or for myself for one of my worse days, when I
don't feel like working on anything new. The worst thing I can do is
just to let it go and forget about it.
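A sketch of what such a cleanup might look like (all names invented, not
from any real OpenStack module): two call sites that had copy-pasted the
same retry loop get a single shared helper instead:

```python
import time


def call_with_retries(func, attempts=3, delay=0.0):
    # Shared utility replacing the copy-pasted loops: call func(),
    # retrying up to `attempts` times before re-raising the last error.
    last_exc = None
    for _ in range(attempts):
        try:
            return func()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

Once the helper exists in a common place, the "oh, he has copy-pasted
his stuff, why shouldn't I?" snowball has nowhere to start.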
> On other points: the patch is required to be self-consistent. It needs
> to solve something that is stated in the bug/commit message and no more
> (and hopefully not less) than that. So the scope really matters. I
> haven't seen many comments from reviewers requesting occasional
> fixes. Although sometimes it makes sense to ask for such, especially
> if they are related, reasonable, and the patch is going to be improved
> anyway (due to some other issues). It also may happen that the
> submitter has missed certain things that also fall in the scope of the
> work he is doing.
> Having said that, I believe that it's not very difficult to get through
> a review where you get -1 for code & style issues.
> When you have your code, IDE, git and gerrit at your hands it is a
> matter of minutes (hours rarely) to resolve those and resubmit.
Now this differs widely depending on the team and the location. It can
take up to four days (assuming the reviewer came back to his comment
and didn't go on PTO) to resolve a trivial -1:
* you commit your patch and wait for reviews
* next morning you see a -1 comment with a nitpick, and you fix it
* next day the reviewer has removed his -1
* next day the core reviewers see there is no -1, so they review the
patch
> The real issue with the review process is that reviewers should follow
> up on their mark once they have put it, and they usually don't do that
> for various reasons. The worst thing to do is to put -1 and go away
> without leaving a chance for the submitter to explain what he meant
> and wanted.
> So once a reviewer has started a review (put a mark), it is his
> responsibility to work further on this review.
> Persistent -1 should indicate that reviewer and submitter could not
> agree on fundamentals of the patch (design, workflow, etc), which, in
> turn, means, that a discussion should be held within a community before
> the work on the patch is continued.
This is a very good point. In my team I was on review duty all the
time, so it wasn't much of an issue, but you are right that it is very
important in the more distributed environment of OpenStack. I will keep
it in mind for the future, thank you.
I'm also not sure whether it makes sense to stop reviews as soon as the
patch has a -1 on it. I noticed that this is what happens -- it's a
little as if people are waiting for the author to defend themselves,
unwilling to step between the two people involved. But I think it would
sometimes be very beneficial to defend the author -- give a +1 or even
+2 on top of an existing -1. I never saw that happen. What do you
think?