[openstack-dev] [fuel][puppet] The state of collaboration: 9 weeks

Dmitry Borodaenko dborodaenko at mirantis.com
Tue Aug 18 07:33:33 UTC 2015


Two weeks ago, I flagged the patch sets to commits ratio as the biggest
problem that Fuel contributors to Puppet OpenStack should address, and
over the past two weeks the situation has improved dramatically. In the
first and second weeks of August, 19 of our commits were merged
upstream, bringing our patch-sets-per-commit ratio from 19 down to 5.6
(while the average for Puppet OpenStack during that period was 6.5).
With the share of patch sets pushed by Fuel developers remaining at
roughly the same level (15.9% vs 17.4%), I think it's safe to call this
problem solved. Simply awesome!
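
For reference, here's a minimal sketch of how such a ratio can be
computed, assuming the metric is the total number of patch sets across
merged changes divided by the number of merged changes (the change data
below is hypothetical, not our actual Gerrit numbers):

    # Patch-sets-per-commit sketch: total patch sets across merged
    # changes divided by the number of merged changes.
    merged_changes = [
        {"change": "Iaaa", "patch_sets": 3},
        {"change": "Ibbb", "patch_sets": 9},
        {"change": "Iccc", "patch_sets": 5},
    ]

    total_patch_sets = sum(c["patch_sets"] for c in merged_changes)
    ratio = total_patch_sets / float(len(merged_changes))
    print("patch sets per commit: %.1f" % ratio)  # 5.7 for this sample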

Comparing the last 30 days' contribution stats with the same numbers
from two weeks ago:

Bogdan Dobrelia (#3 reviewer!): 67.2% -> 66.3% (disagreements 4.9% -> 3.6%)
Denis Egorenko: 97% -> 87.5% (disagreements 12.1% -> 13.9%)
Vasyl Saienko: 100% -> 96.4% (disagreements 16.7% -> 10.7%)
Ivan Berezovskiy: 100% -> 92.3% (disagreements 0% -> 3.8%)
Sergey Kolekonov: 91.7% -> 95.7% (disagreements 8.3% -> 13%)
Max Yatsenko: n/a -> 100% (disagreements n/a -> 17.4%)
Alex Schultz: 80% -> 80% (disagreements 20% -> 26.7%)
Bartlomiej Piotrowski: n/a -> 100% (disagreements n/a -> 12.5%)
Sergii Golovatiuk: 100% -> 100% (disagreements 33.3% -> 0%)

Bogdan continues to set the example and improve his numbers, which is
not surprising considering that he's also the top reviewer in
fuel-library. I think the puppet-nova and puppet-neutron teams should
seriously consider nominating him for core; he already tops the
reviewer lists for both modules.

Denis, Vasyl, and Ivan are not there yet, but they have all noticeably
increased both the number and the quality of their reviews. Keep it up,
guys!

Numbers for the other top reviewers are uneven and small enough for
noise to overwhelm meaningful data; all I can recommend here is to
watch your disagreements and learn from them. A ratio above 10% can
mean one of three things: 1) you're not doing enough reviews, so even a
handful of disagreements sticks out -- do more reviews and this will
improve on its own; 2) you're missing problems with the code that other
reviewers find unacceptable -- try to be more attentive and watch for
the things you've been missing; 3) you disagree with the majority on
how some things should be done -- discuss your differences on IRC or
the ML and work out a consensus.
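
To make the noise point concrete, here's a minimal sketch of the
disagreement ratio as I read it (disagreements as a share of total
reviews); the counts are hypothetical:

    # Disagreement ratio sketch: disagreements as a percentage of all
    # reviews. With few reviews, one disagreement moves the ratio a lot.
    def disagreement_ratio(disagreements, reviews):
        return 100.0 * disagreements / reviews

    print(disagreement_ratio(1, 8))   # 12.5 -- one disagreement in 8 reviews
    print(disagreement_ratio(1, 40))  # 2.5 -- same disagreement, more reviews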

Moving on to other numbers, weekly IRC meeting participation remains
good:

Aug-4: 4 of 15 participants, 22 of 162 lines
Aug-11: 6 of 16 participants, 62 of 192 lines

Unfortunately, it's not all roses and unicorns. Last week I flagged a
stuck review that I think has been mishandled by the Puppet OpenStack
team [0][1], and it still remains in the same state.

[0] https://review.openstack.org/198695
[1] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-11-15.00.log.html#l-140

On July 8, patch set 2 passed CI. It received two +1 votes from other
Fuel developers on July 10 and 29, but was ignored by upstream
reviewers until August 10, more than a month after the current patch
set was posted. Then a Swift core reviewer left a -1 disagreeing with
the intent of the patch, and even though the patch author posted a
rebuttal a day later, the patch has remained stuck and untouched for
yet another week.

It's a one-off case that does not outweigh the positive trends I've
outlined above, but even one stuck patch that fixes a critical bug is
enough to justify a fork.

Speaking of forks, we managed to un-fork 9 upstream modules [2] with
puppet-librarian-simple before the Fuel 7.0 soft code freeze kicked in.

[2] https://github.com/stackforge/fuel-library/blob/master/deployment/Puppetfile

That's 9 times my conservative estimate of converting at least 1 module
to librarian in 7.0, but these were the easiest modules, the ones that
deviated least from upstream. We've still got 50 more modules to
convert in Fuel 8.0, and many of them will require commits to be merged
upstream before they can be completely un-forked. Having such commits
wait for a month at a time only to be summarily rejected puts this
effort at risk; let's figure out what went wrong this time and come up
with a way to prevent it from happening again.
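
To keep an eye on the remaining work, a rough script like the one below
could scan the Puppetfile [2] and flag modules whose source still
points at a fork. It assumes every entry carries a :git source in
librarian's mod 'name', :git => 'url' form and treats anything outside
github.com/openstack as a fork; both assumptions may need adjusting for
the real file:

    # Rough sketch: list Puppetfile modules and whether their :git
    # source points at upstream (github.com/openstack) or at a fork.
    import re

    UPSTREAM = "github.com/openstack/"

    with open("deployment/Puppetfile") as f:
        text = f.read()

    entries = re.findall(r"mod\s+'([^']+)'.*?:git\s*=>\s*'([^']+)'",
                         text, re.DOTALL)
    for name, url in entries:
        status = "upstream" if UPSTREAM in url else "still forked"
        print("%-25s %-13s %s" % (name, status, url))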

Thanks,

-- 
Dmitry Borodaenko


