[openstack-dev] [reviews] putting numbers on -core team load

Robert Collins robertc at robertcollins.net
Thu Nov 14 21:03:34 UTC 2013


On 15 November 2013 03:03, Daniel P. Berrange <berrange at redhat.com> wrote:
> On Thu, Nov 14, 2013 at 08:29:36PM +1300, Robert Collins wrote:

>> One confounding factor here is that this counts (AIUI) pushed changes,
>> not change-ids - so we don't need two +2's for every push, we need two
>> +2's for every change-id. We should add that as a separate metric, I
>> think, as the needed +2 count will be a bit lower.
>
> NB, this analysis seems to assume that we actually /want/ to eventually
> approve every submitted patch. There's going to be some portion that is
> abandoned or rejected. It is probably a reasonably small portion of the
> total, so it won't change the order of magnitude, but I'd be interested
> in stats on how many patches we reject in one way or another.

That is true: some proposals will be conceptually broken and not fixable
via review discussion. As you say, we should track that - and distinguish
-2'd changes from abandoned ones, whether the submitter went away or
explicitly clicked abandon.
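
To put rough numbers on both of those, something like the sketch below
against Gerrit's REST API should do. This is a ballpark sketch only: it
assumes a Gerrit new enough to expose the /changes/ endpoint, the query
strings and field names are from memory (worth checking against the REST
docs), and there's no pagination, so large result sets get truncated.

    import json
    import requests

    BASE = 'https://review.openstack.org'

    def query(q, opts=()):
        # Gerrit prefixes JSON bodies with ")]}'" to defeat XSSI;
        # drop that first line before parsing.
        url = '%s/changes/?q=%s&n=500' % (BASE, q)
        for o in opts:
            url += '&o=%s' % o
        raw = requests.get(url).text
        return json.loads(raw.split('\n', 1)[1])

    # Merged changes from the last 90 days, with every patch set
    # listed, to compare +2s-per-change-id against +2s-per-push.
    merged = query('project:openstack/nova+status:merged+-age:90d',
                   opts=['ALL_REVISIONS'])
    pushes = sum(len(c['revisions']) for c in merged)
    print('%d merged change-ids, %d patch sets pushed'
          % (len(merged), pushes))
    print('~%d +2s actually needed, vs %d if we counted per push'
          % (2 * len(merged), 2 * pushes))

    # Rejected outright vs abandoned; telling 'submitter went away'
    # from 'submitter clicked abandon' would need a second pass over
    # each change's messages.
    vetoed = query('project:openstack/nova+label:Code-Review=-2+status:open')
    abandoned = query('project:openstack/nova+status:abandoned+-age:90d')
    print('%d open with a -2, %d abandoned' % (len(vetoed), len(abandoned)))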

>> Why such a big number of volunteers? Because we need a lot of people
>> to spread the load: Nova has a high incoming patch rate.
>
> One concern I have is that this exercise could turn out to be a bit like
> building new roads to solve traffic jams. The more roads you build, the
> more car usage you trigger. In some ways the limited rate of approvals
> can be seen as acting as a natural brake on the rate of patch submissions
> getting out of control. I'm not saying we've reached that point currently,
> but at some point I think it is wise to ask what rate of code churn we
> should try to support as a project.

We might trigger a Nash equilibrium? Potentially, indeed!

> One of the problems of a project as large as Nova is that it is hard for
> one person to be expert at reviewing all areas of the code. As you increase
> the size of the core review team we have to be careful we don't get a
> situation where too many patches are being reviewed & approved by people
> who are not the natural experts in an area. At the same time you don't
> want people to be strictly siloed to just one area of the codebase,
> because you want people seeing the big picture. If we were to grow the
> core review team I think it may be worth trying to be a bit clearer about
> who the experts are for different areas of the code, so they can focus
> reviews on areas where they would be most valuable. We do this a little
> informally currently, but I think it'd be useful to make it a bit more
> explicit.

I don't have a considered view on this, but given the large volume of
changes (per Chris's excellent updated stats - nice stuff, Chris), unless
we keep on top of it, it will be very hard for subject-matter experts to
cherry-pick the reviews they need to see, and conversely hard for
generalists to stay out of SME territory.
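
To make that concrete, I'm picturing something as simple as a checked-in
map from path prefixes to the people who claim them, which a dashboard
could use to route open reviews. Everything below - the names, the
prefixes, the map itself - is made up for illustration; we have nothing
like this today:

    # Hypothetical routing table: longest matching path prefix wins,
    # with the empty prefix as the generalist catch-all.
    AREAS = {
        'nova/virt/libvirt/': ['libvirt-sme-1', 'libvirt-sme-2'],
        'nova/scheduler/': ['scheduler-sme'],
        '': ['generalists'],
    }

    def reviewers_for(files):
        # Collect the claimed reviewers for every file a change touches.
        picked = set()
        for f in files:
            prefix = max((p for p in AREAS if f.startswith(p)), key=len)
            picked.update(AREAS[prefix])
        return sorted(picked)

    print(reviewers_for(['nova/virt/libvirt/driver.py', 'nova/utils.py']))
    # -> ['generalists', 'libvirt-sme-1', 'libvirt-sme-2']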

One thing we might consider when the time comes - and I know it's
contentious - is splitting the tree, *not* for 'supported vs other'
reasons, but to make the review boundaries very clear. But we should
/only/ do that if that's the specific problem we have: just having 'too
many reviews' is a different problem, and splitting because of that
would exacerbate things, not make them better. IMNSHO.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


