[nova][ptg] Ussuri scope containment

Tom Barron tpb at dyncloud.net
Tue Oct 1 12:38:50 UTC 2019


On 01/10/19 07:30 +0000, Balázs Gibizer wrote:
>
>
>On Tue, Oct 1, 2019 at 1:09 AM, Eric Fried <openstack at fried.cc> wrote:
>> Nova developers and maintainers-
>>
>> Every cycle we approve some number of blueprints and then complete
>> a low percentage [1] of them. Which blueprints go unfinished seems
>> to be completely random (notably, it appears to have nothing to do
>> with our declared cycle priorities). This is especially frustrating
>> for consumers of a feature, who (understandably) interpret
>> blueprint/spec approval as a signal that they can reasonably expect
>> the feature to land [2].
>>
>> The cause for non-completion usually seems to fall into one of several
>> broad categories:
>>
>> == Inadequate *developer* attention ==
>> - There's not much to be done about the subset of these where the
>> contributor actually walks away.
>>
>> - The real problem is where the developer thinks they're ready for
>> reviewers to look, but reviewers don't. Even things that seem
>> obvious to experienced reviewers, like failing CI or "WIP" in the
>> commit title, will cause patches to be completely ignored -- but
>> unseasoned contributors don't necessarily understand even that, let
>> alone more subtle issues. Consequently, patches will languish, with
>> each side expecting the other to take the next action. This is a
>> problem of culture: contributors don't understand nova reviewer
>> procedures and psychology.
>>
>> == Inadequate *reviewer* attention ==
>> - Upstream maintainer time is limited.
>>
>> - We always seem to have low review activity until the last two or
>> three weeks before feature freeze, when there's a frantic uptick
>> and lots gets done.
>>
>> - But there's a cultural rift here as well. Getting maintainers to
>> care about a blueprint is hard if they don't already have a stake
>> in it. The "squeaky wheel" concept is not well understood by
>> unseasoned contributors. The best way to get reviews is to lurk in
>> IRC and beg. Aside from not being intuitive, this can also be
>> difficult logistically (time zone pain, knowing which nicks to ping
>> and how) as well as interpersonally (how much begging is enough?
>> too much? when is it appropriate?).
>
>When I joined I was taught that instead of begging, go and review
>open patches, which a) helps the review load of the dev team and
>b) makes you known in the community. Both help in getting reviews on
>your patches. Does it always work? No. Do I like begging for reviews?
>No. Do I like to get repeatedly pinged to review? No. So I would
>suggest not declaring that the only way to get reviews is to go and
>beg.

+1

In projects I have worked on there has been no need to encourage
extra begging, and squeaky-wheel prioritization has IMO not been a
healthy thing.

There is no better way to get one's reviews stalled than to beg for
reviews with patches that are not close to ready, while at the same
time contributing no useful reviews oneself.

There is nothing wrong with pinging to get attention to a review if it 
is ready and languishing, or if it solves an urgent issue, but even in 
these cases a ping from someone who doesn't "cry wolf" and who has 
built a reputation as a contributor carries more weight.

>
>>
>> == Multi-release efforts that we knew were going to be
>> multi-release ==
>> These may often drag on far longer than they perhaps should, but
>> I'm not going to try to address that here.
>>
>> ========
>>
>> There's nothing new or surprising about the above. We've tried to
>> address these issues in various ways in the past, with varying degrees
>> of effectiveness.
>>
>> I'd like to try a couple more.
>>
>> (A) Constrain scope, drastically. We marked 25 blueprints complete
>> in Train [3]. Since there has been no change to the core team,
>> let's limit Ussuri to 25 blueprints [4]. If this turns out to be
>> too few, what's the worst thing that happens? We finish everything,
>> early, and wish we had done more. If that happens, drinks are on
>> me, and we can bump the number for V.
>
>I support the idea that we limit our scope. But it is pretty hard to
>select which 25 (or whatever amount we agree on) blueprints we
>approve out of a possible ~50. What will be the method of selection?
>
>>
>> (B) Require a core to commit to "caring about" a spec before we
>> approve it. The point of this "core liaison" is to act as a mentor
>> to mitigate the cultural issues noted above [5], and to be a first
>> point of contact for reviews. I've proposed this to the spec
>> template here [6].
>
>I proposed this before and I still think it could help. And to
>partially answer my question above, this could be one of the ways to
>limit the approved blueprints: if each core only commits to "care
>about" the implementation of 2 blueprints, then we already have a
>limit on the number of approved blueprints.
>
>Cheers,
>gibi
>
>>
>> Thoughts?
>>
>> efried
>>
>> [1] Like in the neighborhood of 60%. This is anecdotal; I'm not
>> aware of a good way to go back and mine actual data.
>> [2] Stuff happens, sure, and nobody expects 100%, but 60%? Come on,
>> we have to be able to do better than that.
>> [3] https://blueprints.launchpad.net/nova/train
>> [4] Recognizing of course that not all blueprints are created
>> equal, this is more an attempt at a reasonable heuristic than an
>> actual expectation of total size/LOC/person-hours/etc. The theory
>> being that constraining to an actual number, whatever the number
>> may be, is better than not constraining at all.
>> [5] If you're a core, you can be your own liaison, because presumably
>> you don't need further cultural indoctrination or help begging for
>> reviews.
>> [6] https://review.opendev.org/685857
>>
>
>
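On footnote [1]: the completion rate should be roughly minable from
Launchpad itself rather than left anecdotal. A rough, untested sketch
using launchpadlib -- assuming the 'devel' API still exports
valid_specifications on a series and implementation_status on each
specification (worth verifying before relying on it):

    # Untested sketch: estimate the blueprint completion rate for a
    # nova series straight from Launchpad. Assumes the 'devel' API
    # exposes valid_specifications on a series and
    # implementation_status on each specification.
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('bp-stats', 'production',
                                     version='devel')
    series = lp.projects['nova'].getSeries(name='train')
    specs = list(series.valid_specifications)
    done = sum(1 for s in specs
               if s.implementation_status == 'Implemented')
    print('%d/%d blueprints implemented (%.0f%%)'
          % (done, len(specs), 100.0 * done / len(specs)))

Even a rough number from something like that would at least replace
the anecdotal 60% when picking a cap for Ussuri.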


