[nova][ptg] Ussuri scope containment
balazs.gibizer at est.tech
Tue Oct 1 07:30:57 UTC 2019
On Tue, Oct 1, 2019 at 1:09 AM, Eric Fried <openstack at fried.cc> wrote:
> Nova developers and maintainers-
> Every cycle we approve some number of blueprints and then complete a
> percentage  of them. Which blueprints go unfinished seems to be
> completely random (notably, it appears to have nothing to do with our
> declared cycle priorities). This is especially frustrating for the owner
> of a feature, who (understandably) interprets blueprint/spec approval as
> a signal that they can reasonably expect the feature to land.
> The cause for non-completion usually seems to fall into one of several
> broad categories:
> == Inadequate *developer* attention ==
> - There's not much to be done about the subset of these where the
> contributor actually walks away.
> - The real problem is where the developer thinks they're ready for
> reviewers to look, but reviewers don't. Even things that seem obvious to
> experienced reviewers, like failing CI or "WIP" in the commit title,
> will cause patches to be completely ignored -- but unseasoned
> contributors don't necessarily understand even that, let alone more
> subtle issues. Consequently, patches will languish, with each side
> expecting the other to take the next action. This is a problem of
> culture: contributors don't understand nova reviewer procedures and
> expectations.
> == Inadequate *reviewer* attention ==
> - Upstream maintainer time is limited.
> - We always seem to have low review activity until the last two or three
> weeks before feature freeze, when there's a frantic uptick and a lot of
> last-minute activity.
> - But there's a cultural rift here as well. Getting maintainers to care
> about a blueprint is hard if they don't already have a stake in it. The
> "squeaky wheel" concept is not well understood by unseasoned
> contributors. The best way to get reviews is to lurk in IRC and beg.
> Aside from not being intuitive, this can also be difficult logistically
> (time zone pain, knowing which nicks to ping and how) as well as
> interpersonally (how much begging is enough? too much? when is it
> counterproductive?)
When I joined I was taught that, instead of begging, you should go and
review open patches, which a) helps with the review load of the dev team
and b) makes you known in the community. Both help you get reviews on
your own patches. Does it always work? No. Do I like begging for
reviews? No. Do I like to get repeatedly pinged to review? No. So I
would suggest not declaring that the only way to get reviews is to go
and beg.
> == Multi-release efforts that we knew were going to be multi-release ==
> These may often drag on far longer than they perhaps should, but I'm not
> going to try to address that here.
> There's nothing new or surprising about the above. We've tried to
> address these issues in various ways in the past, with varying degrees
> of effectiveness.
> I'd like to try a couple more.
> (A) Constrain scope, drastically. We marked 25 blueprints complete in
> Train. Since there has been no change to the core team, let's limit
> Ussuri to 25 blueprints. If this turns out to be too few, what's the
> worst thing that happens? We finish everything, early, and wish we had
> done more. If that happens, drinks are on me, and we can bump the number
> for V.
I support the idea that we limit our scope. But it is pretty hard to
select which 25 (or whatever amount we agree on) bps we approve out of
the possible ~50ish. What will be the method of selection?
> (B) Require a core to commit to "caring about" a spec before we approve
> it. The point of this "core liaison" is to act as a mentor to mitigate
> the cultural issues noted above, and to be a first point of contact
> for reviews. I've proposed this to the spec template here.
I proposed this before and I still think this could help. And to
partially answer my question above, this could be one of the ways to
limit the approved bps. If each core only commits to "care about" the
implementation of 2 bps, then we already have a limit for the number of
approved bps.
>  Like in the neighborhood of 60%. This is anecdotal; I'm not aware of
> a good way to go back and mine actual data.
>  Stuff happens, sure, and nobody expects 100%, but 60%? Come on, we
> have to be able to do better than that.
>  https://blueprints.launchpad.net/nova/train
>  Recognizing of course that not all blueprints are created equal,
> this is more an attempt at a reasonable heuristic than an actual
> expectation of total size/LOC/person-hours/etc. The theory being that
> constraining to an actual number, whatever the number may be, is better
> than not constraining at all.
>  If you're a core, you can be your own liaison, because presumably
> you don't need further cultural indoctrination or help begging for
> reviews.
>  https://review.opendev.org/685857