[stable][requirements][neutron] Capping pip in stable branches or not

Julia Kreger juliaashleykreger at gmail.com
Fri Dec 11 15:23:19 UTC 2020


On Fri, Dec 11, 2020 at 6:41 AM Jeremy Stanley <fungi at yuggoth.org> wrote:
>
> On 2020-12-11 10:20:35 +0100 (+0100), Bernard Cafarelli wrote:
> > now that master branches have recovered from the switch to pip's
> > new resolver, I started looking at the status of the stable
> > branches. tl;dr: for those with open pending backports, all branches
> > are broken at the moment, so please do not recheck.
>
> Was there significant breakage on master branches (aside from
> lower-constraints jobs, which I've always argued are inherently
> broken for this very reason)? If so, it didn't come to my attention.
> Matthew did some fairly extensive testing with the new algorithm
> across our coordinated dependency set well in advance of the pip
> release that actually turned it on by default.
>
> > Thinking about how to fix the gates for these branches: older EM
> > branches may be fine once the bandit 1.6.3 issue [1] is sorted out,
> > but most need a fix for the new pip resolver.
> >
> > pip has a flag to switch back to old resolver, but this is a
> > temporary one that will only be there for a few weeks [2]
> >
> > From a quick IRC chat, the general guidance for us was always to
> > leave pip uncapped, and the new resolver issues are actually
> > broken requirements.
> [...]
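
(For reference, the temporary switch mentioned above is pip's
--use-deprecated option; a minimal sketch, assuming pip 20.3 and the
usual OpenStack constraints/requirements file names:

    # opt back into the old resolver for a single install
    pip install --use-deprecated=legacy-resolver \
        -c upper-constraints.txt -r requirements.txt

    # the same option can normally also be set through pip's
    # environment-variable configuration, e.g. in a CI job definition
    export PIP_USE_DEPRECATED=legacy-resolver

As noted, pip upstream intends this only as a short-lived escape hatch.)
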
>
> Yes, it bears repeating that anywhere the new dep solver is breaking
> represents a situation where we were previously testing/building
> things with a different version of some package than we meant to.
> This is exposing latent bugs in our declared dependencies within
> those branches. If we decide to use "older" pip, that's basically
> admitting we don't care because it's easier to ignore those problems
> than actually fix them (which, yes, might turn out to be effectively
> impossible). I'm not trying to be harsh; it's certainly a valid
> approach, but let's be clear that this is the compromise we're
> making in that case.
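
(A minimal illustration of the kind of latent bug being described here,
with made-up package names and versions:

    # two declared dependencies with incompatible transitive pins:
    #   packageA  requires  somelib>=2.0
    #   packageB  requires  somelib<2.0
    #
    # old resolver: installs whichever somelib it happens to pick and
    #   only prints an incompatibility warning, so CI quietly tests a
    #   combination nobody declared as supported
    # new resolver: aborts with a ResolutionImpossible error, turning
    #   the latent conflict into a visible failure
)
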
>
> My proposal: actually come to terms with the reality that
> lower-constraints jobs are a fundamentally broken concept, unless
> someone does the hard work to implement an inverse version sort in
> pip itself. If pretty much all the struggle is with those jobs, then
> dropping them can't hurt because they failed at testing exactly the
> thing they were created for in the first place.
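
(For anyone who has not looked at these jobs closely, a lower-constraints
tox environment is roughly the stanza below; the file names follow the
usual OpenStack convention and the exact deps list varies per repo:

    [testenv:lower-constraints]
    deps =
      -c{toxinidir}/lower-constraints.txt
      -r{toxinidir}/test-requirements.txt
      -r{toxinidir}/requirements.txt

The intent is to test every dependency at its declared minimum, but pip
still resolves anything not pinned in lower-constraints.txt to the
newest matching version, which is exactly why an inverse, "prefer
oldest" version sort in pip would be needed for the job to test what it
claims to.)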

I completely agree with Jeremy's proposal, and sentiment in ironic
seems to be leaning in this direction as well. The bottom line is that
WE as a community have two options: constantly track and increment
l-c, or roll forward with the most recent releases and identify issues
as we go. The original model of pushing g-r updates out seemed far
less painful and gave us visibility into future breakages. Now we're
looking at yet another round where we need to fix CI jobs on every
repository and branch we maintain. This impinges on our ability to
deliver new features and cripples our ability to deliver upstream bug
fixes when we are constantly fighting stable CI breakages.
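
(Concretely, the first option means every minimum bump lands as a pair
of edits like the following, repeated across every affected repository
and branch; the package name and versions are purely illustrative:

    # requirements.txt
    -somelib>=1.8.0
    +somelib>=2.1.0

    # lower-constraints.txt
    -somelib==1.8.0
    +somelib==2.1.0
)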

I guess it is kind of obvious that I'm frustrated with stable CI
breakage, as it has become a giant time sink for me.

-Julia

> --
> Jeremy Stanley


