[openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

Doug Hellmann doug at doughellmann.com
Wed Feb 18 15:14:14 UTC 2015



On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
> 
> > On Feb 18, 2015, at 10:00 AM, Doug Hellmann <doug at doughellmann.com> wrote:
> > 
> > 
> > 
> > On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
> >> On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague <sean at dague.net> wrote:
> >> 
> >>> On 02/16/2015 08:50 PM, Ian Cordasco wrote:
> >>>> On 2/16/15, 16:08, "Sean Dague" <sean at dague.net> wrote:
> >>>> 
> >>>>> On 02/16/2015 02:08 PM, Doug Hellmann wrote:
> >>>>>> 
> >>>>>> 
> >>>>>> On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
> >>>>>>> Hey everyone,
> >>>>>>> 
> >>>>>>> The os-ansible-deployment team was working on updates to add support for
> >>>>>>> the latest version of Juno and noticed some interesting version specifiers
> >>>>>>> introduced into global-requirements.txt in January. It introduced some
> >>>>>>> version specifiers that seem a bit impossible, like the one for requests
> >>>>>>> [1]. There are others that presently equate to pinning the versions of the
> >>>>>>> packages [2, 3, 4].
> >>>>>>> 
> >>>>>>> I understand fully and support the commit because of how it improves
> >>>>>>> pretty much everyone’s quality of life (no fires to put out in the middle
> >>>>>>> of the night on the weekend). I’m also aware that a lot of the downstream
> >>>>>>> redistributors tend to work from global-requirements.txt when determining
> >>>>>>> what to package/support.
> >>>>>>> 
> >>>>>>> It seems to me like there’s room to clean up some of these requirements
> >>>>>>> to make them far more explicit and less misleading to the human eye (even
> >>>>>>> though tooling like pip can easily parse/understand these).
> >>>>>> 
> >>>>>> I think that's the idea. These requirements were generated
> >>>>>> automatically, and fixed issues that were holding back several projects.
> >>>>>> Now we can apply updates to them by hand, to either move the lower
> >>>>>> bounds down (as in the case Ihar pointed out with stevedore) or clean up
> >>>>>> the range definitions. We should not raise the limits of any Oslo
> >>>>>> libraries, and we should consider raising the limits of third-party
> >>>>>> libraries very carefully.
> >>>>>> 
> >>>>>> We should make those changes on one library at a time, so we can see
> >>>>>> what effect each change has on the other requirements.
> >>>>>> 
> >>>>>>> 
> >>>>>>> I also understand that stable-maint may want to occasionally bump the caps
> >>>>>>> to see if newer versions will not break everything, so what is the right
> >>>>>>> way forward? What is the best way to both maintain a stable branch with
> >>>>>>> known working dependencies while helping out those who do so much work for
> >>>>>>> us (downstream and stable-maint) and not permanently pinning to certain
> >>>>>>> working versions?
> >>>>>> 
> >>>>>> Managing the upper bounds is still under discussion. Sean pointed out
> >>>>>> that we might want hard caps so that updates to stable branch were
> >>>>>> explicit. I can see either side of that argument and am still on the
> >>>>>> fence about the best approach.
> >>>>> 
> >>>>> History has shown that it's too much work keeping testing functioning
> >>>>> for stable branches if we leave dependencies uncapped. If particular
> >>>>> people are interested in bumping versions when releases happen, it's
> >>>>> easy enough to do with a requirements proposed update. It will even run
> >>>>> tests that in most cases will prove that it works.
> >>>>> 
> >>>>> It might even be possible for someone to build some automation that did
> >>>>> that as new releases appear on PyPI, so we could have the best of both
> >>>>> worlds. But I think capping is definitely something we want as a
> >>>>> project, and it reflects the way that most deployments will consume this
> >>>>> code.
> >>>>> 
> >>>>>     -Sean
> >>>>> 
> >>>>> --
> >>>>> Sean Dague
> >>>>> http://dague.net
> >>>> 
> >>>> Right. No one is arguing the very clear benefits of all of this.
> >>>> 
> >>>> I’m just wondering whether, for the example version identifiers that I gave
> >>>> in my original message (and others that are very similar), we want to make
> >>>> the strings much simpler for people who tend to work from them (i.e.,
> >>>> downstream re-distributors whose jobs are already difficult enough). I’ve
> >>>> offered to help at least one of them in the past who maintains all of
> >>>> their distro’s packages themselves, but they refused, so I’d like to help
> >>>> them any way possible, especially if any of them chime in that this would
> >>>> be helpful.
> >>> 
> >>> Ok, your links got kind of scrambled. Can you please inline the key
> >>> relevant content in the email next time? I think we all missed the
> >>> original message's intent because the key content was only in footnotes.
> >>> 
> >>> From my point of view, normalization patches would be fine.
> >>> 
> >>> requests>=1.2.1,!=2.4.0,<=2.2.1
> >>> 
> >>> is actually an odd one; it's still there because we're using Trusty-level
> >>> requests in the tests, and my attempts to have devstack not install that
> >>> have thus far failed.
> >>> 
> >>> Things like:
> >>> 
> >>> osprofiler>=0.3.0,<=0.3.0 # Apache-2.0
> >>> 
> >>> Can clearly be normalized to osprofiler==0.3.0 if you want to propose
> >>> the patch manually.
> >>> 
> >> 
> >> global-requirements for stable branches serves two uses:
> >> 
> >> 1. Specify the set of dependencies that we would like to test against.
> >> 2. Serve as a tool for downstream packagers to use when determining what to
> >> package/support.
> >> 
> >> For #1, ideally we would like a set of all dependencies, including
> >> transitive ones, with explicit versions (very similar to the output of
> >> pip freeze). But for #2, the standard requirements file with a range is
> >> preferred. Putting an upper bound on each dependency, instead of using
> >> '==', was a compromise between the two use cases.
> >> 
> >> Going forward I propose we have a requirements.in and a requirements.txt
> >> file. The requirements.in file would contain the range of dependencies,
> >> and requirements.txt would contain the pinned set, and eventually the
> >> pinned set including transitive dependencies.
> >> 
> >> Thoughts?
> > 
> > I'm interested in seeing what that list looks like. I suspect we have
> > some libraries listed in the global requirements now that aren't
> > actually used, and I'm sure there is a long list of transitive
> > dependencies to add to it.
> > 
> > I'm not entirely comfortable with the idea of pinning completely, but I
> > guess it's the best of two bad options. It solves the "we don't have
> > enough people around to manage stable branches" problem in one way (by
> > not letting releases outside our control break our test jobs), but if we
> > don't have people around now to fix things, who is going to keep up with
> > updating that requirements list as new versions of projects come out? We
> > can write a job to automatically detect new packages and test them, but
> > who is going to review patches submitted by that bot? Maybe that's a
> > small enough amount of work that it will be easier to find help.
> > 
> > We've been playing whack-a-mole with issues because we made changes to
> > the way we deal with versions and dependencies without fully
> > understanding the consequences of some of the changes. They looked
> > innocent at first, but because of assumptions in other jobs or other
> > parts of the system they caused problems. So I think we should be
> > careful about making this decision and think about some of the other
> > things that might fall out before pushing more changes up.
> > 
> > For example, if we're syncing requirements into stable branches of
> > projects based on requirements.txt, and that becomes a set of pins
> > instead of a set of ranges with caps, how do we update projects? Should
> > we sync from requirements.in instead of requirements.txt, to allow
> > projects to maintain the ranges in their own requirements files? Or do
> > we want those requirements files to reflect the pins from the global
> > list?
> 
> I'm not sure I fully understand what folks are proposing here with two
> different files, but if you’re putting ``==`` specifiers into the
> install_requires of various projects, then I believe that is going to
> cause a fairly large amount of pain.

As I understand it, Joe's idea is to have an input file
("requirements.in") that uses >=, <=, and != to specify a range of valid
versions. Those ranges would guide packagers about what requirements we
think work. The list would also be "compiled" into a list of
requirements using only == to create a requirements.txt file that would
be used for the tests on our CI systems.
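
To make that concrete, here is a rough sketch of the two files (the entries
are just the ones already quoted in this thread, and the exact pins are
illustrative rather than a proposal):

  # requirements.in -- the ranges we believe work, aimed at packagers
  requests>=1.2.1,!=2.4.0,<=2.2.1
  osprofiler>=0.3.0,<=0.3.0  # Apache-2.0

  # requirements.txt -- "compiled" pins the CI jobs would actually install
  requests==2.2.1
  osprofiler==0.3.0

One way to do the "compiling" step would be something like pip-compile from
the pip-tools project (not something anyone has proposed here, just an
existing tool that reads a requirements.in and writes a fully pinned
requirements.txt, including transitive dependencies), or we could capture a
pip freeze from a known-good job. Either way, we would still need to decide
which of the two files gets synced into the projects' stable branches.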

> 
> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
> 


