[openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

Doug Hellmann doug at doughellmann.com
Fri Feb 20 22:24:52 UTC 2015



On Fri, Feb 20, 2015, at 03:36 PM, Joe Gordon wrote:
> On Fri, Feb 20, 2015 at 12:10 PM, Doug Hellmann <doug at doughellmann.com>
> wrote:
> 
> >
> >
> > On Fri, Feb 20, 2015, at 02:07 PM, Joe Gordon wrote:
> > > On Fri, Feb 20, 2015 at 7:27 AM, Doug Hellmann <doug at doughellmann.com>
> > > wrote:
> > >
> > > >
> > > >
> > > > On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
> > > > > On 02/20/2015 12:26 AM, Adam Gandelman wrote:
> > > > > > It's more than just the naming.  In the original proposal,
> > > > > > requirements.txt is the compiled list of all pinned deps (direct
> > > > > > and transitive), while requirements.in reflects what people will
> > > > > > actually use.  Whatever is in requirements.txt affects the egg's
> > > > > > requires.txt. Instead, we can keep requirements.txt unchanged
> > > > > > and have it still be the canonical list of dependencies, while
> > > > > > requirements.out/requirements.gate/requirements.whatever is an
> > > > > > upstream utility we produce and use to keep things sane on our
> > > > > > slaves.
> > > > > >
> > > > > > Maybe all we need is:
> > > > > >
> > > > > > * update the existing post-merge job on the requirements repo to
> > > > > > produce a requirements.txt (as it does now) as well as the
> > > > > > compiled version.
> > > > > >
> > > > > > * modify devstack in some way with a toggle to have it process
> > > > > > dependencies from the compiled version when necessary
> > > > > >
> > > > > > I'm not sure how the second bit jibes with the existing devstack
> > > > > > installation code, specifically with the libraries from
> > > > > > git-or-master, but we can probably add something to warm the
> > > > > > system with dependencies from the compiled version prior to
> > > > > > calling pip/setup.py/etc.
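[An illustrative sketch of the split Adam describes; the package names,
versions, and the requirements.gate filename are hypothetical, not taken
from the thread:]

```
# requirements.txt -- canonical, loosely versioned dependency list (unchanged)
oslo.config>=1.6.0
requests>=2.2.0

# requirements.gate -- compiled output: every direct and transitive
# dependency pinned to the exact versions tested together in the gate
oslo.config==1.6.1
requests==2.2.1
six==1.9.0
```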
> > > > >
> > > > > It sounds like you are suggesting we take the tool we use to
> > > > > ensure that all of OpenStack is installable together in a unified
> > > > > way, and change its installation so that it doesn't do that any
> > > > > more.
> > > > >
> > > > > Which I'm fine with.
> > > > >
> > > > > But if we are doing that we should just whole hog give up on the
> > > > > idea that OpenStack can be run all together in a single
> > > > > environment, and just double down on the devstack venv work
> > > > > instead.
> > > >
> > > > I don't disagree with your conclusion, but that's not how I read
> > > > what he proposed. :-)
> > > >
> > > >
> > > Sean was reading between the lines here. We are doing all this extra
> > > work to make sure OpenStack can be run together in a single
> > > environment, but it seems like more and more people are moving away
> > > from deploying with that model anyway. Moving to this model would
> > > require a little more than just installing everything in separate
> > > venvs.  We would need to make sure we don't cap oslo libraries etc.
> > > in order to prevent conflicts inside a single
> >
> > Something I've noticed in this discussion: We should start talking about
> > "our" libraries, not just "Oslo" libraries. Oslo isn't the only project
> > managing libraries used by more than one other team any more. It never
> > really was, if you consider the clients, but we have PyCADF and various
> > middleware and other things now, too. We can base our policies on what
> > we've learned from Oslo, but we need to apply them to *all* libraries,
> > no matter which team manages them.
> >
> >
> My mistake, you are correct. I was incorrectly using Oslo as shorthand
> for all OpenStack libraries.

Yeah, I've been doing it, too, but the thing with neutronclient today
made me realize we shouldn't. :-)

> 
> 
> > > service. And we would still need a story around what to do with
> > > stable branches, how do we make sure new libraries don't break
> > > stable branches -- which in turn can break master via grenade and
> > > other jobs.
> >
> > I'm comfortable using simple caps based on minor number increments for
> > stable branches. New libraries won't end up in the stable branches
> > unless they are a patch release. We can set up test jobs for stable
> > branches of libraries to run tempest just like we do against master, but
> > using all stable branch versions of the source files (AFAIK, we don't
> > have a job like that now, but I could be wrong).
> >
> 
> In general I agree, this is the right way forward for OpenStack
> libraries. But as was made clear this week, we will have to be a little
> more careful about what is a valid patch release.

Sure. With caps in place, and the minor version incremented at the start
of each cycle, I think the issues that come up can be minimized.
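To make the capping scheme concrete, here is a minimal sketch of "caps
based on minor number increments". The version handling is hand-rolled
purely for illustration; real tooling (pip, pbr) expresses the same thing
with a requirement specifier such as ">=1.6.0,<1.7.0":

```python
def parse(version):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))


def satisfies_cap(version, lower, upper):
    """True if lower <= version < upper, i.e. the release is inside the cap."""
    return parse(lower) <= parse(version) < parse(upper)


# A stable branch capped at the next minor release: new patch releases of
# the library are allowed in, but the next minor release is blocked.
print(satisfies_cap("1.6.3", "1.6.0", "1.7.0"))  # True: patch release allowed
print(satisfies_cap("1.7.0", "1.6.0", "1.7.0"))  # False: minor bump blocked
```

At the start of each cycle the lower bound and cap would both move up
(e.g. to ">=1.7.0,<1.8.0" on master), so only patch releases ever land on
a stable branch.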

> 
> 
> >
> > I'm less confident that we have identified all of the issues with more
> > limited pins, so I'm reluctant to back that approach for now. That may
> > be an excess of caution on my part, though.
> >
> > >
> > >
> > >
> > > > Joe wanted requirements.txt to be the pinned requirements computed
> > > > from the list of all global requirements that work together.
> > > > Pinning to a single version works in our gate, but makes installing
> > > > everything else together *outside* of the gate harder, because if
> > > > the projects don't all sync all requirements changes at pretty much
> > > > the same time they won't be compatible.
> > > >
> > > > Adam suggested leaving requirements.txt alone and creating a
> > > > different list of pinned requirements that is *only* used in our
> > > > gate. That way we still get the pinning for our gate, and the
> > > > values are computed from the requirements used in the projects, but
> > > > they aren't propagated back out to the projects in a way that
> > > > breaks their PyPI or distro packages.
> > > >
> > > > Another benefit of Adam's proposal is that we would only need to
> > > > keep the list of pins in the global requirements repository, so we
> > > > would have fewer tooling changes to make.
> > > >
> > > > Doug
> > > >
> > > > >
> > > > >       -Sean
> > > > >
> > > > > --
> > > > > Sean Dague
> > > > > http://dague.net
> > > > >
> > > > >
> > > >
> > > > > __________________________________________________________________________
> > > > > OpenStack Development Mailing List (not for usage questions)
> > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


