[openstack-dev] [all][pbr] splitting our deployment vs install dependencies
robertc at robertcollins.net
Sun Apr 12 22:43:57 UTC 2015
Right now we do something that upstream pip considers wrong: we make
our requirements.txt be our install_requires.
Upstream there are two separate concepts.
install_requires, which is meant to document what *must* be
installed to import the package, and should encode any mandatory
version constraints while being as loose as otherwise possible. E.g.
if package A depends on package B version 1.5 or above, it should say
B>=1.5 in A's install_requires. It should not specify maximum
versions except when a maximum is known to be a problem.
Deployment requirements - requirements.txt - which are meant to be
*local to a deployment*, and are commonly expected to specify very
narrow (or even exact fit) versions.
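To make the distinction concrete, here is a small sketch (the version numbers and the toy `satisfies` helper are illustrative, not real packaging code): a loose install_requires constraint like B>=1.5 admits many versions, while an exact deployment pin admits only one.

```python
def satisfies(version, spec):
    """Check a version tuple against a single (operator, bound) constraint."""
    op, bound = spec
    if op == ">=":
        return version >= bound
    if op == "==":
        return version == bound
    raise ValueError("unknown operator: %s" % op)

install_requires = (">=", (1, 5))  # B>=1.5 - loose, suitable for PyPI
deployment_pin = ("==", (1, 7))    # B==1.7 - exact, suitable for a deployment

candidates = [(1, 4), (1, 5), (1, 7), (2, 0)]
print([v for v in candidates if satisfies(v, install_requires)])
# -> [(1, 5), (1, 7), (2, 0)]
print([v for v in candidates if satisfies(v, deployment_pin)])
# -> [(1, 7)]
```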
What pbr, which nearly all (if not all) OpenStack projects use, does
is map the contents of requirements.txt into install_requires. And
then we use the same requirements.txt in our CI to control what's
deployed in our test environment[*], and there we often have tight
constraints like seen here -
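A simplified sketch of that mapping (this is not pbr's actual code, just the shape of what it does): requirements.txt lines are cleaned up and handed straight to setuptools as install_requires, so deployment pins leak into package metadata.

```python
def parse_requirements(lines):
    """Turn requirements.txt lines into an install_requires-style list."""
    reqs = []
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line or line.startswith("-"):  # skip blanks and pip directives
            continue
        reqs.append(line)
    return reqs

lines = [
    "# our dependencies",
    "pbr>=0.6,!=0.7,<1.0",
    "SQLAlchemy>=0.9.7,<=0.9.99",
    "-e git+https://example.invalid/x#egg=x",  # pip-only, not valid metadata
]
print(parse_requirements(lines))
# -> ['pbr>=0.6,!=0.7,<1.0', 'SQLAlchemy>=0.9.7,<=0.9.99']
```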
I'd like to align our patterns with those of upstream, so that we're
not fighting our tooling so much.
Concretely, I think we need to:
- teach pbr to read in install_requires from setup.cfg, not requirements.txt
- when there are requirements in setup.cfg, stop reading requirements.txt
- separate out the global install_requires from the global CI
requirements, and update our syncing code to be aware of this
Then, setup.cfg contains more open requirements suitable for being on
PyPI, requirements.txt is the local CI set we know works - and can be
much more restrictive as needed.
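The end state might look something like this (the section and key names in setup.cfg are illustrative assumptions; the exact layout would be settled in the spec):

```
# setup.cfg - loose, publishable constraints
[options]
install_requires =
    SQLAlchemy>=0.9.7
    pbr>=0.6

# requirements.txt - narrow, known-good CI pins
SQLAlchemy==0.9.8
pbr==0.10.0
```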
Thoughts? If there's broad apathy-or-agreement I can turn this into a
spec for finer coverage of ramifications and corner cases.
Robert Collins <rbtcollins at hp.com>
HP Converged Cloud