[openstack-dev] [all][pbr] splitting our deployment vs install dependencies

Monty Taylor mordred at inaugust.com
Mon Apr 13 00:53:50 UTC 2015


On 04/12/2015 08:01 PM, James Polley wrote:
> On Mon, Apr 13, 2015 at 9:12 AM, Monty Taylor <mordred at inaugust.com> wrote:
> 
>> On 04/12/2015 06:43 PM, Robert Collins wrote:
>>> Right now we do something that upstream pip considers wrong: we make
>>> our requirements.txt be our install_requires.
>>>
>>> Upstream there are two separate concepts.
>>>
>>> install_requires, which is meant to document what *must* be
>>> installed to import the package, and should encode any mandatory
>>> version constraints while being as loose as otherwise possible. E.g.
>>> if package A depends on package B version 1.5 or above, it should say
>>> B>=1.5 in A's install_requires. They should not specify maximum
>>> versions except when that is known to be a problem: they shouldn't
>>> borrow trouble.
>>>
>>> deploy requirements - requirements.txt - which are meant to be *local
>>> to a deployment*, and are commonly expected to specify very narrow (or
>>> even exact fit) versions.
>>
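
For illustration (package names made up), the split described above
looks like this on disk:

    # setup.py for library A - loose, floor-only constraint
    from setuptools import setup

    setup(
        name='A',
        install_requires=['B>=1.5'],
    )

    # requirements.txt for one particular deployment - exact pins
    B==1.5.2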
> 
> That sounds, to me, very similar to a discussion we had a few weeks ago in
> the context of our stable branches.
> 
> In that context, we have two competing requirements. One requirement is
> that our CI system wants very tightly pinned requirements, as do
> downstream CI systems and deployers that want to test and deploy exact
> known-tested versions of things. On the other hand, downstream distributors
> (including OS packagers) need to balance OpenStack's version requirements
> with version requirements from all the other packages in their
> distribution; the tighter the requirements we list are, the harder it is
> for the requirements to work with the requirements of other packages in the
> distribution.

This is not accurate. During distro packaging activities, pbr does not
process dependencies at all. So no matter how we pin things in
OpenStack, it does not make it harder for the distros.
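
(Concretely: a distro build typically runs something like

    python setup.py install \
        --single-version-externally-managed \
        --root "$DESTDIR"

- exact macros vary by distro - and in that mode setuptools does not
resolve or fetch dependencies at all; the distro carries dependency
information in its own package metadata instead.)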

>> tl;dr - I'm mostly in support of what you're saying - but I'm going to
>> bludgeon it some.
>>
>> To be either fair or unfair, depending on how you look at it - some
>> people upstream consider those two to be a pattern, but it is not
>> encoded anywhere except in hidden lore that is shared between secret
>> people. Upstream's tools have bumpkiss for support for this, and if we
>> hadn't drawn a line in the sand encoding SOME behavior there would still
>> be nothing.
>>
>> Nor, btw, is it the right split. It optimizes for the wrong thing.
>>
>> Rust gets it right. There is a Cargo.toml and a Cargo.lock, which are
>> understood by the tooling in a manner similar to what you have
>> described, and it is not just understood but DOCUMENTED that an
>> _application_ should ship with a Cargo.lock and a _library_ should not.
>>
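
For reference, the Cargo split looks roughly like this (crate name and
versions made up):

    # Cargo.toml - a semver range, what the library needs
    [dependencies]
    foo = "1.2"

    # Cargo.lock - the exact resolved version, written by the tool
    [[package]]
    name = "foo"
    version = "1.2.7"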
> 
> This sounds similar to a solution that was proposed for the stable
> branches: a requirements.in with mandatory version constraints while being
> as loose as otherwise possible, which is used to generate a
> requirements.txt which has the "local to deployment" exact versions that
> are used in our CI. The details of the proposal are in
> https://review.openstack.org/#/c/161047/
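
For concreteness, that proposal has the same shape as the pip-tools
workflow, where a loose input file is compiled into an exact pin list
(package and versions below are illustrative):

    # requirements.in - loose
    oslo.config>=1.9.0

    $ pip-compile requirements.in

    # requirements.txt - generated, exact
    oslo.config==1.9.3
    # ...transitive dependencies get pinned here as well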

I disagree with this proposal. It's also not helping any users. Because
of what I said above, there is no flexibility that we lose downstream by
being strict and pedantic with our versions. So, having the "loose" and
the "strict" file just gets us two files and doubles the confusion.
Having a list of what we know the state to be is great. We should give
that to users. If they want to use something other than pip to install,
awesome - the person in charge of curating that content can test the
version interactions in their environment.

What we have in the gate is the thing that produces the artifacts that
someone installing using the pip tool would get. Shipping anything with
those artifacts other than a direct communication of what we tested is
just mean to our end users.
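
(That "direct communication" is nothing fancier than the exact version
list out of a gate run - versions here are illustrative:

    $ pip freeze > requirements.txt
    $ head -2 requirements.txt
    Babel==1.3
    PasteDeploy==1.5.2

- i.e., a freeze of what the gate actually installed.)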

> 
>> Without the library/application distinction, the effort in
>> differentiating is misplaced, I believe - because it's libraries that
>> need flexible depends - and applications where the specific set of
>> depends that were tested in CI become important to consumers.
>>
>>> What pbr, which nearly all (if not all) OpenStack projects use, does
>>> is map the contents of requirements.txt into install_requires. And then
>>> we use the same requirements.txt in our CI to control what's deployed
>>> in our test environment[*], and there we often have tight constraints
>>> like seen here -
>>>
>>> http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n63
>>
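
(For context: the mapping is invisible in the project tree - each
project's setup.py is just the standard pbr trampoline, and the
dependency list lives in requirements.txt:

    # setup.py, as it appears across OpenStack projects
    import setuptools

    setuptools.setup(
        setup_requires=['pbr'],
        pbr=True,
    )

pbr then injects the contents of requirements.txt as install_requires
at build time.)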
>> That is, btw, because that's what the overwhelming majority of consumers
>> assume those files mean. I take "overwhelming majority" from the days
>> when we had files but did not process them automatically and everyone
>> was confused.
>>
>>> I'd like to align our patterns with those of upstream, so that we're
>>> not fighting our tooling so much.
>>
>> Ok. I mean, they don't have a better answer, but if it makes "python"
>> hate us less, sweet.
>>
>>> Concretely, I think we need to:
>>>  - teach pbr to read in install_requires from setup.cfg, not
>>>    requirements.txt
>>>  - when there are requirements in setup.cfg, stop reading
>>>    requirements.txt
>>>  - separate out the global install_requirements from the global CI
>>>    requirements, and update our syncing code to be aware of this
>>>
>>> Then, setup.cfg contains more open requirements suitable for being on
>>> PyPI, requirements.txt is the local CI set we know works - and can be
>>> much more restrictive as needed.
>>>
>>> Thoughts? If there's broad apathy-or-agreement I can turn this into a
>>> spec for fine coverage of ramifications and corner cases.
>>
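
(A hypothetical sketch of the setup.cfg half of that proposal - this is
not syntax the tooling understood at the time; the section and key are
only one possible shape:

    # setup.cfg
    [metadata]
    name = nova

    [options]
    install_requires =
        oslo.config>=1.9.0
        six>=1.9.0

with requirements.txt left to carry the exact, CI-tested pins.)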
>> I'm concerned that it adds a layer of difference that is confusing to
>> people for the sole benefit of pleasing someone else's pedantic worldview.
>>
>> I'm also concerned that dstufft is actively wanting to move towards a
>> world where the build tooling is not needed or used as part of the
>> install pipeline (metadata 2.0) -- so I'm concerned that we're aligning
>> with a pattern that isn't very robust and isn't supported by tooling
>> directly and that we're going to need to change understood usage
>> patterns across a large developer base to chase something that _still_
>> isn't going to be "how people do it".
>>
>> I'm concerned that "how people do it" is a myth not worth chasing.
>>
>> I'm not _opposed_ to making this richer and more useful for people. I
>> just don't know what's broken currently for us.
>>
> 
> To be clear, I don't mean to suggest that the solution proposed in
> https://review.openstack.org/#/c/161047/ is necessarily the correct
> solution to this problem - but I do think that it is illustrative of at
> least one thing that's currently broken for us.

I disagree that anything is broken for us that is not caused by our
inability to remember that distro packaging concerns are not the same as
our concerns, and that the mechanism already exists for distro packagers
to do what they want. Furthermore, it is caused by an insistence that we
need to keep versions "open" for some ephemeral reason such as "upstream
might release a bug fix". Since we all know that "if it's not tested,
it's broken" - any changes to upstream software should be considered
broken until proven otherwise. History over the last 5 years has shown
this to be accurate more than the other thing.

If we pin the stable branches with hard pins of direct and indirect
dependencies, we can have our stable branch artifacts be installable.
That's awesome. If there is a bugfix release or a security update to a
dependent library - someone can propose it. Otherwise, the stable
release should not be moving.
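
(I.e., the stable requirements file would read like a freeze - direct
and indirect dependencies alike, versions illustrative:

    oslo.config==1.9.3      # direct dependency
    six==1.9.0              # direct dependency
    stevedore==1.3.0        # indirect, pinned anyway

and it only moves when someone proposes a specific bump.)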

Monty


