[openstack-dev] [all][pbr] splitting our deployment vs install dependencies

Robert Collins robertc at robertcollins.net
Mon Apr 13 01:02:13 UTC 2015

On 13 April 2015 at 12:01, James Polley <jp at jamezpolley.com> wrote:

> That sounds, to me, very similar to a discussion we had a few weeks ago in
> the context of our stable branches.
> In that context, we have two competing requirements. One requirement is that
> our CI system wants very tightly pinned requirements, as do downstream CI
> systems and deployers that want to test and deploy exact known-tested
> versions of things. On the other hand, downstream distributors (including OS
> packagers) need to balance OpenStack's version requirements with version
> requirements from all the other packages in their distribution; the tighter
> the requirements we list are, the harder it is for the requirements to work
> with the requirements of other packages in the distribution.

They are analogous, yes.
>> Rust gets it right. There is a Cargo.toml and a Cargo.lock, which are
>> understood by the tooling in a manner similar to what you have
>> described, and it is not just understood but DOCUMENTED that an
>> _application_ should ship with a Cargo.lock and a _library_ should not.
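For illustration, the Cargo convention looks roughly like this (package name and version numbers are made up; the lock-file excerpt is abbreviated):

```toml
# Cargo.toml -- hand-written, loose semver constraints (what a library ships)
[dependencies]
serde = "1.0"        # any semver-compatible 1.x release is acceptable

# Cargo.lock -- generated by cargo, exact pins; committed only by applications
# [[package]]
# name = "serde"
# version = "1.0.1"
```

The key point is that the same tooling understands both files, and the documentation tells you which one to commit depending on whether you are a library or an application.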
> This sounds similar to a solution that was proposed for the stable branches:
> a requirements.in with mandatory version constraints while being as loose as
> otherwise possible, which is used to generate a requirements.txt which has
> the "local to deployment" exact versions that are used in our CI. The
> details of the proposal are in https://review.openstack.org/#/c/161047/

That proposal is still under discussion, and seems stuck between the
distro and -infra folk. *if* it ends up doing the transitive thing, I
think that would make a sensible requirements.txt, yes. However, see
the spec for that thread of discussion.
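A minimal sketch of that requirements.in -> requirements.txt split. The helper and the resolved-version map below are hypothetical; in practice a resolver such as pip-compile (from pip-tools) would perform the resolution step against a real index:

```python
# Sketch: turn loose requirements.in-style constraints into the exact pins
# a CI system or deployer would consume as requirements.txt.
# The resolved-version map stands in for an actual dependency resolution.

def pin_requirements(loose_reqs, resolved):
    """Map loose specifier lines to exact '==' pins using resolved versions."""
    pinned = []
    for line in loose_reqs:
        # Crude name extraction: strip any >=, <=, ==, <, > specifier suffix.
        name = line.split(">")[0].split("<")[0].split("=")[0].strip()
        pinned.append(f"{name}=={resolved[name]}")
    return pinned

loose = ["oslo.config>=1.9.0", "six>=1.9.0"]      # requirements.in (loose)
resolved = {"oslo.config": "1.9.3", "six": "1.9.0"}  # as resolved in CI
print(pin_requirements(loose, resolved))
# -> ['oslo.config==1.9.3', 'six==1.9.0']
```

The loose file is what distributors consume; the pinned output is what CI and deployers consume, so the two audiences stop fighting over one file.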
>> I'm also concerned that dstufft is actively wanting to move towards a
>> world where the build tooling is not needed or used as part of the
>> install pipeline (metadata 2.0) -- so I'm concerned that we're aligning
>> with a pattern that isn't very robust and isn't supported by tooling
>> directly and that we're going to need to change understood usage
>> patterns across a large developer based to chase something that _still_
>> isn't going to be "how people do it"

So wheels are already in that space. metadata-2.0 is about bringing
that declarative stuff forward in the pipeline, from binary packages
to source packages. I'm currently using frustration-based development
to bring it in at the start - for developers, in the lead-in to source

So you're concerned - but about what specifically? What goes wrong
with wheels (not wheels with C code)? What's not robust about the
pattern? The cargo package manager you referred to is entirely

James: I don't think the binary distribution stuff is relevant to my
discussion, since I'm talking entirely about 'using-pip' use cases,
whereas dpkg and rpm packages don't use that at all.


Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud
