[openstack-dev] [all] concrete proposal on changing the library testing model with devstack-gate

Robert Collins robertc at robertcollins.net
Sun Sep 28 21:00:57 UTC 2014


On 27 September 2014 10:07, Robert Collins <robertc at robertcollins.net> wrote:

> TripleO has been running pip releases of clients in servers from the
> get go, and I've lost track of the number of bad dependency bugs we've
> encountered. We've hit many more of those than bad releases that crater
> the world [though those have happened].
>
> And yes, alpha dependencies are a mistake - if we depend on it, it's a
> release. Quod erat demonstrandum.

Doug has pointed out to me that this is perhaps a little shallow :).

So let me try and sketch a little deeper.

TripleO's upstream CI looks similar to (if more aggressive than) the
CD deploy process some of our major contributors are using: we take
trunk, wrap it up into a production config and deploy it. There are
two major differences vis-a-vis what HP Cloud or Rackspace are doing.
Firstly, they run off a fork which they sync at high frequency; the
fork exists to deal with the second difference, which is that they run
deployment-specific tests against their tree before deployment, and
when those fail, they fix the issue simultaneously upstream and in the
fork.

So, more or less *every commit* in Nova, Cinder, etc is going into a
production release in a matter of days. From our hands to our users.

What TripleO does - and I don't know the Rackspace or HP Cloud
processes in enough detail to say if this is the same - is drive
everything from requirements.txt files plus what happens when things
break.

So if requirements.txt specifies a thing that's not on PyPI, things
break: we -really- don't like absolute URLs in requirements.txt, and
we -really- don't like having stale requirements in requirements.txt.
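
As a concrete illustration (the package name and version pins here are
hypothetical, not taken from any real requirements.txt), the forms in
question look like:

```
# Breaks 'pip install': an absolute URL rather than a PyPI release
-e git+https://git.openstack.org/openstack/python-cinderclient#egg=python-cinderclient

# Stale: pins a version that lacks an API the code actually calls
python-cinderclient==1.0.0

# What we want: a released, PyPI-hosted lower bound
python-cinderclient>=1.1.0
```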

The current situation where (most) everything we consume is on PyPI is
a massive step forwards. Yay.

But if requirements.txt specifies a library that's not released,
that's a little odd. It's odd because we're saying that each commit of
the API servers is at release quality (but we don't release, because
for these big projects a release is much more than just quality: it's
ecosystem, it's documentation, it's deployment support)...

The other way things can be odd is if requirements.txt is stale: e.g.
say cinderclient adds an API to make life easier for Nova. If Nova
wants to use that API, it could just use it - that will pass the
integrated gate, which tests tip vs tip. But it will fail for any user
who just 'pip install's Nova, and it will fail for developers too. So
I think it's far better to publish that cinderclient to PyPI so that
things do work.
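
A minimal sketch of that skew, using stand-in classes (all names here
are hypothetical, not the real python-cinderclient API): the gate
installs the client from git tip, where the new call exists, while
'pip install' resolves against whatever is on PyPI:

```python
# Models python-cinderclient 1.0, the latest release on PyPI.
class ReleasedCinderclient:
    def create_volume(self, size):
        return {"size": size}

# Models unreleased git tip, which adds a convenience API for Nova.
class TipCinderclient(ReleasedCinderclient):
    def create_volume_from_image(self, size, image):
        return {"size": size, "image": image}

# Nova code written against the tip API.
def nova_boot(client):
    return client.create_volume_from_image(10, "cirros")

nova_boot(TipCinderclient())  # tip vs tip: the integrated gate passes

try:
    nova_boot(ReleasedCinderclient())  # what a pip user actually gets
except AttributeError as err:
    print("pip-installed Nova breaks:", err)
```

Releasing the new cinderclient to PyPI (and bumping requirements.txt)
is what makes both environments see the same API.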

And here is where the discussion about alphas comes in.

Should we publish that cinderclient as a release, or as a pre-release preview?

If we publish it as a pre-release preview, we're saying that we
reserve the right to change the API any way we want. We're not saying
that it's not release quality: the API servers can't be at release
quality if they depend on non-release-quality components.

And yet, we /cannot/ change that new cinderclient API any way we want.
Deployers have already deployed the API server that depends on it;
they are pulling from PyPI: if we push up a breaking cinderclient
alpha-2 or whatever, it will break our deployers.
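
The pip behaviour underneath this is worth spelling out (version
numbers hypothetical; this sketch uses the `packaging` library, which
implements the PEP 440 matching pip applies): pip skips pre-releases
by default, but the moment requirements.txt has to name an alpha -
because that alpha is the only thing containing the API we depend on -
every later alpha satisfies the specifier too, so a breaking alpha-2
flows straight to deployers:

```python
from packaging.specifiers import SpecifierSet

# A plain released lower bound never matches alphas: pip skips
# pre-releases by default, so deployers would be insulated...
assert not SpecifierSet(">=1.1.0").contains("1.2.0a2")

# ...but once requirements.txt names an alpha (which depending on an
# unreleased client API forces), later pre-releases match as well:
spec = SpecifierSet(">=1.1.0a1")
assert spec.contains("1.1.0a2")  # a breaking alpha-2 gets installed
assert spec.contains("1.1.0")    # as does the eventual final release
```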

That's why I say that if we depend on it, it's released: because in
all the ways that matter, the constraints one expects to see on a full
release apply to *every commit* we do within the transitive dependency
tree that is the integrated gate.

And this isn't because we test together - it's because the top-level
drivers for that gate are the quality of the API server trees, which
are CD deployed. The testing strategy doesn't matter so much compared
to that basic starting point.

To summarise, the requirements I believe we have are:
 - every commit of API servers is production quality and release-ready
   [for existing features]
 - we don't break public APIs in projects at the granularity of consumption
   - that's per commit for API servers, and
     per-whatever-pip-grabs-when-installing-api-servers for library
     projects (e.g. per release)
 - requirements.txt should be pip installable at all times
 - and faithfully represent actual dependencies: nothing should break
   if one pip installs e.g. Nova from git

Requirements we *do not have*:
 - new features within a project have to be production quality on day one
   That is, projects are allowed to have code that isn't yet supported,
   though for instance we don't have a good way to annotate that X
   'will be a public API but is not yet'.


So the alpha thing is a mistake, IMO, not because we're pushing to
PyPI, but because we're calling them alphas, which I don't believe
represents the actual state of the code or our ability to alter
things.

Right now we test tip vs tip, so some of this is hidden until it
breaks TripleO [which is more often than we'd like!], but the
incremental, don't-break-things approach is enforced by the gate,
which is good :).

My proposal, FWIW, would be that we just call the things we push up to
PyPI releases. We've already tested the code in anger in the gate.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


