[Openstack] Integration test gating on trunk
James E. Blair
corvus at inaugust.com
Thu Dec 29 22:51:49 UTC 2011
A few weeks ago, I wrote about turning on an integration test gating job
for the stable/diablo branch. That's been running for a while now, and
during that time with help from Anthony Young and Jesse Andrews, we've
been able to address the issues we saw and make the job fairly reliable.
At the last design summit, we agreed that we should gate trunk
development of at least nova and its immediate dependencies on some kind
of integration test. The biggest change this introduces to developer
workflow is how to handle changes that affect more than one project. At
the design summit, it was decided that such changes should be authored
so that the system continues to function as each is merged in order. In
other words, if you need to modify nova and glance, you might make a
change to nova that accepts old and new behaviors from glance, then
change the behavior in glance, and finally remove support for the old
behavior from nova.
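As a sketch of what "accepts old and new behaviors" might look like, here is a hypothetical nova-side helper. The response shapes and field names are invented for illustration; the point is only that nova tolerates both the old and the new glance behavior, so the tree stays functional whichever change has merged so far.

```python
def image_size(image):
    """Return an image's size from a glance-style response dict.

    Hypothetical example: an older glance returns a flat shape
    ({"size": ...}) while a newer one nests it ({"meta": {"size": ...}}).
    Accepting both lets the glance change merge at any time without
    breaking nova.
    """
    if "meta" in image:
        # New-style response (assumed shape, for illustration only).
        return image["meta"]["size"]
    # Old-style response.
    return image["size"]
```

Once the glance change has merged everywhere, a follow-up nova change can drop the old-style branch.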
The job we've been developing uses devstack to set up nova, glance, and
keystone, and then runs the relevant exercise.sh tests. Obviously
that's not a lot of testing, but it does at least ensure that nova can
perform its basic functions, which, again, was an important milestone
identified at the summit. Once tempest is ready for this, we'll start
gating on it as well.
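The shape of the job can be sketched as a small driver script. This is only an illustration assuming devstack's usual entry points (stack.sh to set up the services, exercise.sh for the smoke tests); the real job is defined in the openstack-ci configuration.

```python
import subprocess

# The steps the gate job performs, in order (illustrative sketch).
GATE_STEPS = [
    ["./stack.sh"],      # devstack: bring up nova, glance, and keystone
    ["./exercise.sh"],   # run the exercise.sh smoke tests
]

def run_gate_job(devstack_dir="devstack"):
    """Run each step inside a devstack checkout; stop on first failure."""
    for cmd in GATE_STEPS:
        subprocess.check_call(cmd, cwd=devstack_dir)
```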
At this point, I believe the testing infrastructure is stable enough for
us to turn on gating for all branches of nova, glance, and keystone
(also python-novaclient, devstack, and openstack-ci, which are involved
in the setup and running of the tests).
I would not be surprised if we run into some problems. We might see
transient network errors in the test setup, in which case you can just
re-trigger the job (you can vote "Approved" again), and we can see if
there's some caching or local mirroring we can do to reduce that risk.
We might encounter non-deterministic behavior in the setup and running
of OpenStack, in which case it would be best to treat that as a bug in
devstack or the affected component and improve the software. I think
that kind of problem is the sort of thing that our CI system should be
uncovering, so even though it's annoying if it affects landing a patch
you're working on, I think it's a net positive to the effort overall.
Also, we just might catch real bugs.
Having said that, the Jenkins job has been running in silent mode on
master for several days with few false errors. My feeling from the
design summit was that it was generally understood there would be a
shakedown period, and people are willing to accept some risk and some
extra work for the benefits an integration test gating job will bring.
I think we're at that point, so I'd like to enable this job on Tuesday.