[openstack-dev] [Infra][Nova] Running Nova DB API tests on different backends
Roman Podolyaka
rpodolyaka at mirantis.com
Fri Jun 21 11:40:09 UTC 2013
Hi, all!
In Nova we've got a DB access layer known as "DB API" and tests for it.
Currently, those tests are run only against an in-memory SQLite DB, which is
great for speed, but doesn't allow us to spot backend-specific errors.
There is a blueprint
(https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends)
by Boris Pavlovic, whose goal is to run the DB API tests on all DB backends
(e.g. SQLite, MySQL and PostgreSQL). Recently, I've been working on an
implementation of this BP (https://review.openstack.org/#/c/33236/).
The chosen approach for implementing this is best explained by going
through a list of problems which must be solved:
1. Tests should be executed concurrently by testr.
testr creates a few worker processes, each running a portion of the test
cases. When an in-memory SQLite DB is used for testing, each of those
processes has its own DB in its address space, so no race conditions are
possible. If we used a shared MySQL/PostgreSQL DB, the test suite would fail
due to various race conditions. Thus, we must create a separate DB for each
test-running process and drop them all when the tests end.
The question is where we should create/drop those DBs. There are a few
possible places in our code:
1) setUp()/tearDown() methods of test cases. These are executed for each
test case (there are ~370 tests in test_db_api), so creating a DB, applying
migrations and dropping the DB ~370 times would be far too slow if MySQL or
PostgreSQL were used instead of an in-memory SQLite DB.
2) testr's support for isolated test environments
(https://testrepository.readthedocs.org/en/latest/MANUAL.html#remote-or-isolated-test-environments).
Long story short: we can specify commands to execute before tests are run,
commands to execute after tests have ended, and how to run the tests.
3) module/package-level setUp()/tearDown(), but these are probably
supported only by nose.
So, going with option 2 (a sketch of the testr configuration follows this
list):
1) before tests are run, a few test DBs are created (the number of DBs
created is equal to the concurrency level used);
2) for each test-running process, an env variable containing the
connection string of its DB is set;
3) after all test-running processes have ended, the created DBs are
dropped.
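To make option 2 concrete, here is a rough sketch of the .testr.conf hooks.
The instance_provision/instance_dispose/instance_execute options and the
$INSTANCE_* variables are documented in the testr manual linked above; the
helper script names, the test path and the OS_TEST_DBAPI_CONNECTION variable
name are made up for illustration:

    [DEFAULT]
    test_command=${PYTHON:-python} -m subunit.run discover ./nova/tests $LISTOPT $IDOPTION
    test_id_option=--load-list $IDFILE
    test_list_option=--list
    # Create one DB per worker. The script prints one opaque "instance id"
    # per line (here, a connection string), which testr hands back below.
    instance_provision=tools/test_db_provision.sh $INSTANCE_COUNT
    # Drop all of the DBs created above once the run is over.
    instance_dispose=tools/test_db_dispose.sh $INSTANCE_IDS
    # Run each worker's command with its DB's connection string exported.
    instance_execute=OS_TEST_DBAPI_CONNECTION=$INSTANCE_ID $COMMAND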
2. Test cleanup should be fast.
For an in-memory SQLite DB we use the "create DB/apply migrations/run
test/drop DB" pattern, but that would be too slow for running tests on MySQL
or PostgreSQL.
Another option would be to create a DB only once for each test-running
process, apply DB migrations, and then run each test case within a DB
transaction that is rolled back after the test ends. Combined with something
like PostgreSQL's "fsync = off" option, this approach works really fast (on
my PC it takes ~5 s to run the DB API tests on SQLite and ~10 s on
PostgreSQL).
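For illustration, a minimal sketch of that rollback pattern with SQLAlchemy
(the OS_TEST_DBAPI_CONNECTION variable and the class name are mine, not
necessarily what the patch uses):

    import os
    import unittest

    import sqlalchemy
    from sqlalchemy import orm


    class DbApiTestCase(unittest.TestCase):
        """Run each test inside a transaction and roll it back afterwards."""

        @classmethod
        def setUpClass(cls):
            # One engine per worker process; the connection string is
            # exported by the provisioning step, with an in-memory SQLite
            # fallback for plain local runs.
            cls.engine = sqlalchemy.create_engine(
                os.environ.get('OS_TEST_DBAPI_CONNECTION', 'sqlite://'))
            # DB migrations would be applied here, once per process.

        def setUp(self):
            super(DbApiTestCase, self).setUp()
            self.connection = self.engine.connect()
            self.transaction = self.connection.begin()
            self.session = orm.Session(bind=self.connection)

        def tearDown(self):
            self.session.close()
            # Rolling back undoes everything the test did -- much faster
            # than dropping and recreating the schema.
            self.transaction.rollback()
            self.connection.close()
            super(DbApiTestCase, self).tearDown()

Tests that commit() on their own would additionally need SAVEPOINT handling,
but this shows the idea.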
3. Tests should be easy to run for developers as well as for Jenkins.
The DB API tests are the only tests which need to be run on different
backends; all other test cases can keep running on SQLite. A convenient way
to do this is to create a separate tox env that runs only the DB API tests,
with developers specifying a DB connection string that effectively defines
the backend to be used for running the tests (a sketch follows).
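The tox side could be as small as the sketch below; the env name and test
path are illustrative, and how the connection string gets into the test
environment (setenv, passenv or just an exported variable) is an open
detail:

    [testenv:db]
    commands = python setup.py testr --testr-args='nova.tests.test_db_api'

so a developer would run something like:

    OS_TEST_DBAPI_CONNECTION="postgresql://user:secret@localhost/nova_test" tox -e db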
I'd rather not run those tests 'opportunistically' in py26 and py27 as we
do for migrations, because they are going to be broken for some time (most
of the problems are described here:
https://docs.google.com/a/mirantis.com/document/d/1H82lIxd54CRmy-22oPRUS1sBkEtiguMU8N0whtye-BE/edit).
So it would be really nice to have a separate non-voting gate job.
I would really like to receive some comments from Nova and Infra guys
on whether this is an acceptable approach to running the DB API tests and
how we can improve it.
Thanks,
Roman