[openstack-dev] [Infra][Nova] Running Nova DB API tests on different backends

Roman Podolyaka rpodolyaka at mirantis.com
Fri Jun 21 12:54:13 UTC 2013


Hello Sean, all,

Currently there are ~30 test classes in the DB API tests, containing ~370 test
cases. setUpClass()/tearDownClass() would definitely be an improvement, but
applying all DB schema migrations on MySQL 30 times is still going to take a
long time...

Thanks,
Roman


On Fri, Jun 21, 2013 at 3:02 PM, Sean Dague <sean at dague.net> wrote:

> On 06/21/2013 07:40 AM, Roman Podolyaka wrote:
>
>> Hi, all!
>>
>> In Nova we've got a DB access layer known as "DB API" and tests for it.
>> Currently, those tests are run only for SQLite in-memory DB, which is
>> great for speed, but doesn't allow us to spot backend-specific errors.
>>
>> There is a blueprint by Boris Pavlovic
>> (https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends),
>> whose goal is to run the DB API tests on all DB backends (e.g. SQLite,
>> MySQL and PostgreSQL). Recently, I've been working on the implementation
>> of this BP (https://review.openstack.org/#/c/33236/).
>>
>> The chosen approach for implementing this is best explained by going
>> through a list of problems which must be solved:
>>
>> 1. Tests should be executed concurrently by testr.
>>
>> testr creates a few worker processes, each running a portion of the test
>> cases. When the SQLite in-memory DB is used for testing, each of those
>> processes has its own DB in its address space, so no race conditions
>> are possible. If we used a shared MySQL/PostgreSQL DB, the test suite
>> would fail due to various race conditions. Thus, we must create a
>> separate DB for each of the test running processes and drop them when
>> all tests end.
>>
>> The question is, where we should create/drop those DBs? There are a few
>> possible places in our code:
>>     1) setUp()/tearDown() methods of test cases. These are executed for
>> each test case (there are ~370 tests in test_db_api), so it would be a
>> bad idea to create a DB, apply migrations and drop it 370 times if MySQL
>> or PostgreSQL is used instead of the SQLite in-memory DB;
>>     2) testr supports creation of isolated test environments
>> (https://testrepository.readthedocs.org/en/latest/MANUAL.html#remote-or-isolated-test-environments).
>> Long story short: we can specify commands to execute before tests are
>> run, after tests have ended, and how to run tests;
>>     3) module/package level setUp()/tearDown(), but these are probably
>> supported only by nosetests.
>>
>
> How many classes are we talking about? We're actually going after a
> similar problem in Tempest, where setUp isn't cheap, so Matt Treinish has an
> experimental patch to testr which allows class-level partitioning instead.
> Then you can use setUpClass / tearDownClass for expensive resource setup.
>
>
>  So:
>>     1) before the tests are run, a few test DBs are created (the number
>> of created DBs is equal to the concurrency level used);
>>     2) for each test running process an env variable, containing the
>> connection string of the created DB, is set;
>>     3) after all test running processes have ended, the created DBs are
>> dropped.
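The three steps above can be sketched roughly as follows. This is a hypothetical illustration, not the code under review: SQLite files stand in for the MySQL/PostgreSQL databases the blueprint targets, and the `OS_TEST_DBAPI_CONNECTION` variable name is made up for the example.

```python
import os
import sqlite3
import tempfile

# Hypothetical sketch of per-worker DB provisioning: one database is
# created per test-running process before the run starts, each worker is
# handed its own connection string via an environment variable, and the
# databases are dropped once all workers have ended.

def create_worker_dbs(concurrency, base_dir):
    """Create one DB per test-running process; return connection strings."""
    urls = []
    for worker_id in range(concurrency):
        path = os.path.join(base_dir, 'nova_test_%d.sqlite' % worker_id)
        sqlite3.connect(path).close()   # create the DB; migrations would be applied here
        urls.append('sqlite:///' + path)
    return urls

def drop_worker_dbs(base_dir, concurrency):
    """Drop the per-worker DBs after all test processes have ended."""
    for worker_id in range(concurrency):
        os.remove(os.path.join(base_dir, 'nova_test_%d.sqlite' % worker_id))

base_dir = tempfile.mkdtemp()
urls = create_worker_dbs(4, base_dir)          # concurrency level of 4
# Each worker process would receive its own entry, e.g. worker 0:
os.environ['OS_TEST_DBAPI_CONNECTION'] = urls[0]
```

In the real setup the create/drop commands would be wired into testr's isolated-environment hooks rather than called inline like this.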
>>
>> 2. Tests cleanup should be fast.
>>
>> For SQLite in-memory DB we use "create DB/apply migrations/run test/drop
>> DB" pattern, but that would be too slow for running tests on MySQL or
>> PostgreSQL.
>>
>> Another option would be to create a DB only once for each of the test
>> running processes, apply DB migrations, and then run each test case
>> within a DB transaction which is rolled back after the test ends.
>> Combined with something like the "fsync = off" option of PostgreSQL,
>> this approach works really fast (on my PC it takes ~5 s to run the DB
>> API tests on SQLite and ~10 s on PostgreSQL).
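The rollback-per-test pattern described above can be sketched like this (a hypothetical example: sqlite3 stands in for the real backends, and the raw CREATE TABLE stands in for applying the schema migrations once per process):

```python
import sqlite3
import unittest

# Hypothetical sketch of "transaction per test": the schema is created
# once per test process, each test runs inside an explicit transaction,
# and tearDown() rolls it back so the next test sees a clean database.
_conn = sqlite3.connect(':memory:', isolation_level=None)  # autocommit; txns managed manually
_conn.execute('CREATE TABLE instances (id INTEGER, host TEXT)')

class DBAPITestCase(unittest.TestCase):
    def setUp(self):
        _conn.execute('BEGIN')          # open the per-test transaction

    def tearDown(self):
        _conn.execute('ROLLBACK')       # undo everything the test did

    def test_create_instance(self):
        _conn.execute("INSERT INTO instances VALUES (1, 'node1')")
        count = _conn.execute('SELECT count(*) FROM instances').fetchone()[0]
        self.assertEqual(count, 1)

    def test_previous_insert_rolled_back(self):
        # Runs after test_create_instance (alphabetical order); the insert
        # made there must already have been rolled back.
        count = _conn.execute('SELECT count(*) FROM instances').fetchone()[0]
        self.assertEqual(count, 0)
```

The expensive work (schema creation) happens once, while per-test cleanup is just a ROLLBACK, which is what makes the MySQL/PostgreSQL runs tolerable.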
>>
>
> I like the idea of creating a transaction in setup, and triggering
> rollback in teardown, that's pretty clever.
>
>
>  3. Tests should be easy to run for developers as well as for Jenkins.
>>
>> The DB API tests are the only tests which should be run on different
>> backends; all other test cases can be run on SQLite. A convenient way
>> to do this is to create a separate tox env running only the DB API tests.
>> Developers specify the DB connection string, which effectively defines
>> the backend to be used for running the tests.
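Such a tox env might look roughly like the following fragment. This is a hypothetical sketch, not the actual proposed configuration: the env name, the test module path and the `OS_TEST_DBAPI_CONNECTION` variable name are all illustrative.

```ini
# Hypothetical tox.ini fragment: a dedicated env that runs only the DB API
# tests, with the backend chosen via an environment variable.
[testenv:db-api]
passenv = OS_TEST_DBAPI_CONNECTION
commands = python -m testtools.run nova.tests.test_db_api
```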
>>
>> I'd rather not run those tests 'opportunistically' in py26 and py27 as
>> we do for migrations, because they are going to be broken for some time
>> (most problems are described here:
>> https://docs.google.com/a/mirantis.com/document/d/1H82lIxd54CRmy-22oPRUS1sBkEtiguMU8N0whtye-BE/edit).
>> So it would be really nice to have a separate non-voting gate test.
>>
>
> Separate tox env is the right approach IMHO; that would let it run
> isolated and non-voting until we get to the bottom of the issues. For
> simplicity I'd still use the opportunistic db user / pass, as that will
> mean it could run upstream today.
>
>         -Sean
>
> --
> Sean Dague
> http://dague.net
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>