[openstack-qa] Using Fixtures with Tempest

Matthew Treinish mtreinish at kortar.org
Tue Jun 11 20:47:23 UTC 2013

I've pushed out a WIP prototype of fixtures for images in the glance tests
located here:


However, I have a couple of concerns with the new model. The biggest one is
how teardown works with fixtures. Currently, teardown in the images tests loops
over the list of created images and issues a delete call for each one. After
that loop exits, we loop over the same list again and wait for each image to be
removed (i.e., a GET returns not found). This has the advantage of kicking off
all the async delete operations at once, minimizing how long we wait. With
fixtures, however, this no longer works the same way: each image fixture's
cleanup now runs the delete immediately followed by its wait. That gives us the
worst-case wait, because none of the delete wait times overlap.
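The batched two-phase teardown described above can be sketched roughly like
this; the client and its method names are illustrative stand-ins, not the real
Tempest API:

```python
# Sketch of the two-phase teardown: all deletes first, then all waits.
# FakeImageClient is a hypothetical stand-in that logs calls so the
# delete/wait ordering is visible.

class FakeImageClient:
    """Logs calls so the delete/wait ordering is visible."""

    def __init__(self):
        self.log = []

    def delete_image(self, image_id):
        # The real call returns immediately; deletion finishes asynchronously.
        self.log.append(('delete', image_id))

    def wait_for_image_deletion(self, image_id):
        # The real helper polls GET until the image comes back as not found.
        self.log.append(('wait', image_id))


def teardown_images(client, created_images):
    # Phase 1: fire off every async delete up front...
    for image_id in created_images:
        client.delete_image(image_id)
    # Phase 2: ...then wait, so the delete latencies overlap.
    for image_id in created_images:
        client.wait_for_image_deletion(image_id)


client = FakeImageClient()
teardown_images(client, ['img-1', 'img-2'])
```

A per-image fixture cleanup would instead interleave the log as
delete/wait/delete/wait, which is exactly the worst-case serialization.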

This will be a big issue if we end up going down the fixtures route. We've
already spent some time optimizing this type of problem to minimize waiting,
but I'm not sure the fixtures model lets us optimize the waiting on resource
cleanup in the same way.

On Fri, Jun 07, 2013 at 08:23:56AM -0400, Attila Fazekas wrote:
> What prevents us from making testr more tempest friendly?
> I did not find the how-to-contribute page.
> I am not sure whether adding nosetests features, or features that help
> with developing tests that share resources, would be welcome.
> I do not see how fixtures can help from a speed-up perspective if we
> cannot get faster parallel tempest, and instead just add a feature that
> calls setUpClass and tearDownClass on every setUp and tearDown.

This is true: fixtures don't solve this issue. We still have a shared resource
dependency for these types of tests. This is probably one of the biggest issues
with running testr in parallel on tempest as it sits today (which works for the
most part, albeit slowly, because setUpClass gets run for each test method).

> What is your opinion on adding a configuration option and some tricky
> code to switch tempest into the above mode?
> AFAIK we have 2 test classes that are not independent; they should be
> fixed anyway.

I think there are 2 or 3 ways of getting around this issue. We can condense
the multiple test methods that share resources from setUpClass into one large
test method. Alternatively, we can look into making testrepository recognize
the shared resource dependency, so we don't end up duplicating the expensive
creates in setUpClass. I think using fixtures would probably eventually be
necessary here, although I don't believe there is a way to do this currently
(I may be wrong, though). A third option is to enable testr to split on test
classes instead of test methods.
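The second option could be sketched roughly as caching the expensive creates so
that repeated setups reuse them, approximating what a shared-resource-aware
runner (e.g. testresources) would do. All names here are hypothetical
illustrations, not the real Tempest API:

```python
# Sketch: reuse the expensive image creates across repeated setUp calls
# instead of recreating them every time. FakeImageClient counts creates
# so the savings are observable.

class FakeImageClient:
    def __init__(self):
        self.create_calls = 0

    def create_image(self, name):
        self.create_calls += 1  # the expensive operation we want to avoid
        return {'id': name}


class SharedImages:
    """Creates the image list once and hands the same list to every caller."""

    _images = None

    @classmethod
    def get(cls, client):
        if cls._images is None:
            cls._images = [client.create_image('img-%d' % i)['id']
                           for i in range(3)]
        return cls._images


client = FakeImageClient()
first = SharedImages.get(client)   # pays the creation cost
second = SharedImages.get(client)  # reuses the cached list
```

A real implementation would also need reference counting (or runner support)
to know when the shared images can finally be deleted, which is the part the
current tooling doesn't give us.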

I'm not really familiar enough with the testrepository code to weigh in on how
simple those solutions are. As for condensing the tests with shared resources
into a single large test: I'm not opposed to this, but it means we lose some of
the test granularity we have now. I'd like to consider the other options before
we start doing this for all the tests.

> We have many ways to speed up tempest on a single thread,
> but none of those tricks can be stronger than unlimited
> horizontal scalability.
> A huge amount of raw force beats any single-host optimization.
> ----- Original Message -----
> From: "Matthew Treinish" <mtreinish at kortar.org>
> To: openstack-qa at lists.openstack.org
> Cc: openstack-dev at lists.openstack.org
> Sent: Thursday, June 6, 2013 5:38:58 PM
> Subject: [openstack-qa] Using Fixtures with Tempest
> So I've started working on using fixtures with tempest, as a first step towards
> making tempest more testr friendly. I've started experimenting with the
> images tests in tempest/api/images (the glance api tests). The glance tests
> are a small enough set, and relatively self-contained within tempest, that
> they're a good place to prototype this. I'm trying to figure out the best model
> for what should be created and used in a fixture. But I'm still new to the whole
> fixtures model, so if anyone has any insight into the best way to organize
> things as fixtures, that would be great.
> Right now the image tests have a base image test class that contains a couple of
> helper functions, a manager object (which contains all the client objects), and a
> list of created images. This base class is inherited by api-version-specific
> test classes that just specify the client object and run a check in setUpClass()
> to ensure that the glance api supports that particular api version. These
> api-version-specific classes are then inherited by all the image test classes,
> split by api version. It's probably also worth noting that two of these test
> classes create a number of images in setUpClass() to use for the list operations
> being tested in the individual test methods.
> I was thinking that fixtures could be used in a number of different ways in this
> code, but I was leaning towards starting with just an images fixture that
> contains the list of images. This would keep the same class structure for the
> tests but move the resource tracking into a fixture. But I'm not sure this
> would be the most effective use of fixtures.
> Does anyone have any ideas or insight into how fixtures should be used here?
> Thanks,
> Matt Treinish
> _______________________________________________
> openstack-qa mailing list
> openstack-qa at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-qa
