[openstack-qa] tempest run length - need a gate tag - call for help
Monty Taylor
mordred at inaugust.com
Wed May 15 19:54:36 UTC 2013
On 05/15/2013 12:42 AM, Attila Fazekas wrote:
>
> ----- Original Message -----
>> From: "James E. Blair" <jeblair at openstack.org>
>> To: "All Things QA." <openstack-qa at lists.openstack.org>
>> Sent: Monday, May 13, 2013 8:52:11 PM
>> Subject: Re: [openstack-qa] tempest run length - need a gate tag - call for help
>>
>> Sean Dague <sean at dague.net> writes:
>>
>>> Any assistance would be good.
>>>
>>> Right now we really just need the 'gate' attr added to basically all
>>> the non-skipped methods; we can prune later. Once 'gate' looks to be
>>> roughly full, we can flip check and gate over to use that.
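
(As a hedged aside, not part of the quoted message: adding the 'gate'
attr looked roughly like the sketch below, using nose's attrib plugin,
which Tempest's runs used at the time. The class and test here are
hypothetical stand-ins, not actual Tempest code.)

    from nose.plugins.attrib import attr
    import unittest


    class ServersTest(unittest.TestCase):

        # attrib stores type='gate' on the method, so a gate run can
        # select just the tagged tests with: nosetests -a type=gate
        @attr(type='gate')
        def test_list_servers(self):
            self.assertTrue(True)
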
>>>
>>> I think the long-term approach we're going to need is 3 sets of
>>> tests:
>>>
>>> smoke (< 10 mins)
>>> gate (< 45 mins)
>>> full (everything)
>>>
>>> All projects gate on the 'gate' set.
>>>
>>> Periodic runs of full - daily, more often?
>>>
>>> Tempest check runs full (but not gate); it's advisory.
>>>
>>> Some on-demand facility for people to run full.
>>>
>>> At this point I'm not adding my +2 to any more tests (only approving
>>> fixes to existing tests) until we get the gate tag in, as I don't
>>> think we should be running any longer than we currently are.
>>
>> We discussed this at the summit, and while running fewer tests is
>> certainly one of the things we can do, I don't remember consensus that
>> it was our first priority.
>>
>> We have a number of other things that we can do to reduce run-time that
>> I think we agreed should be a higher priority:
>>
>> A) Parallelize the test runner (move to testr).
> We probably have other options for parallelizing.
>
>> B) Split the run into multiple jobs (XML vs JSON, etc).
>
> The XML and JSON tests could share the same resources in most cases.
> The setUpClass calls are the expensive part; we cannot afford to
> duplicate those steps.
>
> If we allocate the resources via JSON, we need to ensure the allocation
> steps are tested with XML at least once in a test case, not just in a
> fixture.
>
> Just making the XML and JSON tests run in parallel in the same order
> will cause concurrency within the same component.
>
>> C) Focus on flakey tests so that gate resets are less of a factor
>> (reducing sensitivity to runtime).
>>
>> Note that work on both A and B independently facilitates C.
>>
>> I think the general direction we'd like to head is to run _more_ tests,
>> not fewer. Further, I don't think that check jobs and gate jobs should
>> run different tests -- some people will learn to just ignore check jobs
>> and enqueue failing jobs into the gate (as people already ignore
>> non-voting jobs), resulting in more bad code landing. It's also
>> optimizing the wrong pipeline -- developers are more sensitive to slow
>> check jobs than gate jobs.
>>
>> I got the impression that we all agreed that testr was the highest
>> priority for this, and I'd still like to see that land before we move on
>> to functional job splits. Is that effort progressing? What can we do
>> to help?
>
> Yes, it would be good to see the list of steps.
> AFAIK, without any resource sharing, just removing the setUpClass calls
> can make the tests slower, even in parallel.
>
> We might need to modify the test-runner-related components as well.
>
> Maybe creating a tempest/testr branch could help. It could be a fat
> repository, including testr, testresources, and testtools.
>
> Until the testr variant is ready, we can use the master branch.
>
> When the testr repo works, we can merge the changes.
There's actually an implementation plan for landing this incrementally,
without regressing run time and without doing a feature branch (which I
do not think will work).
Step one is to encapsulate the setUpClass() _contents_ into fixtures.
Then, the body of setUpClass should only contain calls to instantiate
the fixtures. If the tests are run via nosetests, as they currently
are, there will be no operational change, and this can be landed in
master.
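
A minimal sketch of that shape (the resource and the names here are
hypothetical stand-ins, not Tempest code):

    import unittest

    import fixtures


    class ServerFixture(fixtures.Fixture):
        """Holds what used to live directly in setUpClass."""

        def setUp(self):
            super(ServerFixture, self).setUp()
            # Hypothetical expensive setup, formerly in setUpClass.
            self.server = {'name': 'test-server'}
            self.addCleanup(self.server.clear)


    class ServersTest(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            # The body now only instantiates and activates fixtures.
            cls.server_fixture = ServerFixture()
            cls.server_fixture.setUp()

        @classmethod
        def tearDownClass(cls):
            cls.server_fixture.cleanUp()

        def test_server_name(self):
            self.assertEqual('test-server',
                             self.server_fixture.server['name'])
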
Step two is to start removing dependencies between the tests. This is
straightforward:

    for test in `testr list-tests` ; do testr run $test ; done

Repeat and fix issues until it works. Each test fix here can be landed
individually, as it's largely going to be a matter of adding addCleanup
calls to some tests and setup commands to others. It will be repetitive
work, but each fix should be straightforward.
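
For instance (a hedged sketch; the registry is a stand-in for whatever
external state a real test mutates), a typical fix makes a test create
its own state and clean it up, rather than relying on an earlier test:

    import unittest

    REGISTRY = []   # stand-in for state a real test would leave behind


    class VolumesTest(unittest.TestCase):

        def test_create_volume(self):
            # Create our own volume and register its cleanup right away,
            # so the test passes when run alone and leaves nothing for
            # other tests to depend on.
            REGISTRY.append('volume-1')
            self.addCleanup(REGISTRY.remove, 'volume-1')
            self.assertIn('volume-1', REGISTRY)
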
Once that runs to completion (note that nothing so far has broken or
changed the nose runs), testr run --parallel will work. At that point we
should measure the run time.
If testr run --parallel is still too slow, then we can go in and wrap
the setup fixtures with testresources and use a ResourcedTestCase so
that we get resource affinity in the test runs.
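
Roughly (a hedged sketch of the testresources shape, not Tempest code;
the resource contents are hypothetical):

    import testresources


    class ServerResource(testresources.TestResource):

        def make(self, dependency_resources):
            # Hypothetical stand-in for the expensive fixture setup.
            return {'name': 'shared-server'}

        def clean(self, resource):
            resource.clear()


    class ServersTest(testresources.ResourcedTestCase):

        # Tests declaring the same resource get affinity: an optimising
        # suite (testresources.OptimisingTestSuite) can group them so the
        # resource is built once and shared while it stays clean.
        resources = [('server', ServerResource())]

        def test_server_name(self):
            self.assertEqual('shared-server', self.server['name'])
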
Does that make sense to folks as a path forward?
Monty