[openstack-qa] Priority brainstorming: my initial thoughts

Attila Fazekas afazekas at redhat.com
Mon Jun 17 20:11:45 UTC 2013


IMHO you can extend testtools.TestCase in tempest.test.BaseTestCase.
It seems like _run_setup and _run_teardown can call setUpClass and tearDownClass at the right time.
It may also be good for benchmarking.
https://pypi.python.org/pypi/python-subunit/0.0.10 has a configurable test suite option.
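A minimal sketch of the class-level fixture pattern being discussed. Since testtools.TestCase derives from the stdlib unittest.TestCase, the example below uses plain unittest; the class names and the placeholder fixture are illustrative, not the real tempest.test.BaseTestCase:

```python
import unittest


class BaseTestCase(unittest.TestCase):
    """Hypothetical base class; the real tempest.test.BaseTestCase differs."""

    @classmethod
    def setUpClass(cls):
        # Expensive class-wide fixtures run once per class, not once per
        # test, which is where the setup-time savings come from.
        cls.shared_client = {"token": "cached"}  # placeholder for a real client

    @classmethod
    def tearDownClass(cls):
        # Release the class-wide fixture when all tests in the class are done.
        cls.shared_client = None


class ExampleTest(BaseTestCase):
    def test_fixture_is_shared(self):
        # Every test in the class sees the same class-level fixture.
        self.assertEqual("cached", self.shared_client["token"])
```

The point of hooking _run_setup/_run_teardown would be to guarantee these classmethods fire at the right moments under the testtools runner as well.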

If you can move the resource listing out of setUpClass and organize the tests that share it into the same process
 (both the xml and json variants), IMHO you can save time.
The isDirty method should be overridden.
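A hedged sketch of that idea: cache the listed resources once per process and reuse the cache across the XML and JSON test classes, rebuilding it only when a test marks it dirty. All names below are illustrative, not tempest or testresources APIs; `is_dirty` just mirrors the role of the isDirty hook mentioned above:

```python
class SharedResourceList:
    """Process-wide cache of listed resources, shared across test classes."""

    _cache = None
    _dirty = True

    @classmethod
    def is_dirty(cls):
        # Analogue of the isDirty hook: decide whether the cached
        # listing must be rebuilt before the next test class uses it.
        return cls._dirty

    @classmethod
    def get(cls, fetch):
        # Rebuild only when dirty; otherwise reuse the cached listing so
        # the XML and JSON variants avoid a second expensive API call.
        if cls.is_dirty() or cls._cache is None:
            cls._cache = fetch()
            cls._dirty = False
        return cls._cache

    @classmethod
    def mark_dirty(cls):
        # A test that mutates the listed resources flags the cache stale.
        cls._dirty = True
```

With this shape, the second test class in the same process gets the cached listing for free instead of re-listing in its own setUpClass.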

This was just a quick look, and many things are from old memory,
so I hope I did not miss anything.

Best Regards,

----- Original Message -----
From: "Matthew Treinish" <mtreinish at kortar.org>
To: "All Things QA." <openstack-qa at lists.openstack.org>
Sent: Monday, June 17, 2013 4:11:35 PM
Subject: Re: [openstack-qa] Priority brainstorming: my initial thoughts

On Sun, Jun 16, 2013 at 04:40:19AM -0400, Attila Fazekas wrote:
> History of the Active VM creation performance.
> https://bugs.launchpad.net/nova/+bug/1016633 <open>
> https://bugs.launchpad.net/nova/+bug/1100446 <fixed>
> ----- Original Message -----
> From: "Christopher Yeoh" <cyeoh at au1.ibm.com>
> To: openstack-qa at lists.openstack.org
> Sent: Sunday, June 16, 2013 9:12:50 AM
> Subject: Re: [openstack-qa] Priority brainstorming: my initial thoughts
> On Fri, 14 Jun 2013 16:13:24 -0400
> Matthew Treinish <mtreinish at kortar.org> wrote:
> > On Fri, Jun 14, 2013 at 09:12:11PM +0200, Giulio Fidente wrote:
> > > On 06/14/2013 10:33 AM, Attila Fazekas wrote:
> > > >- Gate time is continuously increasing and it will
> > > >   if we do not act soon.
> > > 
> > > It's an interesting topic, but I need a little wrap-up on the
> > > situation first. Still, it'll hopefully be useful to other people
> > > also willing to contribute.
> > > 
> > > What are the blockers currently preventing us from switching to
> > > testr and dropping nose?
> > 
> > So testr will technically run today (to test it, use 'testr run', or
> > 'testr run --parallel' to run in parallel), but you'll find that it
> > is slower than just running serially with nose.
On what sort of machine are you finding that running in parallel is
slower than running serially with nose? A few months ago I was seeing a
2x speed-up on a 4-core machine when running with testr in parallel
compared to running with nose serially.

So I'm just running this on my dev box with dual Xeon X5570 (total of 8 cores)
and 48GB of RAM. The nose job finished successfully in just under an hour while
the testr --parallel run took slightly more than an hour.

That said, it's been a while since I tried a full run to do a side-by-side
comparison (which is difficult because testr won't currently pass everything,
while nose will). So I don't know whether something like a few tests hitting
the maximum timeout under testr because of resource contention is what is
causing the longer run time. I can try running it again and get better, more
detailed results.

-Matt Treinish
