[openstack-qa] In connection with speed-up and future design of tempest
Christopher Yeoh
cyeoh at au1.ibm.com
Tue Jan 22 12:15:19 UTC 2013
On Mon, 21 Jan 2013 07:46:44 -0500 (EST)
Attila Fazekas <afazekas at redhat.com> wrote:
>
> 1. testtools
> I have seen various attempts at refactoring tempest to be
> compatible with testr (testrepository, testresources, testtools), but
> I have not seen any detailed plans for doing it, nor have I read
> anything about the longer-term goals.
>
> In
> https://blueprints.launchpad.net/tempest/+spec/speed-up-tempest the
> full specification link points to the blueprint edit page instead of
> a wiki page. Have I missed something?
Not really. I'm not long back from a few weeks off, but I've tried to
start fleshing out what we're doing. As I'm new to tempest, and frankly
new to the various Python test frameworks, I'm putting in more detail
as I work out what is needed. So far I've been adding the details as
work items, but I'll flesh them out in a wiki page as I go if you'd
prefer.
Some of the work, such as finding out which tests can't be run in
parallel, is being done by Ivan and myself simply by attempting to run
them in parallel. Fixing those tests will, I think, in most cases be
independent of the nosetests/testr choice.
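For anyone who wants to reproduce what we're doing, the discovery
process is mostly just running the suite under testr's parallel mode
and seeing what breaks. Roughly (assuming a .testr.conf is already in
place):

    $ testr init
    $ testr run --parallel
    $ testr failing    # list the failures, then look for state conflicts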
> I saw many cool features in these tools. I would probably be the
> first to say "do it yesterday" if I could see exactly how it will
> improve performance, without additional side effects, resource
> starvation, or even deadlock or synchronization issues.
>
> I have only seen testr in a parallelization context, so I assume we
> are considering a major refactoring and switching to testtools just
> because of the parallel execution. Please correct me if I'm wrong.
We're broadly following the work done in nova to convert from
nosetests/unittest to testr/testtools. Perhaps some others could
comment on the reasons for and advantages of this change.
From what I've seen so far I don't think the unittest-to-testtools
work will be intrusive. There are some nose dependencies, but it looks
like they're mainly related to skipping tests, and that translates to
testtools fairly easily. Do you know of others?
In general I don't think it will hurt to be able to run under either
testr or nosetests, and perhaps we will be able to preserve both.
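Supporting both shouldn't require much: testr only needs a .testr.conf
file at the top of the tree, which nose simply ignores. Something along
these lines, I think (a sketch only; the exact test_command will need
confirming against what nova ended up with):

    [DEFAULT]
    test_command=python -m subunit.run discover ./tempest $LISTOPT $IDOPTION
    test_id_option=--load-list $IDFILE
    test_list_option=--list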
> 3. Resource reuse
>
> I have heard many concerns about resource reuse, but I think we
> can point out which test case made a resource dirty, given proper
> logging and a reuse strategy. The OpenStack API provides basic and
> advanced information about resource state, so we can decide whether a
> resource is in good shape before starting a test. If the above
> concept does not work, we have found a real bug, or the API does not
> provide enough information, which is also a bug IMHO.
>
> I think we should try to go the resource-reuse way; it has great
> benefits even with a single tempest thread, but we need to consider a
> lot of things if we want to do it in parallel as well.
There are definitely issues with tests failing in a way that causes
other tests to fail when the suite is re-run. I haven't been chasing
these problems down (yet), but it happens often enough that as a
precaution I restart devstack to clear everything out between test
runs.
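The kind of pre-test state check you describe does seem doable,
though. A rough sketch of the idea in Python (the get_server call and
ACTIVE status follow the compute servers API, but treat the helper
itself as illustrative):

    def server_is_reusable(client, server_id):
        # Ask the compute API for the server's current state before
        # handing it to another test; anything not ACTIVE is suspect
        # and the server should be rebuilt rather than reused.
        resp, server = client.get_server(server_id)
        return server['status'] == 'ACTIVE'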
> Servers can be allocated via the XML, JSON and EC2 API calls;
> however, while XML and JSON will know the same server id, EC2 will
> see a different one. Now the OS API can show the server's EC2 id as
> well, but in other cases (images) we might need to use a "whitebox"
> DB query.
That's an interesting point. I haven't seen the XML and JSON tests
step on each other when I run them in parallel with testr. I wonder if
that's just luck or something to do with the test framework?
> Just saying in a test fixture that it "needs a server" is not
> enough. Sometimes we require a special server. But all servers use
> the same RAM pool and CPU pool, and those are limited by the
> hardware.
>
> Test fixtures that need multiple resources can cause deadlock or
> unexpected failure if we let them start before we can guarantee the
> necessary resources.
There are two sets of flavor tests (admin and non-admin) and the
former can theoretically interfere with the latter (it happened once,
but I wasn't able to replicate it even with high concurrency). We might
be able to rewrite the non-admin ones to avoid this, but it has me
wondering what other problems there may be lurking.
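Declaring resource needs up front is where testresources (from your
first point) might eventually help: each test names the resources it
depends on, and a manager builds or cleans them, so the framework can
see the dependencies before scheduling anything. Very roughly, with
the server helpers below being placeholders rather than real tempest
code:

    import testresources

    def create_test_server():
        # Placeholder: stand-in for however tempest boots a server.
        return {'id': 'fake-id', 'status': 'ACTIVE'}

    def delete_test_server(server):
        # Placeholder teardown.
        pass

    class ServerResource(testresources.TestResource):
        def make(self, dependency_resources):
            # Build (or reuse) a server for the tests that need one.
            return create_test_server()

        def clean(self, resource):
            delete_test_server(resource)

    class FlavorsTest(testresources.ResourcedTestCase):
        # The declared resource is set up before each test and made
        # available as self.server.
        resources = [('server', ServerResource())]

        def test_uses_server(self):
            self.assertIsNotNone(self.server)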
> 3.3 Manual ordering
>
> As you can see, the problem set is big, and I have probably missed
> a lot of other things. I would not be surprised if we could achieve
> better performance more easily by "manual" performance tuning,
> i.e. manual test case ordering (across multiple threads) while
> considering resource reuse.
My gut feeling is that we should avoid manual ordering if we possibly
can. If automatic ordering can get us reasonably close to optimal
performance, it's not worth the headache of frequently having to retune
a manual ordering. When a manual ordering goes wrong because something
has changed, it is likely to be very wrong, and retuning it will
require good knowledge of all the tests, which will get increasingly
difficult as the test suite grows.
> I can even live without a unittest framework, if it has
> significant benefits and someone can show me a very good plan for
> how to do it. Minimal requirements:
> - Report whether everything was OK or not
> - On failure, say exactly and very verbosely what was not OK
> (the first failure might be enough)
> - Ability to skip the failed part and test the rest of the
> system
>
testtools is a superset of unittest in terms of functionality, isn't
it? So we won't be losing anything in the conversion?
btw my immediate reason for going from unittest to testtools is to be
able to remove the nose skip calls and replace them with testtools
skips (which also work under nose), as currently the "raise
nose.SkipTest" style calls get flagged as failures under testr rather
than as skips. I'm (perhaps naively) thinking that the conversion won't
have major side effects, but we will certainly find out before anything
is committed (Ivan is looking at this at the moment).
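To make that concrete, the conversion is basically from something like
this (nose-only, reported as a failure by testr; the class and flag
names are made up for illustration):

    import unittest
    import nose

    quotas_enabled = False  # made-up config flag for illustration

    class QuotasTest(unittest.TestCase):
        def test_quotas(self):
            if not quotas_enabled:
                raise nose.SkipTest("quotas extension not enabled")

to the testtools equivalent, which both runners understand, assuming
the base class derives from testtools.TestCase:

    import testtools

    class QuotasTest(testtools.TestCase):
        def test_quotas(self):
            if not quotas_enabled:
                self.skipTest("quotas extension not enabled")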
> 4. Fear, uncertainty and doubt
>
> It looks like it is not clear to everyone: are we rejecting
> patches because they are not testr/testtools ready, or because they
> are as nose-dependent as the others?
Do you have some examples of nose dependencies that you think will be
introduced? I'm just trying to get a feel for how much of a problem
this will be in practice...
Regards,
Chris
--
cyeoh at au.ibm.com