[openstack-dev] testr help
Clark Boylan
clark.boylan at gmail.com
Mon Mar 10 20:04:26 UTC 2014
On Mon, Mar 10, 2014 at 11:31 AM, Zane Bitter <zbitter at redhat.com> wrote:
> Thanks Clark for this great write-up. However, I think the solution to the
> problem in question is richer commands and better output formatting, not
> discarding information.
>
>
> On 07/03/14 16:30, Clark Boylan wrote:
>>
>> But running tests in parallel introduces some fun problems. Like where
>> do you send logging and stdout output. If you send it to the console
>> it will be interleaved and essentially useless. The solution to this
>> problem (for which I am probably to blame) is to have each test
>> collect the logging, stdout, and stderr associated with that test and
>> attach it to that test's subunit reporting. This way you get all of the
>> output associated with a single test attached to that test and don't
>> have crazy interleaving that needs to be demuxed. The capturing of
>
>
> This is not really a problem unique to parallel test runners. Printing to
> the console is just not a great way to handle stdout/stderr in general
> because it messes up the output of the test runner, and nose does exactly
> the same thing as testr in collecting them - except that nose combines
> messages from the 3 sources and prints the output for human consumption,
> rather than in separate groups surrounded by lots of {{{random braces}}}.
>
Except nose can make them all the same file descriptor and let
everything multiplex together. Nose isn't demuxing arbitrary numbers
of file descriptors from arbitrary numbers of processes.
>
>> this data is toggleable in the test suite using environment variables
>> and is off by default so that when you are not using testr you don't
>> get this behavior [0]. However we seem to have neglected log capture
>> toggles.
>
>
> Oh wow, there is actually a way to get the stdout and stderr? Fantastic! Why
> on earth are these disabled?
>
See above: testr has to deal with multiple writers to stdout and
stderr, and you really don't want them all going to the same place
when using testr (which is why stdout and stderr are captured when
running under testr but not otherwise).
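To make that concrete, the capture pattern in the test base classes
looks roughly like this (a sketch from memory; the OS_STDOUT_CAPTURE
and OS_STDERR_CAPTURE names follow [0], but treat the details as
illustrative rather than authoritative):

    import os

    import fixtures
    import testtools

    class BaseTestCase(testtools.TestCase):
        def setUp(self):
            super(BaseTestCase, self).setUp()
            # Only capture when the env toggle is set, so running the
            # tests without testr still writes to the real streams.
            if os.environ.get('OS_STDOUT_CAPTURE') == '1':
                stdout = self.useFixture(
                    fixtures.StringStream('stdout')).stream
                self.useFixture(
                    fixtures.MonkeyPatch('sys.stdout', stdout))
            if os.environ.get('OS_STDERR_CAPTURE') == '1':
                stderr = self.useFixture(
                    fixtures.StringStream('stderr')).stream
                self.useFixture(
                    fixtures.MonkeyPatch('sys.stderr', stderr))

The StringStream fixture exposes whatever was written as a test
detail, which is what ends up attached to that test's subunit record.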
>
> Please, please, please don't turn off the logging too. That's the only tool
> left for debugging now that stdout goes into a black hole.
>
Logging goes into the same "black hole" today; I am suggesting that
we make it toggleable, just as we have made stdout and stderr
capturing toggleable. FWIW, this isn't really a black hole: it is all
captured on disk, and you can refer back to it at any time (though
the UI around doing this could definitely be better).
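To be clear about where the data lives: testrepository's default file
repository keeps each run's raw subunit stream in numbered files
under .testrepository/ (the layout details here are from memory, so
double-check them before relying on this):

    import os

    repo = '.testrepository'
    # 'next-stream' holds the id the next run will receive, so the
    # most recent completed run is that value minus one.
    with open(os.path.join(repo, 'next-stream')) as f:
        latest = int(f.read().strip()) - 1
    with open(os.path.join(repo, str(latest)), 'rb') as f:
        raw = f.read()  # the same bytes `testr last` reads back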
>
>> Now onto indirectly answering some of the questions you have. If a
>> single unittest is producing thousands of lines of log output, that
>> is probably a bug: the test scope is too large, the logging is too
>> verbose, or both. To work around this we probably need to make the
>> FakeLogger fixture toggleable with a configurable log level. Then
>> you could do something like `OS_LOG_LEVEL=error tox` and avoid
>> getting all of the debug-level logs.
>
>
> Fewer logs is hardly ever what you want when debugging a unit test.
>
I agree, but in general OpenStack does a terrible job at logging,
particularly at the DEBUG level: there is too much noise and not
enough consistency. Being able to set the level you want to capture
at, or to turn capturing off entirely, is a good thing.
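The toggle I am imagining would live in the base class's setUp() and
look something like this (OS_LOG_LEVEL is a proposed name, not a
variable that exists today):

    import logging
    import os

    import fixtures

    # Inside setUp(): capture logging at a configurable level, so
    # `OS_LOG_LEVEL=error tox` skips the DEBUG noise entirely.
    level_name = os.environ.get('OS_LOG_LEVEL', 'DEBUG').upper()
    self.useFixture(fixtures.FakeLogger(
        level=getattr(logging, level_name)))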
>
> I think what John is looking for is a report at the end of each test run
> that just lists the tests that failed instead of all the details (like
> `testr failing --list`), or perhaps the complete list of tests with the
> coloured pass/fail like IIRC nose does. Since testr's killer feature is to
> helpfully store all of the results for later, maybe this output is all you
> need in the first instance (along with a message telling the user what
> command to run to see the full output, of course), or at least that could be
> an option.
>
This is a good idea. It could be done by having testr output the test
info without any subunit attachments.
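In the meantime you can approximate it by post-processing a stream
yourself; the sketch below uses subunit's v1 ProtocolTestCase and
testtools' TestByTestResult (signatures from memory, so verify them)
to print only test ids and statuses, dropping the attachments:

    import sys

    import subunit
    import testtools

    def report(test, status, start_time, stop_time, tags, details):
        # 'details' carries the attachments; ignore it and print
        # just the id and status for each test.
        sys.stdout.write('%s: %s\n' % (test.id(), status))

    subunit.ProtocolTestCase(sys.stdin).run(
        testtools.TestByTestResult(report))

Feed it with something like `testr last --subunit | python report.py`.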
>
>> For examining test results you can `testr load $SUBUNIT_LOG_FILE` then
>> run commands like `testr last`, `testr failing`, `testr slowest`, and
>> `testr stats` against that run (if you want details on the last run
>> you don't need an explicit load). There are also a bunch of tools that
>> come with python-subunit like subunit-filter, subunit2pyunit,
>> subunit-stats, and so on that you can use to do additional processing.
>
>
> It sounds like what John wants to do is pass a filter to something like
> `testr failing` or `testr last` to only report a subset of the results, in
> much the same way as it's possible to pass a filter to `testr` to only run a
> subset of the tests.
>
> BTW, since I'm on the subject, testr would be a lot more
> confidence-inspiring if running `testr failing` immediately after running
> `testr` reported the same number of failures, or indeed if running `testr`
> twice in a row was guaranteed to report the same number of failures
> (assuming that the tests themselves are deterministic). I can't imagine why
> one of the subunit processes reporting a test failure is considered a
> failure in itself (that's what it's supposed to do!), particularly when
> `testr failing` manages to filter them out OK.
>
IIRC, the test runner exiting non-zero is considered a failure in
itself to handle cases where a test runner dies a horrible death
without outputting any failed subunit data. It could probably be
cleaner, but this way the failure is preserved in the subunit log.
>
> (I understand that some of the things mentioned above may already have been
> improved since the latest 0.0.18 release. I can't actually find the repo at
> the moment to check, because it's not linked from PyPI or Launchpad and
> 'testrepository' turns out to be an extremely common name for things on the
> Internet.)
>
https://launchpad.net/testrepository is linked from PyPI...
Clark