[openstack-dev] testr help

Clark Boylan clark.boylan at gmail.com
Fri Mar 7 21:30:53 UTC 2014


On Fri, Mar 7, 2014 at 12:56 PM, John Dennis <jdennis at redhat.com> wrote:
> I've read the following documents as well as the doc for subunit and
> testtools but I'm still missing some big picture usage.
>
> https://wiki.openstack.org/wiki/Testr
> https://testrepository.readthedocs.org/en/latest/index.html
>
> The biggest problem seems to be whenever tests are listed I get
> thousands of lines of logging information and any information about the
> test is obscured by the enormous volume of logging data.
>
> From what I can figure out the files in .testrepository are in subunit
> version 1 protocol format. It seems to be a set of key/value pairs where
> the key is the first word on the line followed by a colon. It seems like
> one should be able to list just certain keys.
>
> Question: How do you list just the failing tests? I don't want to see
> the contents of the logging data stored under the pythonlogging: key.
> Ideally I'd like to see the name of the test, what the failure was, and
> possibly the associated stacktrace. Should be simple right? But I can't
> figure it out.
>
> Question: Suppose I'm debugging why a test failed. This is the one time
> I actually do want to see the pythonlogging data, but only for exactly
> the test I'm interested in. How does one do that?
>
> Question: Is there any simple how-to's or any cohesive documentation?
> I've read everything I can find but really simple tasks seem to elude
> me. The suite is composed of implementations from testtools, subunit and
> testr, each of which has decent doc but it's not always clear how these
> pieces fit together into one piece of functionality. OpenStack seems to
> have added something into the mix with the capture of the logging
> stream, something which is not covered anywhere in the testtools,
> subunit, nor testr doc that I can find. Any hints, suggestions or
> pointers would be deeply appreciated.
>
> --
> John
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I am going to do my best here. Testtools is essentially a replacement
for unittest, similar to unittest2 but with saner Python 3 support,
and it fits into the testrepository + subunit ecosystem. Subunit wraps
your test classes (unittest/testtools/etc.) and emits test results in
the subunit protocol, which is a streaming test result protocol.
Subunit is useful because it allows you to run multiple test runners
in parallel and aggregate their results in a central test runner.
This is what testr does. Testr will list your tests using discovery,
partition the list of tests into N sets where N is the number of CPUs
you have (at least that is how we use it; N can be arbitrary if you
choose), run N test runners each given a different test partition,
and collect the results in the .testrepository dir.
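The partitioning step above can be sketched roughly like this (a
simplified round-robin illustration; testr's real scheduler is smarter
and also balances partitions by historical test runtimes):

```python
import os

def partition_tests(test_ids, n=None):
    """Split a flat list of test ids into n roughly equal sets,
    round-robin style, the way a parallel runner might hand work to
    N worker test runners. Simplified sketch only."""
    n = n or os.cpu_count() or 1
    partitions = [[] for _ in range(n)]
    for i, test_id in enumerate(test_ids):
        partitions[i % n].append(test_id)
    return partitions

# Hypothetical test ids, just for illustration.
tests = ["t.Foo.test_a", "t.Foo.test_b", "t.Bar.test_c", "t.Bar.test_d"]
print(partition_tests(tests, n=2))
```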

This behavior is great because it allows us to run tests in parallel
(even across different machines if we need to) with live streaming
updates. Tests take less wall-clock time while we still get the
coverage we want. It will eventually allow us to do more optimistic
branching and reporting in Zuul, because we can learn immediately when
the first test fails, report that, then continue running the remaining
tests to give developers the full picture.

But running tests in parallel introduces some fun problems, like where
to send logging and stdout output. If you send it to the console it
will be interleaved and essentially useless. The solution to this
problem (for which I am probably to blame) is to have each test
collect the logging, stdout, and stderr associated with that test and
attach them to that test's subunit report. This way all of the output
associated with a single test stays attached to that test, and there
is no crazy interleaving that needs to be demuxed. The capturing of
this data is toggleable in the test suite via environment variables
and is off by default, so that when you are not using testr you don't
get this behavior [0]. However, we seem to have neglected a toggle for
log capture.
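The per-test capture idea can be sketched in plain stdlib Python like
this (a minimal illustration only; the real OpenStack code uses a
fixtures.Fixture, see nova/test.py in [0], and also captures stdout
and stderr):

```python
import io
import logging

class CapturedLogs:
    """Minimal sketch of per-test log capture: attach a logging handler
    for the duration of one test so that test's log output is kept with
    the test instead of interleaving on a shared console."""
    def __enter__(self):
        self.buffer = io.StringIO()
        self.handler = logging.StreamHandler(self.buffer)
        logging.getLogger().addHandler(self.handler)
        return self
    def __exit__(self, *exc_info):
        logging.getLogger().removeHandler(self.handler)
        return False
    def output(self):
        return self.buffer.getvalue()

logging.getLogger().setLevel(logging.DEBUG)
with CapturedLogs() as logs:
    # Only this buffer, not the console, receives the message.
    logging.getLogger("demo").debug("only visible in this test's capture")
print(logs.output())
```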

Now, on to indirectly answering some of your questions. If a single
unittest is producing thousands of lines of log output, that is
probably a bug: the test scope is too large, the logging is too
verbose, or both. To work around this we probably need to make the
fakeLogger fixture toggleable with a configurable log level. Then you
could do something like `OS_LOG_LEVEL=error tox` and avoid getting all
of the debug-level logs.
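The proposed toggle could look something like this (a sketch only:
OS_LOG_LEVEL is the hypothetical knob suggested above, not an existing
setting at the time of writing):

```python
import logging
import os

def configure_capture_level(default="DEBUG"):
    """Sketch of the proposed toggle: read a hypothetical OS_LOG_LEVEL
    environment variable and return the logging level the log-capture
    fixture would use, falling back to DEBUG for unknown names."""
    name = os.environ.get("OS_LOG_LEVEL", default).upper()
    return getattr(logging, name, logging.DEBUG)

# e.g. `OS_LOG_LEVEL=error tox` would make the capture skip debug noise.
os.environ["OS_LOG_LEVEL"] = "error"
print(logging.getLevelName(configure_capture_level()))  # ERROR
```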

For examining test results you can `testr load $SUBUNIT_LOG_FILE`,
then run commands like `testr last`, `testr failing`, `testr slowest`,
and `testr stats` against that run (for details on the last run you
don't need an explicit load). There are also a bunch of tools that
come with python-subunit, like subunit-filter, subunit2pyunit, and
subunit-stats, that you can use for additional processing.
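To give a feel for what those tools are working with: a subunit v1
stream is line-oriented, with status keywords like `test:`, `success:`,
and `failure:` prefixing test ids. A crude stdlib sketch of pulling
just the failed test ids out of such a stream (the test ids below are
made up, and `testr failing` or subunit-filter are what you would
actually use; they are far more robust than this):

```python
def failing_tests(stream_lines):
    """Return the ids of failed/errored tests from subunit v1 status
    lines, ignoring everything else (including pythonlogging
    attachments). Crude sketch, not a real subunit parser."""
    failures = []
    for line in stream_lines:
        if line.startswith(("failure:", "error:")):
            # Drop the status keyword and any trailing "[" that opens
            # the multi-line details block.
            test_id = line.split(":", 1)[1].strip().rstrip("[ ").strip()
            failures.append(test_id)
    return failures

sample = [
    "test: demo.FooTest.test_ok",
    "success: demo.FooTest.test_ok",
    "test: demo.FooTest.test_broken",
    "failure: demo.FooTest.test_broken [",
    "Traceback (most recent call last):",
    "]",
]
print(failing_tests(sample))  # ['demo.FooTest.test_broken']
```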

[0] https://git.openstack.org/cgit/openstack/nova/tree/nova/test.py#n231

Hope this helps,
Clark


