[openstack-dev] [QA][infra][all] Measuring code coverage in integration tests
Andrea Frittoli
andrea.frittoli at gmail.com
Thu Oct 6 14:34:13 UTC 2016
The difficulty with integration testing is that the services under test run
in processes separate from the test one(s).
There is no obvious / existing mechanism to collect coverage data in this
case. Several cycles back there used to be a backdoor built into nova to
enable coverage data collection during integration testing, but it was
removed long ago.
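For reference, coverage.py does document a hook for measuring subprocesses,
which could in principle be wired into the service nodes: point
COVERAGE_PROCESS_START at a .coveragerc and arrange for
coverage.process_startup() to run at interpreter startup. A minimal sketch,
assuming a node where you can drop a sitecustomize.py onto the services'
PYTHONPATH:

    # sitecustomize.py -- imported automatically at interpreter startup.
    # Starts measurement in every service process, provided the environment
    # variable COVERAGE_PROCESS_START points at a .coveragerc that sets
    # parallel = True (so each process writes its own .coverage.* file).
    import coverage

    coverage.process_startup()

The per-process data files can then be combined into a single report.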
andrea
On Thu, Sep 29, 2016 at 12:12 PM Assaf Muller <assaf at redhat.com> wrote:
> On Thu, Sep 29, 2016 at 5:27 AM, milanisko k <vetrisko at gmail.com> wrote:
>
>
>
> On Tue, Sep 27, 2016 at 8:12 PM, Assaf Muller <assaf at redhat.com> wrote:
>
> On Tue, Sep 27, 2016 at 2:05 PM, Assaf Muller <assaf at redhat.com> wrote:
>
>
>
> On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
> tnurlygayanov at mirantis.com> wrote:
>
> Hi milan,
>
> we have measured test coverage for OpenStack components with the
> coverage.py tool [1]. It is a very easy tool to use, and it can measure
> coverage by lines of code, among other metrics.
>
> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>
>
> coverage also supports aggregating results from multiple runs, so you can
> measure results from combinations such as:
>
> 1) Unit tests
> 2) Functional tests
> 3) Integration tests
> 4) 1 + 2
> 5) 1 + 2 + 3
>
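> For instance, a minimal sketch, assuming coverage.py 4.x and data files
> from the separate runs saved side by side (the paths are hypothetical):
>
>     # Combine the coverage data files from separate runs into one
>     # merged data set, then report on the union of covered lines.
>     import coverage
>
>     cov = coverage.Coverage()
>     cov.combine(["run-unit/.coverage", "run-functional/.coverage"])
>     cov.save()
>     cov.report()
>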
> To my eyes 3 and 4 make the most sense. Unit and functional tests are
> supposed to give you low-level coverage, keeping in mind that 'functional
> tests' is an overloaded term and actually means something different in
> every community. Integration tests aren't about code coverage, they're
> about user-facing flows, so it'd be interesting to measure coverage
> from integration tests,
>
>
> Sorry, replace integration with unit + functional.
>
>
> then comparing it with the coverage coming from integration tests, and
> getting the set difference between the two: that's the area that needs
> more unit and functional tests.
>
>
> To reiterate:
>
> Run coverage from integration tests, let this be c
> Run coverage from unit and functional tests, let this be c'
>
> Let diff = c \ c'
>
> 'diff' is where you're missing unit and functional test coverage.
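>
> A rough sketch of computing that difference with the CoverageData API in
> coverage.py 4.x (the data file names are hypothetical):
>
>     import coverage
>
>     def covered_lines(path):
>         """Map each measured file to the set of line numbers it covered."""
>         data = coverage.CoverageData()
>         data.read_file(path)
>         return {f: set(data.lines(f) or []) for f in data.measured_files()}
>
>     c = covered_lines(".coverage.integration")
>     c_prime = covered_lines(".coverage.unit_functional")
>
>     # diff = c \ c': lines hit by integration tests but never by
>     # unit/functional ones, i.e. where low-level coverage is missing
>     for fname, lines in sorted(c.items()):
>         diff = lines - c_prime.get(fname, set())
>         if diff:
>             print(fname, sorted(diff))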
>
>
> Assaf, the tool I linked is a monkey-patched coverage.py, but the
> collector stores the stats in Redis --- that gives the same cumulative
> collection.
> Is there any interest/effort to collect coverage stats from selected jobs
> in CI, no matter the tool used?
>
>
> Some projects already collect coverage stats on their post-merge queue:
>
> http://logs.openstack.org/61/61af70a734b99e61e751cfb494ddc93a85eec394/post/nova-coverage-db-ubuntu-xenial/55210aa/
>
> It's invoked with 'tox -e cover', which you define in your project's
> tox.ini file; I imagine most projects, if not all, have it set up to
> gather coverage from a unit test run.
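>
> The stanza varies per project, but a rough tox.ini sketch (the
> testr --coverage invocation was one common pattern at the time, not a
> universal one) looks something like:
>
>     [testenv:cover]
>     commands = python setup.py testr --coverage --testr-args='{posargs}'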
>
> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
> jordan.pittier at scality.com> wrote:
>
> Hi,
>
> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k <vetrisko at gmail.com> wrote:
>
> Dear Stackers,
> I'd like to gather some overview on the $Sub: is there some infrastructure
> in place to gather such stats? Are there any groups interested in it? Any
> plans to establish such infrastructure?
>
> I am working on such a tool, with mixed results so far. Here's my
> approach, taking Nova as an example:
>
> 1) Print all the routes known to nova (available as a python-routes
> object: nova.api.openstack.compute.APIRouterV21())
> 2) "Normalize" the Nova routes
> 3) Take the logs produced by Tempest during a tempest run (in
> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
> 8774)
> 4) "Normalize" the tested-by-tempest Nova routes.
> 5) Compare the two sets of routes
> 6) ????
> 7) Profit !!
>
> So the hard part is obviously normalizing the URLs. I am currently using
> a ton of regexes.... :) That's not fun.
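>
> To give a flavour, a rough sketch of steps 1) through 5), assuming the
> router exposes its routes.Mapper as .map, and using a hypothetical
> urls_from_tempest_log() helper for the grepping:
>
>     import re
>
>     UUID = re.compile(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}"
>                       r"-[0-9a-f]{4}-[0-9a-f]{12}")
>
>     def normalize(path):
>         """Collapse identifiers so declared and observed routes match."""
>         path = UUID.sub("{id}", path)            # concrete IDs from logs
>         path = re.sub(r"{[^}]+}", "{id}", path)  # {name} route params
>         path = re.sub(r":\w+", "{id}", path)     # :name route params
>         return path.rstrip("/")
>
>     # 1) + 2) declared routes; each entry in the mapper's .matchlist
>     # carries a .routepath template string
>     # from nova.api.openstack.compute import APIRouterV21
>     # declared = {normalize(r.routepath)
>     #             for r in APIRouterV21().map.matchlist}
>
>     # 3) + 4) observed routes, grepped from tempest.txt for port-8774 URLs
>     # observed = {normalize(u) for u in urls_from_tempest_log("tempest.txt")}
>
>     # 5) routes Tempest never exercised:
>     # print(sorted(declared - observed))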
>
> I'll let you guys know if I have something to show.
>
> I think there's real interest in the topic (it comes up every year or
> so), but no definitive answer/tool.
>
> Cheers,
> Jordan
>
> --
>
> Timur,
> Senior QA Manager
> OpenStack Projects
> Mirantis Inc
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev