[openstack-qa] Documentation of tempest
Attila Fazekas
afazekas at redhat.com
Fri May 17 11:01:44 UTC 2013
I like the ACC model, but it seems like too high-level an overview to start with;
it could be a good continuation, though.
A simple model aimed mainly at API testing should be good for now.
Initial considerations:
* We have services or components, and sub-parts of these at the API level
(a 3D matrix mapped to 2D)
* We have operations to test
* We have things to verify
* I would like to know whether something is missing or not covered properly
I have constructed a matrix, then simplified it.
Something like the table below, rendered as a colored HTML table, seems OK.
------------------------------------------------------------------------
| component | CRUD | list | associations | quota | security | misc |
------------------------------------------------------------------------
| nova-server | | | | | | |
------------------------------------------------------------------------
| swift-object | | | | | | |
------------------------------------------------------------------------
| keystone-user| | | | | | |
------------------------------------------------------------------------
| misc | | | | | | |
------------------------------------------------------------------------
CRUD: create, read, update, delete
list: list, list detailed, filtering, pagination
misc: action, bulk-action
security: tenant-isolation, admin-denied
associations: member, attach, assign
* In every intersection we should have multiple links to the source
(GitHub, automatically generated links).
* One test case can be in multiple intersections
* every row and column should have a short description
* every intersection should have a short description
* based on the descriptions, anyone can color the intersections:
1. black: does not make sense in this context
2. red: completely missing
3. yellow: partially completed
4. light green: good progress
5. green: OK
6. white: unknown
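The coloring scheme above could be rendered mechanically. Below is a minimal
sketch of generating such a colored HTML table; the component names, column
names, status keys, and color mapping are only illustrative assumptions, not
an agreed convention.

```python
# Minimal sketch: render the coverage matrix as a colored HTML table.
# The components, columns, statuses and the color mapping below are
# illustrative assumptions, not an agreed tempest convention.

COLORS = {
    "n/a": "black", "missing": "red", "partial": "yellow",
    "progress": "lightgreen", "ok": "green", "unknown": "white",
}

COLUMNS = ["CRUD", "list", "associations", "quota", "security", "misc"]

def render_matrix(matrix):
    """matrix: {component: {column: status}} -> HTML table string."""
    rows = ["<table border='1'>",
            "<tr><th>component</th>" +
            "".join(f"<th>{c}</th>" for c in COLUMNS) + "</tr>"]
    for component, cells in matrix.items():
        # Cells without an explicit status default to "unknown" (white).
        tds = "".join(
            f"<td style='background:{COLORS[cells.get(c, 'unknown')]}'></td>"
            for c in COLUMNS)
        rows.append(f"<tr><td>{component}</td>{tds}</tr>")
    rows.append("</table>")
    return "\n".join(rows)

if __name__ == "__main__":
    print(render_matrix({"nova-server": {"CRUD": "ok", "quota": "partial"}}))
```

The input dictionary could itself be generated from test-case metadata, so
only the final status judgment would remain manual work.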
We could have tables for different API versions or for the CLI.
In theory, if we add enough metadata (attributes) on the test cases, they can
be mapped into this kind of table automatically. The only paperwork will be:
* create a short description of what we need to verify
(likely the column names will say it on their own)
* Color the table
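The metadata idea could be sketched as a simple decorator that attaches
component/operation attributes to test functions, which a tool can then
collect into the matrix. The attribute names (`coverage_component`,
`coverage_ops`) and the decorator itself are invented here purely for
illustration; they are not existing tempest machinery.

```python
# Hypothetical sketch: attach coverage metadata to test cases so they
# can be collected into a component/operation matrix automatically.
# The attribute names below are invented for illustration only.

def coverage(component, *ops):
    """Tag a test with the component and operation columns it covers."""
    def decorator(func):
        func.coverage_component = component
        func.coverage_ops = ops
        return func
    return decorator

@coverage("nova-server", "CRUD", "list")
def test_create_and_list_servers():
    pass  # a real test body would go here

def collect(tests):
    """Build {component: {op: [test names]}} from tagged tests."""
    matrix = {}
    for t in tests:
        comp = getattr(t, "coverage_component", None)
        if comp is None:
            continue  # untagged tests are skipped, not guessed at
        for op in t.coverage_ops:
            matrix.setdefault(comp, {}).setdefault(op, []).append(t.__name__)
    return matrix
```

Since one test can carry several operation tags, this naturally lets a single
test case appear in multiple intersections, as described above.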
Possible addition:
* Links to additional resources like blueprints, bugs, etc.
The majority of the existing test cases probably fit into this model.
----- Original Message -----
From: "Daryl Walleck" <daryl.walleck at RACKSPACE.COM>
To: "All Things QA." <openstack-qa at lists.openstack.org>
Sent: Tuesday, May 14, 2013 6:07:01 PM
Subject: Re: [openstack-qa] Documentation of tempest
From my experience, doing a plain text test plan for applications with the complexity of Nova doesn't scale well. I tried that, but without ways to intelligently sort/search/group test cases, it became unmanageable when I actually needed to pull data from it. I've been tinkering with a test management tool based on Google's ACC methodology (http://code.google.com/p/test-analytics/wiki/AccExplained) that's solved some of my issues with managing test cases. It's definitely not perfect, but I'd be open to sharing what I've worked on and how I've broken out my test cases.
Daryl
________________________________________
From: Martina Kollarova [mkollaro at redhat.com]
Sent: Tuesday, May 14, 2013 10:10 AM
To: All Things QA.
Subject: Re: [openstack-qa] Documentation of tempest
I think we need to create/generate something like a test plan. Tests
that are not yet written could be proposed in some .rst document or in
email or on a wiki. The existing test cases could be documented by
adding test descriptions into docstrings and then generating a doc page
from that (using sphinx or some other tool).
Martina
On Tue 14 May 2013 05:03:24 PM CEST, Attila Fazekas wrote:
> Hi All,
>
> The functions and methods used by the test cases are only partially
> documented, which makes it difficult to understand the source
> for anyone who would like to start contributing.
>
> We should improve the Python docstrings and generate and publish a
> tempest "API" documentation.
>
> Is it doable?
>
> The second thing is, that it is very difficult to follow what is being
> tested and what is missing. For example:
> * Do we cover CRUD/REST operations for a certain "domain" and to what extent?
> * What kind of verification do we have about whether the correct thing
> happened on the server side?
> * What kind of features are covered?
>
> As blueprints are supposed to be assigned to someone and targeted at a milestone,
> there is currently no place for a 'these areas are not covered yet' list.
> I am looking for recommendations on how (a wiki? an rst file in tempest?)
> we can maintain something that will help us answer those questions.
>
>
> BTW: We very rarely link to a detailed description from the blueprints.
> IMHO we should create more detailed plans/descriptions
> for future high-impact changes.
>
> Best Regards,
> Attila
>
> _______________________________________________
> openstack-qa mailing list
> openstack-qa at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-qa
--
Martina Kollarova
RedHat OpenStack Storage QE, Brno