<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 03/03/15 00:56, Chris Dent wrote:<br>
</div>
<blockquote
cite="mid:alpine.OSX.2.00.1503021132410.57914@crank.local"
type="cite">
<br>
I (and a few others) have been using gabbi[1] for a couple of
months now
<br>
and it has proven very useful and evolved a bit so I thought it
would be
<br>
worthwhile to followup my original message and give an update.
<br>
<br>
Some recent reviews[2] give a sample of how it can be used to
validate
<br>
an existing API as well as search for less than perfect HTTP
behavior
<br>
(e.g. sending a 404 when a 405 would be correct).
<br>
<br>
Regular use has led to some important changes:
<br>
<br>
* It can now be integrated with other tox targets so it can run
<br>
alongside other functional tests.
<br>
* Individual tests can be xfailed and skipped. An entire YAML test
<br>
file can be skipped.
<br>
* For those APIs which provide insufficient hypermedia support,
the
<br>
ability to inspect and reference the prior test and use template
<br>
variables in the current request has been expanded (with support
for
<br>
environment variables pending a merge).
<br>
<br>
My original motivation for creating the tool was to make it easier
to
<br>
learn APIs by causing a body of readable YAML files to exist. This
<br>
remains important but what I've found is that writing the tests is
<br>
itself an incredible tool. Not only is it very easy to write tests
<br>
(throw some stuff at a URL and see what happens) and find (many)
bugs
<br>
as a result, the exploratory nature of test writing drives a
<br>
learning process.
<br>
<br>
You'll note that the reviews below are just the YAML files. That's
<br>
because the test loading and fixture python code is already
merged.
<br>
Adding tests is just a matter of adding more YAML. An interesting
<br>
trick is to run a small segment of the gabbi tests in a project
(e.g.
<br>
just one file that represents one type of resource) while
producing
<br>
coverage data. Reviewing the coverage of just the controller for
that
<br>
resource can help drive test creation and separation.
<br>
<br>
[1] <a class="moz-txt-link-freetext" href="http://gabbi.readthedocs.org/en/latest/">http://gabbi.readthedocs.org/en/latest/</a>
<br>
[2] <a class="moz-txt-link-freetext" href="https://review.openstack.org/#/c/159945/">https://review.openstack.org/#/c/159945/</a>
<br>
<a class="moz-txt-link-freetext" href="https://review.openstack.org/#/c/159204/">https://review.openstack.org/#/c/159204/</a>
<br>
<br>
</blockquote>
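To make the quoted feature list concrete for anyone who hasn't seen a gabbi file, here is a minimal sketch of the YAML format; the URLs, payload, and JSONPath below are hypothetical examples, not taken from any real project:

```yaml
# A hypothetical gabbi test file: each entry is one HTTP request.
tests:

- name: create a resource
  url: /resources
  method: POST
  request_headers:
      content-type: application/json
  data:
      name: demo
  status: 201

- name: fetch the created resource
  # $LOCATION refers to the location header of the prior response.
  url: $LOCATION
  status: 200
  response_json_paths:
      $.name: demo

- name: a test expected to fail for now
  xfail: True
  url: /resources/unfinished
  status: 200
```

Ordering matters: each test can reference the response of the one before it, which is how the prior-test templating mentioned above works.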
This looks very useful; I'd like to use it in the heat functional
tests job.<br>
<br>
Is it possible to write tests which do a POST/PUT and then a loop of
GETs until some condition is met (e.g. a response_json_paths match
going from IN_PROGRESS to COMPLETE)?<br>
<br>
This would allow testing of non-atomic PUT/POST operations for
entities like nova servers, heat stacks, etc.<br>
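I don't know whether gabbi's YAML can express such a retry loop directly, but the pattern being asked for is easy to state in plain Python. The following is a sketch only, not gabbi API; the poll_until helper and the stubbed status values are hypothetical:

```python
import time


def poll_until(fetch, done, timeout=60.0, interval=2.0,
               clock=time.monotonic, sleep=time.sleep):
    """Call fetch() repeatedly until done(result) is true.

    Raises TimeoutError if the condition is not met within `timeout`
    seconds. `clock` and `sleep` are injectable so the loop can be
    tested without real waiting.
    """
    deadline = clock() + timeout
    while True:
        result = fetch()
        if done(result):
            return result
        if clock() >= deadline:
            raise TimeoutError(
                "condition not met within %.1f seconds" % timeout)
        sleep(interval)


# Simulate a resource whose status moves from IN_PROGRESS to
# COMPLETE (a stub standing in for repeated GETs against a real API).
states = iter(["IN_PROGRESS", "IN_PROGRESS", "COMPLETE"])
final = poll_until(lambda: next(states),
                   lambda s: s == "COMPLETE",
                   timeout=5.0, interval=0.0)
print(final)  # -> COMPLETE
```

In a test harness the fetch callable would be a real GET plus a response_json_paths-style check, with the timeout bounding how long a non-atomic create is allowed to take.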
<br>
</body>
</html>