[openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
greg at greghaynes.net
Mon Jan 12 20:36:04 UTC 2015
Excerpts from Chris Dent's message of 2015-01-12 19:20:18 +0000:
> After some discussion with Sean Dague and a few others it became
> clear that it would be a good idea to introduce a new tool I've been
> working on to the list to get a sense of its usefulness generally,
> work towards getting it into global requirements, and get the
> documentation fleshed out so that people can actually figure out how
> to use it well.
> tl;dr: Help me make this interesting tool useful to you and your
> HTTP testing by reading this message and following some of the links
> and asking any questions that come up.
> The tool is called gabbi
> It describes itself as a tool for running HTTP tests where requests
> and responses are represented in a declarative form. Its main
> purpose is to allow testing of APIs where the focus of test writing
> (and reading!) is on the HTTP requests and responses, not on a bunch of
> Python (that obscures the HTTP).
> The tests are written in YAML and the simplest test file has this form:
> - name: a test
>   url: /
> This test will pass if the response status code is '200'.
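> A slightly richer test in the same format (illustrative; key names as
> described in gabbi's format docs) can pin down the expected status and
> headers explicitly:

```yaml
# A hypothetical test file: each test is one item in the YAML list.
- name: the root responds
  url: /
  status: 200

- name: fetch a JSON resource
  url: /resources
  status: 200
  response_headers:
      content-type: application/json
```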
> The test file is loaded by a small amount of Python code which transforms
> the file into an ordered sequence of TestCases in a TestSuite.
> def load_tests(loader, tests, pattern):
>     """Provide a TestSuite to the discovery process."""
>     test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
>     # wsgi_app here stands in for the WSGI callable under test;
>     # alternatively, pass a real host instead of host=None.
>     return driver.build_tests(test_dir, loader, host=None,
>                               intercept=wsgi_app)
> The loader provides either:
> * a host to which real over-the-network requests are made
> * a WSGI app which is wsgi-intercept-ed
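> Any WSGI callable works on the intercept side. A minimal sketch (the
> app and its names are illustrative, not part of gabbi itself):

```python
import json


def simple_app(environ, start_response):
    """A tiny WSGI app of the kind that can be handed to the loader.

    It answers every request with a small JSON body echoing the path,
    which is enough for a declarative test file to assert against.
    """
    body = json.dumps({'path': environ.get('PATH_INFO', '/')}).encode('utf-8')
    start_response('200 OK', [
        ('Content-Type', 'application/json'),
        ('Content-Length', str(len(body))),
    ])
    return [body]
```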
> If the test runner is asked to run an individual TestCase, the tests
> that precede it in the same file are run first, as prerequisites.
> Each test file can declare a sequence of nested fixtures to be loaded
> from a configured (in the loader) module. Fixtures are context managers
> (they establish the fixture upon __enter__ and destroy it upon
> __exit__).
> With a proper group_regex setting in .testr.conf each YAML file can
> run in its own process in a concurrent test runner.
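> For example, a .testr.conf along these lines (the group_regex shown is
> illustrative; the exact pattern depends on how the generated tests are
> named) keeps each YAML file's tests in one process:

```ini
[DEFAULT]
test_command=${PYTHON:-python} -m subunit.run discover . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
# Group all the tests generated from one YAML file into the same process.
group_regex=([^_]+_[^_]+)
```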
> The docs contain information on the format of the test files:
> Each test can state request headers and bodies and evaluate both response
> headers and response bodies. Request bodies can be strings in the
> YAML, files read from disk, or JSON created from YAML structures.
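> For instance, a test that POSTs a JSON body built from a YAML structure
> might look like this (illustrative):

```yaml
- name: create a resource
  url: /resources
  method: POST
  request_headers:
      content-type: application/json
  # A YAML structure here is serialized to JSON for the request body.
  data:
      name: a new resource
  status: 201
```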
> Response verification can use JSONPath to inspect the details of
> response bodies. Response header validation may use regular
> expressions.
> There is limited support for referring to the previous request
> to construct URIs, potentially allowing traversal of a full HATEOAS
> compliant API.
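> Put together, a follow-on test can assert on a body with JSONPath and
> chase the previous response's Location header (illustrative; the exact
> substitution syntax is in gabbi's format docs):

```yaml
- name: list resources
  url: /resources
  response_json_paths:
      $.resources[0].name: a new resource

# Follow the previous response's Location header to traverse the API.
- name: follow the location
  url: $LOCATION
  status: 200
```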
> At the moment the most complete examples of how things work are:
> * Ceilometer's pending use of gabbi:
> * Gabbi's testing of gabbi:
> (the loader and faked WSGI app for those YAML files are in:
> One obvious thing that will need to happen is a suite of concrete
> examples on how to use the various features. I'm hoping that
> feedback will help drive that.
> In my own experimentation with gabbi I've found it very useful. It's
> helped me explore and learn the ceilometer API in a way that existing
> test code has completely failed to do. It's also helped reveal
> several warts that will be very useful to fix. And it is fast, both
> to run and to write. I hope that with some work it can be useful to
> you too.
>  Getting gabbi to play well with PyUnit-style tests and
> with infrastructure like subunit and testrepository was one of
> the most challenging parts of the build, but the result has been
> a lot of flexibility.
>  https://pypi.python.org/pypi/wsgi_intercept
>  https://pypi.python.org/pypi/jsonpath-rw
Awesome! Just the other day I was discussing trying to add extensions
to RAML so we could do something like this. Is there any reason you
didn't use an existing modeling language like that?