[openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

Eoghan Glynn eglynn at redhat.com
Mon Jan 12 19:54:43 UTC 2015

> After some discussion with Sean Dague and a few others it became
> clear that it would be a good idea to introduce a new tool I've been
> working on to the list to get a sense of its usefulness generally,
> work towards getting it into global requirements, and get the
> documentation fleshed out so that people can actually figure out how
> to use it well.
> tl;dr: Help me make this interesting tool useful to you and your
> HTTP testing by reading this message and following some of the links
> and asking any questions that come up.
> The tool is called gabbi
>      https://github.com/cdent/gabbi
>      http://gabbi.readthedocs.org/
>      https://pypi.python.org/pypi/gabbi
> It describes itself as a tool for running HTTP tests where requests
> and responses are represented in a declarative form. Its main
> purpose is to allow testing of APIs where the focus of test writing
> (and reading!) is on the HTTP requests and responses, not on a bunch of
> Python (that obscures the HTTP).
> The tests are written in YAML and the simplest test file has this form:
> ```
> tests:
> - name: a test
>   url: /
> ```
> This test will pass if the response status code is 200.
> The test file is loaded by a small amount of python code which transforms
> the file into an ordered sequence of TestCases in a TestSuite[1].
> ```
> import os
> import sys
> from gabbi import driver
>
> # TESTS_DIR and SimpleWsgi are defined elsewhere in the module.
> def load_tests(loader, tests, pattern):
>     """Provide a TestSuite to the discovery process."""
>     test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
>     return driver.build_tests(test_dir, loader, host=None,
>                               intercept=SimpleWsgi,
>                               fixture_module=sys.modules[__name__])
> ```
> The loader provides either:
> * a host to which real over-the-network requests are made
> * a WSGI app which is wsgi-intercept-ed[2]
> When the test runner asks to run an individual TestCase, the tests
> prior to it in the same file are run first, as prerequisites.
> Each test file can declare a sequence of nested fixtures to be loaded
> from a configured (in the loader) module. Fixtures are context managers
> (they establish the fixture upon __enter__ and destroy it upon
> __exit__).
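> As a sketch, a test file declares its fixtures by name at the top
> level; the names are resolved against the configured fixture module
> (the fixture name here is invented for illustration):
> ```
> fixtures:
>     - SampleDataFixture
>
> tests:
> - name: list items with sample data loaded
>   url: /items
> ```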
> With a proper group_regex setting in .testr.conf each YAML file can
> run in its own process in a concurrent test runner.
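> A .testr.conf along these lines illustrates the idea; the exact
> group_regex depends on how the driver names the generated tests, so
> treat this one as a placeholder:
> ```
> [DEFAULT]
> test_command=${PYTHON:-python} -m subunit.run discover . $LISTOPT $IDOPTION
> test_id_option=--load-list $IDFILE
> test_list_option=--list
> group_regex=gabbi\.driver\.test_gabbi_([^_]+)_
> ```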
> The docs contain information on the format of the test files:
>      http://gabbi.readthedocs.org/en/latest/format.html
> Each test can state request headers and bodies and evaluate both response
> headers and response bodies. Request bodies can be strings in the
> YAML, files read from disk, or JSON created from YAML structures.
> Response verification can use JSONPath[3] to inspect the details of
> response bodies. Response header validation may use regular
> expressions.
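> Putting those pieces together, a single test might look like this
> (the path, body values, and expected status are invented for
> illustration; a header value wrapped in slashes is treated as a
> regular expression):
> ```
> tests:
> - name: create a widget
>   url: /widgets
>   method: POST
>   request_headers:
>       content-type: application/json
>   data:
>       name: a widget
>   status: 201
>   response_headers:
>       content-type: /json/
>   response_json_paths:
>       $.name: a widget
> ```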
> There is limited support for referring to the previous request
> when constructing URIs, potentially allowing traversal of a full
> HATEOAS-compliant API.
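> A sketch of that, assuming the API returns a location header on
> create (the $LOCATION substitution is described in the format docs):
> ```
> tests:
> - name: create a thing
>   url: /things
>   method: POST
>   status: 201
>
> - name: follow the location of the created thing
>   url: $LOCATION
>   status: 200
> ```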
> At the moment the most complete examples of how things work are:
> * Ceilometer's pending use of gabbi:
>    https://review.openstack.org/#/c/146187/
> * Gabbi's testing of gabbi:
>    https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
>    (the loader and faked WSGI app for those yaml files is in:
>    https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
> One obvious thing that will need to happen is a suite of concrete
> examples on how to use the various features. I'm hoping that
> feedback will help drive that.
> In my own experimentation with gabbi I've found it very useful. It's
> helped me explore and learn the ceilometer API in a way that existing
> test code has completely failed to do. It's also helped reveal
> several warts that will be very useful to fix. And it is fast. To
> run and to write. I hope that with some work it can be useful to you
> too.

Thanks for the write-up Chris,

Needless to say, we're sold on the utility of this on the ceilometer
side, in terms of crafting readable, self-documenting tests that reveal
the core aspects of an API in an easily consumable way.

I'd be interested in hearing the api-wg viewpoint, specifically whether
that working group intends to recommend any best practices around the
approach to API testing.

If so, I think gabbi would be a worthy candidate for consideration.


> Thanks.
> [1] Getting gabbi to play well with PyUnit style tests and
>      with infrastructure like subunit and testrepository was one of
>      the most challenging parts of the build, but the result has been
>      a lot of flexibility.
> [2] https://pypi.python.org/pypi/wsgi_intercept
> [3] https://pypi.python.org/pypi/jsonpath-rw
> --
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
