[openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

Chris Dent chdent at redhat.com
Mon Jan 12 23:09:16 UTC 2015


On Tue, 13 Jan 2015, Boris Pavlovic wrote:

> Having a separate engine seems like a good idea. It will really
> simplify stuff

I'm not certain that's the case, but it may be worth exploration.

> This seems like a huge duplication of effort. I mean operators will
> write their own tools, developers their own... Why not just resolve the
> more common problem: "Does it work or not?"

Because no one tool can solve all problems well. I think it is far
better to have lots of small tools, each focused on doing one or a few
small jobs well.

It may be that there are pieces of gabbi which can be reused or
extracted into more general libraries. If there are, that's fantastic.
But I think it is very important to try to solve one problem at a time
rather than everything at once.

>> $ python -m subunit.run discover gabbi |subunit-trace
>> [...]
>> gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request
>> [0.027512s] ... ok
>> [...]
>
>
> What is "test_request"? Just one REST API call?

That long dotted name is the name of a single, dynamically created
TestCase (some metaclass mumbo jumbo magic is used to turn the YAML
into TestCase classes), and within that TestCase is one single HTTP
request and the evaluation of its response. It directly corresponds to
a test named "inheritance of defaults" in a file called self.yaml.
self.yaml is in a directory containing other YAML files, all of which
are loaded by a Python file named test_intercept.py.
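
For the curious, such a loader module is typically only a few lines.
This is a rough sketch of the pattern, assuming gabbi's
driver.build_tests entry point; the directory name and the WSGI app
factory are made up for illustration, it is not the real
test_intercept.py:

    # test_intercept.py -- illustrative sketch, not the real file
    import os

    from gabbi import driver

    # Assumption: the directory holding self.yaml and its siblings.
    TESTS_DIR = 'gabbits'


    def make_app():
        """Hypothetical stand-in for the WSGI app under test."""
        def app(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'']
        return app


    def load_tests(loader, tests, pattern):
        """Standard unittest hook: hand back the generated TestCases."""
        test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
        # Each YAML file in test_dir becomes a group of TestCase
        # classes; intercept wires the requests to the WSGI app
        # in-process rather than to a live server.
        return driver.build_tests(test_dir, loader, intercept=make_app)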

> Btw the thing that I am interested in is how they are all combined?

As I said before: each YAML file is an ordered sequence of tests, each
one representing a single HTTP request. Fixtures are per YAML file.
There is no cleanup phase outside of the fixtures. Each fixture is
expected to do its own cleanup, if required.
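
To make that concrete, here is a minimal sketch of what such a fixture
can look like, assuming gabbi's GabbiFixture base class; the class name
and the state it manages are illustrative, not code from gabbi or
ceilometer:

    from gabbi import fixture


    class SampleDataFixture(fixture.GabbiFixture):
        """Hypothetical per-YAML-file fixture."""

        def start_fixture(self):
            # Runs once before any test in the YAML file that names
            # this fixture: create whatever those tests rely on.
            self.sample = {'created': True}

        def stop_fixture(self):
            # Runs once after the file's tests finish: the fixture
            # does its own cleanup, there is no separate cleanup phase.
            self.sample = None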

> And where are you doing cleanup? (like if you would like to test only
> creation of resource?)

In the ceilometer integration that is currently being built, the
test_gabbi.py[1] file configures itself to use a mongodb database that
is unique to this process. The test harness is responsible for
starting mongodb. In a concurrent run, each process will use a
different database in the same mongo server. When the test run is
done, mongo is shut down and the databases are removed.
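
The per-process naming is roughly this kind of pattern (an
illustration of the idea only, not the actual test_gabbi.py; the names
and URL format are assumptions):

    import os
    import uuid

    # Assumption: derive a database name unique to this test process so
    # concurrent workers sharing one mongo server do not collide.
    db_name = 'gabbi_%s_%d' % (uuid.uuid4().hex[:8], os.getpid())
    connection_url = 'mongodb://localhost:27017/%s' % db_name
    # The harness would point the service's storage configuration at
    # connection_url before running the tests, and drop db_name when
    # the run is finished.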

In other words, the environment surrounding gabbi is responsible for
doing the things it is good at, and gabbi does the HTTP tests. A
long-running test cannot necessarily depend on what else might be in
the datastore used by the API; it needs to test only that which it
knows about.

I hope that clarifies things a bit.

[1] https://review.openstack.org/#/c/146187/2/ceilometer/gabbi/test_gabbi.py,cm

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


