[openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

Boris Pavlovic boris at pavlovic.me
Mon Jan 12 22:00:52 UTC 2015


Hi Chris,

> If there's sufficient motivation and time it might make sense to
> separate the part of gabbi that builds TestCases from the part that
> runs (and evaluates) HTTP requests and responses. If that happens then
> integration with tools like Rally and runners is probably possible.



Having a separate engine seems like a good idea. It would really simplify
things.


> So, while this is an interesting idea, it's not something that gabbi
> intends to be. It doesn't validate existing clouds. It validates code
> that is used to run clouds.
> Such a thing is probably possible (especially given the fact that you
> can give a "real" host to gabbi tests) but that's not the primary
> goal.



This seems like a huge duplication of effort. I mean, operators will write
their own tools and developers their own... Why not just solve the more
general problem: "Does it work or not?"


> But if you are concerned about individual test times gabbi makes every
> request an individual TestCase, which means that subunit can record times
> for it. Here's a sample of the output from running gabbi's own gabbi
> tests:
> $ python -m subunit.run discover gabbi |subunit-trace
> [...]
> gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request
> [0.027512s] ... ok
> [...]



What is "test_request"? Just one REST API call?

By the way, the thing I am interested in is how they are all combined:

 -> fixtures.set
    -> run first Rest call
    -> run second Rest call
    ...
 -> fixtures.clean

Something like that?

And where do you do cleanup? (e.g. if you would like to test only the
creation of a resource?)


Best regards,
Boris Pavlovic



On Tue, Jan 13, 2015 at 12:37 AM, Chris Dent <chdent at redhat.com> wrote:

> On Tue, 13 Jan 2015, Boris Pavlovic wrote:
>
>> The idea is brilliant. I may steal it! =)
>
> Feel free.
>
>> But there are some issues that will be faced:
>>
>> 1) Using unittest as a base:
>>
>>> python -m subunit.run discover -f gabbi | subunit2pyunit
>>
>> So the Rally team won't be able to reuse it for load testing (if we
>> integrate it directly) because we will have a huge overhead (from the
>> discover step).
>>
>
> So the use of unittest, subunit and related tools is to allow the
> tests to be integrated with the usual OpenStack testing handling. That
> is, gabbi is primarily oriented towards being a tool for developers to
> drive or validate their work.
>
> However we may feel about subunit, testr, etc., they are a de facto
> standard. As I said in my message at the top of the thread, the vast
> majority of effort made in gabbi was getting it to be "tests" in the
> PyUnit view of the universe. And not just appear to be tests, but each
> request as an individual TestCase discoverable and addressable in the
> PyUnit style.
>
> In any case, can you go into more details about your concerns with
> discovery? In my limited exploration thus far the discovery portion is
> not too heavyweight: reading the YAML files.
>
>> 2.3) It makes it hard to integrate with other tools, like Rally...
>
> If there's sufficient motivation and time it might make sense to
> separate the part of gabbi that builds TestCases from the part that
> runs (and evaluates) HTTP requests and responses. If that happens then
> integration with tools like Rally and runners is probably possible.
>
>> 3) Usage by operators is hard in the case of N projects.
>
> This is not a use case that I really imagined for gabbi. I didn't want
> to create a tool for everyone, I was after satisfying a narrow part of
> the "in tree functional tests" need that's been discussed for the past
> several months. That narrow part is: legible tests of the HTTP aspects
> of project APIs.
>
>> Operators would like to have one button that will say whether the cloud
>> works or not. And they don't want to combine all gabbi files from all
>> projects and run the tests.
>
> So, while this is an interesting idea, it's not something that gabbi
> intends to be. It doesn't validate existing clouds. It validates code
> that is used to run clouds.
>
> Such a thing is probably possible (especially given the fact that you
> can give a "real" host to gabbi tests) but that's not the primary
> goal.
>
>> 4) Using the subunit format is not good for functional testing.
>>
>> It doesn't allow you to collect detailed information about the execution
>> of a test. For benchmarking, for example, it would be quite interesting
>> to collect the duration of every API call.
>>
>
> I think we've all got different definitions of functional testing. For
> example, in my own personal definition, I'm not too concerned about test
> times: I'm worried about what fails.
>
> But if you are concerned about individual test times gabbi makes every
> request an individual TestCase, which means that subunit can record times
> for it. Here's a sample of the output from running gabbi's own gabbi
> tests:
>
> $ python -m subunit.run discover gabbi |subunit-trace
> [...]
> gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request
> [0.027512s] ... ok
> [...]
>
>
>
> --
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
>