[openstack-dev] [qa] moratorium on new negative tests in Tempest

Kenichi Oomichi oomichi at mxs.nes.nec.co.jp
Wed Nov 13 06:24:00 UTC 2013


Hi,

I was glad to meet OpenStack developers at the summit,
and I am interested in this topic.

> -----Original Message-----
> From: David Kranz [mailto:dkranz at redhat.com]
> Sent: Wednesday, November 13, 2013 4:33 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest
> 
> On 11/12/2013 01:36 PM, Clint Byrum wrote:
> > Excerpts from Sean Dague's message of 2013-11-12 10:01:06 -0800:
> >> During the freeze phase of Havana we got a ton of new contributors
> >> coming on board to Tempest, which was super cool. However it meant we
> >> had this new influx of negative tests (i.e. tests which push invalid
> >> parameters looking for error codes) which made us realize that human
> >> creation and review of negative tests really doesn't scale. David Kranz
> >> is working on a generative model for this now.
> >>
> > Are there some notes or other source material we can follow to understand
> > this line of thinking? I don't agree or disagree with it, as I don't
> > really understand, so it would be helpful to have the problems enumerated
> > and the solution hypothesis stated. Thanks!
>
> I am working on this with Marc Koderer but we only just started and are
> not quite ready. But since you asked now...
> 
> The problem is that, in the current implementation of negative tests,
> each "case" is represented as code in a method and targets a particular
> set of api arguments and expected result. In most (but not all) of these
> tests there is boilerplate code surrounding the real content which is
> the actual arguments being passed and the value expected. That
> boilerplate code has to be written correctly and reviewed. The general
> form of the solution has to be worked out but basically would involve
> expressing these tests declaratively, perhaps in a yaml file. In order
> to do this we will need some kind of json schema for each api. The main
> implementation work here is defining the yaml attributes that make it
> easy to express the test cases, and somehow coming up with the json
> schema for each api.
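
(To check my reading of this part: a single declarative case might look
roughly like the sketch below. The yaml attribute names are only my guess,
not something already decided.)

# Sketch only: one possible yaml form for a single declarative negative
# case; the attribute names are my own guess, not a settled format.
import textwrap

import yaml

case = yaml.safe_load(textwrap.dedent("""
    name: create-flavor-with-negative-ram
    http-method: POST
    url: flavors
    request-body:
      flavor:
        name: negative-test
        ram: -1
    expected-result: 400
"""))

# A runner would expand each such case into one request and a single
# assertion on the returned status code.
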
>
> In addition, we would like to support "fuzz testing" where arguments
> are, at least partially, randomly generated and the return values are
> only examined for 4xx vs something else. This would be possible if we
> had json schemas. The main work is to write a generator and methods for
> creating bad values including boundary conditions for types with ranges.
> I had thought a bit about this last year and poked around for an
> existing framework. I didn't find anything that seemed to make the job
> much easier, but if anyone knows of such a thing (python, hopefully)
> please let me know.
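
For the fuzz side, I imagine something as small as the following. The helper
names are hypothetical, and a real generator would need many more bad-value
rules (boundary values, wrong types, over-long and non-ascii strings, and so on):

# Hypothetical sketch of generating bad values from a (partial) jsonschema
# property description; only the idea, not a complete generator.
import random
import string

def bad_values(prop_schema):
    """Yield values that should violate the given property schema."""
    ptype = prop_schema.get('type')
    if ptype == 'integer':
        yield 'not-an-integer'                      # wrong type
        if 'minimum' in prop_schema:
            yield prop_schema['minimum'] - 1        # just below the boundary
        if 'maximum' in prop_schema:
            yield prop_schema['maximum'] + 1        # just above the boundary
    elif ptype == 'string':
        max_len = prop_schema.get('maxLength', 255)
        yield ''.join(random.choice(string.ascii_letters)
                      for _ in range(max_len + 1))  # too long
        yield 12345                                 # wrong type

def check_only_4xx(status):
    # For fuzzed input we only care that the server answered with some 4xx.
    assert 400 <= status < 500, 'expected a 4xx, got %s' % status
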

I guess the tests expressed with jsonschema would be similar to some Glance
tests in Tempest. The Tempest tests (clients) get the jsonschema of each API
through the Glance v2 API (v2/schemas/image), then validate a request body
with that jsonschema before sending the request. The Glance server validates
the request again with the same jsonschema.
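
As a minimal sketch of the Glance pattern I mean (the endpoint value and the
image body are only illustrative, authentication is omitted, and it assumes
the requests and jsonschema packages):

# Minimal sketch of client-side validation against the schema Glance publishes.
import jsonschema
import requests

GLANCE = 'http://glance.example.com:9292'   # placeholder endpoint

schema = requests.get(GLANCE + '/v2/schemas/image').json()

body = {'name': 'cirros', 'disk_format': 'qcow2',
        'container_format': 'bare', 'visibility': 'private'}

# The client validates the request body against the published schema...
jsonschema.validate(body, schema)
# ...and the Glance server validates the same body again on arrival.
requests.post(GLANCE + '/v2/images', json=body)
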
If we add negative tests based on jsonschema, these tests would seem to be
testing the jsonschema functions/behaviors themselves. I remember we discussed
this topic in the negative tests session, but I could not understand the
conclusion. Should we do that?


> The negative tests for each api would be some combination of
> declaratively specified cases and auto-generated ones.
> 
> With regard to the json schema, there have been various attempts at this
> in the past, including some ideas of how wsme/pecan will help, and it
> might be helpful to have more project coordination. I can see a few options:
> 
> 1. Tempest keeps its own json schema data
> 2. Each project keeps its own json schema in a way that supports
> automated extraction
> 3. There are several use cases for json schema like this and it gets
> stored in some openstacky place that is not in tempest

I'm working on API validation of the Nova v3 API with jsonschema, but I'm not
sure whether we should implement an API for providing each API's jsonschema,
because of the concern above.
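
The direction I am trying looks roughly like the following; the decorator and
the schema here are only a sketch of the idea, not the actual Nova code:

# Rough sketch of server-side request validation with jsonschema; the
# decorator name and the schema itself are illustrative, not Nova's code.
import functools

import jsonschema
import webob.exc

server_create = {
    'type': 'object',
    'properties': {
        'server': {
            'type': 'object',
            'properties': {
                'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
            },
            'required': ['name'],
        },
    },
    'required': ['server'],
}

def validated(schema):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, req, body):
            try:
                jsonschema.validate(body, schema)
            except jsonschema.ValidationError as e:
                # Invalid input becomes a BadRequest, never a 404 or 500.
                raise webob.exc.HTTPBadRequest(detail=str(e))
            return func(self, req, body)
        return wrapper
    return decorator

class ServersController(object):
    @validated(server_create)
    def create(self, req, body):
        pass  # the real controller logic runs only for valid input
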


> So that is the starting point. Comments and suggestions welcome! Marc
> and I just started working on an etherpad
> https://etherpad.openstack.org/p/bp_negative_tests but anyone is
> welcome to contribute there.

Negative tests based on yaml would be nice because they would clean the code
up and make the tests more readable.
Just one question:
 On the etherpad, there are some "invalid_uuid"s.
 Does that mean an invalid string (e.g. a utf-8 string, not ascii),
             or an invalid uuid format (e.g. uuid.uuid4() + "foo")?
 IIUC, in the negative test session we discussed that tests passing a utf-8
 string as an API parameter should be negative tests, and the server should
 return a BadRequest response.
 I guess we need to implement such API negative tests. After that, if we find
 an unfavorable behavior in some server, we need to implement API validation
 for that server.
 (Example of unfavorable behavior: when a client sends a utf-8 request, the
  server returns a NotFound response, not a BadRequest one.)
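
To make the two readings concrete (client.show_server here is a hypothetical
call; the point is only which response code the test asserts):

# The two kinds of "invalid uuid" I am asking about, and the expectation
# from the session: both should end in BadRequest, never NotFound.
import uuid

invalid_values = [
    u'\u30b5\u30fc\u30d0',         # a utf-8 (non-ascii) string, not a uuid
    str(uuid.uuid4()) + 'foo',     # almost a uuid, but an invalid format
]

def test_show_server_with_invalid_uuid(client):
    # 'client.show_server' is a hypothetical call returning (status, body);
    # a real test would use the project's own Tempest client.
    for value in invalid_values:
        status, _ = client.show_server(value)
        assert status == 400, (
            'expected BadRequest for %r, got %s' % (value, status))
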


Thanks
Ken'ichi Ohmichi



