[openstack-dev] [qa] moratorium on new negative tests in Tempest

David Kranz dkranz at redhat.com
Tue Nov 12 19:33:04 UTC 2013


On 11/12/2013 01:36 PM, Clint Byrum wrote:
> Excerpts from Sean Dague's message of 2013-11-12 10:01:06 -0800:
>> During the freeze phase of Havana we got a ton of new contributors
>> coming on board to Tempest, which was super cool. However it meant we
>> had this new influx of negative tests (i.e. tests which push invalid
>> parameters looking for error codes) which made us realize that human
>> creation and review of negative tests really doesn't scale. David Kranz
>> is working on a generative model for this now.
>>
> Are there some notes or other source material we can follow to understand
> this line of thinking? I don't agree or disagree with it, as I don't
> really understand, so it would be helpful to have the problems enumerated
> and the solution hypothesis stated. Thanks!
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
I am working on this with Marc Koderer, but we only just started and 
are not quite ready. Since you asked, though...

The problem is that in the current implementation of negative tests, 
each "case" is represented as code in a method and targets a particular 
set of api arguments and an expected result. In most (but not all) of 
these tests there is boilerplate code surrounding the real content, 
which is the actual arguments being passed and the value expected. That 
boilerplate code has to be written correctly and reviewed for every 
case. The general form of the solution still has to be worked out, but 
basically it would involve expressing these tests declaratively, 
perhaps in a yaml file. To do that we will need some kind of json 
schema for each api. The main implementation work is defining the yaml 
attributes that make it easy to express the test cases, and somehow 
coming up with the json schema for each api.
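
To make that a bit more concrete, here is a rough sketch of what one 
api's json schema and a couple of declarative negative cases might look 
like, written as Python data for now since the yaml layout is not 
settled. All the names and attributes (flavor_create_schema, 
expected_status, run_negative_cases, etc.) are invented for 
illustration, not a proposed format.

    # Purely illustrative sketch: an invented json schema for one api
    # call plus two declarative negative cases, expressed as Python data
    # because the yaml layout is not settled yet.

    flavor_create_schema = {
        "type": "object",
        "properties": {
            "name": {"type": "string", "maxLength": 255},
            "ram": {"type": "integer", "minimum": 1},
            "vcpus": {"type": "integer", "minimum": 1},
        },
        "required": ["name", "ram", "vcpus"],
    }

    negative_cases = [
        {"name": "flavor-negative-ram",
         "args": {"name": "tiny", "ram": -1, "vcpus": 1},
         "expected_status": 400},
        {"name": "flavor-missing-name",
         "args": {"ram": 64, "vcpus": 1},
         "expected_status": 400},
    ]

    def run_negative_cases(call, cases):
        # One generic driver replaces the per-case boilerplate methods.
        # "call" stands in for whatever rest client method wraps the api
        # and is assumed here to return (response, body).
        for case in cases:
            resp, _ = call(**case["args"])
            assert resp.status == case["expected_status"], case["name"]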

In addition, we would like to support "fuzz testing", where arguments 
are, at least partially, randomly generated and the return values are 
only examined for 4xx vs. something else. This would be possible if we 
had json schemas. The main work is to write a generator and methods for 
creating bad values, including boundary conditions for types with 
ranges. I had thought a bit about this last year and poked around for 
an existing framework. I didn't find anything that seemed to make the 
job much easier, but if anyone knows of such a thing (python, 
hopefully) please let me know.
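
To give a feel for what such a generator might do (again just a sketch 
with invented helper names, not code we have written), it could walk a 
json schema and emit per-property bad values, with the test only 
checking that the server answers with some 4xx:

    import random
    import string

    def bad_values(prop_schema):
        # Yield candidate invalid values for one property based on its
        # schema: wrong types plus the boundaries of declared ranges.
        ptype = prop_schema.get("type")
        if ptype == "integer":
            if "minimum" in prop_schema:
                yield prop_schema["minimum"] - 1  # just below the boundary
            yield "not-an-int"                    # wrong type
            yield random.randint(-2 ** 31, -1)    # arbitrary large negative
        elif ptype == "string":
            if "maxLength" in prop_schema:
                yield "x" * (prop_schema["maxLength"] + 1)  # one past the limit
            yield random.randint(0, 10 ** 6)      # wrong type
            yield "".join(random.choice(string.printable) for _ in range(32))

    def generate_fuzz_cases(schema, valid_args):
        # Break one property at a time, starting from a known-good
        # request; the resulting test only expects some 4xx response.
        for prop, prop_schema in schema["properties"].items():
            for bad in bad_values(prop_schema):
                args = dict(valid_args, **{prop: bad})
                yield {"args": args, "expect": "4xx"}

Feeding that the flavor schema from the earlier sketch together with 
one known-good set of arguments would produce one broken request per 
bad value, which is roughly the kind of case we would want to 
auto-generate rather than hand-write.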

The negative tests for each api would be some combination of 
declaratively specified cases and auto-generated ones.

With regard to the json schema, there have been various attempts at this 
in the past, including some ideas of how wsme/pecan will help, and it 
might be helpful to have more project coordination. I can see a few options:

1. Tempest keeps its own json schema data
2. Each project keeps its own json schema in a way that supports 
automated extraction
3. Since there are several use cases for json schema like this, it gets 
stored in some openstacky place that is not in tempest

So that is the starting point. Comments and suggestions are welcome! 
Marc and I just started working on an etherpad at 
https://etherpad.openstack.org/p/bp_negative_tests and anyone is 
welcome to contribute there.

  -David