[openstack-dev] [qa] moratorium on new negative tests in Tempest
morita.daisuke at lab.ntt.co.jp
Fri Nov 22 08:13:26 UTC 2013
The test cases in Tempest are now well-stocked, so it is a good time to
rearrange the design of the test code.
I checked the mailing lists, IRC logs and etherpads relating to this
topic. Let me leave my five thoughts below.
How to handle:
1. Data types (e.g., int, bool)
2. Specific value or format support (e.g., regular expressions)
3. Boundary value analysis (David mentioned this issue below)
4. Invalid values in non-Unicode encodings (Ken'ichi mentioned this in
   his mail of Nov 13)
5. Errors that require complicated pre- or post-processing
I suggest that issues 1-4 be considered in the scope of the new
framework.
From the above sources, I sense a slight bias towards invalid-value
testing. On the other hand, I think that some tests will remain outside
of this framework.
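To make the four in-scope categories concrete, here is a small sketch of
how they might be expressed as generated data rather than hand-written
test methods. Everything here (function names, the schema layout) is my
own illustration, not existing Tempest code:

```python
# Illustrative only: one generator covering the four proposed categories
# (data type, format, boundary values, non-Unicode bytes) for an integer
# field described by a JSON-schema-like dict.

def invalid_cases_for(schema):
    """Yield (description, bad_value) pairs for a ranged integer field."""
    if schema["type"] == "integer":
        yield "wrong type", "not-a-number"      # category 1: data type
        yield "wrong format", "0x1p3"           # category 2: format support
        lo, hi = schema["minimum"], schema["maximum"]
        yield "below minimum", lo - 1           # category 3: boundary value
        yield "above maximum", hi + 1
    yield "non-unicode bytes", b"\xff\xfe"      # category 4: raw bytes

# Hypothetical parameter description for demonstration:
flavor_ram = {"type": "integer", "minimum": 1, "maximum": 2 ** 31 - 1}
for desc, value in invalid_cases_for(flavor_ram):
    print(desc, repr(value))
```

Each yielded value would then be sent in a request that is expected to
fail with a 4xx status.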
As for Swift, the maximum total size of the HTTP headers sent for
metadata is 4096 bytes, but the maximum size of a meta-key is 128 bytes
and the maximum size of a meta-value is 256 bytes. It might be difficult
to test the boundary value of the total HTTP headers with the new
framework. In such cases, is it OK to write test cases like the current
implementation?
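As a sketch of why this case is awkward for a per-field framework: the
interesting boundary is the *combined* size of many individually valid
headers. The snippet below (my own illustration, assuming the 4096-byte
total counts both key and value bytes) builds metadata headers that keep
every key under 128 bytes and every value under 256 bytes while probing
the overall limit:

```python
# Illustration only, not existing Tempest code. Builds X-Object-Meta-*
# headers whose individual sizes are valid but whose combined size hits
# a chosen total, e.g. exactly 4096 (should succeed) or 4097 (should 400).

MAX_META_KEY = 128      # bytes per meta-key
MAX_META_VALUE = 256    # bytes per meta-value
MAX_META_TOTAL = 4096   # bytes for all metadata headers combined (assumed)

def headers_with_total_size(target_total):
    """Return metadata headers whose combined key+value size is target_total."""
    headers = {}
    remaining = target_total
    i = 0
    while remaining > 0:
        key = "X-Object-Meta-%04d" % i          # 18 bytes, well under 128
        chunk = min(MAX_META_VALUE, remaining - len(key))
        headers[key] = "v" * chunk
        remaining -= len(key) + chunk
        i += 1
    return headers

at_limit = headers_with_total_size(MAX_META_TOTAL)        # boundary: valid
over_limit = headers_with_total_size(MAX_META_TOTAL + 1)  # boundary: invalid
```

A test would send `at_limit` expecting success and `over_limit`
expecting a 400, which does not fit a one-field-at-a-time schema model.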
Anyway, I never want to derail this work. I am looking forward to the
new framework.
> Excerpts from David Kranz's message of 2013-11-12 14:33:04 -0500:
> I am working on this with Marc Koderer but we only just started and are
> not quite ready. But since you asked now...
> The problem is that in the current implementation of negative tests,
> each "case" is represented as code in a method and targets a particular
> set of api arguments and an expected result. In most (but not all) of
> these tests there is boilerplate code surrounding the real content,
> which is the actual arguments being passed and the value expected. That
> boilerplate code has to be written correctly and reviewed. The general
> form of the solution has to be worked out, but it would basically
> involve expressing these tests declaratively, perhaps in a yaml file.
> In order to do this we will need some kind of json schema for each api.
> The main implementation work is defining the yaml attributes that make
> it easy to express the test cases, and somehow coming up with the json
> schema for each api.
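For what a declaratively specified case might look like, here is a rough
sketch. I use Python dicts standing in for YAML entries; every attribute
name and the stub client are invented for illustration, not a proposed
format:

```python
# Sketch of the declarative idea: each negative case is data, and one
# generic runner replaces the per-case boilerplate. All names invented.

NEGATIVE_CASES = [
    {"name": "flavor-ram-not-int", "resource": "flavors",
     "field": "ram", "value": "four", "expected_status": 400},
    {"name": "flavor-ram-negative", "resource": "flavors",
     "field": "ram", "value": -1, "expected_status": 400},
]

def run_case(case, send_request):
    """Generic runner: send one bad request, check only the status code."""
    body = {case["field"]: case["value"]}
    status = send_request(case["resource"], body)
    assert status == case["expected_status"], case["name"]

# Stub client for demonstration; a real runner would call the REST API.
def fake_send(resource, body):
    ram = body.get("ram")
    return 400 if not isinstance(ram, int) or ram < 0 else 200

for case in NEGATIVE_CASES:
    run_case(case, fake_send)
```

The review burden then shifts from per-test boilerplate to the one
shared runner plus small data entries.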
> In addition, we would like to support "fuzz testing" where arguments
> are, at least partially, randomly generated and the return values are
> only examined for 4xx vs something else. This would be possible if we
> had json schemas. The main work is to write a generator and methods for
> creating bad values including boundary conditions for types with ranges.
> I had thought a bit about this last year and poked around for an
> existing framework. I didn't find anything that seemed to make the job
> much easier, but if anyone knows of such a thing (Python, hopefully)
> please let me know.
> The negative tests for each api would be some combination of
> declaratively specified cases and auto-generated ones.
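The fuzz-generation idea described above could start from something like
the following. The schema layout and function names are my assumptions;
a real generator would need to cover many more types and combinations:

```python
# Sketch: given a JSON-schema-like description of one API parameter,
# emit candidate bad values (wrong types, out-of-range boundary values).
# A fuzz run would send each value and check only for a 4xx response.

import random

def bad_values(schema, rng=random):
    """Generate candidate invalid values for one parameter description."""
    values = []
    t = schema.get("type")
    if t == "integer":
        if "minimum" in schema:
            values.append(schema["minimum"] - 1)   # boundary: below range
        if "maximum" in schema:
            values.append(schema["maximum"] + 1)   # boundary: above range
        values.append(rng.choice(["", "abc", None, 1.5]))  # wrong type
    elif t == "string":
        if "maxLength" in schema:
            values.append("x" * (schema["maxLength"] + 1))  # too long
        values.append(rng.choice([0, None, [], {}]))        # wrong type
    return values

for v in bad_values({"type": "integer", "minimum": 0, "maximum": 100}):
    print(repr(v))
```

With JSON schemas available per API, such a generator could be combined
with the declarative cases exactly as described above.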
> With regard to the json schema, there have been various attempts at this
> in the past, including some ideas of how wsme/pecan will help, and it
> might be helpful to have more project coordination. I can see a few options:
> 1. Tempest keeps its own json schema data
> 2. Each project keeps its own json schema in a way that supports
> automated extraction
> 3. There are several use cases for json schema like this and it gets
> stored in some openstacky place that is not in tempest
> So that is the starting point. Comments and suggestions welcome! Marc
> and I just started working on an etherpad
> https://etherpad.openstack.org/p/bp_negative_tests but anyone is
> welcome to contribute there.
Daisuke Morita <morita.daisuke at lab.ntt.co.jp>
NTT Software Innovation Center, NTT Corporation