[openstack-dev] [QA][all] Propose to remove negative tests from Tempest

Ken'ichi Ohmichi ken1ohmichi at gmail.com
Fri Mar 18 21:27:03 UTC 2016


2016-03-18 1:50 GMT-07:00 Masayuki Igawa <masayuki.igawa at gmail.com>:
> From: GHANSHYAM MANN <ghanshyammann at gmail.com>
> Subject: Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest
> Date: Fri, 18 Mar 2016 10:05:39 +0900
>
>> On Fri, Mar 18, 2016 at 9:06 AM, Ken'ichi Ohmichi <ken1ohmichi at gmail.com> wrote:
>>> 2016-03-17 4:05 GMT-07:00 Andrea Frittoli <andrea.frittoli at gmail.com>:
>>>> On Thu, Mar 17, 2016 at 2:57 AM Ken'ichi Ohmichi <ken1ohmichi at gmail.com>
>>>> wrote:
>>>>>
>>>>> 2016-03-16 19:41 GMT-07:00 Jim Rollenhagen <jim at jimrollenhagen.com>:
>>>>> > On Wed, Mar 16, 2016 at 06:20:11PM -0700, Ken'ichi Ohmichi wrote:
>>>>> >> Hi
>>>>> >>
>>>>> >> I have one proposal[1] related to negative tests in Tempest, and
>>>>> >> I would like opinions before doing that.
>>>>> >>
>>>>> >> Tempest currently contains negative tests, and patches are
>>>>> >> sometimes posted to add more. Instead, I'd like to propose
>>>>> >> removing them from Tempest.
>>>>> >>
>>>>> >> Negative tests verify the surface of each component's REST API,
>>>>> >> without any integration between components. That does not look
>>>>> >> like integration testing, which is the scope of Tempest.
>>>>> >> In addition, adding negative tests to Tempest means spending test
>>>>> >> time on other components' gates. For example, we are running the
>>>>> >> negative tests of Keystone and other components on the gate of
>>>>> >> Nova. That is meaningless, so we should avoid adding more
>>>>> >> negative tests to Tempest now.
>>>>> >>
>>>>> >> If we want to add negative tests, a nice option is to implement
>>>>> >> them in each component's repo with the Tempest plugin interface.
>>>>> >> That way we avoid running negative tests on unrelated component
>>>>> >> gates, and each component team can decide which negative tests
>>>>> >> are valuable on its gate.
>>>>> >>
>>>>> >> In the long term, all negative tests will be migrated into each
>>>>> >> component's repo via the Tempest plugin interface, and each gate
>>>>> >> will run only the negative tests that are valuable to it.
>>>>> >
>>>>> > So, positive tests in tempest, negative tests as a plugin.
>>>>> >
>>>>> > Is there any longer term goal to have all tests for all projects in a
>>>>> > plugin for that project? Seems odd to separate them.
>>>>>
>>>>> Yeah, from an implementation viewpoint, that seems a little odd,
>>>>> but given Tempest's main scope and the goal of avoiding unnecessary
>>>>> gate time, I feel it is acceptable.
>>>>> Negative tests are corner cases in most cases; they are not really
>>>>> integration tests.
>>>>
>>>> I think it's difficult to define a single black-and-white criterion
>>>> for negative tests, as they encompass a wide range of test types.
>>>>
>>>> I agree that things that only test the API level of a service (not
>>>> even the DB behind it) do not necessarily belong in tempest - i.e.
>>>> testing of the input validation done by an API.  We could have a
>>>> guideline that such tests be implemented as unit/functional tests in
>>>> the service's tree.
>>
>> Yes, this is the key point here. Roughly ~70% of the negative tests
>> just check the API surface level (invalid-input validation), which
>> definitely does not belong in Tempest's scope. Those should live in
>> the respective project's repo as functional, unit, or plugin tests.
>> But in that case we have to define very clear criteria for what level
>> of negative testing is in scope for Tempest.
>>
>> Another key point is that, as we already have a lot of surface-level
>> negative testing in Tempest, should we reject new ones?
>> For me it is sometimes difficult to reject those, as we already have
>> some in tempest.
>>
>> My vote here is that we reject new surface-level negative tests and
>> try to move all existing (surface-level) negative tests out of
>> Tempest ASAP.
>> Those can simply be moved to the projects' functional/unit tests.
>>
>>>
>>> Yeah, it is difficult to use "corner case or not" as the criterion
>>> for negative tests.
>>> To apply it, we (the QA team) would need to read the implementation
>>> code of the six core services deeply during Tempest reviews. That is
>>> why I rushed to propose removing all of them. According to the
>>> feedback, my first proposal is not good, but I'm happy to get
>>> feedback that shows our direction :-)
>>>
>>> The guideline is a nice idea.
>>> If more negative tests need to be added to Tempest, how about
>>> requiring the commit message to explain why the new tests are not
>>> corner cases?
>>> Then we can see the merit of the new negative tests when reviewing.
>>>
>>>> However Tempest is also about interoperability, so we should keep at
>>>> least a few negative API checks in tempest (for the six core
>>>> services) to enforce that return codes do not change inadvertently
>>>> in negative cases, which could break existing clients and
>>>> applications.
>>>
>>> This also is a nice point.
>>> How error return codes may change is unclear to me at this time.
>>> In Nova, there are some exceptions that allow changing an error
>>> return code without bumping the microversion, as in [1]. This kind
>>> of guideline will be discussed later.
>>
>> This makes Tempest's scope a little unclear again. If we want to
>> verify all error codes in Tempest, then it leads to having all the
>> surface-level negative testing in Tempest as well. There are lots of
>> scenarios where error codes can be verified, and it will be difficult
>> to cover them all in Tempest.
>>
>> The current negative tests do not cover all error codes for all APIs.
>> If we try to implement them all, it will be a huge number of tests.
>> I think it is the projects that should be verifying those.
>>
>> In summary -
>>
>> 1. If we choose to have only valid negative tests (beyond
>> surface-level negative testing) which can verify the
>> stability/integration of APIs and are used by Def-Core too, then:
>>      - We should remove all existing tests which only touch the
>> surface of APIs (invalid-input validation).
>> 2. If we want to verify error codes in Tempest as well, then:
>>      - the first point becomes invalid and we need to implement all
>> possible error code testing.
>>
>> Having a mix of those always leads to issues in reviews, development,
>> etc.
>
>  I think it doesn't depend on surface vs. deep, but on whether the
> interface is stable or not. Because Tempest is a black-box testing
> suite, Tempest basically shouldn't care about the individual project's
> implementation.
>
>  So the issue is that we actually don't know which interfaces
> (especially negative cases) are stable. I think individual projects
> (and/or Def-Core) can define that, though. In other words, when we
> have a tempest negative/positive test case, it defines a stable
> interface for its project, IMO.

Excellent point.

I think positive interfaces (success status codes, request/response
bodies, etc.) should be stable, and we (the QA team) can check their
stability within a single team.
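
For illustration, this is roughly how a Tempest service client pins a
positive interface; the client below is only a hypothetical sketch,
but expected_success() is the real tempest.lib mechanism:

    from oslo_serialization import jsonutils as json

    from tempest.lib.common import rest_client


    class ExampleServersClient(rest_client.RestClient):
        """Hypothetical client showing how a success code is pinned."""

        def list_servers(self):
            resp, body = self.get('servers')
            # The documented success code is treated as part of the
            # stable interface; any other code fails the test run.
            self.expected_success(200, resp.status)
            return rest_client.ResponseBody(resp, json.loads(body))
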
However, negative interfaces are unclear. As you said, they depend on
the individual projects.
Nova's interface rules for negative cases are still changing in Mitaka
as well. I don't think it is easy to define such rules for negative
cases across all OpenStack projects.
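
For context, the kind of surface-level negative test in question looks
roughly like this (a sketch only; the test class and helper names are
illustrative):

    from tempest.api.compute import base
    from tempest.lib import exceptions as lib_exc
    from tempest import test


    class ServersNegativeTest(base.BaseV2ComputeTest):

        @test.attr(type=['negative'])
        def test_create_server_with_invalid_flavor(self):
            # Exercises only the API's input validation; the request
            # is rejected before any other service gets involved.
            self.assertRaises(lib_exc.BadRequest,
                              self.create_test_server,
                              flavor='invalid-flavor-ref')
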
That is why I thought it is good to move negative tests into the
individual projects, so that each team can define its own negative
interfaces.
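
For reference, hosting such tests in a project's own tree via the
Tempest plugin interface needs only a small plugin class plus an entry
point; a minimal sketch (package and module names are hypothetical):

    # myservice_tempest_tests/plugin.py
    import os

    from tempest.test_discover import plugins


    class MyServiceTempestPlugin(plugins.TempestPlugin):

        def load_tests(self):
            # Point Tempest's discovery at this package's tests dir.
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "myservice_tempest_tests/tests"
            return os.path.join(base_path, test_dir), base_path

        def register_opts(self, conf):
            pass

        def get_opt_lists(self):
            return []

    # Registered in the project's setup.cfg:
    #
    #   [entry_points]
    #   tempest.test_plugins =
    #       myservice = myservice_tempest_tests.plugin:MyServiceTempestPlugin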

Thanks


