[openstack-dev] [qa][all][Heat] Packaging of functional tests

Steve Baker sbaker at redhat.com
Thu Sep 4 21:42:17 UTC 2014


On 05/09/14 04:51, Matthew Treinish wrote:
> On Thu, Sep 04, 2014 at 04:32:53PM +0100, Steven Hardy wrote:
>> On Thu, Sep 04, 2014 at 10:45:59AM -0400, Jay Pipes wrote:
>>> On 08/29/2014 05:15 PM, Zane Bitter wrote:
>>>> On 29/08/14 14:27, Jay Pipes wrote:
>>>>> On 08/26/2014 10:14 AM, Zane Bitter wrote:
>>>>>> Steve Baker has started the process of moving Heat tests out of the
>>>>>> Tempest repository and into the Heat repository, and we're looking for
>>>>>> some guidance on how they should be packaged in a consistent way.
>>>>>> Apparently there are a few projects already packaging functional tests
>>>>>> in the package <projectname>.tests.functional (alongside
>>>>>> <projectname>.tests.unit for the unit tests).
>>>>>>
>>>>>> That strikes me as odd in our context, because while the unit tests run
>>>>>> against the code in the package in which they are embedded, the
>>>>>> functional tests run against some entirely different code - whatever
>>>>>> OpenStack cloud you give it the auth URL and credentials for. So these
>>>>>> tests run from the outside, just like their ancestors in Tempest do.
>>>>>>
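To make "run from the outside" concrete, here is a minimal sketch of the
shape such a test takes - purely illustrative, assuming the usual OS_*
environment variables and python-keystoneclient / python-heatclient; the
class and test names below are made up rather than taken from any existing
suite:

    import os

    import testtools
    from heatclient import client as heat_client
    from keystoneclient.v2_0 import client as keystone_client


    class StackListTest(testtools.TestCase):
        """Illustrative outside-in test: it imports nothing from the tree
        it lives in, it just talks to whatever cloud the credentials
        point at."""

        def setUp(self):
            super(StackListTest, self).setUp()
            # Authenticate against the cloud identified by OS_AUTH_URL etc.
            keystone = keystone_client.Client(
                username=os.environ['OS_USERNAME'],
                password=os.environ['OS_PASSWORD'],
                tenant_name=os.environ['OS_TENANT_NAME'],
                auth_url=os.environ['OS_AUTH_URL'])
            endpoint = keystone.service_catalog.url_for(
                service_type='orchestration')
            self.client = heat_client.Client('1', endpoint,
                                             token=keystone.auth_token)

        def test_stack_list(self):
            # Exercises the live Heat API of the target cloud.
            self.assertIsNotNone(list(self.client.stacks.list()))
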
>>>>>> There's all kinds of potential confusion here for users and packagers.
>>>>>> None of it is fatal and all of it can be worked around, but if we
>>>>>> refrain from doing the thing that makes zero conceptual sense then there
>>>>>> will be no problem to work around :)
>>>>>>
>>>>>> I suspect from reading the previous thread about "In-tree functional
>>>>>> test vision" that we may actually be dealing with three categories of
>>>>>> test here rather than two:
>>>>>>
>>>>>> * Unit tests that run against the package they are embedded in
>>>>>> * Functional tests that run against the package they are embedded in
>>>>>> * Integration tests that run against a specified cloud
>>>>>>
>>>>>> i.e. the tests we are now trying to add to Heat might be qualitatively
>>>>>> different from the <projectname>.tests.functional suites that already
>>>>>> exist in a few projects. Perhaps someone from Neutron and/or Swift can
>>>>>> confirm?
>>>>>>
>>>>>> I'd like to propose that tests of the third type get their own top-level
>>>>>> package with a name of the form <projectname>-integrationtests (second
>>>>>> choice: <projectname>-tempest on the principle that they're essentially
>>>>>> plugins for Tempest). How would people feel about standardising that
>>>>>> across OpenStack?
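
For concreteness, under that proposal (and using the tests.unit /
tests.functional naming above) a project tree would look roughly like
this - the functional directory is hypothetical for Heat today, but
heat_integrationtests is the name we have actually used for the
forklifted tests:

    heat/tests/unit/          # unit tests, run against the code in this tree
    heat/tests/functional/    # functional tests, also against this tree's code
    heat_integrationtests/    # integration tests, run against a deployed cloud
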
>>>>> By its nature, Heat is one of the only projects that would have
>>>>> integration tests of this nature. For Nova, there are some "functional"
>>>>> tests in nova/tests/integrated/ (yeah, badly named, I know) that are
>>>>> tests of the REST API endpoints and running service daemons (the things
>>>>> that are RPC endpoints), with a bunch of stuff faked out (like RPC
>>>>> comms, image services, authentication and the hypervisor layer itself).
>>>>> So, the "integrated" tests in Nova are really not testing integration
>>>>> with other projects, but rather integration of the subsystems and
>>>>> processes inside Nova.
>>>>>
>>>>> I'd support a policy that true integration tests -- tests that test the
>>>>> interaction between multiple real OpenStack service endpoints -- be left
>>>>> entirely to Tempest. Functional tests that test interaction between
>>>>> internal daemons and processes to a project should go into
>>>>> /$project/tests/functional/.
>>>>>
>>>>> For Heat, I believe tests that rely on faked-out other OpenStack
>>>>> services but stress the interaction between internal Heat
>>>>> daemons/processes should be in /heat/tests/functional/, and any tests
>>>>> that rely on working, real OpenStack service endpoints should be in Tempest.
>>>> Well, the problem with that is that last time I checked there was
>>>> exactly one Heat scenario test in Tempest because tempest-core doesn't
>>>> have the bandwidth to merge all (any?) of the other ones folks submitted.
>>>>
>>>> So we're moving them to openstack/heat for the pure practical reason
>>>> that it's the only way to get test coverage at all, rather than concerns
>>>> about overloading the gate or theories about the best venue for
>>>> cross-project integration testing.
>>> Hmm, speaking of passive-aggressiveness...
>>>
>>> Where can I see a discussion of the Heat integration tests with Tempest QA
>>> folks? If you give me some background on what efforts have been made already
>>> and what is remaining to be reviewed/merged/worked on, then I can try to get
>>> some resources dedicated to helping here.
>> We received some fairly strong criticism from sdague[1] earlier this year,
>> at which point we were already actively working on improving test coverage
>> by writing new tests for tempest.
>>
>> Since then, several folks, myself included, committed very significant
>> amounts of additional effort to writing more tests for tempest, with some
>> success.
>>
>> Ultimately, the review latency and the overhead of constantly rebasing
>> changes between infrequent reviews have resulted in slow progress and
>> significant frustration for those attempting to contribute new test cases.
>>
>> It's been clear for a while that tempest-core has significant bandwidth
>> issues, and doesn't necessarily always have the specific domain
>> expertise to thoroughly review some tests related to project-specific
>> behavior or functionality.
> So I view this as actually a breakdown in cross-team communication, with both
> sides at fault. For example, for a couple of months we had an outstanding
> meeting topic on heat testing for which almost no one ever brought up anything
> to discuss; eventually I just dropped it because it was never used. Instead I
> should have found someone to drive it forward. Another example: the heat testing
> blueprint hasn't really seen much activity and only has 6 patches linked
> against it.
If I had been aware of the meeting topic I definitely would have taken
advantage of it.
> The QA team is also well aware of the review latency issues; we have a few
> relief valves to try and help with them, like a weekly meeting topic dedicated
> to reviews that need attention, and review dashboards that prioritize reviews
> which need extra eyes. We also use the blueprints to track and prioritize
> reviews for efforts like ramping up testing for a project. But if these
> aren't used it's hard to know that things aren't getting attention. Honestly, I
> think it's a major issue when the first I'd heard of this frustration about
> reviews on heat patches was when I happened to notice an abandoned patch that
> mentioned it.
>
> This case is actually why I'm planning on starting a QA liaison program soon, so
> there is a point of contact to push these things forward. Looking at Neutron,
> which had very little testing in Havana: what let it ramp up the number of tests
> so quickly was having someone driving that effort and attending both meetings.
> Miguel Lavalle drove things forward by keeping on top of the patches in flight
> and letting people in both QA and Neutron know when something needed extra
> attention. I think the unspoken expectation from the QA team was that something
> like this was going to happen here. Hopefully, having a person formally take on
> this role in fostering communication between teams will be helpful in avoiding
> these issues in the future.
>
A QA liaison program sounds like a great idea.

>> So it was with some relief that we saw the proposal[2] to move the burden
>> for reviewing project test-cases to the project teams, who will presumably
>> be more motivated to do the reviews, and have the knowledge of what needs
>> testing.
>>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/029661.html
>> [2] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html
>>
>>> I would greatly prefer just having a single source of integration testing in
>>> OpenStack, versus going back to the bad ol' days of everybody under the sun
>>> rewriting their own.
>>>
>>> Note that I'm not talking about functional testing here, just the
>>> integration testing...
>> You may have to define the terms functional and integration here, as IMO
>> there's already significant confusion about what the targets of e.g. API and
>> scenario tests in tempest are.
>>
>> This is further complicated by the fact that all heat functional tests also
>> exercise the integration of the various underlying services to some extent.
>>
>> My opinion is that any tests remaining in tempest should focus on API
>> correctness, e.g. to keep us honest in terms of backwards-incompatible
>> changes to the API surface.
>>
>> Then for all tests which aim to prove the functionality of the project
>> (e.g. my understanding of tempest scenario tests at the moment), we should
>> allow project teams to own them, and add to them as functionality develops
>> over time.
> This is actually the opposite of the direction things are moving in right now.
> The API tests are viewed as being mostly project-specific, and beyond causing
> friction when attempting to make a breaking API change there isn't a reason to
> put them in an integrated test suite. The scenario tests, on the other hand,
> mostly involve cross-project interactions and would be outside the scope of
> project-specific testing. Moving forward, the expectation is that tempest's API
> tests will mostly move to the projects (once we have a solution to block
> breaking API changes) and the scenario tests will grow.
>
This sounds fine in the long term, but Heat needs a comprehensive
integration suite urgently, and developing the tests as tempest scenarios
has not delivered that yet. Tempest reviewer bandwidth has only been part
of the issue; not enough heat developers have been writing scenario tests
either. This has been a bit of a chicken-and-egg problem, since we never
got to the point where there were enough existing scenario tests to -1
any new Heat feature that lacked one. Another issue is that it has taken
this long to land the devstack changes which build a custom image
containing the agents that many of our tests will require.

The existing scenario tests have been forklifted into
heat_integrationtests, and they can always be forklifted back again in
the future. I would like to propose that we go ahead with the in-tree
integration tests with a view to moving them back to tempest in the
future. We could agree on a set of preconditions for moving them back.
On the heat side the preconditions could be:
- Good coverage of testing heat resources
- An established process for insisting on new integration tests for new
features

On the tempest side:
- An established QA liaison program
- Completion of transition to tempest-lib and in-tree functional tests

>> Ultimately I don't think it really matters which repo those tests live in,
>> provided we can write them and get them running in the gate (catching
>> regressions, which otherwise keep slipping through) in a timely manner.
> So for the most part this may be true, unless you are considering cross-project
> testing and gating, which is what I think Jay's argument is here. Heat is in a
> different position in that almost all of its functionality depends on the
> other services. So if the expectation is to run these tests against a full
> OpenStack deployment, you'll essentially be duplicating the role of Tempest. But
> by being a heat-specific test suite you'll have symmetrical gating issues.
>
This is touching on the limits of the gating infrastructure. We're
already at the limit of available cloud resources to run an integrated
gate, and the tests we'd like to write will by their nature consume a
fair amount of resources. There is a human limit too; some of our best
folks are burning out from keeping on top of integrated gate issues.

There is a potential symmetrical gating issue, but in theory Heat is
just consuming stable, tested APIs. sdague has suggested we only run
check-heat-dsvm-functional against heat for now, with any asymmetric
breakages reverted or fixed as they occur and then prevented from
recurring by tests in the offending projects' own suites.
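
As a rough sketch of what that might mean in the gate configuration
(illustrative only - the exact layout syntax and job lists would need
checking against the real project config), something along these lines:

    # zuul layout (sketch, not the actual change)
    projects:
      - name: openstack/heat
        check:
          - check-heat-dsvm-functional

i.e. the job is attached only to openstack/heat, so a regression
introduced by another project shows up as a heat check failure to be
chased down, rather than blocking that project's own gate.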


