[openstack-dev] [qa][all][Heat] Packaging of functional tests

David Kranz dkranz at redhat.com
Fri Sep 5 17:12:14 UTC 2014


On 09/05/2014 12:10 PM, Matthew Treinish wrote:
> On Fri, Sep 05, 2014 at 09:42:17AM +1200, Steve Baker wrote:
>> On 05/09/14 04:51, Matthew Treinish wrote:
>>> On Thu, Sep 04, 2014 at 04:32:53PM +0100, Steven Hardy wrote:
>>>> On Thu, Sep 04, 2014 at 10:45:59AM -0400, Jay Pipes wrote:
>>>>> On 08/29/2014 05:15 PM, Zane Bitter wrote:
>>>>>> On 29/08/14 14:27, Jay Pipes wrote:
>>>>>>> On 08/26/2014 10:14 AM, Zane Bitter wrote:
>>>>>>>> Steve Baker has started the process of moving Heat tests out of the
>>>>>>>> Tempest repository and into the Heat repository, and we're looking for
>>>>>>>> some guidance on how they should be packaged in a consistent way.
>>>>>>>> Apparently there are a few projects already packaging functional tests
>>>>>>>> in the package <projectname>.tests.functional (alongside
>>>>>>>> <projectname>.tests.unit for the unit tests).
>>>>>>>>
>>>>>>>> That strikes me as odd in our context, because while the unit tests run
>>>>>>>> against the code in the package in which they are embedded, the
>>>>>>>> functional tests run against some entirely different code - whatever
>>>>>>>> OpenStack cloud you give it the auth URL and credentials for. So these
>>>>>>>> tests run from the outside, just like their ancestors in Tempest do.
>>>>>>>>
>>>>>>>> There's all kinds of potential confusion here for users and packagers.
>>>>>>>> None of it is fatal and all of it can be worked around, but if we
>>>>>>>> refrain from doing the thing that makes zero conceptual sense then there
>>>>>>>> will be no problem to work around :)
>>>>>>>>
>>>>>>>> I suspect from reading the previous thread about "In-tree functional
>>>>>>>> test vision" that we may actually be dealing with three categories of
>>>>>>>> test here rather than two:
>>>>>>>>
>>>>>>>> * Unit tests that run against the package they are embedded in
>>>>>>>> * Functional tests that run against the package they are embedded in
>>>>>>>> * Integration tests that run against a specified cloud
>>>>>>>>
>>>>>>>> i.e. the tests we are now trying to add to Heat might be qualitatively
>>>>>>>> different from the <projectname>.tests.functional suites that already
>>>>>>>> exist in a few projects. Perhaps someone from Neutron and/or Swift can
>>>>>>>> confirm?
>>>>>>>>
>>>>>>>> I'd like to propose that tests of the third type get their own top-level
>>>>>>>> package with a name of the form <projectname>-integrationtests (second
>>>>>>>> choice: <projectname>-tempest on the principle that they're essentially
>>>>>>>> plugins for Tempest). How would people feel about standardising that
>>>>>>>> across OpenStack?
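(To make the three categories above concrete, here is a rough sketch of the
layout being proposed, using Heat as the example; heat_integrationtests is the
name Steve mentions further down, the rest is only illustrative:

    heat/
        tests/
            unit/              # unit tests; run against the code in this tree
            functional/        # functional tests; also run against this tree
    heat_integrationtests/     # separate top-level package; runs from the
                               # outside against whatever cloud you give it an
                               # auth URL and credentials for, as Tempest does

)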
>>>>>>> By its nature, Heat is one of the only projects that would have
>>>>>>> integration tests of this nature. For Nova, there are some "functional"
>>>>>>> tests in nova/tests/integrated/ (yeah, badly named, I know) that are
>>>>>>> tests of the REST API endpoints and running service daemons (the things
>>>>>>> that are RPC endpoints), with a bunch of stuff faked out (like RPC
>>>>>>> comms, image services, authentication and the hypervisor layer itself).
>>>>>>> So, the "integrated" tests in Nova are really not testing integration
>>>>>>> with other projects, but rather integration of the subsystems and
>>>>>>> processes inside Nova.
>>>>>>>
>>>>>>> I'd support a policy that true integration tests -- tests that test the
>>>>>>> interaction between multiple real OpenStack service endpoints -- be left
>>>>>>> entirely to Tempest. Functional tests that test interaction between
>>>>>>> internal daemons and processes to a project should go into
>>>>>>> /$project/tests/functional/.
>>>>>>>
>>>>>>> For Heat, I believe tests that rely on faked-out other OpenStack
>>>>>>> services but stress the interaction between internal Heat
>>>>>>> daemons/processes should be in /heat/tests/functional/, and any tests that
>>>>>>> rely on working, real OpenStack service endpoints should be in Tempest.
>>>>>> Well, the problem with that is that last time I checked there was
>>>>>> exactly one Heat scenario test in Tempest because tempest-core doesn't
>>>>>> have the bandwidth to merge all (any?) of the other ones folks submitted.
>>>>>>
>>>>>> So we're moving them to openstack/heat for the pure practical reason
>>>>>> that it's the only way to get test coverage at all, rather than concerns
>>>>>> about overloading the gate or theories about the best venue for
>>>>>> cross-project integration testing.
>>>>> Hmm, speaking of passive-aggressiveness...
>>>>>
>>>>> Where can I see a discussion of the Heat integration tests with Tempest QA
>>>>> folks? If you give me some background on what efforts have been made already
>>>>> and what is remaining to be reviewed/merged/worked on, then I can try to get
>>>>> some resources dedicated to helping here.
>>>> We received some fairly strong criticism from sdague[1] earlier this year,
>>>> at which point we were already actively working on improving test coverage
>>>> by writing new tests for tempest.
>>>>
>>>> Since then, several folks, myself included, committed very significant
>>>> amounts of additional effort to writing more tests for tempest, with some
>>>> success.
>>>>
>>>> Ultimately the review latency, and the overhead involved in constantly
>>>> rebasing changes between infrequent reviews, have resulted in slow progress
>>>> and significant frustration for those attempting to contribute new test cases.
>>>>
>>>> It's been clear for a while that tempest-core has significant bandwidth
>>>> issues, and doesn't necessarily always have the specific domain
>>>> expertise to thoroughly review some tests related to project-specific
>>>> behavior or functionality.
>>> So I view this as actually a breakdown in cross-team communication, with both
>>> sides at fault. For example, for a couple of months we had a standing meeting
>>> topic on heat testing for which almost no one ever brought up anything to
>>> discuss; eventually I just dropped it because it was never used. Instead I
>>> should have found someone to drive it forward. Another example is that the
>>> heat testing blueprint hasn't really seen much activity and only has 6 patches
>>> linked against it.
>> If I had been aware of the meeting topic I definitely would have taken
>> advantage of it.
>>> The QA team is also well aware of review latency issues; we have a few relief
>>> valves to try to help with it, like a weekly meeting topic dedicated to
>>> reviews that need attention, and review dashboards that prioritize reviews
>>> which need extra eyes. We also use blueprints to track and prioritize
>>> reviews for efforts like ramping up testing for a project. But if these
>>> aren't used it's hard to know that things aren't getting attention. Honestly, I
>>> think it's a major issue that the first I heard of this frustration about
>>> reviews on heat patches was when I happened to notice an abandoned patch that
>>> mentioned it.
>>>
>>> This case is actually why I'm planning on starting a QA liaison program soon,
>>> so there is a point of contact to push these things forward. Looking at
>>> neutron, which had very little testing in havana and ramped up the number of
>>> tests very quickly, what made the difference was having someone driving that
>>> effort and attending both meetings. Miguel Lavalle drove things forward by
>>> keeping on top of the patches in flight and letting people in both QA and
>>> Neutron know when something needed extra attention. I think the unspoken
>>> expectation from the QA team was that something like this was going to happen
>>> here. Hopefully, having a person formally take on this role in fostering
>>> communication between teams will help avoid these issues in the future.
>>>
>> A QA liaison program sounds like a great idea.
>>
>>>> So it was with some relief that we saw the proposal[2] to move the burden
>>>> for reviewing project test-cases to the project teams, who will presumably
>>>> be more motivated to do the reviews, and have the knowledge of what needs
>>>> testing.
>>>>
>>>> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/029661.html
>>>> [2] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html
>>>>
>>>>> I would greatly prefer just having a single source of integration testing in
>>>>> OpenStack, versus going back to the bad ol' days of everybody under the sun
>>>>> rewriting their own.
>>>>>
>>>>> Note that I'm not talking about functional testing here, just the
>>>>> integration testing...
>>>> You may have to define the terms functional and integration here, as IMO
>>>> there's already significant confusion about what the targets of e.g. the API
>>>> and scenario tests in tempest are.
>>>>
>>>> This is also further complicated by the fact that all heat functional tests
>>>> also test integration of the various underlying services to some extent.
>>>>
>>>> My opinion is that any tests remaining in tempest should focus on API
>>>> correctness, e.g. to keep us honest in terms of backwards-incompatible
>>>> changes to the API surface.
>>>>
>>>> Then for all tests which aim to prove the functionality of the project (e.g.
>>>> my understanding of tempest scenario tests atm), we should allow project
>>>> teams to own them, and add to them as functionality develops over time.
>>> This is actually the opposite of the direction things are pushing in right
>>> now. The API tests are viewed as being mostly project specific, and aside
>>> from causing friction when attempting to make a breaking API change there
>>> isn't a reason to put them in an integrated test suite. The scenario tests,
>>> on the other hand, mostly involve cross-project interactions and would be
>>> outside the scope of project-specific testing. Moving forward, the
>>> expectation is that tempest's API tests will mostly move to the projects
>>> (once we have a solution to block breaking API changes) and the scenario
>>> tests will grow.
>>>
>> This sounds fine in the long term, but Heat needs a comprehensive
>> integration suite urgently, and developing it as tempest scenarios has
>> not delivered that yet. Tempest reviewer bandwidth has only been part of
>> the issue; not enough heat developers have been writing scenario tests
>> either. This has been a bit of a chicken-and-egg problem, since we never
>> got to the point where there were enough existing scenario tests to -1
>> any new Heat feature that lacked one. Another issue is that it has taken
>> this long to land the devstack changes which build a custom image
>> containing the agents that many of our tests will require.
>>
>> The existing scenario tests have been forklifted into
>> heat_integrationtests, and they can always be forklifted back again
>> later. I would like to propose that we go ahead with the in-tree
>> integration tests, with a view to moving them back to tempest in the
>> future. We could agree on a set of preconditions for moving them back.
>>
>> On the heat side the preconditions could be:
>> - Good coverage of testing heat resources
>> - An established process for insisting on new integration tests for new
>> features
>>
>> On the tempest side:
>> - An established QA liaison program
>> - Completion of transition to tempest-lib and in-tree functional tests
> So this is actually very similar to something we discussed at summit. [1]
> I don't have an issue with the model of developing tests in the heat tree so
> that testing is more tightly coupled with development; it has several
> advantages. Then we can graduate tests into tempest when and where it would
> make sense to run them against everyone, and move heat tests from nova into
> heat. However, I don't view any of this as a good reason to remove existing
> tests from tempest now. Maybe as part of the tempest cleanup that'll
> eventually happen we'll find that some of the existing tests don't need to be
> in tempest. But right now there isn't really any evidence supporting that,
> and considering how limited heat test coverage is, removing them would just
> seem like a premature action.
>
> I think what you've outlined as preconditions for migration makes sense for
> the most part. But I think it should apply to migration in either direction,
> not just heat -> tempest, because when we're talking about test migrations
> we're really talking about trying to optimize our test load so that we're
> only running things where and when they need to be run.
>
Yes. An important thing in this regard is to make sure that tempest
tests and in-project functional tests both use the same rest client
interface. After moving all the response checking into the client,
tempest tests still use rest calls that return a response and body,
where the response is mostly ignored since success/failure has already
been checked. A good alternative was suggested here
http://lists.openstack.org/pipermail/openstack-dev/2014-August/044492.html
to just return a single body object from which the response can be
extracted if needed.
It would be good to make this change in a service client before moving 
the client to tempest-lib and starting to create in-project functional 
tests that use it.
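As a rough sketch of what that single-body-object approach could look like
(the names below are illustrative, not an existing tempest-lib API), the
client would return one object that behaves like the parsed body but still
carries the response:

    class ResponseBody(dict):
        """Dict-like body that also carries the HTTP response it came from."""
        def __init__(self, response, body=None):
            super(ResponseBody, self).__init__(body or {})
            self.response = response

    # Instead of:
    #     resp, body = client.show_stack(stack_id)
    # callers would write:
    #     body = client.show_stack(stack_id)
    #     status = body['stack']['stack_status']
    #     # ...and reach for body.response only when they actually need it.

Doing it that way would keep the calling convention identical for tempest and
for in-project functional tests built on the same clients.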

  -David



