[openstack-dev] [all][tc] Clarifying testing recommendation for interop programs
Erno Kuvaja
ekuvaja at redhat.com
Mon Jan 15 12:59:44 UTC 2018
On Thu, Jan 11, 2018 at 4:36 PM, Colleen Murphy <colleen at gazlene.net> wrote:
> Hi everyone,
>
> We have a governance review under debate[1] that we need the community's help on.
> The debate is over what recommendation the TC should make to the Interop team
> on where the tests it uses for the OpenStack trademark program should be
> located, specifically those for the new add-on program being introduced. Let me
> badly summarize:
>
> A couple of years ago we issued a resolution[2] officially recommending that
> the Interop team use solely tempest as its source of tests for capability
> verification. The Interop team has always had the view that the developers,
> being the people closest to the project they're creating, are the best people
> to write tests verifying correct functionality, and so the Interop team doesn't
> maintain its own test suite, instead selecting tests from those written in
> coordination between the QA team and the other project teams. These tests are
> used to validate clouds applying for the OpenStack Powered tag, and since all
> of the projects included in the OpenStack Powered program already had tests in
> tempest, this was a natural fit. When we consider adding new trademark programs
> comprising other projects, the test source is less obvious. Two examples are
> designate, which has never had tests in the tempest repo, and heat, which
> recently had its tests removed from the tempest repo.
>
> So far the patch proposes three options:
>
> 1) All trademark-related tests should go in the tempest repo, in accordance
> with the original resolution. This would mean that even projects that have
> never had tests in tempest would now have to add at least some of their
> black-box tests to tempest.
>
> The value of this option is that it centralizes tests used for the Interop program
> in a location where interop-minded folks from the QA team can control them. The
> downside is that projects that so far have avoided having a dependency on
> tempest will now lose some control over the black-box tests that they use for
> functional and integration testing, tests that would now also be used for trademark
> certification.
> There's also concern for the review bandwidth of the QA team - we can't expect
> the QA team to be continually responsible for an ever-growing list of projects
> and their trademark tests.
>
> 2) All trademark-related tests for *add-on projects* should be sourced from
> plugins external to tempest.
>
> The value of this option is it allows project teams to retain control over
> these tests. The potential problem with it is that individual project teams are
> not necessarily reviewing test changes with an eye for interop concerns and so
> could inadvertently change the behavior of the trademark-verification tools.
>
> 3) All trademark-related tests should go in a single separate tempest plugin.
>
> This has the value of giving the QA and Interop teams control over
> interop-related tests while also making clear the distinction between tests
> used for trademark verification and tests used for CI. Matt's argument against
> this is that there actually is very little distinction between those two cases,
> and that a given test could have many different applications.
>
> Other ideas that have been thrown around are:
>
> * Maintaining a branch in the tempest repo that Interop tests are pulled from.
>
> * Tagging Interop-related tests with decorators to make it clear that they need
> to be handled carefully (a rough sketch of what that could look like is below).
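>
> For example, something along these lines (purely illustrative; the decorators
> already exist in tempest.lib, but the 'interop' attr value and the UUID are
> just placeholders):
>
>     from tempest.api.compute import base
>     from tempest.lib import decorators
>
>
>     class ServersInteropTest(base.BaseV2ComputeTest):
>
>         # The idempotent_id is what the interop guidelines already key on;
>         # an extra attr tag would flag the test to reviewers and tooling as
>         # one that needs careful handling.
>         @decorators.attr(type=['interop'])
>         @decorators.idempotent_id('11111111-2222-3333-4444-555555555555')
>         def test_list_servers(self):
>             self.servers_client.list_servers()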
>
> At the heart of the issue is the perception that projects that keep their
> integration tests within the tempest tree are somehow blessed, maybe by the QA
> team or by the TC. It would be nice to try to clarify what technical
> and political
> reasons we have for why different projects have tests in different places -
> review bandwidth of the QA team, ownership/control by the project teams,
> technical interdependency between certain projects, or otherwise.
>
As someone who has already been in the middle of all that once, I'd like to
bring up a somewhat more fundamental problem with this topic. I'm not able to
offer a one-size-fits-all solution, but hopefully I can provide some insight
that will help the community make the right decision.

I think the biggest problem is deciding whose fox is left to guard the chicken
coop. By that I mean that the basic problem of our testing still comes down to
what is tested, based on which assumptions, and by whom. If the tests are
provided by the project teams, a test is more likely to cover the intended use
case of the feature as it's implemented, but if a bug is later found there, the
likelihood that the test gets altered is quite high; the individual projects
also might not have the best idea of what matters for interoperability and
trademark purposes. When a test is written against the intended behavior this
is less likely, but even then changes can sneak in and affect interoperability.
On the other hand, if the test is written by QA/interoperability people, is it
actually testing the right thing, and is there a more fundamental pressure to
break it later because, instead of catching and reporting a bug when the test
is written, we start enforcing it? Are the tests written based on the intended
behavior, the documented behavior, or the current actual behavior? And the
biggest question of them all: who is going to have the bandwidth to understand
the depth of the projects and the ties between them well enough to ensure we
minimize all of the above?

In a perfect world all features are bug-free, rational to use, and well
documented, so that anyone can easily write a test that can be run against any
version to verify that we do not have regressions. We just are not living in
that perfect world, and each of the options carries a risk of causing
conflicts.

I think the optimal solution, if we were introducing this as a fresh new
concept, would be to use tempest as the engine to run trademark test plugins
from their own repo. Those plugins would be produced in collaboration: the
trademark group defining which functionalities are tested, QA ensuring that
the tests actually verify what they are supposed to test, and the project
teams ensuring that the tested feature is a) behaving and b) tested as it's
intended to work, with the documentation aligned with that, so that faults on
any of the three sides could be rectified before we start enforcing.
Unfortunately I do not see us as a community having the resources to do this
"the right way", and I have a really hard time trying to decide which of the
proposed options would be least bad.
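
To make that concrete, such a trademark plugin would hook into tempest through
the normal plugin interface, roughly like the sketch below (the repo, module
and class names are purely illustrative, not an existing project):

    # interop_tests/plugin.py in a hypothetical "interop-tests" repo,
    # registered in that repo's setup.cfg under the standard
    # "tempest.test_plugins" entry point namespace, e.g.
    #     interop_tests = interop_tests.plugin:InteropTempestPlugin
    import os

    from tempest.test_discover import plugins


    class InteropTempestPlugin(plugins.TempestPlugin):
        """Expose the trademark tests to tempest's test discovery."""

        def load_tests(self):
            # Point tempest at the directory holding the trademark tests.
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "interop_tests/tests"
            full_test_dir = os.path.join(base_path, test_dir)
            return full_test_dir, base_path

        def register_opts(self, conf):
            # This sketch does not add any configuration options.
            pass

        def get_opt_lists(self):
            return []

Tempest would then discover and run these tests like any other plugin, while
the repo itself could sit under whatever shared ownership the trademark group,
QA and the project teams agree on.
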
I think the worst-case scenario is that we scrape together whatever we can
just to have something to say that we test it, with neither consistency nor
clear responsibility for who, what and how. (Unfortunately I think this is the
current situation, so I'm super happy to hear that this is being discussed and
that the decision is not being made lightly.)

Best,
Erno -jokke- Kuvaja
> Ultimately, as Jeremy said in the comments on the resolution patch, the
> recommendation should be one that works best for the QA and Interop teams. So
> far we've heard from Matt and Mark expressing moderate support for option 2.
> We'd like to hear more from those teams about how they see this working,
> especially with regard to concerns about the quality and stability standards
> that out-of-tree tests may be held to. We additionally need input from the
> whole community on how maintaining trademark-related tests in tempest will
> affect you if you don't already have your tests there. We'd especially like to
> address any perceptions of favoritism or exclusionism that stem from these
> issues.
>
> And to quickly clear up one detail before it makes it onto this thread: the
> Queens Community Goal about splitting tempest plugins out of the main project's
> tree[3] is entirely about addressing technical problems related to packaging for
> existing tempest plugins. It's not a decree about what should live within the
> tempest repository, nor does it have anything to do with the Interop program.
>
> As I'm not deeply steeped in the history of either the Interop or QA teams, I am
> sure I've misrepresented some details here; I'm sorry about that. But we'd like
> to get this resolution moving forward and we're currently stuck, so this thread
> is intended to gather enough community input to get unstuck and avoid letting
> this proposal become stale. Please respond to this thread or comment on the
> resolution proposal[1] if you have any thoughts.
>
> Colleen
>
> [1] https://review.openstack.org/#/c/521602
> [2] https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
> [3] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
>