[openstack-dev] [qa][tc][all] Tempest to reject trademark tests
doug at doughellmann.com
Fri Jun 2 14:03:31 UTC 2017
Excerpts from Matthew Treinish's message of 2017-06-01 20:51:24 -0400:
> On Thu, Jun 01, 2017 at 11:57:00AM -0400, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2017-06-01 11:51:50 +0200:
> > > Graham Hayes wrote:
> > > > On 01/06/17 01:30, Matthew Treinish wrote:
> > > >> TBH, it's a bit premature to have the discussion. These additional programs do
> > > >> not exist yet, and there is a governance road block around this. Right now the
> > > >> set of projects that can be used defcore/interopWG is limited to the set of
> > > >> projects in:
> > > >>
> > > >> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
> > > >
> > > > Sure - but that is a solved problem: when the interop committee is
> > > > ready to propose them, they can add projects to that tag. Or am I
> > > > misunderstanding (again)?
> > >
> > > I think you understand it well. The Board/InteropWG should propose
> > > additions/removals of this tag, which will then be approved by the TC:
> > >
> > > https://governance.openstack.org/tc/reference/tags/tc_approved-release.html#tag-application-process
> > >
> > > > [...]
> > > >> We had a forum session on it (I can't find the etherpad for the session) which
> > > >> was pretty speculative because it was about planning the new programs. Part of
> > > >> that discussion was around the feasibility of using tests in plugins and whether
> > > >> that would be desirable. Personally, I was in favor of doing that for some
> > > >> of the proposed programs because, given the way they were organized, it was
> > > >> a good fit: they were extra additions on top of the existing base interop
> > > >> program. But it was hardly a definitive discussion.
> > > >
> > > > That will create two classes of testing for interop programs.
> > >
> > > FWIW I would rather have a single way of doing "tests used in trademark
> > > programs" without differentiating between old and new trademark programs.
> > >
> > > I fear that we are discussing solutions before defining the problem. We
> > > want:
> > >
> > > 1- Decentralize test maintenance, through more tempest plugins, to
> > > account for limited QA resources
> > > 2- Additional code-review constraints and approval rules for tests that
> > > happen to be used in trademark programs
> > > 3- Discoverability/ease-of-install of the set of tests that happen to be
> > > used in trademark programs
> > > 4- A git repo layout that can be simply explained, for new teams to
> > > understand
> > >
> > > It feels like the current git repo layout (result of that 2016-05-04
> > > resolution) optimizes for 2 and 3, which kind of works until you add
> > > more trademark programs, at which point it breaks 1 and 4.
> > >
> > > I feel like you could get 2 and 3 without necessarily using git repo
> > > boundaries (using Gerrit approval rules and some tooling to install/run
> > > subset of tests across multiple git repos), which would allow you to
> > > optimize git repo layout to get 1 and 4...
> > >
> > > Or am I missing something?
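Thierry's "tooling to install/run a subset of tests" idea does not require repo boundaries: a trademark program can publish a list of include patterns and select only the matching tests, no matter which plugin repo they live in (this mirrors how tempest's run-time regex/whitelist filters behave). A minimal sketch of that selection logic, with entirely hypothetical test IDs and patterns:

```python
import re

# Hypothetical example: test IDs as discovered from tempest plus two plugins.
discovered = [
    "tempest.api.compute.test_servers.ServersTest.test_create_server",
    "tempest.api.identity.test_tokens.TokensTest.test_create_token",
    "designate_tempest_plugin.tests.api.test_zones.ZonesTest.test_create_zone",
    "heat_tempest_plugin.tests.functional.test_stacks.StacksTest.test_stack",
]

# A trademark program would publish include patterns like these
# (analogous to a whitelist file of regexes fed to the test runner).
interop_patterns = [
    r"^tempest\.api\.compute\.",
    r"^designate_tempest_plugin\.tests\.api\.",
]


def select_tests(test_ids, patterns):
    """Return the subset of test IDs matched by any include pattern."""
    compiled = [re.compile(p) for p in patterns]
    return [t for t in test_ids if any(c.search(t) for c in compiled)]


for test_id in select_tests(discovered, interop_patterns):
    print(test_id)
```

The point of the sketch is that the selection is orthogonal to git repo layout: the same pattern list works whether the tests sit in one repo or ten.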
> > >
> > Right. The point of having the trademark tests "in tempest" was not
> > to have them "in the tempest repo"; that was just an implementation
> > detail of the policy of "put them in a repository managed by people
> > who understand the expanded review rules".
> There was more to it than this; a big part was avoiding duplication of effort
> as well.
> Tempest itself is almost a perfect fit for the scope of the testing defcore is
> doing. While tempest does additional testing that defcore doesn't use, a large
> subset is exactly what they want.
That does explain why Tempest was appealing to the DefCore folks.
I was trying to explain my motivation for writing the resolution,
which said that we did not want DefCore using tests scattered across
a bunch of plugin repositories managed by different reviewer teams.
> > There were a lot of unexpected issues when we started treating the
> > test suite as a production tool for validating a cloud. We have
> > to be careful about how we change the behavior of tests, for example,
> > even if the API responses are expected to be the same. It's not
> > fair to vendors or operators who get trademark approval with one
> > release to have significant changes in behavior in the exact same
> > tests for the next release.
> I actually find this to be kinda misleading. Tempest has always had
> running on any cloud as part of its mission. I think you're referring
> to the monster defcore thread from last summer about proprietary nova
> extensions adding on to API responses. That is honestly a completely
> separate problem, and not one I want to dive into again; it was much
> more nuanced and involved much more than just code review.
That may have been the situation I'm thinking of, and I agree,
there's not a lot of point in rehashing that argument. I was trying
to refer to a specific example of how reviewing the tests used for
the trademark programs requires extra thought because the tests
themselves have to be backwards compatible, not just the system
they are testing. That is not obvious on its face, and someone used
to reviewing tests under "regular" criteria might miss a change
that breaks this requirement.
> > At the early stage, when the DefCore team was still figuring out
> > these issues, it made sense to put all of the tests in one place
> > with a review team that was actively participating in establishing
> > the process. If we better understand the "rules" for these tests
> > now, we can document them and distribute the work of maintaining the
> > test suites.
> I think you're overestimating how much work is actually being done
> bidirectionally here. The interaction with defcore is more straight
> consumption than you might think. They tend to just pick and choose from
> what tempest has and don't actually identify gaps or contribute back into
> tempest much.
Well, the hope was there. :-/
> This actually is the crux of my entire concern: assuming we widely
> expand the number of trademark programs, the expectation is that a bunch
> of people will show up, write tests, and maintain them. However, all
> past evidence shows that this just doesn't ever happen. I linked to that
> graph from
I agree. That's why I said it was fine if we move these things into
plugins. That move comes with other trade-offs, though, because we
have to teach an even larger reviewer base about the expanded review
criteria and when they apply.
> > And yes, I agree with the argument that we should be fair and treat
> > all projects the same way. If we're going to move tests out of the
> > tempest repository, we should move all of them. The QA team can
> > still help maintain the test suites for whatever projects they want,
> > even if those tests are in plugins.
> Again, where has this been proposed? I've yet to see removing tests
> from tempest proposed anywhere. Also, moving
Chris Dent proposed it in another email in this thread, to address
the apparent unfairness of legacy projects having all of their tests
in tempest. I interpreted the suggestion as an extension of the
proposed goal to have projects move their tests out of project repos.
> tests from tempest to some other place doesn't magically solve any of the issues.
> The fundamental problem I'm concerned with is a large expansion in the number
> of trademark programs overloading a small team. Just saying the QA team can
> still help maintain them doesn't change the scaling problem. It just shifts
> that from the tempest repo to another repo. (or multiples)
The docs team is currently going through the process of disaggregating
the documentation and placing it back in the project team repositories.
The motivation for that change is similar to what I hear you saying
about QA: There are not enough contributors on the small centralized
team to maintain a monolithic, and growing, product. The docs team
will continue to write and otherwise contribute to documentation
in its new home, but "ownership" of the documentation will shift
to the project teams.
This approach is an evolution of the liaison system, with the roles
reversed. It's something the QA team should think about, not because
anyone is doing a bad job or because of any one event triggering a
desire for change, but because we need to consider options to allow
us to continue to have a healthy test suite and toolset even without
significant growth for the QA team.