<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 09/05/2014 12:10 PM, Matthew
Treinish wrote:<br>
</div>
<blockquote cite="mid:20140905161004.GA1329@Sazabi.treinish"
type="cite">
<pre wrap="">On Fri, Sep 05, 2014 at 09:42:17AM +1200, Steve Baker wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On 05/09/14 04:51, Matthew Treinish wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On Thu, Sep 04, 2014 at 04:32:53PM +0100, Steven Hardy wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On Thu, Sep 04, 2014 at 10:45:59AM -0400, Jay Pipes wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On 08/29/2014 05:15 PM, Zane Bitter wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On 29/08/14 14:27, Jay Pipes wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On 08/26/2014 10:14 AM, Zane Bitter wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Steve Baker has started the process of moving Heat tests out of the
Tempest repository and into the Heat repository, and we're looking for
some guidance on how they should be packaged in a consistent way.
Apparently there are a few projects already packaging functional tests
in the package &lt;projectname&gt;.tests.functional (alongside
&lt;projectname&gt;.tests.unit for the unit tests).

That strikes me as odd in our context, because while the unit tests run
against the code in the package in which they are embedded, the
functional tests run against some entirely different code - whatever
OpenStack cloud you give it the auth URL and credentials for. So these
tests run from the outside, just like their ancestors in Tempest do.
There's all kinds of potential confusion here for users and packagers.
None of it is fatal and all of it can be worked around, but if we
refrain from doing the thing that makes zero conceptual sense, then there
will be no problem to work around :)

I suspect from reading the previous thread about "In-tree functional
test vision" that we may actually be dealing with three categories of
test here rather than two:
* Unit tests that run against the package they are embedded in
* Functional tests that run against the package they are embedded in
* Integration tests that run against a specified cloud

i.e. the tests we are now trying to add to Heat might be qualitatively
different from the &lt;projectname&gt;.tests.functional suites that already
exist in a few projects. Perhaps someone from Neutron and/or Swift can
confirm?

I'd like to propose that tests of the third type get their own top-level
package with a name of the form &lt;projectname&gt;-integrationtests (second
choice: &lt;projectname&gt;-tempest on the principle that they're essentially
plugins for Tempest). How would people feel about standardising that
across OpenStack?
</pre>
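<pre wrap="">To make the "runs from the outside" property concrete, here is a minimal
sketch of what a test in such a heat_integrationtests-style package might
look like. All names and environment variables here are illustrative, not
an existing API; the point is that the suite is handed an auth URL plus
credentials and talks HTTP to a running cloud, never importing the code
under test:

    import json
    import os
    import unittest

    import requests


    class AuthSmokeTest(unittest.TestCase):
        """Exercises whatever cloud the environment points at."""

        def test_password_auth(self):
            # Keystone v2.0 password auth -- the test drives real
            # service endpoints, just like its ancestors in Tempest.
            auth_url = os.environ['OS_AUTH_URL'].rstrip('/')
            body = {'auth': {
                'passwordCredentials': {
                    'username': os.environ['OS_USERNAME'],
                    'password': os.environ['OS_PASSWORD']},
                'tenantName': os.environ['OS_TENANT_NAME']}}
            resp = requests.post(
                auth_url + '/tokens', data=json.dumps(body),
                headers={'Content-Type': 'application/json'})
            self.assertEqual(200, resp.status_code)
            self.assertIn('token', resp.json()['access'])
</pre>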
</blockquote>
<pre wrap="">By its nature, Heat is one of the only projects that would have
integration tests of this nature. For Nova, there are some "functional"
tests in nova/tests/integrated/ (yeah, badly named, I know) that are
tests of the REST API endpoints and running service daemons (the things
that are RPC endpoints), with a bunch of stuff faked out (like RPC
comms, image services, authentication and the hypervisor layer itself).
So, the "integrated" tests in Nova are really not testing integration
with other projects, but rather integration of the subsystems and
processes inside Nova.

I'd support a policy that true integration tests -- tests that test the
interaction between multiple real OpenStack service endpoints -- be left
entirely to Tempest. Functional tests that test interaction between
internal daemons and processes to a project should go into
/$project/tests/functional/.

For Heat, I believe tests that rely on faked-out other OpenStack
services but stress the interaction between internal Heat
daemons/processes should be in /heat/tests/functional/, and any tests that
rely on working, real OpenStack service endpoints should be in Tempest.
</pre>
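<pre wrap="">As a sketch of what faked-out means here (all names below are invented for
illustration, not taken from nova): the test constructs the project's own
API layer directly and substitutes fakes at the cross-service edges, so no
real image service or message bus is involved:

    import unittest
    from unittest import mock


    class FakeImageAPI(object):
        """Stands in for the image-service client; no HTTP happens."""
        def get(self, context, image_id):
            return {'id': image_id, 'status': 'active'}


    class ComputeAPI(object):
        """Toy stand-in for the project-internal layer under test."""
        def __init__(self, image_api, rpc):
            self.image_api = image_api
            self.rpc = rpc

        def create_server(self, context, image_id):
            image = self.image_api.get(context, image_id)
            self.rpc.cast(context, 'run_instance', image=image)
            return {'image': image['id'], 'status': 'BUILD'}


    class ServerCreateFunctionalTest(unittest.TestCase):
        def test_create_uses_internal_plumbing(self):
            rpc = mock.Mock()   # RPC comms faked out
            api = ComputeAPI(FakeImageAPI(), rpc)
            server = api.create_server('ctxt', 'cirros')
            rpc.cast.assert_called_once_with(
                'ctxt', 'run_instance',
                image={'id': 'cirros', 'status': 'active'})
            self.assertEqual('BUILD', server['status'])
</pre>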
</blockquote>
<pre wrap="">Well, the problem with that is that last time I checked there was
exactly one Heat scenario test in Tempest because tempest-core doesn't
have the bandwidth to merge all (any?) of the other ones folks submitted.
So we're moving them to openstack/heat for the pure practical reason
that it's the only way to get test coverage at all, rather than concerns
about overloading the gate or theories about the best venue for
cross-project integration testing.
</pre>
</blockquote>
<pre wrap="">Hmm, speaking of passive aggressivity...
Where can I see a discussion of the Heat integration tests with Tempest QA
folks? If you give me some background on what efforts have been made already
and what is remaining to be reviewed/merged/worked on, then I can try to get
some resources dedicated to helping here.
</pre>
</blockquote>
<pre wrap="">We recieved some fairly strong criticism from sdague[1] earlier this year,
at which point we were already actively working on improving test coverage
by writing new tests for tempest.

Since then, several folks, myself included, committed very significant
amounts of additional effort to writing more tests for tempest, with some
success.

Ultimately, the review latency and the overhead of constantly rebasing
changes between infrequent reviews have resulted in slow progress and
significant frustration for those attempting to contribute new test cases.

It's been clear for a while that tempest-core has significant bandwidth
issues, and doesn't necessarily always have the specific domain expertise
to thoroughly review tests related to project-specific behavior or
functionality.
</pre>
</blockquote>
<pre wrap="">So I view this as actually a breakdown in cross-team communication, with both
sides at fault. For example, for a couple of months we had an outstanding
meeting topic on heat testing which almost always no one brought up anything to
discuss, eventually I just dropped it because it was never used. Instead I
should have found someone to drive it forward. Or that the heat testing blueprint
hasn't really seen much activity and only has 6 patches linked against it.
</pre>
</blockquote>
<pre wrap="">If I had been aware of the meeting topic I definitely would have taken
advantage of it.
</pre>
<blockquote type="cite">
<pre wrap="">The QA team is also well aware of review latency issues, we have a few relief
valves to try and help with it, like a meeting topic every week dedicated to
reviews that need attention, and using review dashboards that prioritize reviews
which need extra eyes. We also use the blueprints to track and prioritize
reviews for efforts like bringing ramping up testing for a project. But if these
aren't used it's hard to know that things aren't getting attention. Honestly, I
think it's a major issue when the first I'd heard of this frustration about
reviews on heat patches was when I happened to notice an abandoned patch that
mentioned it.
This case is actually why I'm planning to start a QA liaison program soon, so
there is a point of contact to push these things forward. What let neutron,
which had very little testing in Havana, ramp up its number of tests so
quickly was having someone drive that effort and attend both meetings.
Miguel Lavalle drove things forward by keeping on top of the patches in flight
and letting people in both QA and Neutron know when something needed extra
attention. I think the unspoken expectation from the QA team was that something
like this was going to happen here. Hopefully, having a person formally take on
this role of fostering communication between teams will help us avoid
these issues in the future.
</pre>
</blockquote>
<pre wrap="">A QA liaison program sounds like a great idea.
</pre>
<blockquote type="cite">
<blockquote type="cite">
<pre wrap="">So it was with some relief that we saw the proposal[2] to move the burden
for reviewing project test-cases to the project teams, who will presumably
be more motivated to do the reviews, and have the knowledge of what needs
testing.
[1] <a class="moz-txt-link-freetext" href="http://lists.openstack.org/pipermail/openstack-dev/2014-March/029661.html">http://lists.openstack.org/pipermail/openstack-dev/2014-March/029661.html</a>
[2] <a class="moz-txt-link-freetext" href="http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html">http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html</a>
</pre>
<blockquote type="cite">
<pre wrap="">I would greatly prefer just having a single source of integration testing in
OpenStack, versus going back to the bad ol' days of everybody under the sun
rewriting their own.

Note that I'm not talking about functional testing here, just the
integration testing...
</pre>
</blockquote>
<pre wrap="">You may have to define the terms functional and integration here, as IMO
there's already significant confusion about what the target of e.g API and
scenario tests in tempest are.
This is also further complicated by the fact that all heat functional tests
also test integration of the various underlying services to some extent.
My opinion is that any tests remaining in tempest should focus on API
correctness, e.g to keep us honest in terms of backwards incomaptible
changes to the API surface.
Then for all tests which aim to prove the functionality of the project, e.g
my understanding of tempest scenario tests atm, we should allow project
teams to own them, and add to them as functionality develops over time.
</pre>
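<pre wrap="">For example, an API-correctness test might pin nothing more than status
codes and response shape, keeping us honest about the API surface without
exercising any orchestration behaviour (the endpoint/field names and
environment variables below are illustrative, not an existing suite):

    import os
    import unittest

    import requests

    # The response keys we promise not to remove or rename.
    STACK_KEYS = {'id', 'stack_name', 'stack_status', 'creation_time'}


    class StacksAPIContractTest(unittest.TestCase):
        def test_list_stacks_shape(self):
            resp = requests.get(
                os.environ['HEAT_ENDPOINT'] + '/stacks',
                headers={'X-Auth-Token': os.environ['OS_TOKEN']})
            self.assertEqual(200, resp.status_code)
            for stack in resp.json()['stacks']:
                self.assertTrue(STACK_KEYS.issubset(stack))
</pre>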
</blockquote>
<pre wrap="">This is actually the opposite direction that things are pushing right now. The
API tests are viewed as being mostly project specific, and besides for causing
friction when attempting to make a breaking api change there isn't a reason to
put them in an integrated test suite. While the scenario tests mostly involve
cross-project interactions and would be outside the scope of project specific
testing. Moving forward the expectation is that tempest's api tests will mostly
move to the projects (once we have a solution to block breaking api changes) and
the scenario tests will grow.
</pre>
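<pre wrap="">To illustrate why scenario tests are cross-project by nature, here is a
rough sketch (the endpoints, environment variables, template contents and
polling details are all illustrative): the test asks heat to orchestrate,
then verifies the result through nova's API:

    import json
    import os
    import time
    import unittest

    import requests

    # Minimal HOT template booting one server; image/flavor names are
    # deployment-specific.
    TEMPLATE = {
        'heat_template_version': '2013-05-23',
        'resources': {
            'server': {
                'type': 'OS::Nova::Server',
                'properties': {'image': 'cirros', 'flavor': 'm1.tiny'}}},
        'outputs': {
            'server_id': {'value': {'get_resource': 'server'}}}}


    class StackBootsServerScenario(unittest.TestCase):
        def test_stack_boots_a_server(self):
            headers = {'X-Auth-Token': os.environ['OS_TOKEN'],
                       'Content-Type': 'application/json'}

            # 1. Drive the heat API to create the stack.
            resp = requests.post(
                os.environ['HEAT_ENDPOINT'] + '/stacks', headers=headers,
                data=json.dumps({'stack_name': 'scenario-boot',
                                 'template': TEMPLATE}))
            self.assertEqual(201, resp.status_code)
            href = resp.json()['stack']['links'][0]['href']

            # 2. Poll heat until the stack completes.
            for _ in range(60):
                stack = requests.get(href,
                                     headers=headers).json()['stack']
                if stack['stack_status'] == 'CREATE_COMPLETE':
                    break
                self.assertNotEqual('CREATE_FAILED',
                                    stack['stack_status'])
                time.sleep(5)

            # 3. Verify through *nova's* API that the server exists.
            outputs = {o['output_key']: o['output_value']
                       for o in stack['outputs']}
            server = requests.get(
                os.environ['NOVA_ENDPOINT'] + '/servers/'
                + outputs['server_id'], headers=headers).json()['server']
            self.assertEqual('ACTIVE', server['status'])
</pre>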
</blockquote>
<pre wrap="">This sounds fine in the long term, but Heat needs a comprehensive
integration suite urgently, and developing them as tempest scenarios has
not delivered that yet. Tempest reviewer bandwidth has only been part of
the issue, not enough heat developers have been writing scenario tests
either. This has been a bit of a chicken-and-egg problem since we never
got to the point where there was enough existing scenario tests to -1
any new Heat feature that lacked one. Another issue is that it has taken
this long to get the devstack changes in which build a custom image
containing the required agents, which many of our tests will require.
The existing scenario tests have been forklifted into
heat_integrationtests, and they can always be forklifted back again in
the future. I would like to propose that we go ahead with the in-tree
integration tests with a view to moving them back to tempest in the
future. We could agree on a set of preconditions for moving them back.
On the heat side the preconditions could be:
- Good coverage of testing heat resources
- An established process for insisting on new integration tests for new
features
On the tempest side:
- An established QA liaison program
- Completion of transition to tempest-lib and in-tree functional tests
</pre>
</blockquote>
<pre wrap="">
So this is actually very similar to something we discussed at
summit. [1] I don't have an issue with the model of developing tests in the heat
tree so that testing is more tightly coupled with development. It has several
advantages. Then we can graduate tests into tempest when and where it
would make sense to run them against everyone, and move heat tests from tempest
into heat. However, I don't view any of this as a good reason to remove existing
tests from tempest now. Maybe, as part of the tempest cleanup that'll happen
eventually, we'll find that some of the existing tests don't need to be in
tempest. But right now there isn't really any evidence supporting that;
especially considering how limited heat test coverage is, removing them now
would just seem premature.

I think what you've outlined as preconditions for migration makes sense for the
most part. But I think it should apply to migration in either direction, not
just for heat -> tempest. Because really, when we're talking about test
migrations, we're talking about trying to optimize our test load so that we're
only running things where and when they need to be run.

I'm also not entirely sure what you mean by 'Completion of transition to
tempest-lib and in-tree functional tests'; I think you might have some
unrealistic expectations about what is happening here. We're not going to be
shrinking tempest's scope in the short term, and I don't expect that to happen
for some time. Moving a large chunk of api test coverage back into the projects
is a *long*-term goal. Several things will need to happen before we can even
consider working on an en masse migration of tests out of tempest (including
having the discussion on what the procedure and prereqs are for doing that,
which is really a summit topic). Just one of them is having the projects
actually start to spin up their own functional test suites capable of doing the
same class of api validation. Honestly, I feel that removing tests from tempest
will probably start to happen at the end of Kilo at the very earliest; more
realistically it'll be an L task. I can understand this precondition being just
the tempest-lib migration, especially if your in-tree heat tests are going to be
essentially a mini-tempest. That will make migrations in either direction much
simpler.
</pre>
<blockquote type="cite">
<pre wrap="">
</pre>
<blockquote type="cite">
<blockquote type="cite">
<pre wrap="">Ultimately I don't think it really matters which repo those tests live in,
provided we can write them and get them running in the gate (catching
regressions, which otherwise keep slipping through) in a timely manner.
</pre>
</blockquote>
<pre wrap="">So for the most part this may be true, unless you are considering cross project
testing and gating, which is what I think Jay's argument is here. Heat is in a
different position that almost all of it's functionality is dependent on the
other services. So if the expectation is to be running these tests in a full
OpenStack deployment essentially you'll be duplicating the role of Tempest. But,
by being a heat specific test suite you'll have symmetrical gating issues.
</pre>
</blockquote>
<pre wrap="">This is touching on the limits of the gating infrastructure. We're
already at the limit of available cloud resources to run an integrated
gate, and the tests we'd like to write will by their nature consume
quite some resources. There is a human limit too, some of our best folk
are burning out on keeping on top of integrated gate issues.
</pre>
</blockquote>
<pre wrap="">
Yes, I agree we're reaching resource limitations here, but that in itself isn't
a reason to abandon the system we have now. We've identified a set of problems
with our current methodology and have a plan to try to fix them, but rushing to
implement it by removing existing things is not a good way to handle it. We need
to be targeted in how we change things so that we don't lose the advantages of
our current system in the process.
</pre>
<blockquote type="cite">
<pre wrap="">There is a potential symmetrical gating issue, but in theory Heat is
just consuming stable tested APIs. sdague has suggested we only run
check-heat-dsvm-functional against heat for now, and any asymmetric
breakages be reverted/fixed as they occur, and prevented from recurring
in the test suites of the offending projects.
</pre>
</blockquote>
<pre wrap="">
So I can safely say from experience that saying this and doing it are two very
different things. Every time we've tried something like this involving
asymmetry, it's caused way more pain than anyone anticipates at first: we've
seen neutron break its gate for weeks at a time because of asymmetry, and we've
seen the spin-up of a new gating job that only runs on a subset of projects
constantly delayed because something else broke it. Although this might not
necessarily be a bad thing (I was just outlining it as a potential concern),
I actually think it will probably be one way to stimulate better cross-project
communication.

The counter-example that's been brought up to get around this is having
"contract tests" in the projects to enforce stability of the interfaces
consumed by other projects. However, I don't think this will work or scale too
well for Heat, because it consumes pretty much all of the other projects' REST
APIs, which really comes back to all of Jay's posts on this thread.

-Matt Treinish
[1] <a class="moz-txt-link-freetext" href="https://etherpad.openstack.org/p/juno-qa-functional-api">https://etherpad.openstack.org/p/juno-qa-functional-api</a>
</pre>
<br>
</blockquote>
+1<br>
<br>
Moving tests out of tempest will be done on a project-by-project
basis, when each is ready.<br>
From the QA perspective, there are two reasons for moving tests out
of tempest:<br>
<br>
1. tempest gates every project, and it is not scalable to do proper
in-depth functional testing of each project in every project's gate<br>
2. tempest has neither the review bandwidth nor the domain expertise
to move through all the tests we should have. In a way this seems
similar to the ongoing discussion about breaking out nova drivers<br>
<br>
From a project-selfish point of view, each project should want as
many of its tests as possible in the gates of any project on which
it depends, because that is how you prevent other projects from
breaking you; in the heat case, though, it is clear this was
outweighed by a review process seen as inadequate. Deciding which
tests should be part of a symmetric gate and which should just run
for one project is difficult, and is part of what the heat discussion
is about. A (reasonable) desire to avoid having to make these
potentially fragile decisions, and to keep them correct going forward,
is how we got to where we are. I think we all agree now that the
current system is not sustainable. But as Matt said, we have to be
very deliberate and careful about moving tests out of tempest.<br>
<br>
-David<br>
<br>
<br>
<blockquote cite="mid:20140905161004.GA1329@Sazabi.treinish"
type="cite">
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
OpenStack-dev mailing list
<a class="moz-txt-link-abbreviated" href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a>
<a class="moz-txt-link-freetext" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a>
</pre>
</blockquote>
<br>
</body>
</html>