[openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy
Eoghan Glynn
eglynn at redhat.com
Sat Jul 12 19:24:25 UTC 2014
> So I'm not sure that this should be a mandatory thing, but an
> opt-in. My real concern is the manpower: who is going to take the
> time to write all the test suites for all of the projects? I think
> it would be better to add that on-demand as the extra testing is
> required. That being said, I definitely view doing this as a good
> thing and something to be encouraged, because tempest won't be able
> to test everything.
>
> The other thing to also consider is duplicated effort between
> projects. For example, look at the CLI tests in Tempest: the
> functional testing framework for testing CLI formatting was
> essentially the same between all the clients, which is why they're in
> tempest. Under your proposal here, CLI tests should be moved back to
> the clients. But, would that mean we have a bunch of copy-and-pasted
> versions of the CLI test framework between all the projects?
>
> I really want to avoid a situation where every project does the same
> basic testing differently just in a rush to spin up functional
> testing. I think coming up with a solution for a place with common
> test patterns and frameworks that can be maintained independently of
> all the projects and consumed for project-specific testing is
> something we should figure out first. (I'm not sure oslo would be
> the right place for this necessarily)
Yep, I'd have similar concerns about duplication of effort and
divergence in a rush to spin up in-tree mini-Tempests across all the
projects.
So, I think it would be really great to have one or two really solid
exemplar in-tree functional test suites in place, in order to allow
the inevitable initial mistakes to be made the minimal number of
times.
Ideally the QA team would have an advisory, assisting role in getting
these spun up, so that the projects get the benefit of their domain
expertise.
Of course it would be preferable also to have the re-usable elements
of the test infrastructure in a consumable form that the laggard
projects can easily pick up without doing wholesale copy'n'paste.
> So I think that the contract unit tests work well specifically for
> the ironic use case, but aren't a general solution. Mostly because
> the Nova driver API is an unstable interface and there is no reason
> for that to change. It's also a temporary thing because eventually
> the driver will be moved into Nova and then the only cross-project
> interaction between Ironic and Nova will be over the stable REST
> APIs.
>
> I think in general we should try to avoid doing non-REST-API
> cross-project communication.
As I've pointed out before, I don't think it's feasible for ceilometer
to switch to using only REST APIs for cross-project communication.
However, what we can do is finally grasp the nettle of "contractizing"
notifications, as discussed on this related thread:
http://lists.openstack.org/pipermail/openstack-dev/2014-July/039858.html
The realistic time horizon for that is the K* cycle, I suspect, but
overall from that thread there seems to be some appetite for finally
doing it.
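Just to make that concrete, a contractized notification might look
something like the following (purely an illustrative sketch with
made-up values; the explicitly versioned payload is the point):

    {
        # standard notification envelope
        'message_id': '<uuid>',
        'publisher_id': 'compute.host1',
        'event_type': 'compute.instance.create.end',
        'priority': 'INFO',
        'timestamp': '2014-07-12 19:24:25.000000',
        # payload schema versioned explicitly, bumped on any change
        'payload': {
            'version': '1.0',
            'instance_id': '<uuid>',
            'state': 'active',
        },
    }

i.e. consumers could then pin their expectations to the payload
version, instead of sniffing for individual fields.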
> So hopefully there won't be more of
> this class of things, and if there are we can tackle them on a
> per-case basis. But, even if it's a non-REST API I don't think we
> should ever encourage or really allow any cross-project interactions
> over unstable interfaces.
Yes, if we go from discouragement to explicitly *disallowing* such
interactions, that's probably something that would need to be mandated
at TC level IMO, with the appropriate grandfathering of existing usage.
Is this something you or Sean (being a member of the TC) or I could
drive?
I'd be happy to draft some language for a governance patch, but having
been to-and-fro on such patches before, I'm well aware that a TC
member pushing it would add considerably to its effectiveness.
> As a solution for notifications I'd rather see a separate
> notification white/grey (or any other monochrome shade) box test
> suite. If as a project we say that notifications have to be
> versioned for any change, we can then enforce that easily with an
> external test suite that contains the definitions for all the
> notifications. It then just makes a bunch of API calls and sits on
> RPC verifying the notification format. (or something of that ilk)
Do you mean a *single* external test suite?
As opposed to multiple test suites, each validating the notifications
emitted by each project?
The reason I'm laboring this point is that such an over-arching
test suite feels a little Tempest-y. Seems it would have to spin up an
entire devstack and then tickle the services into producing a range
of notifications before consuming and verifying the events.
Would that possibly be more lightweight if tested on a service-by-service
basis?
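For instance, a per-service harness might boil down to something like
the following (a rough sketch; the schema definitions and the
mechanism for capturing notifications off the bus are assumed):

    # Validate a captured notification against the common envelope
    # and a per-event payload schema. (The schemas and the capture
    # mechanism here are hypothetical.)

    REQUIRED_ENVELOPE = {'message_id', 'publisher_id', 'event_type',
                         'priority', 'timestamp', 'payload'}

    PAYLOAD_SCHEMAS = {
        # one entry per contractized event_type
        'compute.instance.create.end': {'version', 'instance_id',
                                        'state'},
    }

    def check_notification(notification):
        missing = REQUIRED_ENVELOPE - set(notification)
        assert not missing, 'envelope missing: %s' % ', '.join(missing)
        schema = PAYLOAD_SCHEMAS.get(notification['event_type'])
        if schema is not None:
            missing = schema - set(notification['payload'])
            assert not missing, 'payload missing: %s' % ', '.join(missing)

Each project could then run such a check in its own functional gate,
driving its own API to trigger the events, without needing a full
devstack.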
> I agree that normally whitebox testing needs to be tightly coupled
> with the data models in the projects, but I feel like notifications
> are slightly different. Mostly, because the basic format is the
> same between all the projects to make consumption simpler. So
> instead of duplicating the work to validate the notifications in all
> the projects it would be better to just implement it once. I also
> think tempest being an external audit on the API has been invaluable,
> so enforcing that for notifications would have similar benefits.
So, do I take from that your vision is for something Tempest-y,
except with notifications as opposed to APIs being the primary
axis of verification?
> As an aside I think it would probably be fair if this was maintained
> as part of ceilometer or the telemetry program, since that's really
> all notifications are used for. (or at least AIUI) But, it would
> still be a co-gating test suite for anything that emits
> notifications.
Possibly. Though I'm not sure ceilo should really be considered the
exclusive consumer of these notifications. We have things on stackforge
(e.g. StackTach) that also consume events, and other integrated projects
have expressed interest in doing so (e.g. Horizon).
> I think the only real issue in your proposal is that the boundaries
> between all the test classifications aren't as well defined as they
> seem. I agree that having more intermediate classes of testing is
> definitely a good thing to do. Especially since there is a great
> deal of hand-waving on the details of what is being run in between
> tempest and unit tests. But, the issue as I see it is that without
> guidelines on what types of tests belong where, we'll end up with a
> bunch of duplicated work.
Yes, we really need to protect the community from burning a lot of
duplicated effort only to end up with a hotchpotch of different
functional testing harnesses.
> It's the same problem we have all the time in tempest, where we get
> a lot of patches that exceed the scope of tempest, despite it being
> arguably clearly outlined in the developer docs. But, the complexity
> is higher in this situation, because of having a bunch of different
> types of test suites that are available to add a new test to. I just
> think before we adopt #2 as mandatory it's important to have a
> better definition of the scope of the project-specific functional
> testing.
So, I'm glad you brought up exceeding the scope of Tempest; I think it
really needs to be communicated more widely exactly what Tempest is
and isn't. Maybe we just didn't read the docco with enough care, but I
guess the gung-ho adoption of Tempest was at least partially driven by
the perception that it was crucial to being considered a "good
citizen" in OpenStack (cf. the relevant TC mandates).
> I think that negative testing is still part of tempest in your
> proposal. I still feel that the negative space of an API is
> part of the contract, and should be externally validated. As part of
> tempest I think we need to revisit the negative space solution
> again, because I haven't seen much growth on the automatic test
> generation. We also can probably be way more targeted about what
> we're running, but I don't think punting on negative testing in
> tempest is something we should do.
I agree, negative tests are IMO a crucial aspect of ensuring a
usable API implementation.
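To illustrate, something as simple as the following (a minimal sketch;
the endpoint, token and expected code are placeholders for whichever
API is under test):

    import requests

    # An invalid request body should produce a well-defined 4xx
    # response, never a 500.
    resp = requests.post('http://localhost:8774/v2/<tenant>/servers',
                         headers={'X-Auth-Token': '<token>',
                                  'Content-Type': 'application/json'},
                         data='{"server": {}}')
    assert resp.status_code == 400, \
        'expected 400, got %s' % resp.status_code

is asserting a real part of the contract, not just an implementation
detail.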
> I agree that we should be directly testing the cross-project
> integration points which aren't REST APIs.
Do you mean directly testing in Tempest, or directly testing
elsewhere?
> I definitely think more testing is always better.
Ain't that the truth! :)
> I just want to make sure we're targeting the right things, because
> this proposal is pushing for a lot of extra work for everyone. I want
> to make sure that before we commit to something this large that it's
> the right direction.
+100.
I'm not even sure what the appropriate mechanism is for "commit[ting]
to something this large". At the very least it needs cross-project
approval/acquiescence at the PTLs meeting. If this had come up a
couple of months ago, obviously we'd have been discussing it at length
in the cross-project track in ATL. OTOH I don't know if we can let it
fester until the K* summit.
Cheers,
Eoghan