[openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases
Matthew Treinish
mtreinish at kortar.org
Fri Jan 17 04:17:55 UTC 2014
On Wed, Jan 15, 2014 at 11:20:22AM -0500, Yair Fried wrote:
> Hi Guys
> As Maru pointed out, the NetworkBasicOps scenario has grown out of proportion and is no longer "basic" ops.
Is your issue here that it's just called basic ops and you don't think that's
reflective of what is being tested in that file anymore? If that's the case
then just change the name.
>
> So, I started breaking it down into smaller test cases that can fail independently.
I'm not convinced this is needed. Some scenarios are going to be very involved
and complex. Each scenario test is designed to simulate a real use case in the
cloud, so obviously some of them will be fairly large. The solution for making
debugging easier in these cases is to make sure that any failures have clear
messages. Also make sure there are plenty of signposting debug log messages so
that when something goes wrong you know what state the test was in.
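To be concrete, this is the kind of signposting I mean. This is just a rough
sketch using plain unittest and logging, not the actual tempest plumbing, and
the helper names are made up for illustration:

    import logging
    import unittest

    LOG = logging.getLogger(__name__)


    class NetworkBasicOpsSketch(unittest.TestCase):

        def test_network_basic_ops(self):
            # Log a "signpost" before each phase so a failure's traceback
            # lands right next to a message saying which stage was running.
            LOG.debug("Creating network resources")
            network = self._create_network()

            LOG.debug("Booting server on network %s", network["id"])
            server = self._boot_server(network)

            LOG.debug("Associating floating IP with server %s", server["id"])
            fip = self._associate_floating_ip(server)

            LOG.debug("Checking public connectivity via %s", fip["ip"])
            self.assertTrue(self._ping(fip["ip"]),
                            "No public connectivity to %s after associating "
                            "the floating IP" % fip["ip"])

        # Stand-in helpers so the sketch runs on its own; the real scenario
        # obviously does the actual API calls here.
        def _create_network(self):
            return {"id": "net-1"}

        def _boot_server(self, network):
            return {"id": "server-1"}

        def _associate_floating_ip(self, server):
            return {"ip": "172.24.4.10"}

        def _ping(self, ip):
            return True

With that in place a failure report tells you immediately whether the test
died while building the network, booting the server, or checking connectivity.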
If you split things up into smaller individual tests you'll most likely end up
making tests that really aren't scenario tests. They'd be closer to API
tests, just using the official clients, which really shouldn't be in the
scenario tests.
>
> Those test cases share the same setup and tear-down code:
> 1. create network resources (and verify them)
> 2. create VM with floating IP.
>
> I see 2 options to manage these resources:
> 1. Completely isolated - resources are created and cleaned using setUp() and tearDown() methods [1]. Moving cleanup to tearDown revealed this bug [2]. Apparently the previous way (with tearDownClass) wasn't as fast. This has the side effect of having expensive resources (i.e. VMs and floating IPs) created and discarded multiple times even though they are unchanged.
>
> 2. Shared Resources - Using the idea of (or actually using) Fixtures - use the same resources unless a test case fails, in which case resources are deleted and created by the next test case [3].
If you're doing this and splitting things into smaller tests then it has to be
option 1. Scenario tests have to be isolated; if there are resources shared
between scenario tests then that really is only one scenario and it shouldn't
be split. In fact I've been working on a change that fixes the scenario test
tearDowns, which has the side effect of enforcing this policy.
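For what it's worth, the shape of the isolation I mean is roughly the
following. Again this is only a sketch with made-up helper names, using
addCleanup so resources go away even when a test fails partway through:

    import unittest


    class IsolatedScenarioSketch(unittest.TestCase):

        def setUp(self):
            super(IsolatedScenarioSketch, self).setUp()
            # Each test gets its own resources; cleanups are registered as
            # soon as the resource exists so a mid-setUp failure still cleans
            # up whatever was already created.
            self.network = self._create_network()
            self.addCleanup(self._delete_network, self.network)
            self.server = self._boot_server(self.network)
            self.addCleanup(self._delete_server, self.server)

        def test_connectivity(self):
            self.assertTrue(self._check_connectivity(self.server))

        # Stand-in helpers so the sketch runs on its own.
        def _create_network(self):
            return {"id": "net-1"}

        def _delete_network(self, network):
            pass

        def _boot_server(self, network):
            return {"id": "server-1"}

        def _delete_server(self, server):
            pass

        def _check_connectivity(self, server):
            return True

The cost is exactly what you point out: the expensive resources get rebuilt
for every test, but every test starts from a known-clean state.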
Also, just for the record, we've tried doing option 2 in the past; for example
there used to be a tenant-reuse config option. The problem with doing that is
that it actually tends to cause more non-deterministic failures, or it adds a
not insignificant wait time to ensure the state is clean when you start the
next test, which is why we ended up pulling it out of tree. What ends up
happening is that you get leftover state from previous tests and the second
test ends up failing because things aren't in the clean state that the test
case assumes. If you look at the oneserver files in the API tests, that is the
only place we still do this in tempest, and we've had many issues making that
work reliably. Those tests are in a relatively good place now, but they are
much simpler tests. Also, between each test, setUp has to check and ensure
that the shared server is in the proper state, and if it's not then the shared
server has to be rebuilt. This methodology would become far more involved for
the scenario tests, where you have to manage more than one shared resource.
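To illustrate how much bookkeeping that pattern needs even for a single shared
resource, the oneserver-style check looks roughly like this (a sketch with
hypothetical helpers, not the actual tempest code):

    import unittest


    class SharedServerSketch(unittest.TestCase):

        shared_server = None

        def setUp(self):
            super(SharedServerSketch, self).setUp()
            cls = type(self)
            # Every test has to verify the shared server is still usable and
            # rebuild it if a previous test failed or left it dirty.
            if cls.shared_server is None or not self._server_is_active(
                    cls.shared_server):
                cls.shared_server = self._rebuild_server()

        def test_something(self):
            self.assertTrue(self._server_is_active(type(self).shared_server))

        # Stand-in helpers so the sketch runs on its own.
        def _server_is_active(self, server):
            return server.get("status") == "ACTIVE"

        def _rebuild_server(self):
            return {"id": "server-1", "status": "ACTIVE"}

Now imagine that check multiplied across networks, routers, floating IPs and
servers for every scenario test in the class.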
>
> I would like to hear your opinions, and to know if anyone has thoughts or ideas on which direction is best, and why.
>
> Once this is completed, we can move on to other scenarios as well
>
> Regards
> Yair
>
> [1] fully isolated - https://review.openstack.org/#/c/66879/
> [2] https://bugs.launchpad.net/nova/+bug/1269407/+choose-affected-product
> [3] shared resources - https://review.openstack.org/#/c/64622/
-Matt Treinish