[openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

Yair Fried yfried at redhat.com
Sun Jan 19 12:17:25 UTC 2014



----- Original Message -----
> From: "Sean Dague" <sean at dague.net>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Sent: Sunday, January 19, 2014 1:53:21 PM
> Subject: Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases
> 
> On 01/19/2014 02:06 AM, Yair Fried wrote:
> > MT:"Is your issue here that it's just called basic ops and you
> > don't think that's
> > reflective of what is being tested in that file anymore"
> > 
> > No.
> > My issue is that the current scenario is, in fact, at least 2
> > separate scenarios:
> > 1. original basic_ops
> > 2. reassociate_floating_ip
> > to which I would like to add
> > ( https://review.openstack.org/#/c/55146/ ):
> > 3. update dns
> > 4. check external/internal connectivity
> > 
> > While #2 includes #1 as part of its setup, its failing shouldn't
> > prevent #1 from passing. The obvious solution would be to create
> > separate modules for each test case, but since they all share the
> > same setup sequence, IMO, they should at least share code.
> > Notice that in my patch, #2 still includes #1.
> > 
> > Actually, the more network scenarios we get, the more we will need
> > to do something in that direction, since most of the scenarios
> > will require setting up a VM with a floating IP to ssh into.
> > 
> > So either we do this, or we move all of this code to
> > scenario.manager, which is also becoming very complicated.
> 
> If #2 is always supposed to work, then I don't actually understand
> why it matters whether #1 is part of the test or not. And being
> part of the same test saves substantial time.
> 
> If you have tests that do:
>  * A -> B -> C
>  * A -> B -> D -> F
> 
> There really isn't value in a test for A -> B *as long* as you have
> sufficient sign posting to know in the fail logs that A -> B worked
> fine.
> 
> And there are sufficient detriments in making it a separate test,
> because it's just adding time to the runs without actually testing
> anything different.
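
For reference, the sign posting Sean describes might look something
like this (a rough sketch only; the helper names are made up, not
actual tempest code):

    import logging

    from tempest.scenario import manager

    LOG = logging.getLogger(__name__)


    class TestNetworkBasicOps(manager.NetworkScenarioTest):

        def test_network_basic_ops(self):
            # A: boot a VM and attach a floating IP to it
            server, fip = self._setup_server_with_floating_ip()
            LOG.debug("CHECKPOINT A: server %s up, floating ip attached",
                      server['id'])

            # B: verify connectivity through the floating IP
            self._check_public_network_connectivity(fip)
            LOG.debug("CHECKPOINT B: connectivity verified")

            # C: reassociate the floating IP and verify again
            self._reassociate_floating_ip(fip, server)
            LOG.debug("CHECKPOINT C: reassociation verified")

If C fails, the last CHECKPOINT line in the logs shows immediately
that A -> B worked fine.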

OK,
but considering my pending patch (#3 and #4),
what about:

#1 -> #2
#1 -> #3
#1 -> #4

instead of

#1 -> #2 -> #3 -> #4

In the chained version, a failure in #2 prevents #3 and #4 from
running even though they are completely unrelated.
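
To make that concrete, the fan-out would look roughly like this
(a sketch only; the helper method names are made up):

    from tempest.scenario import manager


    class TestNetworkOps(manager.NetworkScenarioTest):

        def setUp(self):
            super(TestNetworkOps, self).setUp()
            # 1: shared setup - network resources plus a VM with a
            # floating IP to ssh into
            self.server, self.fip = self._setup_server_with_floating_ip()

        def test_reassociate_floating_ip(self):    # 1 -> 2
            self._reassociate_floating_ip(self.fip, self.server)

        def test_update_dns(self):                 # 1 -> 3
            self._check_dns_update(self.server)

        def test_connectivity(self):               # 1 -> 4
            self._check_external_internal_connectivity(self.server)

Each test method gets its own run of #1, so #2, #3 and #4 pass or
fail independently.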


> 
> 	-Sean
> 
> > 
> > Yair
> > 
> > ----- Original Message -----
> > From: "Matthew Treinish" <mtreinish at kortar.org>
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev at lists.openstack.org>
> > Sent: Friday, January 17, 2014 6:17:55 AM
> > Subject: Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break
> > down NetworkBasicOps to smaller test cases
> > 
> > On Wed, Jan 15, 2014 at 11:20:22AM -0500, Yair Fried wrote:
> >> Hi Guys,
> >> As Maru pointed out, the NetworkBasicOps scenario has grown out
> >> of proportion and is no longer "basic" ops.
> > 
> > Is your issue here that it's just called basic ops and you don't
> > think that's reflective of what is being tested in that file
> > anymore? If that's the case then just change the name.
> > 
> >>
> >> So, I started breaking it down into smaller test cases that can
> >> fail independently.
> > 
> > I'm not convinced this is needed. Some scenarios are going to be
> > very involved and complex. Each scenario test is designed to
> > simulate real use cases in the cloud, so obviously some of them
> > will be fairly large. The solution for making debugging easier in
> > these cases is to make sure that any failures have clear messages.
> > Also make sure there are plenty of signposting debug log messages
> > so when something goes wrong you know what state the test was in.
> > 
> > If you split things up into smaller individual tests you'll most
> > likely end up making tests that really aren't scenario tests.
> > They'd be closer to API tests, just using the official clients,
> > which really shouldn't be in the scenario tests.
> > 
> >>
> >> Those test cases share the same setup and tear-down code:
> >> 1. create network resources (and verify them)
> >> 2. create a VM with a floating IP.
> >>
> >> I see 2 options to manage these resources:
> >> 1. Completely isolated - resources are created and cleaned up
> >> using setUp() and tearDown() methods [1]. Moving cleanup to
> >> tearDown revealed this bug [2]. Apparently the previous way (with
> >> tearDownClass) wasn't as fast. This has the side effect of having
> >> expensive resources (i.e. VMs and floating IPs) created and
> >> discarded multiple times even though they are unchanged.
> >>
> >> 2. Shared resources - using the idea of (or actually using)
> >> Fixtures - reuse the same resources unless a test case fails, in
> >> which case resources are deleted and recreated by the next test
> >> case [3].
> > 
> > If you're doing this and splitting things into smaller tests then
> > it has to be option 1. Scenarios have to be isolated; if there are
> > resources shared between scenario tests, that really is only one
> > scenario and it shouldn't be split. In fact I've been working on a
> > change that fixes the scenario test tearDowns and has the side
> > effect of enforcing this policy.
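
For reference, option 1 in tempest terms is roughly the following
(a sketch only; the create/delete helper names are made up):

    def setUp(self):
        super(TestNetworkBasicOps, self).setUp()
        network = self._create_network()
        # cleanups run in LIFO order, and the ones registered before
        # a failure still run even if setUp dies partway through
        self.addCleanup(self._delete_network, network)
        server = self._create_server(network)
        self.addCleanup(self._delete_server, server)
        fip = self._associate_floating_ip(server)
        self.addCleanup(self._release_floating_ip, fip)

Using addCleanup() rather than tearDown() is what makes the per-test
isolation reliable: every resource that was actually created gets
cleaned up, no matter where the test or setUp failed.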
> > 
> > Also, just for the record, we've tried doing option 2 in the past;
> > for example, there used to be a tenant-reuse config option. The
> > problem with doing that was it actually tends to cause more
> > non-deterministic failures, or to add a not insignificant wait
> > time to ensure the state is clean when you start the next test,
> > which is why we ended up pulling it out of tree. What ends up
> > happening is that you get leftover state from previous tests, and
> > the second test ends up failing because things aren't in the clean
> > state that the test case assumes. If you look at the oneserver
> > files in the API tests, that is the only place we still do this in
> > Tempest, and we've had many issues making that work reliably.
> > Those tests are in a relatively good place now, but they are much
> > simpler tests. Also, between each test, setUp has to check and
> > ensure that the shared server is in the proper state. If it's not,
> > then the shared server has to be rebuilt. This methodology would
> > become far more involved for the scenario tests, where you have to
> > manage more than one shared resource.
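
The check-and-rebuild dance Matt describes looks roughly like this
for a single shared server (a sketch only; the client calls and
helper names are made up):

    def setUp(self):
        super(TestOneServer, self).setUp()
        try:
            # make sure the shared server survived the previous test
            server = self.client.get_server(self.shared_server_id)
            if server['status'] != 'ACTIVE':
                raise Exception('shared server in unusable state')
        except Exception:
            # leftover state from a previous test - rebuild it
            self._delete_server(self.shared_server_id)
            self.shared_server_id = self._create_server()['id']

A network scenario would have to repeat this for every shared
resource (network, router, VM, floating IP, ...), which is why it
becomes far more involved.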
> > 
> >>
> >> I would like to hear your opinions on which direction is best,
> >> and why.
> >>
> >> Once this is completed, we can move on to other scenarios as well.
> >>
> >> Regards
> >> Yair
> >>
> >> [1] fully isolated - https://review.openstack.org/#/c/66879/
> >> [2] https://bugs.launchpad.net/nova/+bug/1269407
> >> [3] shared resources - https://review.openstack.org/#/c/64622/
> > 
> > -Matt Treinish
> > 
> 
> 
> --
> Sean Dague
> Samsung Research America
> sean at dague.net / sean.dague at samsung.com
> http://dague.net
> 
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


