[openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

Jay Pipes jaypipes at gmail.com
Mon Jul 25 12:03:32 UTC 2016


On 07/25/2016 07:57 AM, Sean Dague wrote:
> On 07/22/2016 11:30 AM, Daniel P. Berrange wrote:
>> On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
>>> On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
>>>
>>> I agree that it's not a bug. I also agree that it helps in some specific
>>> types of tests that do some kind of input validation (like the patch
>>> you've proposed) or simply iterate over some list of values (status
>>> values on a server instance, for example).
>>>
>>> Using DDT in Nova has come up before, and the concerns were that a
>>> library hides the details of how the tests are run, and whether there
>>> would be a learning curve. Depending on the usage, I personally don't
>>> have a problem with it. When I used it in manila it took a little
>>> getting used to, but I was basically just looking at existing tests and
>>> figuring out what they were doing when adding new ones.
>>
>> I don't think there's a significant learning curve there - the way it
>> lets you annotate the test methods is pretty easy to understand, and
>> the ddt docs spell it out clearly for newbies. We have far worse things
>> in our code that create a steep learning curve, which people will hit
>> first :-)
>>
>> People have essentially been re-inventing ddt in nova tests already
>> by defining one helper method and then having multiple test methods
>> all call the same helper with a different dataset. So ddt is just
>> formalizing what we're already doing in many places, with less code
>> and greater clarity.
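The two styles being compared above can be sketched in plain unittest. This is a hypothetical example (the validator, test names, and the tiny `data`/`data_driven` decorators are all made up for illustration; the decorators only approximate what the real ddt library does):

```python
# A minimal, hypothetical sketch of the two styles discussed above: the
# hand-rolled helper pattern, and a tiny ddt-like decorator that generates
# one test method per dataset (illustrative only -- not the real ddt library).
import unittest


def validate_name(name):
    # Hypothetical validator standing in for real Nova input validation.
    if not name or len(name) > 255:
        raise ValueError(name)
    return name


class TestHelperPattern(unittest.TestCase):
    """The pattern people re-invent: one helper, many thin test methods."""

    def _assert_invalid(self, name):
        self.assertRaises(ValueError, validate_name, name)

    def test_empty_name(self):
        self._assert_invalid('')

    def test_too_long_name(self):
        self._assert_invalid('x' * 256)


def data(*datasets):
    """Mark a test method with the datasets it should run against."""
    def decorator(func):
        func._datasets = datasets
        return func
    return decorator


def data_driven(cls):
    """Expand each @data-marked method into one real test per dataset,
    roughly what a ddt-style class decorator does, including the name
    munging: test_invalid_name becomes test_invalid_name_1, _2, ..."""
    for attr, func in list(vars(cls).items()):
        datasets = getattr(func, '_datasets', None)
        if datasets is None:
            continue
        delattr(cls, attr)  # the marked method itself is not runnable as-is
        for i, value in enumerate(datasets, 1):
            def test(self, _func=func, _value=value):
                _func(self, _value)
            setattr(cls, '%s_%d' % (attr, i), test)
    return cls


@data_driven
class TestDataDriven(unittest.TestCase):
    """The same checks expressed as one method plus datasets."""

    @data('', 'x' * 256)
    def test_invalid_name(self, name):
        self.assertRaises(ValueError, validate_name, name)
```

Running this also shows the trade-off raised below: the generated names (test_invalid_name_1, test_invalid_name_2) have to be mapped back to the datasets by position when a failure turns up.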
>>
>>> I definitely think DDT is easier to use/understand than something like
>>> testscenarios, which we're already using in Nova.
>>
>> Yeah, testscenarios feels a little over-engineered for what we want
>> most of the time.
>
> Except DDT is way less clear (and deterministic) about what's going on
> with the test name munging, which means failures are harder to trace
> back to the individual test and dataset. So debugging the failures is
> harder.
>
> I agree we have a lot of bad patterns in the tests. But I also don't
> think that embedding another pattern during milestone 3 is the right
> time to do it. At least let's hold off until the next cycle opens up,
> when there is more time to actually look at the trade-offs here.

+1

Also, I actually don't see how testscenarios won't/can't work for 
everything DDT is doing. Sounds a bit like the "why can't we use pytest 
instead of testr?" thing again.

Best,
-jay
