On Sun, Feb 10, 2013 at 3:03 AM, Huang Zhiteng <winston.d@gmail.com> wrote:
> John, Monty, Mark,
>
> I think one example is unit tests involving test case against entry
> points which requires installation. I guess that's John's most
> annoying thing.
>
> For this special case, I think switching from run_tests.sh to tox (or
> even ditching run_tests.sh) is the right solution. I'm working on a
> patch for Cinder to make this change
> (https://review.openstack.org/#/c/21597/). It's not ready yet as I
> met some difficulty with coverage tests, but as a starter, you can try
> functionality other than coverage tests.
>
> On Sun, Feb 10, 2013 at 5:23 AM, Monty Taylor <mordred@inaugust.com> wrote:
> >
> >
> > On 02/09/2013 01:59 PM, Mark McLoughlin wrote:
> >> Hi John,
> >>
> >> On Sat, 2013-02-09 at 12:15 -0700, John Griffith wrote:
> >>> We seem to be going down a path of requiring more and more infrastructure
> >>> and reliance from modules inside of the respective projects code base, and
> >>> lately (the most annoying thing to me) is actually requiring the
> >>> installation of the project and it's services in order for our unit tests
> >>> to run.
> >>
> >> Got an example?
> >
> > Yes. I'd also like an example.
> >
> > FWIW - I agree with the sentiment that our unittests should be simple to
> > run and that cross-project testing should be the purview of integration
> > tests.
> >
> > Monty
> >
>
> --
> Regards
> Huang Zhiteng

Sorry for dropping the email and then going MIA.

There are multiple things I wanted to get some thoughts around; I'll start with what I believe are the easier ones first:

1. I don't believe that we should require the installation of Cinder to run the Cinder unit tests.
This to me clearly crosses the line into functional or integration territory, and moving it into a venv (tox or otherwise) doesn't address my issue with it. Sorry Winston, but right now 21597 does exactly what I don't want.

Today for Cinder I have the test-requires packages and other required items all installed on my laptop and other machines. I can clone the Cinder repo and run all of the tests using 'run_tests.sh -N' in less than a minute. Lately, unit tests are coming online that depend on the egg info being installed, as Winston touched on.
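To make the egg-info point concrete, here's a rough sketch (purely hypothetical, not actual Cinder code; the 'cinder.volume.drivers' namespace and driver name are made up) of the difference between a test helper that resolves a driver through entry points, which only works once the project has been installed, and one that just imports it, which works from a plain checkout:

# Hypothetical illustration only, not actual Cinder test code.
import importlib
import pkg_resources


def load_driver_via_entry_point(name):
    # This lookup reads the installed egg-info/entry_points metadata, so on a
    # fresh clone it finds nothing until something like 'pip install -e .'
    # (or 'python setup.py develop') has been run.
    for ep in pkg_resources.iter_entry_points('cinder.volume.drivers', name):
        return ep.load()
    raise RuntimeError('entry point %r not registered' % name)


def load_driver_via_import(path):
    # A plain import only needs the source tree on sys.path, so it works
    # straight out of a checkout with no install step at all.
    module_name, cls_name = path.rsplit('.', 1)
    return getattr(importlib.import_module(module_name), cls_name)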
2. I haven't figured out what Tox versus Nose buys me in Cinder.

I understand that in Nova and others there may be advantages due to parallelism and some other enhancements, but in my case my test cycle time has actually increased by having to install the tox venv when I do a fresh clone. In the case of Nova, I *think* my tests are running a bit faster, but I don't really feel that I know what's actually being run, or what the results mean.
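For reference, a minimal sketch of the sort of tox.ini involved (the requirements files and commands here are illustrative, not Cinder's actual configuration). The virtualenv creation and pip install of the project that tox does on first run is the extra setup cost compared to 'run_tests.sh -N':

[tox]
envlist = py27,pep8

[testenv]
# On first run tox builds a virtualenv and pip-installs the project plus these
# dependencies -- the extra setup cost compared to 'run_tests.sh -N'.
deps = -r{toxinidir}/tools/pip-requires
       -r{toxinidir}/tools/test-requires
commands = nosetests {posargs}

[testenv:cover]
# Coverage gets its own environment; this is the piece Winston mentioned
# still having difficulty with in 21597.
commands = nosetests --with-coverage --cover-package=cinder {posargs}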

These first two items are the main things I'm looking for feedback from folks on. The rest of this email is more around test philosophy, and I'm not trying to come to an agreement or force a change on things here; I'd just like to raise some awareness. If anybody is interested in pursuing some of this, maybe we can all get together and chat at the summit. If people think it's ridiculous, that's fine too.

In my mind you have unit tests, functional tests and integration tests. Unit tests to me aren't designed to find bugs or test for regressions; they're specifically there to test a unit of code that I've written, and the tests should only depend on the unit of code that I'm intending to test.

So, for example, if I write a Cinder driver called 'super_cool_driver', I have a single file of unit tests that tests super_cool_driver's methods and NOTHING else. It has no dependencies on any other files/modules in the Cinder project. Its only purpose is to make sure that super_cool_driver's behaviours are in fact what I intended and expect, and to ensure that somebody later doesn't change those behaviours inadvertently.
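Just to make that concrete, here's a rough sketch of what I mean; the driver, its client, and the method names are all made up for illustration:

import unittest
from unittest import mock  # the standalone 'mock' library at the time


class SuperCoolDriver(object):
    """Stand-in for the hypothetical driver under test."""

    def __init__(self, client):
        self.client = client

    def create_volume(self, volume):
        return self.client.provision(size=volume['size'])


class TestSuperCoolDriver(unittest.TestCase):
    """Exercises only the driver's own behaviour: no other Cinder modules,
    no database, no services, nothing installed."""

    def setUp(self):
        self.client = mock.Mock()
        self.driver = SuperCoolDriver(self.client)

    def test_create_volume_provisions_requested_size(self):
        self.driver.create_volume({'size': 10})
        self.client.provision.assert_called_once_with(size=10)


if __name__ == '__main__':
    unittest.main()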

Testing the interaction with other Cinder modules should be done through functional tests; this is where devstack exercises would come into play. Currently I don't think we put enough emphasis here. When somebody submits a bug fix, we often say "You need a unit test to catch this sort of thing in the future." Maybe that's true in some cases, but I'd argue that we should be saying "You need a test in devstack or tempest to catch this in the future." One of the main reasons unit tests aren't very helpful here, IMO, is that in most cases a bug results from a refactor or enhancement in the code. Most of the time, when folks make these enhancements they also modify the unit test to behave the way they "expect" it to and think it should. The result is they just create a unit test that verifies the incorrect behaviour.

Another point that came up: we have "fake" volume objects and such spread out and created in multiple places throughout the Cinder unit tests. Some folks use the fake class that's set up, but most create a fake in their setUp, or on the fly in the test case they're writing. This creates a bit of a problem if the volume object ever changes (you have to find every fake that's created and modify it, then modify any test that uses it, and so on). It would be great if at some point in the future we had good fakes for things like volume objects, instance objects, etc. (sort of along the lines of what Robert mentioned, though I'm not sure I'm thinking of the same scope as he was).

An example of this: recently Sean Dague asked me about test objects for Cinder (Volumes, Snapshots, Types, etc.), and I was kind of surprised that we didn't have an existing, up-to-date set of fakes to represent these things. It seems like it would be smart to have them (both objects and DB entries) to use throughout the unit tests, to provide consistency and make the tests a bit more realistic, rather than everyone rolling their own fake object to fit the test they're writing.
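Something along these lines is what I'm picturing; the field names and placement are just illustrative:

import uuid


def fake_volume(**updates):
    """Return a dict shaped like a volume record, overridable per test."""
    volume = {
        'id': str(uuid.uuid4()),
        'display_name': 'fake-volume',
        'size': 1,
        'status': 'available',
        'volume_type_id': None,
        'snapshot_id': None,
        'attach_status': 'detached',
    }
    volume.update(updates)
    return volume

A test would then do vol = fake_volume(size=10, status='creating'), and if the volume schema ever changes only this one factory needs updating, instead of every hand-rolled fake scattered through the test suite.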

Thanks,
John