[openstack-dev] [Quantum] Unit tests and memory consumption

Salvatore Orlando sorlando at nicira.com
Mon Nov 5 15:38:04 UTC 2012


Some projects are already running tox only.
I don't think there is any particular technical reason we're keeping
both of them, other than that some developers (like me) are very stubborn
and obstinate.
Up until a few months ago, run_tests.sh had a reason to exist because of
the plugin-specific unit tests, which are now all in the main test set
(and that's where the memory usage bomb exploded).

We can have a discussion on removing run_tests.sh, and I have no major
objection.
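
For anyone who wants to switch now, the rough equivalence looks like this
(a sketch from memory; it assumes the tox.ini passes {posargs} through to
the test runner):

    ./run_tests.sh                 # legacy wrapper script
    tox -e py27                    # same unit tests via tox
    tox -e py27 -- <test id>       # arguments after -- reach the runner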



On 5 November 2012 08:58, Zhongyue Luo <zhongyue.luo at gmail.com> wrote:

> This is kind of off topic, but I was wondering: why are there two
> different methods of testing, run_tests and tox?
>
> Maybe I'm missing something, but can't we just get rid of run_tests and
> use only tox?
>
>
> On Mon, Nov 5, 2012 at 3:05 PM, Monty Taylor <mordred at inaugust.com> wrote:
>
>>
>>
>> On 11/05/2012 07:56 AM, Gary Kotton wrote:
>> > On 11/04/2012 10:48 PM, Monty Taylor wrote:
>> >>
>> >> On 11/04/2012 03:26 PM, Gary Kotton wrote:
>> >>> On 10/24/2012 03:24 PM, Gary Kotton wrote:
>> >>>> On 10/24/2012 12:20 AM, Monty Taylor wrote:
>> >>>>> I believe that we can address this for both run_tests and tox with
>> >>>>> some of the testtools/fixtures stuff clark has been playing with.
>> >>>>> We'll poke at it tomorrow (he's out for the day) and see if we can
>> >>>>> get an approach that would make everyone happy.
>> >>> I tried playing around with the flags in the tox.ini file. There are
>> >>> options to run the tests in multiple processes. Two flags are of
>> >>> interest:
>> >>>
>> >>> 1. NOSE_PROCESS_RESTARTWORKER - This is documented as follows:
>> >>> "--process-restartworker
>> >>>                         If set, will restart each worker process once
>> >>>                         their tests are done, this helps control memory
>> >>>                         leaks from killing the system.
>> >>>                         [NOSE_PROCESS_RESTARTWORKER]"
>> >>> 2. NOSE_PROCESSES - This is documented as follows:
>> >>> "--processes=NUM        Spread test run among this many processes. Set
>> >>>                         a number equal to the number of processors or
>> >>>                         cores in your machine for best results.
>> >>>                         [NOSE_PROCESSES]"
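>> >>>
>> >>> For reference, I set them roughly like this in tox.ini (a sketch of
>> >>> what I tried; the values are just an example for a four-core machine):
>> >>>
>> >>> [testenv]
>> >>> setenv = NOSE_PROCESSES=4
>> >>>          NOSE_PROCESS_RESTARTWORKER=1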
>> >> So - we're moving towards using testr as the test runner _instead_ of
>> >> nose. There are several reasons for this, but one of them is better
>> >> support for parallelism.
>> >>
>> >> clarkb has been working on this for quantum - you might want to sync up
>> >> with him if you're interested in helping out.
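>> >>
>> >> For the curious, testr is driven by a .testr.conf along these lines (a
>> >> sketch; the eventual quantum config may differ), and "testr run
>> >> --parallel" then forks one test runner per core:
>> >>
>> >>   [DEFAULT]
>> >>   test_command=python -m subunit.run discover . $LISTOPT $IDOPTION
>> >>   test_id_option=--load-list $IDFILE
>> >>   test_list_option=--list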
>> >
>> > Thanks. I have been in touch with Clark. If I understand correctly,
>> > when he ran the tests in parallel it did not reduce the amount of
>> > memory consumed. That was the reason I tried to set the above flags. I
>> > was hoping for a magic bullet.
>>
>> AH - gotcha. Yeah, magic bullet would be nice.
>>
>> >>> The problem that I have is that when only the first variable is set
>> >>> nothing happens, since it really depends on the second. The problem
>> >>> when setting the second is that I get the following exception:
>> >>>
>> >>> garyk at linus:~/quantum$ tox
>> >>> GLOB sdist-make: /home/garyk/quantum/setup.py
>> >>> py26 create: /home/garyk/quantum/.tox/py26
>> >>> ERROR: InterpreterNotFound: python2.6
>> >>> py27 sdist-reinst: /home/garyk/quantum/.tox/dist/quantum-2013.1.zip
>> >>> py27 runtests: commands[0]
>> >>> /home/garyk/quantum/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/ext/declarative.py:1343:
>> >>> SAWarning: The classname 'NetworkBinding' is already in the registry
>> >>> of this declarative base, mapped to <class
>> >>> 'quantum.plugins.openvswitch.ovs_models_v2.NetworkBinding'>
>> >>>   _as_declarative(cls, classname, cls.__dict__)
>> >>>
>> >>> Traceback (most recent call last):
>> >>>   File ".tox/py27/bin/nosetests", line 9, in <module>
>> >>>     load_entry_point('nose==1.2.1', 'console_scripts', 'nosetests')()
>> >>>   File "/home/garyk/quantum/.tox/py27/local/lib/python2.7/site-packages/nose/core.py", line 118, in __init__
>> >>>     **extra_args)
>> >>>   File "/usr/lib/python2.7/unittest/main.py", line 95, in __init__
>> >>>     self.runTests()
>> >>>   File "/home/garyk/quantum/.tox/py27/local/lib/python2.7/site-packages/nose/core.py", line 197, in runTests
>> >>>     result = self.testRunner.run(self.test)
>> >>>   File "/home/garyk/quantum/.tox/py27/local/lib/python2.7/site-packages/nose/plugins/multiprocess.py", line 357, in run
>> >>>     testQueue = Queue()
>> >>>   File "/usr/lib/python2.7/multiprocessing/managers.py", line 667, in temp
>> >>>     token, exp = self._create(typeid, *args, **kwds)
>> >>>   File "/usr/lib/python2.7/multiprocessing/managers.py", line 565, in _create
>> >>>     conn = self._Client(self._address, authkey=self._authkey)
>> >>>   File "/usr/lib/python2.7/multiprocessing/connection.py", line 175, in Client
>> >>>     answer_challenge(c, authkey)
>> >>>   File "/usr/lib/python2.7/multiprocessing/connection.py", line 413, in answer_challenge
>> >>>     message = connection.recv_bytes(256)         # reject large message
>> >>> IOError: [Errno 11] Resource temporarily unavailable
>> >>>
>> >>> Has anyone ever encountered something like this?
>> >>> Thanks
>> >>> Gary
>> >>>
>> >
>> >
>> >
>>
>>
>
>
>
> --
> *Intel SSG/SSD/SOTC/PRC/CITT*
> 880 Zixing Road, Zizhu Science Park, Minhang District, Shanghai, 200241,
> China
> +862161166500
>
>
>
>

