[openstack-dev] [cinder] Proposal: changes to our current testing process

Knight, Clinton Clinton.Knight at netapp.com
Mon Mar 7 15:57:10 UTC 2016



On 3/7/16, 10:45 AM, "Eric Harney" <eharney at redhat.com> wrote:

>On 03/06/2016 09:35 PM, John Griffith wrote:
>> On Sat, Mar 5, 2016 at 4:27 PM, Jay S. Bryant
>> <jsbryant at electronicjungle.net> wrote:
>> 
>>> Ivan,
>>>
>>> I agree that our testing needs improvement.  Thanks for starting this
>>> thread.
>>>
>>> With regards to adding a hacking check for tests that run too long ...
>>> are you thinking that we would have a timer that checks for long-running
>>> jobs, or something that checks for long sleeps in the testing code?  Just
>>> curious about your ideas for tackling that situation.  Would be interested
>>> in helping with that, perhaps.
>>>
>>> Thanks!
>>> Jay
>>>
>>>
>>> On 03/02/2016 05:25 AM, Ivan Kolodyazhny wrote:
>>>
>>> Hi Team,
>>>
>>> Here are my thoughts and proposals on how to make the Cinder testing
>>> process better. I won't cover the "3rd party CIs" topic here. I will
>>> share my opinion about current and future jobs.
>>>
>>>
>>> Unit-tests
>>>
>>>    - Long-running tests. I hope everybody will agree that unit tests
>>>    must be quite simple and very fast. Unit tests which take more than
>>>    3-5 seconds should be refactored and/or moved to 'integration' tests.
>>>    Thanks to Tom Barron for several fixes like [1]. IMO, it would be
>>>    good to have some hacking checks to prevent such issues in the future.
>>>
>>>    - Test coverage. We don't check it in an automatic way on the gates.
>>>    Usually, we require adding some unit tests during the code review
>>>    process. Why can't we add a coverage job to our CI and not merge new
>>>    patches which would decrease the test coverage rate? Maybe such a job
>>>    could become voting in the future so that it isn't ignored. For now,
>>>    there is no simple way to check coverage because the 'tox -e cover'
>>>    output is not useful [2].

The Manila project has a coverage job that may be of interest to Cinder.
It’s not perfect, because sometimes the periodic loopingcall routines run
during the test run and sometimes not, leading to false negatives.  But
most of the time it’s a handy confirmation that the unit test coverage
didn’t decline due to a patch.  Look at the manila-coverage job in this
example:  https://review.openstack.org/#/c/287575/
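
In case it helps, here is roughly the comparison such a job would make.
This is only a sketch: it assumes the job has already produced one
coverage data file from a run on master and one from a run with the patch
applied, and the file names, the tolerance and the script name are all
made up for illustration.

    #!/usr/bin/env python
    """check_coverage.py - fail when a patch lowers unit test coverage."""
    import sys

    import coverage


    def total_percent(data_file):
        # Load a previously collected data file and return the overall
        # coverage percentage; report() also prints the per-module table.
        cov = coverage.Coverage(data_file=data_file)
        cov.load()
        return cov.report()


    def main():
        base = total_percent('.coverage.master')    # hypothetical file name
        patched = total_percent('.coverage.patch')  # hypothetical file name
        allowed_drop = 0.1  # tolerance for loopingcall-style fluctuations

        print('master: %.2f%%  patch: %.2f%%' % (base, patched))
        if patched + allowed_drop < base:
            print('Coverage decreased, please add unit tests.')
            return 1
        return 0


    if __name__ == '__main__':
        sys.exit(main())

A small tolerance like the one above would also paper over the
loopingcall noise mentioned above, at the cost of letting tiny real
drops through.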

>>>
>>>
>>> Functional tests for Cinder
>>>
>>> We introduced some functional tests last month [3]. Here is a patch to
>>> infra to add a new job [4]. Because these tests were moved from the unit
>>> tests, I think we're OK to make this job voting. Such tests should not be
>>> a replacement for Tempest. They could even test Cinder with the Fake
>>> Driver to make the job faster and not dependent on storage backend issues.
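
For what it's worth, pointing such a job at the fake driver should only
take a small cinder.conf override, something along these lines (the
module path of the fake driver below is from memory and may well differ
between releases, so treat it as an assumption):

    [DEFAULT]
    enabled_backends = fake

    [fake]
    volume_backend_name = fake
    # Assumed location of the fake driver; check the tree for the
    # current path before copying this.
    volume_driver = cinder.tests.unit.fake_driver.FakeLoggingVolumeDriver
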
>>>
>>>
>>> Tempest in-tree tests
>>>
>>> Sean started work on this [5] and I think it's a good idea to get these
>>> tests into the Cinder repo so we can run them in Tempest jobs and
>>> 3rd-party CIs against a real backend.
>>>
>>>
>>> Functional tests for python-brick-cinderclient-ext
>>>
>>> There are patches that introduce functional tests [6] and a new job [7].
>>>
>>>
>>> Functional tests for python-cinderclient
>>>
>>> We've got a very limited set of such tests and a non-voting job. IMO, we
>>> can run them even with the Cinder Fake Driver to make them not dependent
>>> on a storage backend and to make them faster. I believe we can make this
>>> job voting soon. Also, we need more contributors for this kind of tests.
>>>
>>>
>>> Integrated tests for python-cinderclient
>>>
>>> We need such tests to make sure that we won't break Nova, Heat or other
>>> python-cinderclient consumers with the next merged patch. There is a
>>> thread on the openstack-dev ML about such tests [8] and a proposal [9]
>>> to introduce them to python-cinderclient.
>>>
>>>
>>> Rally tests
>>>
>>> IMO, it would be good to have new Rally scenarios for every patch like
>>> 'improves performance', 'fixes concurrency issues', etc. Even if we as a
>>> Cinder community don't have enough time to implement them, we should ask
>>> for them in reviews and on the openstack-dev ML, and file Rally bugs and
>>> blueprints if needed.
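
As a concrete example of the kind of scenario to ask for, a task as small
as the following already catches gross create/delete regressions. It
assumes the stock CinderVolumes.create_and_delete_volume scenario that
ships with Rally, and the size, iteration count and concurrency are
arbitrary; it would run with 'rally task start' against the gate cloud.

    ---
      CinderVolumes.create_and_delete_volume:
        -
          args:
            size: 1
          runner:
            type: "constant"
            times: 10
            concurrency: 2
          context:
            users:
              tenants: 1
              users_per_tenant: 1
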
>>>
>>>
>>> [1] https://review.openstack.org/#/c/282861/
>>> [2] http://paste.openstack.org/show/488925/
>>> [3] https://review.openstack.org/#/c/267801/
>>> [4] https://review.openstack.org/#/c/287115/
>>> [5] https://review.openstack.org/#/c/274471/
>>> [6] https://review.openstack.org/#/c/265811/
>>> [7] https://review.openstack.org/#/c/265925/
>>> [8] http://lists.openstack.org/pipermail/openstack-dev/2016-March/088027.html
>>> [9] https://review.openstack.org/#/c/279432/
>>>
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>
>>>
>>>
>> We could just parse out the tox slowest-tests output we already have.  Do
>> something like pylint, where we look at the existing/current slowest test
>> and balk if that's exceeded.
>> 
>> Thoughts?
>> 
>> John
>> 
>
>I'm not really sure that writing a "hacking" check for this is a
>worthwhile investment.  (It's not really a hacking check but something
>more like what you're describing, though that's beside the point.)
>
>We should just be looking for large, complex unit tests in review, and
>the ones that we already have should be moving towards the functional
>test area anyway.
>
>So what would the objective here be exactly?
>
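
If we do decide to automate what John describes, the check itself is
tiny. The sketch below reads a slowest-tests listing (e.g. the output of
'testr slowest') on stdin and fails if anything exceeds a budget; the
5-second budget, the script name and the assumption that the listing
prints simple "test id / runtime" rows are all mine.

    #!/usr/bin/env python
    """check_slow_tests.py - flag unit tests that exceed a runtime budget."""
    import re
    import sys

    MAX_SECONDS = 5.0  # arbitrary budget, per Ivan's 3-5 second suggestion

    # Match table rows that look like "<test id>   <runtime>".
    ROW = re.compile(r'^(?P<test>\S+)\s+(?P<seconds>\d+\.\d+)\s*$')


    def main():
        offenders = []
        for line in sys.stdin:
            match = ROW.match(line.strip())
            if not match:
                continue  # skip headers, separators and anything unparseable
            seconds = float(match.group('seconds'))
            if seconds > MAX_SECONDS:
                offenders.append((match.group('test'), seconds))

        for test, seconds in offenders:
            print('%s took %.1fs (budget %.1fs)' % (test, seconds, MAX_SECONDS))
        return 1 if offenders else 0


    if __name__ == '__main__':
        sys.exit(main())

Something like "testr slowest | python tools/check_slow_tests.py" at the
end of the unit test job would then be enough to fail the run when a
test blows the budget.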


