[openstack-dev] [Heat] Integration Test Questions

Sergey Kraynev skraynev at mirantis.com
Sun Sep 13 13:10:54 UTC 2015


Hi Sabeen,

I think Pavlo described the whole picture really nicely,
so I'd just like to add a couple of thoughts below:


Regards,
Sergey.

On 12 September 2015 at 15:08, Pavlo Shchelokovskyy <
pshchelokovskyy at mirantis.com> wrote:

> Hi Sabeen,
>
> thank you for the effort :) More tests are always better than fewer, but
> unfortunately we are limited by the power of the VM and the time
> available for gate jobs. This is why we do no exhaustive functional
> testing of all resource plugin APIs: every time a test goes out and
> makes an async API call to OpenStack, like creating a server, it
> consumes time, and often consumes resources of the VM that runs other
> tests of the same test suite as well (we do run them in parallel),
> making those other tests slower to some degree too. Also, even for
> non-async/lightweight resources (e.g. SaharaNodeGroupTemplate), testing
> all of them requires running the corresponding OpenStack service on the
> gate job, which consumes its resources even further.
>

Moreover, each additional service makes the devstack installation longer,
so as a result we have less time left for running the tests.
Note that we should also be careful when adding new tests because, as
Pavlo mentioned, we run them in parallel. I assume everyone already tries
to use unique names (or purely random ids) for stacks and their internal
resources, but I still want to remind you to do it carefully ;)
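
For illustration, a tiny helper along the lines of what the base test class
already does for us (the exact helper name in the tree may differ):

import random
import string


def rand_name(prefix='heat-func-test'):
    # append a random suffix so stacks created by parallel test
    # workers never collide on name
    suffix = ''.join(random.choice(string.ascii_lowercase + string.digits)
                     for _ in range(8))
    return '%s-%s' % (prefix, suffix)

# usage inside a test: stack_name = rand_name(self.__class__.__name__)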


>
> Below are my thoughts and comments inline:
>
> On Fri, Sep 11, 2015 at 6:46 PM, Sabeen Syed <sabeen.syed at rackspace.com>
> wrote:
> > Hi All,
> >
> > My coworker and I would like to start filling out some gaps in API
> > coverage that we see in the functional integration tests. We have one
> > patch up for review (https://review.openstack.org/#/c/219025/). We got
> > a comment saying that any new stack creation will prolong the testing
> > cycle. We agree with that, and it got us thinking about a few things -
>
> this test should use TestResource (or even RandomString if you do
> not need to ensure a particular order of events), as there is no point
> in using an actual server for the assertions this test makes on
> stack/resource events.
>

I personally prefer TestResource, because it's more flexible.
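
For example, a minimal functional test on top of it could look roughly like
this (base class and helper names here are from memory, so double-check
them against the tree):

from heat_integrationtests.functional import functional_base

test_template = '''
heat_template_version: 2014-10-16
resources:
  test:
    type: OS::Heat::TestResource
    properties:
      value: just-a-string
      wait_secs: 1
outputs:
  out:
    value: {get_attr: [test, output]}
'''


class StackEventsTest(functional_base.FunctionalTestsBase):

    def test_lightweight_stack_events(self):
        # no real server is booted, so the stack reaches
        # CREATE_COMPLETE within a couple of seconds
        stack_id = self.stack_create(template=test_template)
        self.assertNotEqual([], self.client.events.list(stack_id))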



>
> >
> > We are planning on adding tests for the following APIs: event APIs,
> > template APIs, software config APIs, cancel stack updates, check stack
> > resources and show resource data. These are the APIs that we saw aren't
> > covered in our current integration tests. Please let us know if you
> > feel we need tests for these upstream, if we're missing something, or
> > if it's already covered somewhere.
>
> Just make sure all (ideally) of them use TestResource/RandomString.
> You might still have to tweak it a bit to support a successful/failed
> check though. There is already a test for SC/SD in functional (and I
> actually wonder why it is in functional rather than scenario) - is it
> not enough?
>
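
Right - and for the failed path TestResource can be told to break on
purpose; if I remember the helpers correctly, something along these lines
should do (treat the names as illustrative):

failing_template = '''
heat_template_version: 2014-10-16
resources:
  test:
    type: OS::Heat::TestResource
    properties:
      fail: true
      wait_secs: 1
'''

# stack_create() accepts the expected final status, so a deliberately
# failing stack can still be asserted on:
# self.stack_create(template=failing_template,
#                   expected_status='CREATE_FAILED')
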
> > To conserve the creation of stacks, would it make sense to add one
> > test and then, under that, call sub-methods that run tests against
> > that stack? So something like this:
> >
> > def _test_template_apis(self, stack_id):
> >     ...
> >
> > def _test_softwareconfig_apis(self, stack_id):
> >     ...
> >
> > def _test_event_apis(self, stack_id):
> >     ...
> >
> > def test_event_template_softwareconfig_apis(self):
> >     stack_id = self.stack_create(…)
> >     self._test_template_apis(stack_id)
> >     self._test_event_apis(stack_id)
> >     self._test_softwareconfig_apis(stack_id)
>
> If you use TestResource and the like, the time to create a new stack
> for each test is not that long. And it is much better to have the API
> tests separated like actual unit tests; otherwise a failure in one API
> will fail the whole test, which only leaves the developer wondering
> "what was that?" and makes it harder to find the root cause.
>
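
+1 to this. With a lightweight resource the separated variant stays cheap
anyway - a rough sketch (again, class and helper names are from memory):

from heat_integrationtests.functional import functional_base

MINIMAL_TEMPLATE = '''
heat_template_version: 2014-10-16
resources:
  test:
    type: OS::Heat::TestResource
'''


class TemplateApiTest(functional_base.FunctionalTestsBase):

    def test_template_show(self):
        # each test owns its own tiny stack, so a failure here does not
        # hide the results of the event/software-config tests
        stack_id = self.stack_create(template=MINIMAL_TEMPLATE)
        self.assertIsNotNone(self.client.stacks.template(stack_id))


class EventApiTest(functional_base.FunctionalTestsBase):

    def test_event_list(self):
        stack_id = self.stack_create(template=MINIMAL_TEMPLATE)
        self.assertNotEqual([], self.client.events.list(stack_id))
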
> >
> > The current tests are divided into two folders – scenario and
> > functional. To help with organization, under the functional folder
> > would it make sense to add an 'api' folder, a 'resource' folder and a
> > 'misc' folder? Here is what we're thinking about where each test can
> > be put:
> >
> > API folder - test_create_update.py, test_preview.py
> >
> > Resource folder – test_autoscaling.py, test_aws_stack.py,
> > test_conditional_exposure.py, test_create_update_neutron_port.py,
> > test_encryption_vol_type.py, test_heat_autoscaling.py,
> > test_instance_group.py, test_resource_group.py, test_software_config.py,
> > test_swiftsignal_update.py
> >
> > Misc folder - test_default_parameters.py, test_encrypted_parameter.py,
> > test_hooks.py, test_notifications.py, test_reload_on_sighup.py,
> > test_remote_stack.py, test_stack_tags.py, test_template_resource.py,
> > test_validation.py
> >
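
On disk that would look roughly like:

heat_integrationtests/
    functional/
        api/
            test_create_update.py
            test_preview.py
        resource/
            test_autoscaling.py
            test_software_config.py
            ...
        misc/
            test_hooks.py
            test_validation.py
            ...
    scenario/
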
> > Should we add to our README? For example, I see that we use
> > TestResource as a resource in some of our tests, but we don't have an
> > explanation of how to set that up. I'd also like to add explanations
> > about the pre-testhook and post-testhook files: how they work and
> > what each line does/which test it's attached to.
>
> By all means :) If it flattens the learning curve for new Heat
> contributors, it's even better.
>
> > For the tests that we're working on, should we be adding a blueprint
> > or task somewhere to let everybody know what we're working on, so
> > there is no overlap?
>
> File a bug against Heat, make it a wishlist priority, and tag it
> 'functional-tests'. Assign it to yourself at will :) but please check
> out what we already have filed:
>
> https://bugs.launchpad.net/heat/+bugs?field.tag=functional-tests
>
> > From our observations, we think it would be beneficial to add more
> > comments to the existing tests. For example, we could have at minimum
> > a short blurb for each method. Comments?
>
> A (multi-line) docstring per module/test method would suffice. For the
> longer scenario tests we already do this, describing the scenario the
> test aims to pass through.
>
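
Something as simple as this already helps a lot (the test name and the
scenario in the docstring are just an example):

def test_cancel_update_rolls_back(self):
    """Check that cancelling a stack update rolls it back.

    Create a stack with a single TestResource, start an update that
    would replace the resource, cancel the update mid-flight and
    verify the stack ends up ROLLBACK_COMPLETE with the original
    resource still in place.
    """
    ...
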
> > Should we add a 'high level coverage' summary to our README? It could
> > help all of us know, at a high level, which resources and APIs we
> > already have tests for, etc.
>
> As for APIs - I believe we could use some functional test coverage
> tool. I am not sure there is a common one already settled on in the
> community though. It might be a good cross-project topic to discuss
> during the summit with the Tempest community; they might already have
> something in the works.
>
> As for resources - we do try to exercise the native Heat ones that
> provide the functionality of Heat itself (ASGs, RGs etc.), but AFAIK
> we have no plans to deep-test all the other resources in a functional
> way.
>
> >
> > Let us know what you all think!
>
> Thanks again for bringing this up. "If it is not tested - it does not
> work" :)
>
> Best regards,
>
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>

