[Openstack] [QA] openstack-integration-tests

Rohit Karajgi rohit.karajgi at vertex.co.in
Tue Nov 1 13:24:34 UTC 2011


Hi Gabe,

The proposed set of requirements for this tool that you've listed below looks reasonable to me.
Parallel execution is a huge time saver, but it also means the tests have to be written to be independent. Zodiac's object-oriented design is a nice example of making test writing quick and easy.

I feel reporting is key and could also be discussed in a little more detail early on:

-          Is xunit output from nosetests (or similar) enough for reporting in our functional/integration tests?

-          Ability for the reports to pinpoint where test failures and errors occurred and with what test data. Test data should be captured even for successful tests.

-          Ability to log detailed execution steps, e.g. a logStep() wrapper in addition to log.info() or log.debug() (a rough sketch follows this list)

-          Ability to send some automated notification when some % threshold of failed tests has been reached.

-          Easy debugging and logging of tests, such that the test writer need not know the internals of the core test framework.
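
Purely as an illustration of the logStep() idea above -- a minimal sketch, not tied to any existing framework; the helper name and output format are hypothetical:

    import logging

    logger = logging.getLogger("integration")

    def log_step(description, **test_data):
        """Record a numbered, human-readable test step plus the data it used,
        so a report can pinpoint where a failure happened and with what inputs."""
        log_step.counter = getattr(log_step, "counter", 0) + 1
        logger.info("STEP %d: %s | data=%r", log_step.counter, description, test_data)

    # Usage inside a test:
    #   log_step("Create server from ttylinux image", image_ref=IMAGE_REF, flavor_ref=1)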

Thanks!
Rohit

From: openstack-bounces+rohit.karajgi=vertex.co.in at lists.launchpad.net [mailto:openstack-bounces+rohit.karajgi=vertex.co.in at lists.launchpad.net] On Behalf Of Gabe Westmaas
Sent: Thursday, October 27, 2011 7:53 AM
To: openstack at lists.launchpad.net
Subject: Re: [Openstack] [QA] openstack-integration-tests

Thanks to everyone for your input, based on these emails and the QA meeting today, I'd like to propose the following as our list of requirements for this suite, as well as a path forward with some code proposed prior to the next meeting.

 - Python as the language of focus for the test suite
 - The ability to run tests in parallel - driven largely by a need for speed
 - The ability to quickly add tests.  For example, adding a new module to a directory should be sufficient to get those tests considered part of the test suite
 - The ability to run the tests via a widely accepted standard interface, for example nose
 - Testing both JSON and XML where the APIs support both JSON and XML
 - Support for xunit output for reporting purposes
 - Tests should include fully functional tests - for example SSH verification of instance state after API commands are issued (not to be confused with SSH verification on the host system or control infrastructure); a sketch of this follows the list
 - Deployment agnosticism, meaning that no matter how you choose to install the OpenStack project you are testing with this particular run, as long as the API is functioning, you should be able to run this test suite against it.  In the case of nova that means no favoritism to Xen or KVM or whichever hypervisor, etc.
 - The ability to run a subset of tests via configuration or command line options
 - Tests that use httplib2 and the novaclient library are both critical, but we will start by focusing on httplib2 and add CLI coverage where it makes sense later
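
To illustrate the SSH verification bullet above -- a minimal sketch, assuming paramiko is available and that the instance address and credentials come from test configuration; the function name is hypothetical:

    import paramiko

    def instance_is_reachable(host, username, private_key_path, timeout=60):
        """Return True if we can SSH into the guest and run a trivial command."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            client.connect(host, username=username,
                           key_filename=private_key_path, timeout=timeout)
            stdin, stdout, stderr = client.exec_command("uname -a")
            return stdout.channel.recv_exit_status() == 0
        finally:
            client.close()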

I think we can achieve deployment agnosticism by combining the test-skipping methods already available in the suites with feature flags that specify whether a particular feature should or should not be tested.  Most things can default to on, but if, for example, you can't test resize because you are using KVM, you can run the tests with the resize feature disabled.  This way, if you know the deployment you are running against doesn't support certain features, you can specify that via config or command line.
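
As a rough sketch of what that could look like -- the config file name, section, and flag names here are hypothetical; the skip mechanism is nose's SkipTest:

    import ConfigParser
    import functools

    from nose.plugins.skip import SkipTest

    config = ConfigParser.ConfigParser()
    config.read("integration.conf")   # hypothetical config file with a [features] section

    def requires_feature(flag):
        """Skip the decorated test unless the named feature flag is enabled."""
        def decorator(test_func):
            @functools.wraps(test_func)
            def wrapper(*args, **kwargs):
                if not config.getboolean("features", flag):
                    raise SkipTest("feature %r disabled for this deployment" % flag)
                return test_func(*args, **kwargs)
            return wrapper
        return decorator

    @requires_feature("resize")
    def test_resize_server():
        pass   # skipped when, e.g., the deployment runs on KVM with resize turned off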

The parallel running is still somewhat contentious in that we have to figure out how we can get the speed gains we care about without affecting test correctness.  I think we can put some more thought into this long term and get started with independent tests that can be run in parallel, adding test dependencies later if necessary.
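
For example, a test that owns its resources end to end can run alongside others with nose's multiprocess plugin (nosetests --processes=4). This is only a shape sketch; create_server, delete_server, wait_for_status, and get_status are hypothetical helpers, not existing code:

    import unittest
    import uuid

    class ServerLifecycleTest(unittest.TestCase):
        """Self-contained: creates and deletes its own uniquely named server,
        so it never collides with tests running in parallel."""

        def setUp(self):
            self.name = "itest-%s" % uuid.uuid4().hex[:8]
            self.server = create_server(self.name)    # hypothetical helper

        def tearDown(self):
            delete_server(self.server)                # hypothetical helper

        def test_server_becomes_active(self):
            wait_for_status(self.server, "ACTIVE")    # hypothetical helper
            self.assertEqual(get_status(self.server), "ACTIVE")   # hypothetical helper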

I think we can get a good run at a basic set of tests by taking what is in place now and applying these requirements (all of which are satisfied by some combination of what is there already) and then working to migrate the remaining tests to match that style.  Daryl has volunteered to do that work, and so if we can agree on this as a reasonable set of requirements, we can evaluate what gets merge propped with this in mind.

Gabe



From: Daryl Walleck <daryl.walleck at RACKSPACE.COM>
Date: Mon, 24 Oct 2011 02:17:04 +0000
To: Joseph Heck <heckj at mac.com>
Cc: "openstack at lists.launchpad.net" <openstack at lists.launchpad.net>
Subject: Re: [Openstack] [QA] openstack-integration-tests

Very interesting. Based on your initial outline, I see why you were thinking along the lines you were. Just to share, a few of my basic goals are:


 *   Environment-agnostic tests. Whatever is environment-specific should be easily and quickly configurable or detectable
 *   Tests that can run against either a clean or an existing OpenStack environment
 *   Parameterized tests. Maximize the value of some tests by running as many relevant data sets through them as needed (see the sketch after this list)
 *   Unless absolutely necessary, only using public or admin APIs for OpenStack apps (not counting helper libraries for ssh and ping and the like)
 *   Should run regardless of hypervisor. If differing hypervisors add or remove functionality, the affected tests should be skipped intelligently based on my initial test configuration
 *   Quick turnaround for results. Continuous deployment is high on my list of wants, so the ability to quickly and thoroughly verify a build is critical
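
On the parameterized-tests point, a minimal sketch using nose's test generators; the flavor list is made up:

    def check_server_boots(flavor_ref):
        # In a real test this would create a server with the given flavor and
        # assert it reaches ACTIVE; here it only shows the shape.
        assert flavor_ref is not None

    def test_server_boots_with_each_flavor():
        # nose treats each yielded (callable, args) pair as its own test case,
        # so one generator covers as many data sets as needed.
        for flavor_ref in ("1", "2", "3"):
            yield check_server_boots, flavor_ref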

I haven't been involved in the Dashboard project as of yet, but I agree that it would be really helpful to build a nice Selenium suite around it as well as an even higher level of integration testing. In my quick glance at the project I didn't see anything of the sort in place. Where that should live would be another good question.

Daryl


On Oct 19, 2011, at 10:48 PM, Joseph Heck wrote:

Fair enough; let me outline the goals I'm pursuing:

In the integration tests, I'm looking for a suite of tests that are relatively agnostic to the implementation of the components of OpenStack, and which interact with OpenStack through the APIs (and potentially the nova-* and swift-* command lines).

I have a hardware environment that I want to "burn down and rebuild" on a regular basis (i.e. PXE boot and clean-slate the stuff), installing and configuring OpenStack from the trunk branches (or a release branch if we're at milestone time) and running tests that use these systems together:
 - the integration of authorization and authentication (keystone) with nova and swift
 - using glance in conjunction with swift
 - using glance in conjunction with nova
 - working the dashboard and its UI mechanisms, which interact with everything else behind it through the keystone "endpoints" service catalog

I'm hoping that this framework would allow for supporting any number of installs - whether it's Fedora & GridDynamics RPMs, Ubuntu+KVM, or interesting and random variations on those themes. It's especially important to me that we can have this framework run, and expect it to pass, against any OpenStack setup as we get into the Quantum and Melange improvements over the Essex timeframe, which will be heavily reliant on deployment-specific choices (Cisco vs. OpenVSwitch vs. whatever else might become available).

I'd prefer the test suite not choose or show preference for a specific install, or at the very minimum support identifying tests specific to an install (i.e. to highlight the interesting deltas that have grown between Xen and KVM installations) - and be something that anyone can set up and run (since I don't want to expect any one entity to be responsible for having all the various OpenStack flavors).

My own personal aim is to have a stable back-end environment against which I can run Selenium or equivalent tests on the dashboard, which will in turn exercise everything else. This means having the environment in a known state to start with, and I'm expecting that a whole suite of these tests might well be very time consuming to run - up to a couple of hours.
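
For reference, the kind of dashboard check I mean -- a bare-bones sketch with the Selenium Python bindings; the URL, element ids, and credentials are all hypothetical placeholders:

    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("http://dashboard.example.com/")                    # hypothetical URL
        driver.find_element_by_id("id_username").send_keys("demo")     # hypothetical element ids
        driver.find_element_by_id("id_password").send_keys("secret")
        driver.find_element_by_css_selector("button[type=submit]").click()
        assert "Overview" in driver.page_source                        # crude sanity check
    finally:
        driver.quit()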

My intention is to actually have multiple "swim lanes" of these hardware environments (I do already) and to automate the burn-down, install, configuration, and testing of those environments. I would like the "testing" portion of this to be consistent - and I'm looking to the OpenStack Integration Tests to be the basis for that - and to which we'll be submitting back tests working the Dashboard as it goes forward.

Something to note (kind of buried in http://wiki.openstack.org/openstack-integration-test-suites): the proboscis library (http://pypi.python.org/pypi/proboscis) is a nose test runner that allows the annotation of tests with dependencies and groups, inspired by TestNG and maintained by Tim Simpson at Rackspace.

I haven't written much with it yet, but would like to suggest that tests could be organized with this to allow for dependencies and grouping, enabling parallel execution where appropriate.
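
As I understand it, the grouping and dependency annotations look roughly like the following -- a sketch from memory, so treat the exact decorator arguments as approximate rather than authoritative:

    from proboscis import test, TestProgram

    @test(groups=["servers", "smoke"])
    def create_server():
        pass   # build the server other tests in the group depend on

    @test(groups=["servers"], depends_on=[create_server])
    def resize_server():
        pass   # only runs after create_server has passed

    @test(groups=["cleanup"], depends_on_groups=["servers"])
    def delete_server():
        pass   # runs after everything in the "servers" group

    if __name__ == "__main__":
        TestProgram().run_and_exit()   # proboscis' runner wraps nose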

I've currently been generating functional tests using Lettuce (with an httplib2-based webdriver), and while I like it from a BDD perspective, I'm finding it more time consuming to write the tests (as I end up messing with the DSL more than writing tests) today.

-joe

On Oct 19, 2011, at 8:25 PM, Daryl Walleck wrote:
Interesting analysis. I see a few issues though. I don't think that running tests in serial is a realistic option for a functional/integration test suite of significant size, given the time needed to create the resources needed for testing. We could, but the time needed to execute the suite and get feedback would be prohibitive. If tests are self-sufficient and create or intelligently share resources, parallelization should be doable.

I've heard the idea of forced test dependencies several times, which adds its own set of problems. Independent and flexible grouping/execution of tests is something I've relied on quite a bit in the past when troubleshooting. I'd also be concerned about the stability and reliability of the tests if each relies on the state generated by the tests before it. If test 4 out of 100 fails, either the results of the remaining 96 tests would be false positives, or, if all dependent execution ends at that point, numerous test cases would not be executed, possibly hiding other issues until the first issue is resolved.

I also get the impression that we all may be fairly far apart on what it is that we each want from a testing framework (and even, to some degree, on what components make up a framework). It might be useful to take a step back and discuss what we want from this test suite and what it should accomplish. For example, my understanding is that this suite will likely grow to hundreds, if not thousands, of tests, which in my mind significantly changes how I would design the suite.

Daryl

On Oct 19, 2011, at 7:26 PM, Joseph Heck wrote:

What you've described is a great unit testing framework, but with integration testing you need recognition that some tests are dependent on specific system state - and therefore cannot be run blindly in parallel.

Some can, just not all - and often the most expedient way to get a system into a known state is to "walk it there" through a sequence of tests.

I believe we essentially already have this framework in place with the openstack-integration-tests repo and the proposed (but not yet implemented) set of tests using proboscis to enable dependencies and grouping in those tests.

-joe

On Oct 19, 2011, at 5:00 PM, "Ngo, Donald (HP Cloud Services : QA)" <donald.ngo at hp.com> wrote:
My wish list for our proposed framework:


-          Create JUnit-style XML run reports

-          Run tests in parallel

-          Should be able to run out of the box with little configuration (a single configuration file, everything in one place)

-          Run through a standard runner like nose (i.e. nosetests /Kong or nosetests /YourSuite). This will allow the suites to integrate easily into each company's framework.

-          The test framework should support "drop and run", using reflection to identify which tests to run (see the sketch after this list)
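
On that last "drop and run" point, nose's default discovery already behaves this way: any module, class, or function whose name matches its test pattern is picked up without registration. A trivial sketch (the module path and assertion are made up):

    # tests/test_flavors.py -- dropping this file into the tests/ directory is
    # enough; "nosetests tests/" finds it by naming convention, with no suite
    # wiring or registration step.

    def test_default_flavors_are_listed():
        flavors = ["m1.tiny", "m1.small"]   # stand-in for a real API call
        assert "m1.tiny" in flavors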

Thanks,

Donald

From: openstack-bounces+donald.ngo=hp.com at lists.launchpad.net [mailto:openstack-bounces+donald.ngo=hp.com at lists.launchpad.net] On Behalf Of Brebner, Gavin
Sent: Wednesday, October 19, 2011 10:39 AM
To: Daryl Walleck; Rohit Karajgi
Cc: openstack at lists.launchpad.net
Subject: Re: [Openstack] [QA] openstack-integration-tests


My 2c

To me, the end-customer facing part of the OpenStack solution is in many ways the set of libraries and tools customers are likely to use - as such, testing with them is essential. If there's a bug that can only be exposed through some obscure API call that isn't readily available through one of the usual libraries, it will mostly be of only minor importance, as it will rarely if ever be hit, whereas a bug in a library that causes data corruption, for example, will not be good for OpenStack no matter how correct things are from the endpoint in. The whole solution needs to work; this is complex as we don't necessarily control all the libraries and can't test everything with every possible library, so we have to do the best we can to ensure we catch errors as early as possible, e.g. via direct API testing for unit tests / low-level functional tests. Testing at multiple levels is required, and the art is in selecting how much effort to put at each level.

Re: the framework, we need a wide range of capabilities, hence keep it simple and flexible. One thing I'd really like to see would be a means to express parallelism - e.g. for chaos-monkey-type tests, race conditions, realistic stress runs and so on. Support for tests written in any arbitrary language is also required. I can write all this myself, but would love a framework to handle it for me and leave me to concentrate on mimicking what I think our end customers are likely to do.

Gavin

From:openstack-bounces+gavin.brebner=hp.com at lists.launchpad.net<mailto:openstack-bounces+gavin.brebner=hp.com at lists.launchpad.net> [mailto:openstack-bounces+gavin.brebner=hp.com at lists.launchpad.net<mailto:hp.com at lists.launchpad.net>] On Behalf Of Daryl Walleck
Sent: Wednesday, October 19, 2011 6:27 PM
To: Rohit Karajgi
Cc: openstack at lists.launchpad.net<mailto:openstack at lists.launchpad.net>
Subject: Re: [Openstack] [QA] openstack-integration-tests

Hi Rohit,

I'm glad to see so much interest in getting testing done right. So here are my thoughts. As far as the nova client/euca-tools portion goes, I think we absolutely need a series of tests that validate that these bindings work correctly. As a nice side effect they do test their respective APIs, which is good. I think duplication of testing between these two bindings and even what I'm envisioning as the "main" test suite is necessary, as we have to verify at least at a high level that they work correctly.

My thought for our core testing is that those would be the tests that do not use language bindings. I think this is where the interesting architectural work can be done. "Test framework" is a very loose term that gets used a lot, but to me a framework includes:


 *   The test runner and its capabilities
 *   How the test code is structured to assure maintainability/flexibility/ease of code re-use
 *   Any utilities provided to extend or ease the ability to test

I think we all have a lot of good ideas about this; it's just a matter of consolidating them and choosing one direction to go forward with.

Daryl

On Oct 19, 2011, at 9:58 AM, Rohit Karajgi wrote:

Hello Stackers,

I was at the design summit sessions that were 'all about QA' and had shown my interest in supporting this effort. Sorry I could not be present at the first QA IRC meeting due to a vacation.
I had a chance to read through the meeting log, and Nachi-san also shared his account of the outcome with me. Thanks Nachi!

Just a heads up to put some of my thoughts on ML before today's meeting.
I had a look at the various (7 and counting??) test frameworks out there for testing the OpenStack APIs.
Jay, Gabe and Tim put up a neat wiki (http://wiki.openstack.org/openstack-integration-test-suites) to compare many of these.

I looked at Lettuce<https://github.com/gabrielfalcao/lettuce> and felt it was quite effective. It's incredibly easy to write tests once the wrappers over the application are set up. Easy as in: "Given a ttylinux image, create a Server" is how a test scenario would be written in a typical .feature file (which is basically a list of test scenarios for a particular feature) in natural language. It has nose support, and there's some neat documentation<http://lettuce.it/index.html> too. I was just curious whether anyone has already tried out Lettuce with OpenStack? From the ODS, I think the Grid Dynamics guys already have their own implementation. It would be great if one of you guys could join the meeting and throw some light on how you've got it to work.
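
For anyone who hasn't seen Lettuce, the shape is roughly as follows -- a sketch built around the scenario wording above; find_image, create_server, and wait_for_status are hypothetical helpers:

    # features/servers.feature (plain language):
    #   Feature: Server creation
    #     Scenario: Boot a server from a ttylinux image
    #       Given a ttylinux image
    #       When I create a server from it
    #       Then the server becomes ACTIVE

    # features/steps.py maps those sentences onto code:
    from lettuce import step, world

    @step(u'a ttylinux image')
    def have_a_ttylinux_image(step):
        world.image_ref = find_image("ttylinux")          # hypothetical helper

    @step(u'I create a server from it')
    def create_a_server(step):
        world.server = create_server(world.image_ref)     # hypothetical helper

    @step(u'the server becomes ACTIVE')
    def server_becomes_active(step):
        assert wait_for_status(world.server) == "ACTIVE"  # hypothetical helper
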
Just for those who may be unaware, Soren's branch openstack-integration-tests<https://github.com/openstack/openstack-integration-tests> is actually a merge of Kong and Stacktester.

The other point I wanted more clarity on was using both novaclient AND httplib2 to make the API requests. Though <wwkeyboard> did mention issues regarding spec bug proliferation into the client, how can we best utilize this dual approach and avoid another round of duplicate test cases? Maybe we target novaclient first and then use httplib2 to fill in the gaps? After all, novaclient does call httplib2 internally.
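
To make the overlap concrete, the same "list servers" check through each path might look roughly like this -- a sketch: the endpoint, token, and credentials are placeholders, and the novaclient import path/constructor is from memory for the client of that era, so treat it as approximate:

    import json

    import httplib2

    def list_servers_raw(endpoint, token):
        """Raw path: exercise the API directly over HTTP."""
        http = httplib2.Http()
        resp, body = http.request(endpoint + "/servers", "GET",
                                  headers={"X-Auth-Token": token,
                                           "Accept": "application/json"})
        assert resp.status == 200
        return json.loads(body)["servers"]

    def list_servers_via_novaclient(username, api_key, project_id, auth_url):
        """Binding path: the same call through novaclient (signature approximate)."""
        from novaclient.v1_1 import client
        nova = client.Client(username, api_key, project_id, auth_url)
        return nova.servers.list()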

I would like to team up with Gabe and others on the unified test runner task. Please count me in if you're doing some division of labor there.

Thanks!
Rohit

(NTT)
From: openstack-bounces+rohit.karajgi=vertex.co.in at lists.launchpad.net [mailto:openstack-bounces+rohit.karajgi=vertex.co.in at lists.launchpad.net] On Behalf Of Gabe Westmaas
Sent: Monday, October 10, 2011 9:22 PM
To: openstack at lists.launchpad.net
Subject: [Openstack] [QA] openstack-integration-tests

I'd like to try to summarize and propose at least one next step for the content of the openstack-integration-tests git repository.  Note that this is only about the actual tests themselves, and says absolutely nothing about any gating decisions made in other sessions.

First, there was widespread agreement that in order for an integration suite to be run in the openstack jenkins, it should be included in the community github repository.

Second, it was agreed that there is value in having tests in multiple languages, especially in the case where those tests add value beyond the base language.  Examples of this may include testing using another set of bindings (and therefore testing the API), or using a testing framework that simply takes a different approach to testing.  Invalid examples include implementing the exact same test in another language simply because you don't like Python.

Third, it was agreed that there is value in testing using novaclient as well as httplib2.  Similarly that there is value in testing both XML and JSON.

Fourth, for black box tests, any fixture setup that a suite of tests requires should be done via a script that is close to, but not within, that suite - we want tests to be as agnostic to an implementation of OpenStack as possible, and anything you cannot do from the API should not be inside the tests.

Fifth, there are suites of white box tests - we understand there can be value here, but we aren't sure how to approach that in this project; definitely more discussion is needed here. Maybe we have separate directories for holding white box and black box tests?

Sixth, no matter what else changes, we must maintain the ability to run a subset of tests through a common runner.  This can be done via command line or configuration, whichever makes the most sense.  I'd personally lean towards configuration with the ability to override on the command line.
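
For what it's worth, nose's attribute plugin already covers the command-line half of this (a small sketch; the tag names are arbitrary):

    from nose.plugins.attrib import attr

    @attr('smoke')
    def test_list_flavors():
        pass   # selected with:  nosetests -a smoke

    @attr('slow', 'resize')
    def test_resize_server():
        pass   # excluded from quick runs with:  nosetests -a '!slow'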

If you feel I mischaracterized any of the agreements, please feel free to say so.

Next, we want to start moving away from the multiple entry points to write additional tests.  That means taking inventory of the tests that are there now, figuring out what they are testing and how we run them, and then working to combine what makes sense into a directory structure that makes sense.  As often as possible, we should make sure the tests can be run in the same way.  I started a little wiki to begin collecting information.  I think a short description of the general strategy of each suite, and then details about the specific tests in that suite, would be useful.

http://wiki.openstack.org/openstack-integration-test-suites

Hopefully this can make things a little easier to start contributing.

Gabe