[openstack-dev] HP OpenStack Stress Test Tool

Gong, Terry H terry.gong at hp.com
Mon Jul 1 18:06:42 UTC 2013


Hi All,
We have a stress tool (called qaStressTool) that we want to contribute to the OpenStack community to help improve the quality of OpenStack. It was initially written to stress the block storage driver, but it appears to touch several OpenStack projects. We would like your feedback on which project we should submit the tool to. One suggestion is to submit qaStressTool to the Tempest project. Is this the appropriate location? A preliminary copy of qaStressTool is available on GitHub at https://github.com/terry7/openstack-stress.git for you to review to help make this determination.

Here is a brief summary of what qaStressTool can do and the issues we found while running it.

qaStressTool is used to generate a load that tests the block storage drivers and the related OpenStack software. qaStressTool is written in Python and uses the Python Cinder and Nova client APIs.

To run qaStressTool, the user specifies the number of threads, the number of servers, and the number of volumes on the command line, along with the IP address of the controller node. The user name, tenant name, and user password must be set as environment variables before starting the test.
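
For illustration, here is a minimal sketch of that setup step. It assumes the standard OpenStack environment variable names (OS_USERNAME, OS_TENANT_NAME, OS_PASSWORD), the 2013-era client constructors, and the default Keystone endpoint; the actual variable names and versions qaStressTool uses are in the repository above:

    import os
    import sys

    from cinderclient.v1 import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    # Credentials come from the environment, per the description above.
    username = os.environ['OS_USERNAME']
    tenant = os.environ['OS_TENANT_NAME']
    password = os.environ['OS_PASSWORD']

    # The controller IP is passed on the command line; the Keystone port
    # and path (5000, v2.0) are the common defaults, not confirmed for
    # the tool.
    controller_ip = sys.argv[1]
    auth_url = 'http://%s:5000/v2.0' % controller_ip

    cinder = cinder_client.Client(username, password, tenant, auth_url)
    nova = nova_client.Client(username, password, tenant, auth_url)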

qaStressTool will perform the following during the run (a sketch of the per-thread work loop appears after this list):


*       Create the specified number of virtual machines (or instances) that will be used by all the threads.

*       Create the specified number of threads that will be generating the load.

*       Each thread will create the specified number of volumes.

*       After creating the specified number of volumes, create a snapshot for each of the volumes.

*       After creating the snapshots, the thread will attach each volume to a randomly selected virtual machine until all volumes are attached.

*       After attaching all the volumes, start detaching the volumes from the instances.

*       After detaching all the volumes, delete the snapshot from each volume.

*       After deleting all the snapshots, delete the volumes.

*       Finally, delete all the virtual machines.

*       Display the results of the test after performing any necessary cleanup.

*       Note that each thread runs asynchronously, performing the volume creation, snapshot creation, volume attachment, volume detachment, snapshot deletion, and volume deletion independently of the other threads.

*       Initially, the test ran without confirming the completion of each action. This proved to be too stressful, so confirmation is now the default; a command-line option turns confirmation off for anyone who wants to run in the original mode.
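
To make that flow concrete, here is a minimal sketch of one worker thread's life cycle, assuming the cinder and nova clients built earlier and a list of pre-created server instances. The function and variable names are illustrative, not taken from qaStressTool itself:

    import random
    import time

    from cinderclient import exceptions as cinder_exceptions

    def wait_for_volume_status(cinder, volume, status, timeout=300):
        # The "confirmation" step: poll until the volume reaches the
        # expected state before moving on.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if cinder.volumes.get(volume.id).status == status:
                return
            time.sleep(2)
        raise RuntimeError('volume %s never reached %s' % (volume.id, status))

    def worker(cinder, nova, servers, num_volumes, confirm=True):
        volumes, snapshots, attachments = [], [], []

        # Create the volumes (1 GB each here).
        for i in range(num_volumes):
            vol = cinder.volumes.create(1, display_name='stress-vol-%d' % i)
            volumes.append(vol)
            if confirm:
                wait_for_volume_status(cinder, vol, 'available')

        # Snapshot each volume.
        for vol in volumes:
            snapshots.append(cinder.volume_snapshots.create(vol.id))

        # Attach each volume to a randomly selected instance; passing
        # None for the device lets Nova pick the device name.
        for vol in volumes:
            server = random.choice(servers)
            nova.volumes.create_server_volume(server.id, vol.id, None)
            attachments.append((server, vol))
            if confirm:
                wait_for_volume_status(cinder, vol, 'in-use')

        # Detach all the volumes.
        for server, vol in attachments:
            nova.volumes.delete_server_volume(server.id, vol.id)
            if confirm:
                wait_for_volume_status(cinder, vol, 'available')

        # Delete the snapshots; Cinder will not delete a volume that
        # still has snapshots, so wait for the deletions to finish.
        for snap in snapshots:
            cinder.volume_snapshots.delete(snap)
        if confirm:
            for snap in snapshots:
                try:
                    while cinder.volume_snapshots.get(snap.id):
                        time.sleep(2)
                except cinder_exceptions.NotFound:
                    pass

        # Finally, delete the volumes.
        for vol in volumes:
            cinder.volumes.delete(vol)

In the real tool the main program creates the servers first, starts one such worker per requested thread, and deletes the servers and reports results after all workers finish.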

We have found the following bugs while running qaStressTool:


*       Cinder Bug #1157506: Snapshot in-use count not decrement after snapshot is deleted

*       Cinder Bug #1172503: 3par driver not synchronize causing duplicate hostname error

*       Nova Bug #1175366: Fibre Channel Multipath attach race condition

*       Nova Bug #1180497: FC attach code doesn't discover multipath device

*       Neutron Bug #1182662: Cannot turn off quota checking

*       Nova Bug #1192287: Creating server did not fail when exceeded Quantum quota limit

*       Nova Bug #1192763: Removing FC device causes exception preventing detachment completion

We have also found other issues, such as libvirt errors and dangling LUNs, that we need to investigate further to determine whether they indicate real problems.

-Terry