[openstack-dev] HP OpenStack Stress Test Tool
dkranz at redhat.com
Mon Jul 1 19:41:27 UTC 2013
Thanks, Terry. Tempest would be a great place for stress tests. There
has already been some work on Tempest stress tests and there is
currently a blueprint:
https://blueprints.launchpad.net/tempest/+spec/stress-tests. The
associated code is a more general driver that does things similar to
what you have, and it includes features such as checking logs for
errors. There is a case for volumes, but it is simpler than your
scenario. The idea was that more
stress scenarios could be provided in that framework. I suggest you look
at that code and the blueprint and let us know what you think.
This discussion would probably be better held just on the qa list.
Looking forward to your input.
On 07/01/2013 02:06 PM, Gong, Terry H wrote:
> Hi All,
> We have a stress tool (called qaStressTool) that we want to contribute
> to the OpenStack community to help improve the quality of
> OpenStack. Initially it was written to stress the block storage
> driver, but it appears to touch several pieces of the OpenStack
> projects. We would like your feedback on which project the tool
> should be submitted to. One suggestion is to submit qaStressTool
> into the Tempest project. Is this the appropriate location? A
> preliminary copy of qaStressTool is available on GitHub at
> https://github.com/terry7/openstack-stress.git for you to review to
> help make this determination.
> Here is a brief summary of what qaStressTool can do and what issues were
> found while running qaStressTool.
> qaStressTool is used to generate a load that tests the block storage
> drivers and the related OpenStack software. qaStressTool is written in
> Python and uses the Python Cinder Client and Nova Client API's.
> To run qaStressTool, the user specifies the number of threads, the
> number of servers, and the number of volumes on the command line along
> with the IP address of the controller node. The user name, the tenant
> name, and the user password must be configured in the environment
> variables before the start of the test.
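> [Editor's note: the configuration described above can be sketched as
> below. This is a minimal illustration, not qaStressTool's actual
> code; the option names and the use of the conventional OS_* variables
> are assumptions, since the original message does not spell them out.]

```python
import argparse
import os
import sys


def parse_config(argv):
    """Collect the run parameters described above: thread, server,
    and volume counts plus the controller IP on the command line,
    with credentials taken from the environment."""
    parser = argparse.ArgumentParser(description="Block-storage stress run")
    # Hypothetical option names; the real tool's flags may differ.
    parser.add_argument("--threads", type=int, default=4)
    parser.add_argument("--servers", type=int, default=2)
    parser.add_argument("--volumes", type=int, default=8)
    parser.add_argument("controller_ip")
    args = parser.parse_args(argv)

    # Credentials come from environment variables; the usual OpenStack
    # conventions are assumed here (OS_USERNAME, OS_TENANT_NAME,
    # OS_PASSWORD), which may not match qaStressTool exactly.
    creds = {
        "username": os.environ.get("OS_USERNAME"),
        "tenant": os.environ.get("OS_TENANT_NAME"),
        "password": os.environ.get("OS_PASSWORD"),
    }
    missing = [name for name, value in creds.items() if not value]
    if missing:
        sys.exit("missing credentials: %s" % ", ".join(missing))
    return args, creds
```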
> qaStressTool will perform the following during the run:
> ·Create the specified number of virtual machines (or instances) that
> will be used by all the threads.
> ·Create the specified number of threads that will be generating the load.
> ·Each thread will create the specified number of volumes.
> ·After creating the specified number of volumes, create a snapshot for
> each volume.
> ·After creating the snapshot, the thread will attach each volume to a
> randomly selected virtual machine until all volumes are used.
> ·After attaching all the volumes, detach the volumes from the
> virtual machines.
> ·After detaching all the volumes, delete the snapshot from each volume.
> ·After deleting all the snapshots, delete the volumes.
> ·Finally, delete all the virtual machines.
> ·Display the results of the test after performing any necessary cleanup.
> ·Note that each thread runs asynchronously, performing the volume
> creation, snapshot creation, attaching of volumes to instances,
> detaching of volumes from instances, snapshot deletion, and volume
> deletion.
> ·Initially, the test ran with no confirmation for each action. This
> proved to be too stressful. However, there is a command-line option
> to turn off confirmation if one wants to run in this mode.
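> [Editor's note: the per-thread lifecycle listed above can be
> sketched with a thread pool, as below. The "cloud" here is only a
> shared in-memory dictionary standing in for the Cinder/Nova API
> calls the real tool makes; all names are illustrative, not taken
> from qaStressTool.]

```python
import random
import threading
from concurrent.futures import ThreadPoolExecutor


def run_stress(num_threads, num_servers, vols_per_thread, seed=None):
    """Simulate the lifecycle each worker thread performs: create
    volumes, snapshot each one, attach each volume to a randomly
    chosen server, then detach everything and tear down. A real run
    would issue Cinder/Nova client calls at each step instead."""
    rng = random.Random(seed)
    servers = ["server-%d" % i for i in range(num_servers)]
    attachments = {}         # volume name -> server name
    lock = threading.Lock()  # protects the shared attachment table
    completed = []           # (thread id, volumes, snapshots) per thread

    def worker(tid):
        # Create volumes, then one snapshot per volume.
        vols = ["t%d-vol%d" % (tid, i) for i in range(vols_per_thread)]
        snaps = ["snap-of-" + v for v in vols]
        # Attach each volume to a randomly selected server.
        for vol in vols:
            with lock:
                attachments[vol] = rng.choice(servers)
        # Detach all volumes, then (implicitly) delete snapshots/volumes.
        for vol in vols:
            with lock:
                del attachments[vol]
        completed.append((tid, len(vols), len(snaps)))
        return vols

    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        results = list(pool.map(worker, range(num_threads)))
    return results, attachments, completed
```

> The lock mirrors the coordination a real driver needs when many
> threads race on shared attach state, which is exactly the kind of
> race the bug list below surfaced.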
> We have found the following bugs from running qaStressTool:
> ·Cinder Bug# 1157506 Snapshot in-use count not decrement after
> snapshot is deleted
> ·Cinder Bug# 1172503 3par driver not synchronize causing duplicate
> hostname error
> ·Nova Bug# 1175366 Fibre Channel Multipath attach race condition
> ·Nova Bug# 1180497 FC attach code doesn't discover multipath device
> ·Neutron Bug# 1182662 Cannot turn off quota checking
> ·Nova Bug# 1192287 Creating server did not fail when exceeded Quantum
> quota limit
> ·Nova Bug# 1192763 Removing FC device causes exception preventing
> detachment completion
> We have found other issues like libvirt errors or dangling LUNs that
> we need to investigate further to determine whether we have a problem
> or not.
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org