[openstack-dev] Interesting and representative performance workloads for OpenStack
Lokare, Bageshree
bageshree.lokare at hp.com
Thu Apr 25 20:31:42 UTC 2013
Ray,
Thanks for initiating this thread.
We are using JMeter to address some of these performance load-generation needs:
> * load generation is driven by a config file, so you can have multiple
> different tests
> * once the test is complete, logs, tracing data, and config are all
> gathered and saved
> * a reporting tool that can show the results of one test run, or compare
> the results of at least two
Our performance testing is aimed at scalability, workload, and stress testing; we use JMeter with some third-party plugins as our main framework.
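For context, JMeter can be driven in non-GUI mode, where the test plan
(.jmx) is effectively the config file; a typical invocation looks like:

    jmeter -n -t scale_test.jmx -l results.jtl

(-n selects non-GUI mode, -t names the test plan, -l names the results
log; the file names here are just placeholders.)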
I will share the set-up and framework, and we can discuss it further.
Thanks,
Bageshree
On Wed, Apr 24, 2013 at 11:15 AM, Timothy Daly <timjr at yahoo-inc.com> wrote:
> So far we've made good progress just running loops that, for example, boot
> VMs in batches of 20. It has turned up performance problems and
> concurrency bugs.
>
I attended the OpenStack Summit session "Scaling the Boot Barrier:
Identifying & Eliminating Contention in OpenStack" given by Peter Feiner.
It was very good: it covered several VM boot bottlenecks that he had
identified and fixed, and one major unfixed bottleneck in libvirt. Peter
mentioned he would either drive or make that fix himself. If you have any
input in that area, it might be worth contacting Peter.
>
> In terms of in-VM testing, a coworker built a little tool that runs a
> benchmark at boot time (you pass the command to install and start it in
> your user data). It posts the benchmark results back to a redis database
> so you can collect them easily. It just uses simple benchmarks:
>
> * dns resolver latency with dig
> * disk bandwidth with dd
> * disk latency with ioping
> * download speed using wget from a CDN
> * cpu speed using bc calculating 2^2^20
>
> There are a lot of things that could be added, of course... the other
> thing we were thinking of doing is giving it a schedule so it runs
> regularly instead of just once at bootup. Not sure how useful that will
> be. At the least, a way to run it in a loop could be handy for testing
> scheduler changes and interactions between multiple VMs...
>
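That set of benchmarks is easy to reproduce. Here is a minimal sketch of
such a boot-time script, assuming a reachable redis server (the host name
and CDN URL below are hypothetical, and it records wall-clock time per
command rather than parsing each tool's own output):

    import os
    import socket
    import subprocess
    import time

    import redis  # redis-py client

    # The five benchmarks from the quoted list; arguments are illustrative.
    BENCHMARKS = {
        "dns_latency": ["dig", "example.com"],
        "disk_bandwidth": ["dd", "if=/dev/zero", "of=/tmp/ddtest",
                           "bs=1M", "count=256", "oflag=direct"],
        "disk_latency": ["ioping", "-c", "10", "/tmp"],
        "download_speed": ["wget", "-O", "/dev/null",
                           "http://cdn.example.com/testfile"],
        "cpu_speed": ["bash", "-c", "echo 2^2^20 | bc > /dev/null"],
    }

    r = redis.Redis(host="perf-redis.example.com")  # hypothetical collector

    with open(os.devnull, "w") as devnull:
        for name, cmd in BENCHMARKS.items():
            start = time.time()
            subprocess.call(cmd, stdout=devnull, stderr=devnull)
            # Record elapsed wall-clock time per benchmark, keyed by host.
            r.hset("bench:%s" % socket.gethostname(), name,
                   time.time() - start)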
Yes, there is a lot of opportunity for variation in this area. I have a
LAMP stack application (a variation of the DVD Store workload used in
VMmark) and a load generator that drives CPU, disk I/O, and network
resources. I think having VMs run such a workload for a configurable
amount of time would be interesting. You could create a scenario with a
target average number of VMs by adjusting the run time; by Little's Law,
average number of VMs = VM boot rate * run time per VM, so run time =
average number of VMs / boot rate. For example, if you want 10 VMs on
average at a boot rate of 10 VMs per minute, the run time per VM should
be 1 minute. Run that for at least an hour, say. You could then swap in a
VM workload that sits idle for 1 minute; comparing the two runs would
give an idea of how VM boots impact the VM workload and resource usage,
and how the VM workload impacts VM boots. You would want the load
generator to be able to drive an "offered load".
>
> Time to boot is measured simply by posting the timestamp of the
> beginning of the benchmark run back to redis.
>
Nice, I was thinking along those lines.
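Sketching that out, with hypothetical key names and assuming the test
driver also records when it issued the boot request:

    import time

    import redis  # redis-py; the same redis instance the VMs post to

    r = redis.Redis(host="perf-redis.example.com")  # hypothetical host
    instance_id = "vm-0001"  # hypothetical instance name

    # Test driver side, just before calling nova boot:
    r.set("boot_requested:%s" % instance_id, time.time())

    # In-VM side, the first thing the benchmark script does:
    r.set("benchmark_started:%s" % instance_id, time.time())

    # Reporting side: time-to-boot is just the difference.
    requested = float(r.get("boot_requested:%s" % instance_id))
    started = float(r.get("benchmark_started:%s" % instance_id))
    print("time to boot: %.1f seconds" % (started - requested))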
>
> We want to move towards a test driver that has the following properties:
>
> * load generation is driven by a config file, so you can have multiple
> different tests
> * once the test is complete, logs, tracing data, and config are all
> gathered and saved
> * a reporting tool that can show the results of one test run, or compare
> the results of at least two
>
+1 to that
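In that spirit, a minimal sketch of a config-driven driver loop (the
config keys and the boot wrapper are hypothetical; a real driver would
also gather logs and tracing data per the list above):

    import json
    import time

    # Hypothetical per-test config; in practice loaded from a file.
    CONFIG = {
        "name": "boot_batches",
        "batch_size": 20,
        "batches": 5,
        "pause_seconds": 60,
    }

    def boot_vm_batch(n):
        """Stub: a real driver would call nova boot n times here."""
        pass

    def run_test(config):
        results = []
        for batch in range(config["batches"]):
            start = time.time()
            boot_vm_batch(config["batch_size"])
            results.append({"batch": batch,
                            "seconds": time.time() - start})
            time.sleep(config["pause_seconds"])
        # Save the config alongside the results so runs can be compared.
        with open("run-%s.json" % config["name"], "w") as f:
            json.dump({"config": config, "results": results}, f, indent=2)

    run_test(CONFIG)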