[openstack-dev] [nova] Turbo-hipster
Robert Collins
robertc at robertcollins.net
Thu Jan 2 22:33:53 UTC 2014
On 3 January 2014 11:26, James E. Blair <jeblair at openstack.org> wrote:
> If you are able to do this and benchmark the performance of a cloud
> server reliably enough, we might be able to make progress on performance
> testing, which has been long desired. The large ops test is (somewhat
> accidentally) a performance test, and predictably, it has failed when we
> change cloud node provider configurations. A benchmark could make this
> test more reliable and other tests more feasible.
In bzr we found it much more reliable to do tests that isolate and
capture the *effort*, not the time: most [not all] performance issues
have both a time and an effort domain, and the effort domain is usually
correlated with time in a particular environment, but is itself
approximately constant across environments.
For instance: megabytes sent in a request, messages on the message
bus, writes to the file system, or queries sent to the DB.
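To make that concrete, one way to capture such an effort metric is to hook the DB driver and count statements rather than time them. A minimal sketch in Python, with sqlite3's trace callback standing in for a real database layer (the `QueryCounter` name is hypothetical, not anything bzr or OpenStack actually uses):

```python
import sqlite3

class QueryCounter:
    """Counts every SQL statement executed on a connection: an effort
    probe, independent of how fast the statements happen to run."""

    def __init__(self, conn):
        self.count = 0
        conn.set_trace_callback(self._trace)

    def _trace(self, statement):
        self.count += 1

# isolation_level=None avoids implicit BEGIN statements inflating the count.
conn = sqlite3.connect(":memory:", isolation_level=None)
counter = QueryCounter(conn)
conn.execute("CREATE TABLE t (x INTEGER)")
for i in range(3):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
conn.execute("SELECT x FROM t").fetchall()
print(counter.count)  # 5 statements, on any machine, at any speed
```

The count is the same on a fast laptop and a slow cloud node, which is exactly the property that makes it usable in a gate.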
So the structure we ended up with - which was quite successful - was:
- a cron-job-based benchmark that ran several versions through
functional scenarios and reported timing data
- gating tests that tested effort for operations
- a human process whereby someone wanting to put a ratchet on some
aspect of performance would write an effort-based test or three to
capture the status quo, then make it better and update the tests with
their improvements.
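The ratchet step might look like the following gating test: record the current effort as a baseline constant, assert the operation never exceeds it, and lower the constant when you land an improvement. A hypothetical sketch (sqlite3 again stands in for the real DB; `create_items` and the baseline value are made up for illustration):

```python
import sqlite3
import unittest

# Status quo captured when the test was written (hypothetical value);
# lowered by whoever improves the code path.
BASELINE_QUERIES = 5

def create_items(conn, n):
    # Hypothetical operation under test: currently one INSERT per item.
    for i in range(n):
        conn.execute("INSERT INTO items VALUES (?)", (i,))

class EffortRatchetTest(unittest.TestCase):
    def test_create_items_query_effort(self):
        conn = sqlite3.connect(":memory:", isolation_level=None)
        conn.execute("CREATE TABLE items (x INTEGER)")
        count = [0]
        conn.set_trace_callback(lambda stmt: count.__setitem__(0, count[0] + 1))
        create_items(conn, 5)
        # Ratchet: effort must not regress past the recorded status quo.
        self.assertLessEqual(count[0], BASELINE_QUERIES)

if __name__ == "__main__":
    unittest.main()
```

Because the assertion is on a count, not a duration, the test gives the same answer whichever cloud provider the gate node happens to run on.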
I think this would work well for OpenStack too - and in fact we have
some things heading in this general direction already.
-Rob
--
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud