On 2019-07-19 14:59:38 -0700 (-0700), Michael Johnson wrote: [...]
> In our OpenDev testing environment, we only have software-emulated virtual machines available (Qemu running with the TCG engine), which perform extremely poorly. This means that the testing environment does not reflect how the software is used in real-world deployments. For example, simply booting a VM can take up to ten minutes on Qemu with TCG, when it takes about twenty seconds on a real OpenStack deployment.
> With this resource limitation, we cannot effectively run performance benchmarking test jobs in the OpenDev environment. [...]
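Whether a given test node can do any better than TCG comes down to whether hardware virtualization is actually exposed to it. As a purely illustrative sketch (assuming a Linux node, standard library only, helper names hypothetical), that check amounts to something like:

import os


def cpu_has_virt_extensions(cpuinfo_path="/proc/cpuinfo"):
    # True if the CPU advertises Intel VT-x ("vmx") or AMD-V ("svm") flags.
    try:
        with open(cpuinfo_path) as cpuinfo:
            flags = cpuinfo.read().split()
    except OSError:
        return False
    return "vmx" in flags or "svm" in flags


def kvm_device_present():
    # True if /dev/kvm exists, i.e. the kvm module is loaded and usable,
    # so QEMU/libvirt can use hardware acceleration instead of TCG.
    return os.path.exists("/dev/kvm")


if __name__ == "__main__":
    if kvm_device_present():
        print("KVM acceleration available: guest boots should take seconds.")
    elif cpu_has_virt_extensions():
        print("CPU supports virtualization, but /dev/kvm is missing "
              "(module not loaded, or nested virt not enabled by the provider).")
    else:
        print("No acceleration: QEMU falls back to TCG software emulation.")
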
And even if we did have ubiquitous support for nested virtual machine acceleration across our providers, it still wouldn't provide a useful baseline because, at a minimum:

1. The hardware and software/hypervisors in the different donated environments (and even within some of them) vary significantly.

2. Most of these environments are in public service providers with mixed populations, so many runs will be scheduled onto hosts with "noisy neighbor" situations, leading to anomalous resource contention.

3. At peak load times, our job nodes may act as noisy neighbors to each other (especially in environments where we have dedicated host aggregates), leading to slower performance.

We've optimized for test throughput, to make maximal use of the donations provided to us. Without carving out and dedicating environments with predictable performance characteristics, benchmarking is really a non-starter. However, it's also not something which needs to be run continuously, so it can be done in an ad hoc fashion by interested individuals and the results published independently for comparison (as is often the case for other similar sorts of projects); a rough sketch of such a harness follows below.

-- 
Jeremy Stanley
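As a rough illustration of that ad hoc approach (plain Python standard library; the workload shown is only a placeholder), a minimal timing harness might simply report run-to-run spread, which is exactly what noisy-neighbor contention inflates on shared CI nodes:

import statistics
import time


def benchmark(operation, runs=10):
    # Time a zero-argument callable `runs` times and report the spread.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(samples),
        "stdev_s": statistics.stdev(samples) if len(samples) > 1 else 0.0,
        "min_s": min(samples),
        "max_s": max(samples),
    }


if __name__ == "__main__":
    # Placeholder workload; a real run would time something like a VM boot
    # or an API operation on dedicated hardware with predictable performance.
    print(benchmark(lambda: sum(i * i for i in range(200_000))))
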