[Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

Emilien Macchi emilien at redhat.com
Wed May 11 23:30:45 UTC 2016

On Tue, May 3, 2016 at 11:43 AM, Matt Riedemann
<mriedem at linux.vnet.ibm.com> wrote:
> On 5/3/2016 10:01 AM, Daniel P. Berrange wrote:
>> Hello Operators,
>> One of the things that constantly puzzles me when reading the user
>> survey results wrt hypervisors is the high number of respondents
>> claiming to be using QEMU (as distinct from KVM).
>> As a reminder, in Nova saying virt_type=qemu causes Nova to use
>> plain QEMU with pure CPU emulation, which is many times slower
>> than native CPU performance, while virt_type=kvm causes Nova to
>> use QEMU with KVM hardware CPU acceleration which is close to native
>> performance.
>> IOW, virt_type=qemu is not something you'd ever really want to use
>> unless you had no other options due to the terrible performance it
>> would show. The only reasons to use QEMU are if you need non-native
>> architecture support (i.e. running arm/ppc guests on an x86_64 host),
>> or if you can't do KVM due to hardware restrictions (i.e. ancient
>> hardware, or running compute hosts inside virtual machines).
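(As an aside on that last point: the sketch below shows how a deployment
tool might detect whether KVM acceleration is usable on a host, by checking
for the /dev/kvm device. The helper name is mine, not anything in Nova.)

```python
import os

def kvm_available(dev_path="/dev/kvm"):
    """Return True if the host exposes hardware virtualization via /dev/kvm.

    A host can only use virt_type=kvm when the kvm kernel module is
    loaded and /dev/kvm exists and is accessible; otherwise Nova must
    fall back to plain QEMU CPU emulation (virt_type=qemu).
    """
    return os.path.exists(dev_path) and os.access(dev_path, os.R_OK | os.W_OK)
```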
>> Despite this, in the 2016 survey 10% claimed to be using QEMU in
>> production & 3% in PoC and dev, in 2014 it was even higher at 15%
>> in prod & 12% in PoC and 28% in dev.
>> Personally my gut feeling says that QEMU usage ought to be in very
>> low single figures, so I'm curious as to the apparent anomaly.
>> I can think of a few reasons
>>  1. Respondents are confused as to the difference between QEMU
>>     and KVM, so are saying QEMU, despite the fact that they are
>>     using KVM.
>>  2. Respondents are confused as to the difference between QEMU
>>     and KVM, so have mistakenly configured their nova hosts to
>>     use QEMU instead of KVM and are suffering poor performance
>>     without realizing their mistake.
>>  3. There are more people than I expect who are running their
>>     cloud compute hosts inside virtual machines, and thus are
>>     unable to use KVM.
>>  4. There are more people than I expect who are providing cloud
>>     hosting for non-native architectures, e.g. the ability to run
>>     an arm7/ppc guest image on an x86_64 host, and so genuinely
>>     must use QEMU.
>> If items 1 / 2 are the cause, then by implication the user survey
>> is likely under-reporting the (already huge) scale of the KVM usage.
>> I can see 3. being a likely explanation for high usage of QEMU in a
>> dev or PoC scenario, but it feels unlikely for a production deployment.
>> While 4 is technically possible, Nova doesn't really do a very good
>> job at mixed guest arch hosting - I'm pretty sure there are broken
>> pieces waiting to bite people who try it.
>> Does anyone have any thoughts on this topic ?
>> Indeed, is there anyone here who genuinely uses virt_type=qemu in a
>> production deployment of OpenStack who might have other reasons that
>> I've missed ?
>> Regards,
>> Daniel
> Another thought is that deployment tools are just copying what devstack
> does, or what shows up in the configs in our dsvm gate jobs, and those are
> using qemu, so they assume that's what should be used since that's what we
> gate on.

In the case of Puppet OpenStack, kvm is the default.
We set the parameter to qemu only in our gate, like devstack does, but
people deploying with puppet-nova will get the KVM driver.


Should we send a warning if qemu is set?
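Something like this minimal sketch could do it at service startup. (This
is a hypothetical helper of my own, not actual nova or puppet-nova code.)

```python
import logging

LOG = logging.getLogger(__name__)

def warn_if_qemu(virt_type):
    # Hypothetical startup check: flag plain-QEMU emulation loudly so
    # operators who meant to use KVM notice the misconfiguration.
    if virt_type == "qemu":
        LOG.warning(
            "virt_type=qemu selected: guests will run with pure CPU "
            "emulation, which is many times slower than KVM. Use "
            "virt_type=kvm unless the host cannot support it.")
        return True
    return False
```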

> We should be more clear in our help text for the virt_type config option
> between using kvm vs qemu. Today it just says:
> # Libvirt domain type (string value)
> # Allowed values: kvm, lxc, qemu, uml, xen, parallels
> #virt_type = kvm
> It'd be good to point out the performance impacts and limitations of kvm vs
> qemu in that help text. There might already be a patch up for review that
> makes this better.
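Agreed. For example, the expanded help text could read something like
this (my wording, not an actual patch):

```ini
# Libvirt domain type.
#
# kvm uses hardware-accelerated virtualization and gives near-native
# guest performance; use it whenever the host supports it. qemu uses
# pure software CPU emulation, which is many times slower, and is only
# appropriate for non-native guest architectures or for hosts without
# hardware virtualization support (e.g. nested compute nodes).
# (string value)
# Allowed values: kvm, lxc, qemu, uml, xen, parallels
#virt_type = kvm
```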
> --
> Thanks,
> Matt Riedemann
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Emilien Macchi
