[openstack-dev] [libvirt] [virt-tools-list] Project for profiles and defaults for libvirt domains
ehabkost at redhat.com
Wed Mar 21 18:00:41 UTC 2018
On Tue, Mar 20, 2018 at 03:10:12PM +0000, Daniel P. Berrangé wrote:
> On Tue, Mar 20, 2018 at 03:20:31PM +0100, Martin Kletzander wrote:
> > 1) Default devices/values
> > Libvirt itself must default to whatever values there were before any
> > particular element was introduced due to the fact that it strives to
> > keep the guest ABI stable. That means, for example, that it can't just
> > add -vmcoreinfo option (for KASLR support) or magically add the pvpanic
> > device to all QEMU machines, even though it would be useful, as that
> > would change the guest ABI.
> > For default values this is even more obvious. Let's say someone figures
> > out some "pretty good" default values for various HyperV enlightenment
> > feature tunables. Libvirt can't magically change them, but each one of
> > the projects building on top of it doesn't want to keep that list
> > updated and take care of setting them in every new XML. Some projects
> > don't even expose those to the end user as a knob, while others might.
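[For context, the HyperV enlightenment tunables in question live under `<features>` in the domain XML. A minimal illustration with one plausible set of values; these are not the "pretty good" defaults being discussed, just libvirt's existing syntax:]

```xml
<features>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
  </hyperv>
</features>
```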
> This gets very tricky, very fast.
> Let's say that you have an initial good set of hyperv config
> tunables. Now sometime passes and it is decided that there is a
> different, better set of config tunables. If the module that is
> providing this policy to apps like OpenStack just updates itself
> to provide this new policy, this can cause problems with the
> existing deployed applications in a number of ways.
> First the new config probably depends on specific versions of
> libvirt and QEMU, and you can't mandate to consuming apps which
> versions they must be using. [...]
This is true.
> [...] So you need a matrix of libvirt +
> QEMU + config option settings.
But this is not. If config options need support on the lower
levels of the stack (libvirt and/or QEMU and/or KVM and/or host
hardware), that support already has to be represented somehow in
the libvirt host capabilities, so management layers know it's
available. This means any new config generation system can (and
must) use the host(s) capabilities as input before generating the
configuration.
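[As a rough sketch of "capabilities as input": a management layer can parse the host capabilities XML (what `virsh capabilities` returns) and only enable a feature when the host advertises it. The capabilities sample below is abbreviated and the feature names are illustrative:]

```python
# Minimal sketch: gate config generation on libvirt host capabilities.
# SAMPLE_CAPS is a trimmed-down stand-in for real `virsh capabilities`
# output; in practice it would come from conn.getCapabilities().
import xml.etree.ElementTree as ET

SAMPLE_CAPS = """<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <feature name='vmx'/>
      <feature name='ssse3'/>
    </cpu>
  </host>
</capabilities>"""

def host_cpu_features(caps_xml):
    """Return the set of CPU feature names the host advertises."""
    root = ET.fromstring(caps_xml)
    return {f.get('name') for f in root.findall('./host/cpu/feature')}

def can_enable(feature, caps_xml):
    """True only if the host capabilities report the feature."""
    return feature in host_cpu_features(caps_xml)

print(can_enable('vmx', SAMPLE_CAPS))   # True for this sample
```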
> Even if you have the matching libvirt & QEMU versions, it is not
> safe to assume the application will want to use the new policy.
> An application may need live migration compatibility with older
> versions. Or it may need to retain guaranteed ABI compatibility
> with the way the VM was previously launched and be using transient
> guests, generating the XML fresh each time.
Why is that a problem? If you want live migration or ABI
guarantees, you simply don't use this system to generate a new
configuration. The same way you don't use the "pc" machine-type
if you want to ensure compatibility with existing VMs.
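[The machine-type analogy in concrete terms: a guest that needs a stable ABI pins a versioned machine type rather than the bare "pc" alias, which resolves to the newest i440fx machine on whatever host it runs. A fragment of a domain definition, abbreviated to the relevant element:]

```xml
<domain type='kvm'>
  <name>stable-guest</name>
  <os>
    <!-- Pinned, versioned machine type for ABI stability; the bare
         "pc" alias would pick up new defaults on newer QEMU. -->
    <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type>
  </os>
</domain>
```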
> The application will have knowledge about when it wants to use new
> vs old hyperv tunable policy, but exposing that to your policy module
> is very tricky because it is inherently application specific logic
> largely determined by the way the application code is written.
We have a huge set of features where this is simply not a
problem. For most virtual hardware features, enabling them is
not even a policy decision: it's just about telling the guest
that the feature is now available. QEMU has been enabling new
features in the "pc" machine-type for years.
Now, why can't higher layers in the stack do something similar?
The proposal is equivalent to what already happens when people
use the "pc" machine-type in their configurations, but:
1) the new defaults/features wouldn't be hidden behind an opaque
machine-type name, and would appear in the domain XML;
2) the higher layers won't depend on QEMU introducing a new
machine-type just to have new features enabled by default;
3) features that depend on host capabilities but are available on
all hosts in a cluster can now be enabled automatically if
desired (which is something QEMU can't do because it doesn't
have enough information about the other hosts).
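[Point 3 boils down to a set intersection a management layer can compute but QEMU cannot, since QEMU only sees one host. A sketch, with per-host feature sets standing in for whatever each node's libvirt capabilities report (the feature names are illustrative):]

```python
# Enable a feature cluster-wide only if every host supports it.
# per_host_features: iterable of per-host capability name sets.
def cluster_enabled_features(per_host_features):
    """Intersect per-host capability sets; an empty cluster yields nothing."""
    sets = [set(s) for s in per_host_features]
    if not sets:
        return set()
    common = sets[0]
    for s in sets[1:]:
        common &= s
    return common

hosts = [
    {'pvpanic', 'vmcoreinfo', 'hyperv-relaxed'},
    {'pvpanic', 'vmcoreinfo'},
    {'pvpanic', 'vmcoreinfo', 'hyperv-vapic'},
]
print(sorted(cluster_enabled_features(hosts)))  # ['pvpanic', 'vmcoreinfo']
```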
Choosing reasonable defaults might not be a trivial problem, but
the current approach of pushing the responsibility to management
layers doesn't improve the situation.
> > 2) Policies
> > 3) Abstracting the XML
> > 4) Identifying devices properly
> > 5) Generating the right XML snippet for device hot-(un)plug
These parts are trickier and I need to read the discussion more
carefully before replying.