[openstack-dev] Host CPU feature exposure

Daniel P. Berrange berrange at redhat.com
Thu Jan 10 17:53:00 UTC 2013

On Thu, Jan 10, 2013 at 04:21:15PM +0000, Dugger, Donald D wrote:
> Well, the problem I'm trying to address is how to expose host features so that
> the scheduler can make decisions on those features.  A specific problem, for
> example, is how to create a special flavor that will start an instance on a
> machine that has the new Advanced Encryption Standard (`aes') instructions.
> I can create an ImagePropertiesFilter that specifies `aes' as a required
> feature for the image but the scheduler won't know which hosts are appropriate
> because the scheduler only knows that the host is a Westmere, not that `aes'
> is part of a Westmere system.
> What I'd like to see is all of the system features explicitly listed inside the
> scheduler.  Providing convenient short hands (model Westmere means `sse' and
> `sse2' and `aes' and ...) is fine but you also need to know exactly what is
> available.
> Note that there is no intent to be x86 specific, I would expect the same
> capability on a PPC or ARM system, just the specific names would change.

If you're making the scheduler apply logic in terms of CPU feature flags,
then you are definitely x86 specific, because there is no such concept
on other architectures.

> 1)  Host capabilities vs. guest VM capabilities.  Currently compute nodes
> send the `host` capabilities to the scheduler.  Although useful the
> capabilities for the `guest VM` is probably more important.  I'm not
> that familiar with libvirt but is it even possible to get a full set of
> guest features, I've looked at the output from `virConnectGetCapabilities'
> and the guest features don't seem to be listed.

Again, we intentionally don't expose this information, because applying
logic based on CPU feature flags is fundamentally non-portable.

We provide an API which allows you to pass in a CPU description (where
a CPU description == a CPU model name + a list of features), and returns
a status indicating whether the host can support that CPU description.
This keeps mgmt applications out of the business of doing architecture
specific CPU comparisons themselves.
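The check this API performs can be sketched as a model-plus-features superset test. This is a simplified illustration, not the real interface (in libvirt the actual call is virConnectCompareCPU, which takes a CPU XML description); the model/feature tables below are hypothetical stand-ins:

```python
# Simplified illustration of a host-vs-guest CPU compatibility check.
# The model tables are made-up stand-ins for real CPU model data.

MODEL_FEATURES = {
    "Penryn":   {"sse", "sse2", "ssse3", "sse4.1"},
    "Nehalem":  {"sse", "sse2", "ssse3", "sse4.1", "sse4.2"},
    "Westmere": {"sse", "sse2", "ssse3", "sse4.1", "sse4.2", "aes"},
}

def host_supports(host_model, guest_model, extra_features=()):
    """Return True if the host CPU can run the guest CPU description."""
    host = MODEL_FEATURES.get(host_model, set())
    guest = MODEL_FEATURES.get(guest_model, set()) | set(extra_features)
    # The guest's required features must be a subset of the host's.
    return guest <= host

print(host_supports("Westmere", "Nehalem"))          # True
print(host_supports("Westmere", "Penryn", ["aes"]))  # True: Westmere has aes
print(host_supports("Nehalem", "Penryn", ["aes"]))   # False: no aes
```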

> 2)  Guest VM type.  Currently, the type of guest to be created can be
> specified by the `libvirt_cpu_model' parameter in the `nova.conf' file.
> This means that a host will only support one guest type.  It would be
> more flexible to be able to specify the model type at run time.  A
> Westmere host can start either Westmere, Nehalem, Penryn or other
> guests, why restrict that host to just one guest type.

The only place where filtering based on CPU model takes place is in
the migration code, when it is trying to find a target host that is
compatible with what the guest is currently running on. Even before
the 'libvirt_cpu_model' parameter was introduced, this migration
code was doing an overly aggressive exact match on CPU models. This
clearly needs changing. The migration code is even more sucky because,
when picking the target host, it picks a host first and only then
invokes the 'compare_cpu' function. If that fails, it picks another
host and retries, again and again.
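The pick-then-compare retry pattern criticized here, versus filtering the candidate set up front, can be sketched as follows. The host names, weights, and compare_cpu helper are hypothetical stand-ins for the scheduler's host table and the driver call:

```python
# Contrast: pick-then-retry vs. filter-then-pick host selection.
# All data here is made up for illustration.

HOSTS = {"host-a": 10, "host-b": 8, "host-c": 5}   # host -> scheduler weight
COMPATIBLE = {"host-b", "host-c"}                  # hosts passing compare_cpu

def compare_cpu(host):
    """Stand-in for the driver's CPU compatibility check."""
    return host in COMPATIBLE

def pick_then_retry():
    # The criticized pattern: pick the best-weighted host, test it,
    # and on failure fall back to the next one, again and again.
    for host in sorted(HOSTS, key=HOSTS.get, reverse=True):
        if compare_cpu(host):
            return host
    return None

def filter_first():
    # Cleaner: restrict to compatible hosts first, then pick the best.
    candidates = [h for h in HOSTS if compare_cpu(h)]
    return max(candidates, key=HOSTS.get) if candidates else None

print(pick_then_retry())  # host-b (host-a is tried and rejected first)
print(filter_first())     # host-b (host-a never considered)
```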

IMHO the interaction between the scheduler and hypervisor hosts with
respect to CPU models is flawed, not least because of the problems
described above, but also because the CPU information provided is not
standardized across Nova hypervisor drivers at all.

I think that Nova needs to have a formal concept of CPU types, with
arbitrary names it decides upon, e.g. it might allow a list of CPU
types such as:

  "Any Host"
  "Any Intel"
  "Any AMD"
  "Any AES"

Each hypervisor (libvirt, xen, hyper-v, esx, etc) would decide how
these CPU types map to their particular way of configuring CPUs
(libvirt would map them to CPU model + feature list, Xen would map
them to a CPUID string, VMWare would do whatever it does).
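The per-driver mapping could be as simple as a lookup from the abstract CPU type to a driver-specific config fragment. A sketch under stated assumptions (the type names, config shapes, and function are illustrative, not an actual Nova interface):

```python
# Illustrative per-driver mapping of abstract Nova CPU types to
# driver-specific configuration. All names and shapes are hypothetical.

LIBVIRT_CPU_TYPES = {
    # libvirt: CPU model name + required feature list
    "Any Host":  {"model": None, "features": []},
    "Any Intel": {"model": "Nehalem", "features": []},
    "Any AES":   {"model": "Westmere", "features": ["aes"]},
}

XEN_CPU_TYPES = {
    # Xen: a CPUID mask string (value is a made-up placeholder)
    "Any AES": {"cpuid": "...aes-bit-set..."},
}

def libvirt_config_for(cpu_type):
    """Resolve an abstract CPU type to a libvirt-style description."""
    return LIBVIRT_CPU_TYPES[cpu_type]

print(libvirt_config_for("Any AES"))
```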

These CPU types would be associated with instance flavours. The hypervisor
hosts would simply report which of the CPU types they are able to support.
The scheduler can then trivially do host selection based on CPU types and
not need to know about CPU model names or feature flags.

The only place needing to know about CPU model names / feature flags is
the place in the virt driver where you do the mapping of CPU types to
the virt driver specific config format. This could be made admin customizable
so people deploying Nova can provide further CPU types as they see fit.
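With hosts reporting abstract CPU types, host selection reduces to set membership. A minimal sketch with made-up host data, not Nova code:

```python
# Sketch: once hosts report abstract CPU types, scheduling on CPU
# capability is plain set membership. Host data is made up.

HOST_CPU_TYPES = {
    "compute1": {"Any Host", "Any Intel", "Any AES"},
    "compute2": {"Any Host", "Any AMD"},
    "compute3": {"Any Host", "Any Intel"},
}

def hosts_for_flavor(required_cpu_type):
    """Return hosts whose reported CPU types include the flavour's type."""
    return sorted(h for h, types in HOST_CPU_TYPES.items()
                  if required_cpu_type in types)

print(hosts_for_flavor("Any AES"))    # ['compute1']
print(hosts_for_flavor("Any Intel"))  # ['compute1', 'compute3']
```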

|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
