[openstack-dev] Host CPU feature exposure

Dugger, Donald D donald.d.dugger at intel.com
Thu Jan 10 16:21:15 UTC 2013


Well, the problem I'm trying to address is how to expose host features so that the scheduler can make decisions based on those features.  A specific example: how do I create a special flavor that will start an instance on a machine that has the new Advanced Encryption Standard (`aes') instructions?  I can use the ImagePropertiesFilter and specify `aes' as a required feature for the image, but the scheduler won't know which hosts are appropriate because it only knows that the host is a Westmere, not that `aes' is part of a Westmere system.
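
For illustration, here is a minimal sketch of the check I'd want the scheduler to be able to make.  The `required_cpu_features' image property and the shape of the reported cpu_info are assumptions for the example, not existing Nova definitions:

# Hypothetical scheduler filter, for illustration only -- not the real
# ImagePropertiesFilter.  It passes a host only if every CPU feature the
# image requires appears in the feature list the compute node reported.
class CpuFeatureFilter(object):
    def host_passes(self, host_state, filter_properties):
        image_props = (filter_properties.get('request_spec', {})
                                        .get('image', {})
                                        .get('properties', {}))
        # 'required_cpu_features' is an assumed property name, e.g. "aes,sse2"
        wanted = image_props.get('required_cpu_features', '')
        required = {f.strip() for f in wanted.split(',') if f.strip()}

        # Assume cpu_info is the dict the compute node reports.  Today libvirt
        # only lists the features *beyond* the named model, so a Westmere host
        # never reports `aes' here and this check wrongly rejects the host.
        reported = set(host_state.cpu_info.get('features', []))
        return required <= reported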

What I'd like to see is all of the system features explicitly listed inside the scheduler.  Providing convenient shorthands (model Westmere means `sse' and `sse2' and `aes' and ...) is fine, but you also need to know exactly what is available.
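
As a sketch of what "shorthand plus full expansion" could look like (the feature lists below are abbreviated and hand-written purely for illustration; the real source of truth would be libvirt's CPU model definitions):

# Illustration only: expand a model shorthand into an explicit feature set,
# then add whatever extra features were reported on top of the model.
CPU_MODEL_FEATURES = {
    'Nehalem':  {'sse', 'sse2', 'ssse3', 'sse4.1', 'sse4.2'},
    'Westmere': {'sse', 'sse2', 'ssse3', 'sse4.1', 'sse4.2', 'aes'},
}

def expand_features(model, extra_features=()):
    """Full feature set: features implied by the model plus explicit extras."""
    return CPU_MODEL_FEATURES.get(model, set()) | set(extra_features)

# expand_features('Westmere', ['pcid']) -> {'sse', ..., 'aes', 'pcid'}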

Note that there is no intent to be x86 specific; I would expect the same capability on a PPC or ARM system, just with different feature names.

You raise some other issues that I wanted to address later (start small, walk before you run, insert appropriate aphorism here):

1)  Host capabilities vs. guest VM capabilities.  Currently compute nodes send the `host' capabilities to the scheduler.  Although useful, the capabilities of the `guest VM' are probably more important.  I'm not that familiar with libvirt, but is it even possible to get a full set of guest features?  I've looked at the output from `virConnectGetCapabilities' and the guest features don't seem to be listed.
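
For what it's worth, a quick sketch of poking at that output with the libvirt Python bindings (the connection URI is just an example): the host section carries a model name plus only the extra features, and the guest sections carry arch/domain/machine information rather than a per-guest CPU feature list.

# Sketch: inspect what virConnectGetCapabilities actually returns.
import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open('qemu:///system')
caps = ET.fromstring(conn.getCapabilities())

# Host side: a model name plus only the features not implied by that model.
host_cpu = caps.find('host/cpu')
print('host model:', host_cpu.findtext('model'))
print('extra host features:',
      [f.get('name') for f in host_cpu.findall('feature')])

# Guest side: arch / domain / machine information, no CPU feature list.
for guest in caps.findall('guest'):
    arch = guest.find('arch')
    print(guest.findtext('os_type'), arch.get('name'),
          [d.get('type') for d in arch.findall('domain')])

conn.close()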

2)  Guest VM type.  Currently, the type of guest to be created can be specified by the `libvirt_cpu_model' parameter in the `nova.conf' file.  This means that a host will only support one guest type.  It would be more flexible to be able to specify the model type at run time.  A Westmere host can start Westmere, Nehalem, Penryn or other guests, so why restrict that host to just one guest type?
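
As a rough sketch of what per-instance selection could look like (the `hw:cpu_model' extra spec key is made up for the example; the generated element is standard libvirt guest CPU XML):

# Sketch only: build a per-guest <cpu> element from a flavor extra spec
# instead of taking a single global model from libvirt_cpu_model in nova.conf.
from xml.etree.ElementTree import Element, SubElement, tostring

def guest_cpu_xml(flavor_extra_specs, default_model='Westmere'):
    model = flavor_extra_specs.get('hw:cpu_model', default_model)
    cpu = Element('cpu', mode='custom', match='exact')
    # fallback='forbid' makes libvirt refuse to start the guest if the host
    # cannot provide this model, rather than silently substituting another.
    SubElement(cpu, 'model', fallback='forbid').text = model
    return tostring(cpu)

# guest_cpu_xml({'hw:cpu_model': 'Nehalem'}) ->
#   b'<cpu mode="custom" match="exact"><model fallback="forbid">Nehalem</model></cpu>'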

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786


-----Original Message-----
From: Daniel P. Berrange [mailto:berrange at redhat.com] 
Sent: Thursday, January 10, 2013 2:38 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Host CPU feature exposure

On Wed, Jan 09, 2013 at 09:59:01PM +0000, Dugger, Donald D wrote:
> Currently, each compute node periodically sends info to the scheduler
> including a list of the CPU features available on that node.  Unfortunately
> the CPU features listed does not appear to be complete, making it extremely
> difficult to identify exactly what features are available on a host.
> 
> The problem is that compute nodes call the libvirt API `virConnectGetCapabilities'
> to get host features and then just send the output from this API to the scheduler.
> The issue is that libvirt reports the model name and only explicitly reports CPU
> features that are not part of that model.  For example, on my Westmere host the
> Advanced Encryption Standard (`aes') capability is not reported since that
> capability is available on all Westmeres.
> 
> This is a distinct problem as it makes it well nigh impossible for a user to know
> how to check for specific capabilities.  Asking the user to know that Westmere,
> SandyBridge and Haswell machines have `aes' while Conroe, Penryn and Nehalem
> machines don't is a little ridiculous, ignoring the fact that these are all
> internal Intel code names that end users shouldn't even know about in the first
> place.

Nowhere do we ask the user to know about this information. It is information that
is purely internal to OpenStack infrastructure, which the user does not need to
interact with.

The libvirt CPU API was explicitly designed so that you don't just provide
a list of features and have the virtualization app do feature flag matching
because this only works on x86. If you want to do comparisons & supportability
checks then there are libvirt APIs to enable this. You provide a description
of the desired model / features and ask libvirt if it is compatible with the
virtualization host & it will tell you. Alternatively when booting the guest
you can say only boot this guest if the host can support model/feature xyz.
This approach works on x86, PPC, s390 and ARM, which is why it was chosen.
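
For reference, a minimal sketch of that comparison with the libvirt Python bindings; the connection URI and the CPU description are examples only:

# Ask libvirt whether the host can provide a given model/feature combination,
# instead of matching feature flag lists by hand.
import libvirt

DESIRED_CPU = """
<cpu>
  <model>Westmere</model>
  <feature policy='require' name='aes'/>
</cpu>
"""

conn = libvirt.open('qemu:///system')
result = conn.compareCPU(DESIRED_CPU, 0)
if result == libvirt.VIR_CPU_COMPARE_INCOMPATIBLE:
    print('host cannot provide this CPU')
elif result in (libvirt.VIR_CPU_COMPARE_IDENTICAL,
                libvirt.VIR_CPU_COMPARE_SUPERSET):
    print('host CPU is compatible')
conn.close()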

> I have a patch that fixes libvirt (the patch still reports the model name but
> explicitly lists ALL cpu features) but I'm not sure that is the proper way to
> fix the problem.  Personally, I don't like the fact that the host calls a
> virtualization API in order to discover info about the host itself, that
> just seems wrong.  I'd prefer to see OpenStack discover this info directly,
> not using libvirt APIs, avoiding this kind of problem now and in the future.

On the contrary, using the hypervisor APIs is the right way to do this. You cannot
assume that the host CPU features are all available to the virtualization
technology. You must ask the virtualization technology to report what features
from the host it can use. Furthermore, as I describe above the approach libvirt
takes is based on the requirement to be portable across architectures. You are
proposing that we switch to an x86-specific approach which is not desirable
given OpenStack targets non-x86 platforms.

> Anyone have any thoughts on the proper way to discover CPU capabilities?

Can you explain the actual problem you are trying to solve, rather than
your desired solution?

Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|



