[Openstack] Nova guest NUMA placement questions

Daniel P. Berrange berrange at redhat.com
Fri Sep 26 14:09:05 UTC 2014


On Fri, Sep 26, 2014 at 08:54:38AM -0400, Sean Toner wrote:
> Hello,
> 
> I'm new to OpenStack development and would like to test the upcoming 
> NUMA features in Nova, particularly with regards to this blueprint:
> 
> https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement
> 
> However, I'm a bit hazy with regards to several things.  For example:
> 
> 1. What happens on an overcommit for the number of guest CPUs?  
>    - For example, if I have a system with 2 sockets 4 cores each, what 
> would happen if I have 2 guests, each one wanting 6 cores (assuming this 
> is not in violation of the hw:cpu_max_cores)? 
> 2. What happens on an overcommit on memory?

The memory and CPU overcommit settings are still honoured when Nova
does NUMA placement. e.g. if you have a host with 2 NUMA nodes, each with
4 physical CPUs, and a CPU overcommit ratio of 3x, then Nova will treat this
as being a system with 2 NUMA nodes, each with 12 schedulable CPUs. The
same applies to memory overcommit.
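As a sketch of that arithmetic (the function name below is illustrative,
not a Nova internal):

```python
def schedulable_cpus_per_node(physical_cpus, overcommit_ratio):
    """Effective vCPU capacity the scheduler sees per NUMA node
    once the overcommit ratio is applied."""
    return physical_cpus * overcommit_ratio

# Host with 2 NUMA nodes, 4 physical CPUs each, CPU overcommit ratio 3.0:
# each node is treated as having 12 schedulable CPUs.
print(schedulable_cpus_per_node(4, 3.0))  # 12.0
```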

> 3. It wasn't clear from the blueprint, but will the host NUMA stats be 
> revealed to the user so he can make a more informed decision about the 
> topology he wants?
>    - If yes, how is the host information retrieved?

The end user is never going to be exposed to information about compute
hosts, since that is against the cloud model. It would be desirable to
expose NUMA info to the cloud admin via the host APIs, but this is not
done at this time. It is currently assumed the admin knows what hardware
they own and thus what NUMA setups are sane.

> 4. How to test whether a guest topology is actually using the memory 
> from the NUMA node it is specified to run on?
>    - Can this be verified at the libvirt level?
>    - Is there a way by querying /proc/PID the memory map/cpu allocation?

If we assume that libvirt + KVM are operating as billed, then it is
sufficient to query the guest XML config from libvirt and check that
Nova has specified a suitable NUMA memory mask + vCPU pinning mask.
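For reference, the relevant settings appear in the domain XML (e.g. via
`virsh dumpxml <instance>`). The element names below are real libvirt XML;
the vCPU/CPU numbers are made-up values for illustration:

```xml
<domain type='kvm'>
  <cputune>
    <!-- vCPUs pinned to physical CPUs 0-3 (host NUMA node 0) -->
    <vcpupin vcpu='0' cpuset='0-3'/>
    <vcpupin vcpu='1' cpuset='0-3'/>
  </cputune>
  <numatune>
    <!-- guest memory allocated strictly from host NUMA node 0 -->
    <memory mode='strict' nodeset='0'/>
  </numatune>
</domain>
```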

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|



