[openstack-dev] [nova] Regarding NUMA Topology filtering logic.

Sudipto Biswas sbiswas7 at linux.vnet.ibm.com
Wed Sep 16 12:12:02 UTC 2015


Hi,

Currently the NUMA topology filter code in OpenStack decides whether a
host NUMA node can fit a guest based on the length of the cpuset reported
for that node[1]. For example, if a VM with 8 vCPUs is requested, we check
that len(cpuset_on_the_numa_node) is greater than or equal to 8.
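
For illustration, here is a rough paraphrase of that check (this is not
the actual Nova code; fits_current, host_cell_cpuset and requested_vcpus
are placeholder names):

    def fits_current(host_cell_cpuset, requested_vcpus):
        # Current approach: compare the number of CPU ids libvirt reports
        # for the NUMA node against the number of requested vCPUs.
        return len(host_cell_cpuset) >= requested_vcpus

    # An 8 vCPU guest against a node reporting only 5 CPU ids is rejected:
    # fits_current({0, 8, 16, 24, 32}, 8) -> False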

IMHO, the logic could be improved if we took cores and threads into
consideration instead of going directly by the cpuset length of the
NUMA node.

This thought comes from architectures like ppc, where each core can have
8 threads. However, in that case libvirt reports only 1 of the 8 threads
(the primary thread). Host scheduling of guests happens at the core level
(as only the primary thread is online), and the KVM scheduler exploits as
many threads of the core as the guest needs.
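
A minimal sketch of the alternative check (purely illustrative; it assumes
we can obtain a threads-per-core figure for the host, e.g. from the libvirt
capabilities output, and the names below are my own):

    def fits_proposed(host_cell_cpuset, threads_per_core, requested_vcpus):
        # Treat each reported CPU id as a core (on ppc only the primary
        # thread of each core is online), so the schedulable capacity is
        # cores * threads per core rather than len(cpuset).
        capacity = len(host_cell_cpuset) * threads_per_core
        return capacity >= requested_vcpus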

Consider an example on the ppc architecture. For a given NUMA node 0 with
40 hardware threads, libvirt would report the cpuset: 0, 8, 16, 24, 32.
The length of that cpuset suggests only 5 pCPUs are available for pinning,
whereas we could potentially have 40 threads available (cores * threads).
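
Plugging that node 0 example into the sketch above:

    # 5 primary threads reported, 8 hardware threads per core
    fits_proposed({0, 8, 16, 24, 32}, threads_per_core=8, requested_vcpus=8)
    # -> True (capacity = 5 * 8 = 40), whereas the length-based check
    # rejects the request because len(cpuset) is only 5.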

This would at least solve the problem that arises when we only take the
length of the cpuset into consideration.

Thoughts?

[1] https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L772

Thanks,
Sudipto



