On Wed, 2019-09-25 at 07:49 +0000, Manuel Sopena Ballesteros wrote:
Dear openstack user group,
I have a server with 2 NUMA nodes and I am trying to set up Nova NUMA affinity.
[root@zeus-53 ~]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 28 29 30 31 32 33 34 35 36 37 38 39 40 41
node 0 size: 262029 MB
node 0 free: 2536 MB
node 1 cpus: 14 15 16 17 18 19 20 21 22 23 24 25 26 27 42 43 44 45 46 47 48 49 50 51 52 53 54 55
node 1 size: 262144 MB
node 1 free: 250648 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
openstack flavor create --public xlarge.numa.perf \
  --ram 250000 --disk 700 --vcpus 25 \
  --property hw:cpu_policy=dedicated \
  --property hw:emulator_threads_policy=isolate \
  --property hw:numa_nodes='1' \
  --property pci_passthrough:alias='nvme:4'

openstack server create --network hpc --flavor xlarge.numa.perf \
  --image centos7.6-kudu-image \
  --availability-zone nova:zeus-53.localdomain \
  --key-name mykey kudu-1
This is the xmldump for the created vm. But for some reason the second VM fails to create with an error.

<name>instance-00000108</name>
<uuid>5d278c90-27ab-4ee4-aeea-e1bf36ac246a</uuid>
[snip]
<vcpu placement='static'>25</vcpu>
<cputune>
  <shares>25600</shares>
  <vcpupin vcpu='0' cpuset='27'/>
  <vcpupin vcpu='1' cpuset='55'/>
  <vcpupin vcpu='2' cpuset='50'/>
  <vcpupin vcpu='3' cpuset='22'/>
  <vcpupin vcpu='4' cpuset='49'/>
  <vcpupin vcpu='5' cpuset='21'/>
  <vcpupin vcpu='6' cpuset='48'/>
  <vcpupin vcpu='7' cpuset='20'/>
  <vcpupin vcpu='8' cpuset='25'/>
  <vcpupin vcpu='9' cpuset='53'/>
  <vcpupin vcpu='10' cpuset='18'/>
  <vcpupin vcpu='11' cpuset='46'/>
  <vcpupin vcpu='12' cpuset='51'/>
  <vcpupin vcpu='13' cpuset='23'/>
  <vcpupin vcpu='14' cpuset='19'/>
  <vcpupin vcpu='15' cpuset='47'/>
  <vcpupin vcpu='16' cpuset='26'/>
  <vcpupin vcpu='17' cpuset='54'/>
  <vcpupin vcpu='18' cpuset='42'/>
  <vcpupin vcpu='19' cpuset='14'/>
  <vcpupin vcpu='20' cpuset='17'/>
  <vcpupin vcpu='21' cpuset='45'/>
  <vcpupin vcpu='22' cpuset='43'/>
  <vcpupin vcpu='23' cpuset='15'/>
  <vcpupin vcpu='24' cpuset='24'/>
  <emulatorpin cpuset='14-15,17-27,42-43,45-51,53-55'/>
</cputune>
[snip]

This is what you want. See those 'cpuset' values? All of them are taken from cores on host NUMA node #1, as you noted previously:
node 1 cpus: 14 15 16 17 18 19 20 21 22 23 24 25 26 27 42 43 44 45 46 47 48 49 50 51 52 53 54 55
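If you want to script that check, here is a minimal Python sketch. The filename is only an example (save the XML with 'virsh dumpxml instance-00000108 > instance-00000108.xml'), and the node-1 CPU list is copied from the 'numactl -H' output above:

    import xml.etree.ElementTree as ET

    # Host NUMA node 1 CPUs, taken from 'numactl -H' on zeus-53.
    NODE1_CPUS = {14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
                  42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55}

    root = ET.parse('instance-00000108.xml').getroot()

    # Each <vcpupin> in this guest's XML pins one vCPU to a single host
    # core, so 'cpuset' is a plain integer rather than a range.
    pins = {int(pin.get('cpuset')) for pin in root.iter('vcpupin')}

    stray = sorted(pins - NODE1_CPUS)
    if stray:
        print('pinned outside node 1:', stray)
    else:
        print('all', len(pins), 'vCPUs are pinned to node 1 cores')

The same idea extends to multi-node guests: with hw:numa_nodes=2 you would group the <vcpupin> entries by guest NUMA cell and check each group against one host node.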
So to answer your questions:
Are my vcpus in the same numa node? If not, why?
Yes.
How can I tell from the xmldump that all vcpus are assigned to the same numa node?
For a single guest NUMA topology, look at the 'cpuset' values and ensure they all map to cores from a single host NUMA node.

Stephen