numa affinity question

Sean Mooney smooney at redhat.com
Wed Sep 25 19:29:11 UTC 2019


On Wed, 2019-09-25 at 15:02 +0000, Manuel Sopena Ballesteros wrote:
> ok, but numa node 0 has much less free memory so it looks like the vm memory is assigned to node 0 while the vcpus are
> in node 1.
> 
> is that right?
no
>  how can I enforce openstack to tell kvm to put memory and vcpus local?
If you have a guest with a NUMA topology, nova will require that the guest memory is allocated from the same host
NUMA nodes as the vcpus. You cannot disable this behavior, so your VM will have its memory allocated from host NUMA
node 1, the same node its vcpus are pinned to.

You can see that from the numatune element:
  <numatune>
    <memory mode='strict' nodeset='1'/>
    <memnode cellid='0' mode='strict' nodeset='1'/>
  </numatune>

In this case guest NUMA node 0 is mapped to host NUMA node 1, so all guest memory and vcpus are allocated from host
NUMA node 1.
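
If you want to confirm the placement on the compute host itself, the commands below are a quick check (a sketch,
assuming libvirt and the numactl package are installed on the compute node; the instance name is taken from the XML
quoted below and may differ on your system):

  # show the NUMA memory mode and nodeset libvirt applied to the guest
  virsh numatune instance-00000108

  # show per-NUMA-node memory usage of the qemu process backing the guest
  numastat -p $(pgrep -f instance-00000108)

With a strict mode and a nodeset of 1, all of the guest memory reported by numastat should sit on host node 1,
matching the vcpu pinning.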
> 
> thank you
> ________________________________________
> From: Stephen Finucane [sfinucan at redhat.com]
> Sent: Wednesday, 25 September 2019 19:08
> To: Manuel Sopena Ballesteros; openstack-discuss at lists.openstack.org
> Subject: Re: numa affinity question
> 
> On Wed, 2019-09-25 at 07:49 +0000, Manuel Sopena Ballesteros wrote:
> > Dear openstack user group,
> > 
> > I have a server with 2 numa nodes and I am trying to setup nova numa affinity.
> > 
> > [root at zeus-53 ~]# numactl -H
> > available: 2 nodes (0-1)
> > node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 28 29 30 31 32 33 34 35 36 37 38 39 40 41
> > node 0 size: 262029 MB
> > node 0 free: 2536 MB
> > node 1 cpus: 14 15 16 17 18 19 20 21 22 23 24 25 26 27 42 43 44 45 46 47 48 49 50 51 52 53 54 55
> > node 1 size: 262144 MB
> > node 1 free: 250648 MB
> > node distances:
> > node   0   1
> >   0:  10  21
> >   1:  21  10
> > 
> > openstack flavor create --public xlarge.numa.perf --ram 250000 --disk 700 --vcpus 25 --property
> > hw:cpu_policy=dedicated --property hw:emulator_threads_policy=isolate --property hw:numa_nodes='1' --property
> > pci_passthrough:alias='nvme:4'
> > openstack server create --network hpc --flavor xlarge.numa.perf --image centos7.6-kudu-image --availability-zone
> > nova:zeus-53.localdomain --key-name mykey kudu-1
> > 
> > This is the xmldump for the created vm
> > 
> > But for some reason the second VM fails to create with the error
> > <name>instance-00000108</name>
> >   <uuid>5d278c90-27ab-4ee4-aeea-e1bf36ac246a</uuid>
> 
> [snip]
> 
> >   <vcpu placement='static'>25</vcpu>
> >   <cputune>
> >     <shares>25600</shares>
> >     <vcpupin vcpu='0' cpuset='27'/>
> >     <vcpupin vcpu='1' cpuset='55'/>
> >     <vcpupin vcpu='2' cpuset='50'/>
> >     <vcpupin vcpu='3' cpuset='22'/>
> >     <vcpupin vcpu='4' cpuset='49'/>
> >     <vcpupin vcpu='5' cpuset='21'/>
> >     <vcpupin vcpu='6' cpuset='48'/>
> >     <vcpupin vcpu='7' cpuset='20'/>
> >     <vcpupin vcpu='8' cpuset='25'/>
> >     <vcpupin vcpu='9' cpuset='53'/>
> >     <vcpupin vcpu='10' cpuset='18'/>
> >     <vcpupin vcpu='11' cpuset='46'/>
> >     <vcpupin vcpu='12' cpuset='51'/>
> >     <vcpupin vcpu='13' cpuset='23'/>
> >     <vcpupin vcpu='14' cpuset='19'/>
> >     <vcpupin vcpu='15' cpuset='47'/>
> >     <vcpupin vcpu='16' cpuset='26'/>
> >     <vcpupin vcpu='17' cpuset='54'/>
> >     <vcpupin vcpu='18' cpuset='42'/>
> >     <vcpupin vcpu='19' cpuset='14'/>
> >     <vcpupin vcpu='20' cpuset='17'/>
> >     <vcpupin vcpu='21' cpuset='45'/>
> >     <vcpupin vcpu='22' cpuset='43'/>
> >     <vcpupin vcpu='23' cpuset='15'/>
> >     <vcpupin vcpu='24' cpuset='24'/>
> >     <emulatorpin cpuset='14-15,17-27,42-43,45-51,53-55'/>
> >   </cputune>
> 
> [snip]
> 
> This is what you want. See those 'cpuset' values? All of them are taken
> from cores on host NUMA node #1, as you noted previously.
> 
> > node 1 cpus: 14 15 16 17 18 19 20 21 22 23 24 25 26 27 42 43 44 45 46 47 48 49 50 51 52 53 54 55
> 
> So to answer your questions:
> 
> > Are my vcpus in the same numa node?  if not why?
> 
> Yes.
> 
> > How can I tell from the xmldump that all vcpus are assigned to the same numa node?
> 
> For a guest with a single-node NUMA topology, look at the 'cpuset'
> values and ensure they all map to cores from a single host NUMA node.
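> 
> A quick way to do that check on the compute host (a sketch, assuming
> libvirt and util-linux are available there; the instance name is taken
> from the XML above and may differ on your system):
> 
>   # list the host CPUs each vcpu is pinned to
>   virsh dumpxml instance-00000108 | grep vcpupin
> 
>   # map every host CPU to its NUMA node, to compare against the cpusets
>   lscpu --parse=CPU,NODE | grep -v '^#'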
> 
> Stephen
> 



