[nova] NUMA scheduling

Erik Olof Gunnar Andersson eandersson at blizzard.com
Sat Oct 17 04:04:57 UTC 2020


We have been running with NUMA configured for a long time and I don't believe I have seen this behavior. It's important that you configure the flavors / aggregates correctly.

I think this might be what you are looking for

openstack flavor set m1.large --property hw:cpu_policy=dedicated

https://docs.openstack.org/nova/pike/admin/cpu-topologies.html

Pretty sure we also set this for any flavor that only requires a single NUMA node

openstack flavor set m1.large --property hw:numa_nodes=1
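
If I remember correctly, on newer releases (Train onwards) the compute nodes also need the dedicated CPU set defined in nova.conf for hw:cpu_policy=dedicated to work; roughly something like this, where the CPU ranges are just an example and depend on your hardware:

# /etc/nova/nova.conf on the compute node
[compute]
# host CPUs reserved for pinned (hw:cpu_policy=dedicated) guests - example range only
cpu_dedicated_set = 2-23,26-47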

________________________________
From: Eric K. Miller <emiller at genesishosting.com>
Sent: Friday, October 16, 2020 8:47 PM
To: Laurent Dumont <laurentfdumont at gmail.com>
Cc: openstack-discuss <openstack-discuss at lists.openstack.org>
Subject: RE: [nova] NUMA scheduling

> As far as I know, numa_nodes=1 just means --> the resources for that VM should run on one NUMA node (so either NUMA0 or NUMA1). If there is space free on both, then it's probably going to pick one of the two?

I thought the same, but it appears that VMs are never scheduled on NUMA1 even though NUMA0 is full (causing the OOM killer to trigger and kill running VMs).  I would have hoped that a NUMA node was treated like a host, with VMs being balanced across nodes.
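
(For reference, this is how placement can be checked directly on the hypervisor; the instance domain name below is just a placeholder.)

# per-NUMA-node free memory on the compute host
numactl --hardware

# vCPU -> host CPU affinity and memory nodeset for a specific guest
virsh vcpupin instance-00000042
virsh numatune instance-00000042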

The discussion on NUMA handling is long, so I was hoping that there might be information about the latest solution to the problem - or to be told that there isn't a good solution other than using huge pages.
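
If huge pages do turn out to be the only workable option, my understanding is they would also be requested per flavor, something like:

openstack flavor set m1.large --property hw:mem_page_size=large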

Eric


