[openstack-dev] [nova] Running large instances with CPU pinning and OOM

Blair Bethwaite blair.bethwaite at gmail.com
Wed Sep 27 08:40:56 UTC 2017

On 27 September 2017 at 18:14, Stephen Finucane <sfinucan at redhat.com> wrote:
> What you're probably looking for is the 'reserved_host_memory_mb' option. This
> defaults to 512 (at least in the latest master) so if you up this to 4192 or
> similar you should resolve the issue.

I don't see how this would help given the problem description -
reserved_host_memory_mb would only help avoid causing OOM when
launching the last guest that would otherwise fit on a host based on
Nova's simplified notion of memory capacity. It sounds like both CPU
and NUMA pinning are in play here, otherwise the host would have no
problem allocating RAM on a different NUMA node and OOM would be
avoided.

Jakub, your numbers sound reasonable to me, i.e., use 60 out of 64GB
when only considering QEMU overhead - however I would expect that
might be a problem on NUMA node0 where there will be extra reserved
memory regions for kernel and devices. In such a configuration where
you are wanting to pin multiple guests into each of multiple NUMA
nodes I think you may end up needing different flavor/instance-type
configs (using less RAM) for node0 versus other NUMA nodes. Suggest
freshly booting one of your hypervisors and then with no guests
running take a look at e.g. /proc/buddyinfo and /proc/zoneinfo to see
what memory is used/available and where.
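Concretely, on a freshly booted hypervisor with no guests running, the
inspection could look something like this (standard Linux procfs;
numactl is optional and only used if installed):

```shell
# Free pages per NUMA node, broken down by allocation order --
# useful for spotting fragmentation as well as raw availability.
cat /proc/buddyinfo

# Per-node/per-zone detail: comparing "present" vs "managed" pages
# shows how much of each node (especially node0) the kernel has
# carved out for itself and for device reservations.
grep -E '^Node|present|managed' /proc/zoneinfo

# If numactl happens to be installed, a quick total/free summary
# per node:
command -v numactl >/dev/null && numactl --hardware
```

Comparing node0's managed memory against the other nodes should show
whether node0 flavors need to be sized smaller.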
