[Openstack-operators] [openstack-dev] [nova] Running large instances with CPU pinning and OOM

Blair Bethwaite blair.bethwaite at gmail.com
Wed Sep 27 08:43:05 UTC 2017


Also CC-ing os-ops, as someone else may have encountered this before
and may have further/better advice...

On 27 September 2017 at 18:40, Blair Bethwaite
<blair.bethwaite at gmail.com> wrote:
> On 27 September 2017 at 18:14, Stephen Finucane <sfinucan at redhat.com> wrote:
>> What you're probably looking for is the 'reserved_host_memory_mb' option. This
>> defaults to 512 (at least in the latest master) so if you up this to 4192 or
>> similar you should resolve the issue.
>
> I don't see how this would help given the problem description -
> reserved_host_memory_mb would only help avoid causing OOM when
> launching the last guest that would otherwise fit on a host based on
> Nova's simplified notion of memory capacity. It sounds like both CPU
> and NUMA pinning are in play here, otherwise the host would have no
> problem allocating RAM on a different NUMA node and OOM would be
> avoided.
>
> Jakub, your numbers sound reasonable to me, i.e., use 60 out of 64GB
> when only considering QEMU overhead - however I would expect that
> might  be a problem on NUMA node0 where there will be extra reserved
> memory regions for kernel and devices. In such a configuration where
> you are wanting to pin multiple guests into each of multiple NUMA
> nodes I think you may end up needing different flavor/instance-type
> configs (using less RAM) for node0 versus other NUMA nodes. Suggest
> freshly booting one of your hypervisors and then with no guests
> running take a look at e.g. /proc/buddyinfo/ and /proc/zoneinfo to see
> what memory is used/available and where.
>
> --
> Cheers,
> ~Blairo



-- 
Cheers,
~Blairo


