[openstack-dev] [nova] NUMA, huge pages, and scheduling

Paul Michali pc at michali.net
Mon Jun 13 20:17:36 UTC 2016


Hmm... I tried Friday and again today, and I'm not seeing the VMs being
created evenly across the NUMA nodes. Every Cirros VM is created on nodeid 0.
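
In case it matters, the node placement can be seen in the guest XML that
libvirt generates on the compute host, with something like this (the instance
name below is just an example):

    virsh dumpxml instance-00000001 | grep -A 3 numatune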

I have the m1.small flavor (2 GB) selected and am using the hw:numa_nodes=1
and hw:mem_page_size=2048 flavor-key settings. Each VM consumes 1024 huge
pages (of size 2 MB), but is always on nodeid 0. Also, it seems that once
half of the total number of huge pages is in use, libvirt gives an error
saying there is not enough memory to create the VM. Is it expected that the
huge pages are "allocated" per NUMA node?
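
For what it's worth, the per-NUMA-node huge page counts can be checked on the
compute host with something like this (paths assume 2 MB pages):

    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
    virsh freepages --all    # same information via libvirt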

I don't know why I cannot repeat what I did on 6/3, where I changed
hw:mem_page_size from "large" to "2048" and it worked, with VMs allocated to
each of the two NUMA nodes. :(
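
In case the exact commands matter, this is roughly how the flavor keys are
being set (flavor name per above):

    nova flavor-key m1.small set hw:numa_nodes=1
    nova flavor-key m1.small set hw:mem_page_size=2048
    nova flavor-show m1.small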

Regards,

PCM


On Fri, Jun 10, 2016 at 9:16 AM Paul Michali <pc at michali.net> wrote:

> Actually, I had mem_page_size set to "large" and not "1024". However, it
> seemed like it was using 1024 pages per small VM created. Is there
> possibly some issue with "large" not resolving to one of the supported
> values? I would have guessed it would have chosen 2M or 1G for the size.
>
> Any thoughts?
>
> PCM
>
> On Fri, Jun 10, 2016 at 9:05 AM Paul Michali <pc at michali.net> wrote:
>
>> Thanks Daniel and Chris! I think that was the problem: I had configured the
>> Nova flavor with a mem_page_size of 1024, when it should have been one of
>> the supported values.
>>
>> I'll go through and check things out one more time, but I think that is
>> the problem. I still need to figure out what is going on with the neutron
>> port not being released - we have another person in my group who has seen
>> the same issue.
>>
>> Regards,
>>
>> PCM
>>
>> On Fri, Jun 10, 2016 at 4:41 AM Daniel P. Berrange <berrange at redhat.com>
>> wrote:
>>
>>> On Thu, Jun 09, 2016 at 12:35:06PM -0600, Chris Friesen wrote:
>>> > On 06/09/2016 05:15 AM, Paul Michali wrote:
>>> > > 1) On the host, I was seeing 32768 huge pages, of 2MB size.
>>> >
>>> > Please check the number of huge pages _per host numa node_.
>>> >
>>> > > 2) I changed mem_page_size from 1024 to 2048 in the flavor, and then
>>> > > when VMs were created, they were being evenly assigned to the two NUMA
>>> > > nodes, each using 1024 huge pages. At this point I could create more
>>> > > than half, but when there were 1945 pages left, it failed to create a
>>> > > VM. Did it fail because the mem_page_size was 2048 and the available
>>> > > pages were 1945, even though we were only requesting 1024 pages?
>>> >
>>> > I do not think that "1024" is a valid page size (at least for x86).
>>>
>>> Correct, 4k, 2M and 1GB are valid page sizes.
>>>
>>> > Valid mem_page_size values are determined by the host CPU.  You do not
>>> > need a larger page size for flavors with larger memory sizes.
>>>
>>> Though note that the flavour memory size should be a multiple of the page
>>> size unless you want to waste memory. eg if you have a flavour with 750MB
>>> RAM, then you probably don't want to use 1GB pages, as that wastes 274MB.
>>>
>>> Regards,
>>> Daniel
>>> --
>>> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
>>> |: http://libvirt.org -o- http://virt-manager.org :|
>>> |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
>>> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
>>>
>>>
>>