[Openstack] [Nova][virt-driver-numa-placement] How to enable instance with NUMA?
Chris Friesen
chris.friesen at windriver.com
Thu Feb 5 17:21:05 UTC 2015
On 02/05/2015 10:32 AM, Daniel P. Berrange wrote:
> On Thu, Feb 05, 2015 at 10:28:56AM -0600, Chris Friesen wrote:
>> For what it's worth, I was able to make hugepages work with an older qemu by
>> commenting out two lines in
>> virt.libvirt.config.LibvirtConfigGuestMemoryBacking.format_dom()
>>
>>     def format_dom(self):
>>         root = super(LibvirtConfigGuestMemoryBacking, self).format_dom()
>>
>>         if self.hugepages:
>>             hugepages = etree.Element("hugepages")
>>             # for item in self.hugepages:
>>             #     hugepages.append(item.format_dom())
>>             root.append(hugepages)
>>
>>         return root
>>
>>
>> This results in XML that looks like:
>>
>> <memoryBacking>
>>   <hugepages/>
>> </memoryBacking>
>>
>>
>> And a qemu command line that looks like:
>>
>> -mem-prealloc -mem-path /mnt/huge-2048kB/libvirt/qemu
>
> With that there is no guarantee that the huge pages are being allocated
> from the NUMA node on which the guest is actually placed by Nova, hence
> we did not intend to support that.
It's possible that the end-user didn't indicate a preference for NUMA. If they
just asked for hugepages and we have the ability to provide them, I think we
should do so.
In what is likely the common case of an instance with a single NUMA node, I
think this will give the desired behaviour, since the default kernel policy is
to allocate memory from the NUMA node local to the CPU that requested it. As
long as qemu's CPU affinity is set before it allocates memory, we should be okay.
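
As a sanity check (not nova code, just a quick standalone sketch I'd run on the
compute node), something like the following could show which node the guest's
pages actually came from, by sampling the per-node free_hugepages counters
before and after the instance boots. The 2048kB page size and the sysfs layout
are assumptions about the host:

import glob

# Quick sketch: sample per-NUMA-node 2MB hugepage usage so we can see
# which node the guest's pages were actually allocated from.
# Assumes 2048kB hugepages and the standard sysfs layout.

def free_hugepages_per_node():
    counts = {}
    pattern = ("/sys/devices/system/node/node*/"
               "hugepages/hugepages-2048kB/free_hugepages")
    for path in glob.glob(pattern):
        node = path.split("/")[5]          # e.g. "node0"
        with open(path) as f:
            counts[node] = int(f.read().strip())
    return counts

if __name__ == "__main__":
    try:
        prompt = raw_input        # python 2
    except NameError:
        prompt = input            # python 3
    before = free_hugepages_per_node()
    prompt("Boot the instance, then press Enter...")
    after = free_hugepages_per_node()
    for node in sorted(before):
        used = before[node] - after.get(node, 0)
        print("%s: %d x 2MB pages consumed" % (node, used))
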
The only case that isn't covered is when the flavor specifies multiple NUMA
nodes. In that case maybe the scheduler filters should be aware of this and
refuse to assign an instance with multiple NUMA nodes to a compute node running
an older qemu.
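
Something along these lines might be enough on the filter side. This is only a
hand-wavy sketch: the qemu_version attribute on host_state is hypothetical
(nova would need to start reporting it from the compute node), the minimum
version is a placeholder, and the exact way the flavor extra specs are reached
inside a filter may differ between releases; hw:numa_nodes is the existing
extra spec:

from nova.scheduler import filters

# Hand-wavy sketch: refuse to place instances that ask for multiple NUMA
# nodes on hosts whose qemu is too old for per-node hugepage backing.
# host_state.qemu_version is hypothetical; nova doesn't report it today.
MIN_QEMU_FOR_NUMA_HUGEPAGES = (2, 1, 0)   # placeholder, check the real minimum

class QemuNumaHugepagesFilter(filters.BaseHostFilter):

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type') or {}
        extra_specs = instance_type.get('extra_specs') or {}
        numa_nodes = int(extra_specs.get('hw:numa_nodes', 1))
        if numa_nodes <= 1:
            # Single-node guests are covered by the default kernel
            # "allocate from the local node" behaviour discussed above.
            return True
        qemu_version = getattr(host_state, 'qemu_version', None)
        if qemu_version is None:
            # Host didn't report a version; don't reject it on that basis.
            return True
        return qemu_version >= MIN_QEMU_FOR_NUMA_HUGEPAGES
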
Chris