[Openstack] vCPU -> pCPU MAPPING

Brent Troge brenttroge2016 at gmail.com
Sat Jul 9 19:43:49 UTC 2016


Based on my testing, you can control NUMA node placement if you are also
using SR-IOV and each PF is grouped into its own
physical network.

How did I test?

First, set your flavor to use a single NUMA node:

hw:numa_nodes=1

Then, when creating your neutron port(s), reference a neutron network created
against the physical network whose PF resides on the desired NUMA node.

For example, say you want to drive resource alignment within NUMA node 1.

Because of 'hw:numa_nodes=1', and because your neutron ports are created
against a physical network whose PF is on NUMA node 1, nova will try to grab
memory/CPU from that same node.
If not enough resources are available on NUMA node 1 on any applicable host,
instantiation fails.

I tested numerous times and tried to account for multiple variables; in
each test I was able to select NUMA alignment using the above method.
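
Roughly, the commands look like this (the flavor, network, port, and image
names and the VLAN ID are just placeholders, and the exact CLI syntax may
vary by release):

# flavor constrained to a single (guest) NUMA node
nova flavor-key numa1-flavor set hw:numa_nodes=1

# provider network mapped to the physical network whose PF lives on NUMA node 1
neutron net-create sriov-net1 --provider:network_type vlan \
    --provider:physical_network physnet1 --provider:segmentation_id 100

# SR-IOV port on that network
neutron port-create sriov-net1 --binding:vnic_type direct --name sriov-port1

# boot against the port; nova should then pull CPU/memory from NUMA node 1
nova boot --flavor numa1-flavor --image <image-name> \
    --nic port-id=<port-uuid> numa1-test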





On Fri, Jul 8, 2016 at 2:43 PM, Kaustubh Kelkar <
kaustubh.kelkar at casa-systems.com> wrote:

> Although it contradicts the idea of a cloud, I believe the CPU mapping
> between the guest and the host is a valid case for NFV applications. The
> best that one can do is to ensure vCPU and virtual memory are mapped to a
> single NUMA node within the host and to make sure the CPUs don’t float
> within that NUMA node.
>
> A while back, I was able to do this on a Kilo-based lab for performance
> benchmarking:
>
> https://ask.openstack.org/en/question/87711/numa-awareness-during-instance-placement/
>
> While the answer may not be up to date with respect to newer versions of
> OpenStack, in addition to numa_* extra specs, you could look at cpu_policy
> and cpu_thread_policy as well.
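>
> As a rough sketch (the flavor name is a placeholder, and exact behaviour
> depends on your release and compute configuration):
>
> # pin each vCPU to a dedicated pCPU so the vCPUs don't float
> nova flavor-key nfv-flavor set hw:cpu_policy=dedicated
> # control placement relative to hyper-thread siblings (prefer/isolate/require)
> nova flavor-key nfv-flavor set hw:cpu_thread_policy=prefer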
>
> -Kaustubh
>
> From: Arne Wiebalck [mailto:arne.wiebalck at cern.ch]
> Sent: Friday, July 8, 2016 3:11 PM
> To: Brent Troge <brenttroge2016 at gmail.com>
> Cc: openstack at lists.openstack.org
> Subject: Re: [Openstack] vCPU -> pCPU MAPPING
>
> We have use cases in our cloud which require vCPU-to-NUMA_node pinning
> to maximise the CPU performance available in the guests. From what we’ve
> seen, there was no further improvement when the vCPUs were mapped
> one-to-one to pCPUs (we did not study this in detail, though, as with the
> NUMA node pinning the performance was sufficiently close to the physical
> one).
>
> To implement this, we specify the numa_nodes extra_spec for the
> corresponding flavor and rely on nova’s placement policy.
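>
> (If you want to verify the resulting placement on a compute node, the
> libvirt domain can be inspected directly; the domain name below is a
> placeholder:
>
> virsh vcpupin <libvirt-domain>
> virsh numatune <libvirt-domain>
>
> The first shows the pCPU set each vCPU may run on, the second the host
> NUMA node(s) the guest memory is bound to.)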
>
> HTH,
>  Arne
>
> Arne Wiebalck
> CERN IT
>
> On 08 Jul 2016, at 19:22, Steve Gordon <sgordon at redhat.com> wrote:
>
> ----- Original Message -----
>
> From: "Brent Troge" <brenttroge2016 at gmail.com>
> To: openstack at lists.openstack.org
> Sent: Friday, July 8, 2016 9:59:58 AM
> Subject: [Openstack] vCPU -> pCPU MAPPING
>
> context - high performance private cloud with cpu pinning
>
> Is it possible to map vCPUs to specific pCPUs?
> Currently I see you can only direct which vCPUs are mapped to a specific
> NUMA node:
>
> hw:numa_cpus.0=1,2,3,4
>
>
> Just in addition to Jay's comment, the above does not do what I suspect
> you think it does. The above tells Nova to expose vCPUs 1, 2, 3, and 4 in
> *guest* NUMA node 0 when building the guest NUMA topology in the Libvirt
> XML. Nova will endeavor to map these vCPUs to pCPUs on the same NUMA node
> on the host as *each other* but that will not necessarily be NUMA node *0*
> on the host depending on resource availability.
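>
> (Illustration only: with the extra spec above, the guest NUMA topology in
> the generated domain XML ends up roughly of this shape, with the memory
> value depending on the flavor:
>
> <cpu>
>   <numa>
>     <cell id='0' cpus='1-4' memory='4194304' unit='KiB'/>
>   </numa>
> </cpu>
>
> Which *host* NUMA node backs that cell shows up separately, in the
> <numatune>/<cputune> elements.)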
>
> Thanks,
>
> Steve
>
>
> However, to get even more granular, is it possible to create a flavor which
> maps a vCPU to a specific pCPU within a NUMA node?
>
> Something like:
> hw:numa_cpus.<NUMA-NODE>-<pCPU>=<vCPU>
>
> hw:numa_cpus.0-1=1
> hw:numa_cpus.0-2=2
>
>
> Thanks!
>