[Openstack] vCPU -> pCPU MAPPING

Kaustubh Kelkar kaustubh.kelkar at casa-systems.com
Fri Jul 8 19:43:34 UTC 2016


Although it contradicts the idea of a cloud, I believe CPU mapping between the guest and the host is a valid case for NFV applications. The best one can do is to ensure the vCPUs and virtual memory are mapped to a single NUMA node within the host, and that the CPUs do not float within that NUMA node.

A while back, I was able to do this on a Kilo based lab for performance benchmarking:
https://ask.openstack.org/en/question/87711/numa-awareness-during-instance-placement/

While the answer may not be up to date with respect to newer versions of OpenStack, in addition to the hw:numa_* extra specs, you could look at hw:cpu_policy and hw:cpu_thread_policy as well.
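As a rough sketch of the extra specs mentioned above (the flavor name "nfv.pinned" is hypothetical, and the exact values depend on your release and hardware):

```shell
# Sketch: dedicate one pCPU per vCPU, require sibling threads, and keep
# the guest's CPUs and memory on a single host NUMA node.
# "nfv.pinned" is a hypothetical flavor assumed to already exist.
openstack flavor set nfv.pinned \
  --property hw:cpu_policy=dedicated \
  --property hw:cpu_thread_policy=require \
  --property hw:numa_nodes=1
```

Note that hw:cpu_thread_policy also accepts "prefer" and "isolate", depending on how you want hyperthread siblings handled.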


-Kaustubh

From: Arne Wiebalck [mailto:arne.wiebalck at cern.ch]
Sent: Friday, July 8, 2016 3:11 PM
To: Brent Troge <brenttroge2016 at gmail.com>
Cc: openstack at lists.openstack.org
Subject: Re: [Openstack] vCPU -> pCPU MAPPING

We have use cases in our cloud which require vCPU-to-NUMA_node pinning
to maximise the CPU performance available in the guests. From what we’ve
seen, there was no further improvement when the vCPUs were mapped
one-to-one to pCPUs (we did not study this in detail, though, as with the
NUMA node pinning the performance was sufficiently close to the physical
one).

To implement this, we specify the numa_nodes extra_spec for the corresponding
flavor and rely on nova’s placement policy.
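Concretely, the flavor change described above amounts to a single extra spec (the flavor name "m1.numa" here is only an example):

```shell
# Sketch: confine the guest's vCPUs and memory to one host NUMA node
# and let nova's placement policy pick the node.
openstack flavor set m1.numa --property hw:numa_nodes=1
```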

HTH,
 Arne

—
Arne Wiebalck
CERN IT



On 08 Jul 2016, at 19:22, Steve Gordon <sgordon at redhat.com> wrote:

----- Original Message -----

From: "Brent Troge" <brenttroge2016 at gmail.com>
To: openstack at lists.openstack.org
Sent: Friday, July 8, 2016 9:59:58 AM
Subject: [Openstack] vCPU -> pCPU MAPPING

context - high performance private cloud with cpu pinning

Is it possible to map vCPUs to specific pCPUs? Currently, I see that you can only direct which vCPUs are mapped to a specific NUMA node:

hw:numa_cpus.0=1,2,3,4

Just in addition to Jay's comment, the above does not do what I suspect you think it does. The above tells Nova to expose vCPUs 1, 2, 3, and 4 in *guest* NUMA node 0 when building the guest NUMA topology in the Libvirt XML. Nova will endeavor to map these vCPUs to pCPUs on the same NUMA node on the host as *each other* but that will not necessarily be NUMA node *0* on the host depending on resource availability.
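To make the guest-vs-host distinction concrete, the extra spec ends up as a *guest* NUMA cell in the libvirt domain XML, roughly like the following (illustrative values only, not taken from this thread):

```xml
<!-- Illustrative sketch of the guest NUMA topology nova generates for
     hw:numa_cpus.0=1,2,3,4 -- cell 0 is a GUEST node; the memory size
     here is an example value. -->
<cpu>
  <numa>
    <cell id='0' cpus='1-4' memory='4194304' unit='KiB'/>
  </numa>
</cpu>
```

Which *host* NUMA node backs this cell is decided by nova at placement time, as described above.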

Thanks,

Steve


However, to get even more granular, is it possible to create a flavor which
maps vCPU to specific pCPU within a numa node ?

Something like:
hw:numa_cpus.<NUMA-NODE>-<pCPU>=<vCPU>

hw:numa_cpus.0-1=1
hw:numa_cpus.0-2=2


Thanks!

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

