[openstack-dev] [nova] Core pinning

Jiang, Yunhong yunhong.jiang at intel.com
Wed Nov 13 17:40:35 UTC 2013

> -----Original Message-----
> From: Tuomas Paappanen [mailto:tuomas.paappanen at tieto.com]
> Sent: Wednesday, November 13, 2013 4:46 AM
> To: openstack-dev at lists.openstack.org
> Subject: [openstack-dev] [nova] Core pinning
> Hi all,
> I would like to hear your thoughts about core pinning in Openstack.
> Currently nova (with qemu-kvm) supports a cpu set of PCPUs that can be
> used by instances. I didn't find a blueprint, but I think this feature
> is meant to isolate the cpus used by the host from the cpus used by
> instances (VCPUs).
> But, from a performance point of view, it is better to dedicate PCPUs
> exclusively to VCPUs and the emulator. In some cases you may want to
> guarantee that only one instance (and its VCPUs) is using certain
> PCPUs. By using core pinning you can optimize instance performance
> based on e.g. cache sharing, NUMA topology, interrupt handling, PCI
> passthrough (SR-IOV) in multi-socket hosts, etc.

My 2 cents.
When you talk about the "performance point of view", do you mean guest performance or overall performance? Pinning PCPUs is sure to benefit guest performance, but possibly not overall performance, especially if the vCPU does not consume 100% of the pCPU's resources.

I think CPU pinning is common in data center virtualization, but I'm not sure it is in scope for cloud, which provides computing power, not hardware resources.

And I think part of your purpose can be achieved through https://wiki.openstack.org/wiki/CPUEntitlement and https://wiki.openstack.org/wiki/InstanceResourceQuota . In particular, I would hope a well-implemented hypervisor avoids needless vCPU migration when a vCPU is very busy and requires most of a pCPU's computing capacity (I know Xen used to have a scheduler issue, long ago, that caused frequent vCPU migration).
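As a rough illustration of the entitlement approach (this is not Nova code; the helper below is hypothetical, modeling cgroup-style proportional cpu shares as used by the quota extra specs):

```python
def entitlement(shares, total_capacity=1.0):
    """Relative CPU entitlement per instance under cgroup-style
    proportional sharing: instance i gets shares_i / sum(shares)
    of the contended CPU time, with no hard pinning involved."""
    total = sum(shares.values())
    return {name: total_capacity * s / total for name, s in shares.items()}

# Two instances with cpu-shares-style weights 2048 and 1024:
# the first is entitled to 2/3 of the contended CPU time.
print(entitlement({"vm1": 2048, "vm2": 1024}))
```

The point being that shares shape how much CPU an instance gets under contention, without dictating which pCPU it runs on.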


> We have already implemented a feature like this (a PoC with limitations)
> on the Nova Grizzly version and would like to hear your opinion about it.
> The current implementation consists of three main parts:
> - Definition of pcpu-vcpu maps for instances and instance spawning
> - (optional) Compute resource and capability advertising including free
> pcpus and NUMA topology.
> - (optional) Scheduling based on free cpus and NUMA topology.
> The implementation is quite simple:
> (additional/optional parts)
> Nova-computes advertise free pcpus and the NUMA topology in the same
> manner as host capabilities. Instances are scheduled based on this
> information.
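If I understand the scheduling part, a filter over the advertised data could be as simple as this (a minimal sketch, not the actual Nova filter API; the host data layout is an assumption):

```python
def hosts_with_free_pcpus(hosts, vcpus_needed, numa_cell=None):
    """Select hosts advertising enough free pCPUs for the instance,
    optionally restricted to a single NUMA cell.
    `hosts` maps host name -> {numa cell id: [free pcpu ids]}."""
    selected = []
    for name, cells in hosts.items():
        if numa_cell is not None:
            free = len(cells.get(numa_cell, []))
        else:
            free = sum(len(pcpus) for pcpus in cells.values())
        if free >= vcpus_needed:
            selected.append(name)
    return selected

hosts = {"compute1": {0: [1, 2], 1: [5, 6, 7]},
         "compute2": {0: [0, 1, 2, 3], 1: []}}
print(hosts_with_free_pcpus(hosts, 4))               # both have >= 4 free
print(hosts_with_free_pcpus(hosts, 4, numa_cell=0))  # only compute2 fits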
> (core pinning)
> the admin can set PCPUs for VCPUs and for the emulator process, or select
> a NUMA cell for the instance vcpus, by adding key:value pairs to the
> flavor's extra specs. E.g. an instance has 4 vcpus:
> <key>:<value>
> vcpus:1,2,3,4 --> vcpu0 pinned to pcpu1, vcpu1 pinned to pcpu2...
> emulator:5 --> emulator pinned to pcpu5
> or
> numacell:0 --> all vcpus are pinned to pcpus in numa cell 0.
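Just to check that I read the proposal right, the extra specs above would parse into a vcpu->pcpu map roughly like this (a sketch; the key names come from your example, the parser itself is hypothetical):

```python
def parse_pinning(extra_specs):
    """Turn the proposed flavor extra specs into a pinning plan:
    'vcpus' lists one pCPU per vCPU in order, 'emulator' pins the
    emulator thread, 'numacell' selects a NUMA cell instead."""
    plan = {}
    if "vcpus" in extra_specs:
        pcpus = [int(p) for p in extra_specs["vcpus"].split(",")]
        plan["vcpu_pins"] = dict(enumerate(pcpus))  # vcpu index -> pcpu id
    if "emulator" in extra_specs:
        plan["emulator_pin"] = int(extra_specs["emulator"])
    if "numacell" in extra_specs:
        plan["numa_cell"] = int(extra_specs["numacell"])
    return plan

# vcpu0->pcpu1, vcpu1->pcpu2, vcpu2->pcpu3, vcpu3->pcpu4, emulator->pcpu5
print(parse_pinning({"vcpus": "1,2,3,4", "emulator": "5"}))
```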
> In nova-compute, core pinning information is read from the extra specs
> and added to the domain xml in the same way as the cpu quota values
> (cputune).
> <cputune>
>        <vcpupin vcpu='0' cpuset='1'/>
>        <vcpupin vcpu='1' cpuset='2'/>
>        <vcpupin vcpu='2' cpuset='3'/>
>        <vcpupin vcpu='3' cpuset='4'/>
>        <emulatorpin cpuset='5'/>
> </cputune>
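For reference, producing that <cputune> element from such a map is straightforward with the stdlib (a sketch using xml.etree.ElementTree; Nova presumably goes through its own libvirt config classes instead):

```python
import xml.etree.ElementTree as ET

def cputune_xml(vcpu_pins, emulator_pin=None):
    """Build a libvirt <cputune> element from a vcpu->pcpu map,
    matching the domain xml snippet above."""
    cputune = ET.Element("cputune")
    for vcpu, pcpu in sorted(vcpu_pins.items()):
        ET.SubElement(cputune, "vcpupin",
                      {"vcpu": str(vcpu), "cpuset": str(pcpu)})
    if emulator_pin is not None:
        ET.SubElement(cputune, "emulatorpin",
                      {"cpuset": str(emulator_pin)})
    return ET.tostring(cputune, encoding="unicode")

print(cputune_xml({0: 1, 1: 2, 2: 3, 3: 4}, emulator_pin=5))
```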
> What do you think? Implementation alternatives? Is this worth a
> blueprint? All related comments are welcome!
> Regards,
> Tuomas
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev