[openstack-dev] [nova] Core pinning

Tuomas Paappanen tuomas.paappanen at tieto.com
Thu Nov 14 14:03:05 UTC 2013


On 13.11.2013 20:20, Jiang, Yunhong wrote:
>
>> -----Original Message-----
>> From: Chris Friesen [mailto:chris.friesen at windriver.com]
>> Sent: Wednesday, November 13, 2013 9:57 AM
>> To: openstack-dev at lists.openstack.org
>> Subject: Re: [openstack-dev] [nova] Core pinning
>>
>> On 11/13/2013 11:40 AM, Jiang, Yunhong wrote:
>>
>>>> But, from a performance point of view it is better to exclusively
>>>> dedicate pCPUs to vCPUs and the emulator. In some cases you may want
>>>> to guarantee that only one instance (and its vCPUs) is using certain
>>>> pCPUs.  By using core pinning you can optimize instance performance
>>>> based on e.g. cache sharing, NUMA topology, interrupt handling, and PCI
>>>> passthrough (SR-IOV) in multi-socket hosts.
>>> My 2 cents. When you talk about the "performance point of view", are
>>> you talking about guest performance or overall performance? Pinning pCPUs
>>> is sure to benefit guest performance, but possibly not overall
>>> performance, especially if the vCPU does not consume 100% of the CPU
>>> resources.
>> It can actually be both.  If a guest has several virtual cores that all
>> access the same memory, it can be highly beneficial all around if all
>> the memory/cpus for that guest come from a single NUMA node on the
>> host.  That way you reduce the cross-NUMA-node memory traffic, increasing
>> overall efficiency.  Alternatively, if a guest has several cores that use
>> lots of memory bandwidth but don't access the same data, you might want
>> to ensure that the cores are on different NUMA nodes to equalize
>> utilization of the different NUMA nodes.
> I think Tuomas is talking about exclusively dedicating pCPUs to vCPUs; in that situation, the pCPU can't be shared by any other vCPU anymore. If that vCPU only consumes, say, 50% of the pCPU, it is sure to waste overall capacity.
>
> As to cross-NUMA-node access, I'd let the hypervisor, instead of the cloud OS, reduce the cross-NUMA access as much as possible.
>
> I'm not against such usage; it is sure to be used in data center virtualization. I just question whether it belongs in the cloud.
>
>
>> Similarly, once you start talking about SR-IOV networking I/O
>> passthrough into a guest (for SDN/NFV stuff), for optimum efficiency it
>> is beneficial to be able to steer interrupts on the physical host to the
>> specific cpus on which the guest will be running.  This implies some
>> form of pinning.
> Still, I think the hypervisor should achieve this, instead of OpenStack.
>
>
>>> I think CPU pinning is common in data center virtualization, but I'm not
>>> sure it's in scope for cloud, which provides computing power, not
>>> hardware resources.
>>>
>>> And I think part of your purpose can be achieved through
>>> https://wiki.openstack.org/wiki/CPUEntitlement and
>>> https://wiki.openstack.org/wiki/InstanceResourceQuota . In particular, I
>>> hope a well-implemented hypervisor will avoid needless vCPU migration
>>> if the vCPU is very busy and requires most of the pCPU's computing
>>> capability (I recall Xen used to have a scheduler issue that caused
>>> frequent vCPU migration long ago).
>> I'm not sure the above stuff can be done with those.  It's not just
>> about quantity of resources, but also about which specific resources
>> will be used so that other things can be done based on that knowledge.
> With the above, QoS and compute capability for the guest are ensured, I think.
>
> --jyh
>   
>> Chris
>>
>
Hi,

Thank you for your comments. I am talking about guest performance. We
are using OpenStack for managing Telco cloud applications where guest
performance optimization is needed.
The example where pCPUs are dedicated exclusively to vCPUs is not a
problem. It can be implemented with a scheduling filter: if you need
that feature you enable the filter, and without it pCPUs are shared in
the normal way. A rough sketch of such a filter is below.
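
The sketch rejects hosts whose requested pCPUs are already dedicated to
another instance. It is only an illustration, not an existing Nova
filter: the filter name, the "pinned_cpus" extra spec and the
"dedicated_cpus" host attribute are made-up names that would have to be
wired into flavor handling and host state reporting separately.

# Rough sketch of an exclusive-pCPU scheduler filter (hypothetical names).
from nova.scheduler import filters


class ExclusivePcpuFilter(filters.BaseHostFilter):
    """Pass only hosts that can dedicate the requested pCPUs exclusively."""

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type') or {}
        extra_specs = instance_type.get('extra_specs', {})

        requested = extra_specs.get('pinned_cpus')
        if not requested:
            # Flavor does not ask for dedicated pCPUs; CPUs are shared
            # in the normal way and the host passes.
            return True

        wanted = set(int(cpu) for cpu in requested.split(','))
        # Hypothetical attribute tracking pCPUs already dedicated on the host.
        in_use = set(getattr(host_state, 'dedicated_cpus', []))
        return wanted.isdisjoint(in_use)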

As Chris said, core pinning based on e.g. the NUMA topology is
beneficial, and I think it is beneficial with or without exclusive
dedication of pCPUs. At the hypervisor level this boils down to pinning
a guest's vCPUs to selected pCPUs, as in the libvirt sketch below.
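
Just to illustrate, here is a small libvirt-python sketch that pins all
vCPUs of a running guest to pCPUs 0-3, e.g. the cores of one NUMA node.
The domain name and the CPU numbers are placeholders for the example,
not something Nova does for you today.

# Pin every vCPU of a running domain to a chosen set of host pCPUs.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')   # placeholder domain name

host_cpus = conn.getInfo()[2]        # number of pCPUs on the host
target_pcpus = {0, 1, 2, 3}          # e.g. the cores of NUMA node 0

# Boolean map over all host pCPUs: True where the vCPU is allowed to run.
cpumap = tuple(i in target_pcpus for i in range(host_cpus))
for vcpu in range(dom.maxVcpus()):   # domain must be running
    dom.pinVcpu(vcpu, cpumap)

conn.close()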

Regards,
Tuomas


