[Openstack] CPU pinning question
Satish Patel
satish.txt at gmail.com
Tue Dec 15 18:23:37 UTC 2015
If i enable "NUMATopologyFilter", Does JUNO support pinning?
FYI, i am following this link:
http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
On Tue, Dec 15, 2015 at 1:11 PM, Satish Patel <satish.txt at gmail.com> wrote:
> @chris,
>
> I already have "hw:cpu_policy": "dedicated"
>
> [root at control ~(keystone_admin)]# nova flavor-show 8
> +----------------------------+---------------------------------------------------------------------------------+
> | Property | Value
> |
> +----------------------------+---------------------------------------------------------------------------------+
> | OS-FLV-DISABLED:disabled | False
> |
> | OS-FLV-EXT-DATA:ephemeral | 0
> |
> | disk | 20
> |
> | extra_specs |
> {"aggregate_instance_extra_specs:pinned": "true", "hw:cpu_policy":
> "dedicated"} |
> | id | 8
> |
> | name | pinned.medium
> |
> | os-flavor-access:is_public | True
> |
> | ram | 2048
> |
> | rxtx_factor | 1.0
> |
> | swap |
> |
> | vcpus | 2
> |
> +----------------------------+---------------------------------------------------------------------------------+
>
> On Tue, Dec 15, 2015 at 11:11 AM, Chris Friesen
> <chris.friesen at windriver.com> wrote:
>> Actually no, I don't think that's right. When pinning is enabled each vCPU
>> will be affined to a single host CPU. What is showing below is what I would
>> expect if the instance was using non-dedicated CPUs.
>>
>> To the original poster, you should be using
>>
>> 'hw:cpu_policy': 'dedicated'
>>
>> in your flavor extra-specs to enable CPU pinning. And you should enable the
>> NUMATopologyFilter scheduler filter.
>>
>> Chris
>>
>>
>>
>>
>> On 12/15/2015 09:23 AM, Arne Wiebalck wrote:
>>>
>>> The pinning seems to have done what you asked for, but you probably
>>> want to confine your vCPUs to NUMA nodes.
>>>
>>> Cheers,
>>> Arne
>>>
>>>
>>>> On 15 Dec 2015, at 16:12, Satish Patel <satish.txt at gmail.com> wrote:
>>>>
>>>> Sorry forgot to reply all :)
>>>>
>>>> This is what i am getting
>>>>
>>>> [root at compute-1 ~]# virsh vcpupin instance-00000043
>>>> VCPU: CPU Affinity
>>>> ----------------------------------
>>>> 0: 2-3,6-7
>>>> 1: 2-3,6-7
>>>>
>>>>
>>>> Following numa info
>>>>
>>>> [root at compute-1 ~]# numactl --hardware
>>>> available: 2 nodes (0-1)
>>>> node 0 cpus: 0 3 5 6
>>>> node 0 size: 2047 MB
>>>> node 0 free: 270 MB
>>>> node 1 cpus: 1 2 4 7
>>>> node 1 size: 2038 MB
>>>> node 1 free: 329 MB
>>>> node distances:
>>>> node 0 1
>>>> 0: 10 20
>>>> 1: 20 10
>>>>
>>>> On Tue, Dec 15, 2015 at 8:36 AM, Arne Wiebalck <Arne.Wiebalck at cern.ch>
>>>> wrote:
>>>>>
>>>>> The pinning we set up goes indeed into the <cputune> block:
>>>>>
>>>>> —>
>>>>> <vcpu placement='static'>32</vcpu>
>>>>> <cputune>
>>>>> <shares>32768</shares>
>>>>> <vcpupin vcpu='0' cpuset='0-7,16-23'/>
>>>>> <vcpupin vcpu='1' cpuset='0-7,16-23'/>
>>>>> …
>>>>> <—
>>>>>
>>>>> What does “virsh vcpupin <domain>” give for your instance?
>>>>>
>>>>> Cheers,
>>>>> Arne
>>>>>
>>>>>
>>>>>> On 15 Dec 2015, at 13:02, Satish Patel <satish.txt at gmail.com> wrote:
>>>>>>
>>>>>> I am running JUNO version with qemu-kvm-ev-2.1.2-23.el7_1.9.1.x86_64
>>>>>> on CentOS7.1
>>>>>>
>>>>>> I am trying to configure CPU pinning because my application is cpu
>>>>>> hungry. this is what i did.
>>>>>>
>>>>>> in /etc/nova/nova.conf
>>>>>>
>>>>>> vcpu_pin_set=2,3,6,7
>>>>>>
>>>>>>
>>>>>> I have created aggregated host with pinning=true and created flavor
>>>>>> with pinning, after that when i start VM on Host following this i can
>>>>>> see in guest
>>>>>>
>>>>>> ...
>>>>>> ...
>>>>>> <vcpu placement='static' cpuset='2-3,6-7'>2</vcpu>
>>>>>> ...
>>>>>> ...
>>>>>>
>>>>>> But i am not seeing any <cputune> info.
>>>>>>
>>>>>> Just want to make sure does my pinning working correctly or something
>>>>>> is wrong. How do i verify my pinning config is correct?
>>>>>>
>>>>>> _______________________________________________
>>>>>> Mailing list:
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>>> Post to : openstack at lists.openstack.org
>>>>>> Unsubscribe :
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>>
>>>>>
>>>>> --
>>>>> Arne Wiebalck
>>>>> CERN IT
>>>>>
>>>
>>> --
>>> Arne Wiebalck
>>> CERN IT
>>>
>>> _______________________________________________
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack at lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
More information about the Openstack
mailing list