[Openstack] [Fuel] nova cpu pinning and dpdk

Kostyantyn Volenbovskyi volenbovsky at yandex.ru
Mon Feb 13 20:41:04 UTC 2017


Hi, 

1. As I mentioned, not allocating anything to the host OS (/hypervisor) will almost certainly not work.
That is, I think you can't (or shouldn't) have all physical CPUs in isolcpus in Linux, unless you are doing something special
afterwards to pin host processes to those CPUs.
 
So I would say: give the hypervisor at least 2 CPUs (= 1 core; judging by your number of CPUs I assume hyperthreading is on), which leaves you with 28 CPUs for Nova.
Check the Mirantis capacity guidelines on that (and possibly those of Red Hat and other vendors).
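To make the arithmetic above concrete, here is a small sketch of the CPU budget being discussed: 32 logical CPUs, one core (both hyperthread siblings) reserved for the host OS, one core for DPDK, and the rest handed to Nova. The specific CPU ids and the sibling layout are assumptions for illustration only; the helper just collapses a CPU list into the range syntax that nova.conf's vcpu_pin_set and the kernel's isolcpus parameter use.

```python
def to_range_string(cpus):
    """Collapse a sorted list of CPU ids into range syntax, e.g. '2-15,18-31'."""
    cpus = sorted(cpus)
    ranges, start = [], cpus[0]
    for prev, cur in zip(cpus, cpus[1:] + [None]):
        if cur != prev + 1:
            ranges.append(f"{start}-{prev}" if start != prev else f"{start}")
            start = cur
    return ",".join(ranges)

total_cpus = list(range(32))  # 32 logical CPUs (16 cores with HT on)
host_cpus = [0, 16]           # 1 core for the host OS (sibling layout is an assumption)
dpdk_cpus = [1, 17]           # 1 core for DPDK packet processing
nova_cpus = [c for c in total_cpus if c not in host_cpus + dpdk_cpus]

print(len(nova_cpus))              # 28 CPUs left for Nova
print(to_range_string(nova_cpus))  # '2-15,18-31'
```

The resulting string is what would end up as the Nova CPU list (and, per point 2 below, in isolcpus).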

2. There is one thing you should be aware of: as far as I can see, those Nova CPUs will end up in isolcpus.
Not using pinning afterwards will cause the misbehavior described in [1].
So, all in all, the current DPDK implementation in MOS 9.x seems to require Nova CPU pinning and its subsequent use for VMs (which, in my opinion, it shouldn't).
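Given that, VMs on these hosts would need a flavor with the dedicated CPU policy so each guest vCPU actually gets pinned. A minimal sketch with the standard flavor extra specs (the flavor name and sizes are placeholders, not anything from this thread):

```shell
# Hypothetical flavor; name, vCPU/RAM/disk sizes are placeholders
openstack flavor create --vcpus 4 --ram 4096 --disk 20 m1.dpdk.pinned

# Pin each guest vCPU to a dedicated host CPU
openstack flavor set m1.dpdk.pinned --property hw:cpu_policy=dedicated

# OVS-DPDK (vhost-user) ports also need hugepage-backed guest memory
openstack flavor set m1.dpdk.pinned --property hw:mem_page_size=large
```

Without hw:cpu_policy=dedicated, guests float across the isolcpus set and you hit the misbehavior from [1].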


BR, 
Konstantin
[1]  https://bugzilla.redhat.com/show_bug.cgi?id=1393576#c3




> On Feb 12, 2017, at 6:50 PM, - - <super at sxyninja.com> wrote:
> 
> Thanks for the info.
> 
> For these hosts specifically, NUMA shows a total of 32 cores.  Given the way you explained it, I would map something like 2 CPUs to DPDK (for a 3Gb interface, although I could probably use 1 CPU) and allocate all the remaining 30 CPUs for Nova usage, correct?
> 
> Basically, it seems the Nova pinning value is the total number of NUMA cores minus what was assigned to DPDK, like you said, just taking the DPDK CPUs out of the NUMA pool.
> 
> I think I get what the intention is for this now, and how to configure it.
> 
> Thanks,
> 
> DG 
> 
>> On February 12, 2017 at 6:27 AM Kostyantyn Volenbovskyi <volenbovsky at yandex.ru> wrote:
>> 
>> Hi, 
>>>  When a VM is mapped to a flavor using DPDK, do all the vCPUs the VM uses have to be 'pinned' to use DPDK, and thus this value means the total number of vCPUs available to VMs needing DPDK?
>>> 
>> I don't think that Nova CPU pinning should be mandated once you choose to use DPDK on that compute host.
>> However, the page
>> https://specs.openstack.org/openstack/fuel-specs/specs/9.0/support-numa-cpu-pinning.html
>> really indicates that it is mandated.
>> 
>> But I think the idea could be that you are forced to specify the total number explicitly, because the guideline configuration
>> is to reserve some CPUs for the host OS 'separately'?
>>> When a VM is mapped to a flavor using DPDK, do all the vCPUs the VM uses have to be 'pinned' to use DPDK, and thus this value means the total number of vCPUs available to VMs needing DPDK?
>>> 
>> I would say that it is 'all host CPUs that should be available to _all_ VMs'. First of all, I think a configuration where you have both DPDK and non-DPDK physical adapters is uncommon
>> (hmmm, vNIC plugging could be trickier…). Second of all, from Nova's perspective this value doesn't have such a direct relation to DPDK acceleration.
>> So set Nova CPUs to 'all minus n' host CPUs, where n is the number of CPUs you have for DPDK, and you probably must also decide on reserving something for the host OS.
>> Otherwise all CPUs will end up in isolcpus and most likely the host Linux won't boot or will misbehave.
>> 
>> But being forced to indicate Nova CPUs doesn't mean that you are forced to use actual 1 guest CPU to 1 host CPU pinning.
>> If you don't touch the flavor keys defining pinning, then I expect that Nova will just put a range like
>> <vcpu placement='static' cpuset='3-5,9-11'>1</vcpu> for all your VMs in Libvirt.
>> And then from your perspective the behaviour won't change significantly.
>> 
>> 
>> BR, 
>> Konstantin
>> 
>> 
>> 
>> 
>>> On Feb 12, 2017, at 12:39 AM, - - <super at sxyninja.com> wrote:
>>> 
>>> Hello,
>>> 
>>> I am adding some new hosts to my MOS 9.2 Fuel deployment, and I want to implement DPDK on the 2 new hosts.  I have Fuel set up with the experimental options, so I can see the settings when adding the new hosts.
>>> 
>>> My question relates to the node attributes section.  The documents showing how to set up DPDK state that you should set up Nova CPU pinning along with the DPDK CPU pinning.
>>> 
>>> I get that the DPDK CPU pinning value is the number of CPUs allocated for processing packets, based on NIC speed.
>>> 
>>> I don't see any information on exactly what the value of Nova CPU pinning should be when doing only DPDK (no SR-IOV).   When a VM is mapped to a flavor using DPDK, do all the vCPUs the VM uses have to be 'pinned' to use DPDK, and thus this value means the total number of vCPUs available to VMs needing DPDK?
>>> 
>>> I just want to make sure I understand these values when setting this up, so I do it correctly.
>>> 
>>> Thanks,
>>> 
>>> DG
>>> 
>>> _______________________________________________
>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to     : openstack at lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> 




