OVS-DPDK poor performance with Intel 82599

Satish Patel satish.txt@gmail.com
Thu Nov 26 21:56:54 UTC 2020


Folks,

I am experimenting with OVS-DPDK on my OpenStack deployment using an
Intel 82599 NIC and seeing poor performance. I may be wrong about my
numbers, so I want to hear what the community thinks of these results.

Compute node hardware:

CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
Memory: 64G
NIC: Intel 82599 (dual 10G port)

[root@compute-lxb-3 ~]# ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.13.2
DPDK 19.11.3

DPDK VM (DUT):
8 vCPU / 8 GB memory

I have configured my compute node with every best practice I could
find on the internet to get more performance out of it:

1. Used isolcpus to isolate CPUs
2. 4 dedicated cores for PMD threads (see the command sketch after this list)
3. echo isolated_cores=1,9,25,33 >> /etc/tuned/cpu-partitioning-variables.conf
4. Huge pages
5. CPU pinning for the VM
6. Increased rx queues ( ovs-vsctl set interface dpdk-1 options:n_rxq=4 )
7. VM virtio ring size = 1024
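
For reference, a minimal sketch of the OVS commands behind items 2, 4,
and 6. The CPU mask encodes my PMD cores 1, 9, 25, 33; the socket-mem
split is an assumption, not my exact value:

# Pin 4 PMD threads to cores 1, 9, 25, 33 (bits 1, 9, 25, 33 set)
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x202000202
# Reserve hugepage-backed memory for DPDK on NUMA socket 0 (assumed 4G)
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=4096,0
# Spread the physical port across 4 rx queues
ovs-vsctl set interface dpdk-1 options:n_rxq=4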

After doing all of the above, I am getting the following result with
the TRex packet generator using a 64B UDP stream (Total-PPS: 391.93
Kpps). Do you think that is an acceptable result, or should it be
higher on this NIC model?

Folks on the internet say this hardware should do a million packets
per second, so I am not sure how those people got there, or whether I
am missing something in my load-test profile.

Note: I am using 8 vCPUs on the VM. Do you think adding more cores
will help, or should I add more PMD threads? (One way I could check is
sketched right below.)
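
A minimal sketch of how I would answer that myself; the interpretation
rule of thumb at the end is my own assumption:

# Clear counters, let traffic run for a while, then read them back
ovs-appctl dpif-netdev/pmd-stats-clear
ovs-appctl dpif-netdev/pmd-stats-show
# If "processing cycles" is close to 100% of total cycles, the PMD
# cores are saturated and more PMDs might help; if they sit mostly
# idle, the bottleneck is probably elsewhere (e.g. inside the VM).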

Cpu Utilization : 2.2  %  1.8 Gb/core
 Platform_factor : 1.0
 Total-Tx        :     200.67 Mbps
 Total-Rx        :     200.67 Mbps
 Total-PPS       :     391.93 Kpps
 Total-CPS       :     391.89 Kcps

 Expected-PPS    :     700.00 Kpps
 Expected-CPS    :     700.00 Kcps
 Expected-BPS    :     358.40 Mbps
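
For context, the numbers above came from a stateful TRex run along
these lines; the profile path, multiplier, and core count here are
placeholders, not my literal command:

# 64B UDP profile, multiplier/duration chosen to approach the
# Expected-PPS of 700 Kpps shown above
./t-rex-64 -f cap2/udp_64B.yaml -m 35 -d 60 -c 4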


Here is my full configuration:

grub.conf:
GRUB_CMDLINE_LINUX="vmalloc=384M crashkernel=auto
rd.lvm.lv=rootvg01/lv01 console=ttyS1,118200 rhgb quiet intel_iommu=on
iommu=pt spectre_v2=off nopti pti=off nospec_store_bypass_disable
spec_store_bypass_disable=off l1tf=off default_hugepagesz=1GB
hugepagesz=1G hugepages=60 transparent_hugepage=never selinux=0
isolcpus=2,3,4,5,6,7,10,11,12,13,14,15,26,27,28,29,30,31,34,35,36,37,38,39"
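
After rebooting with that command line, I verify it actually took
effect with standard commands (nothing here is setup-specific):

# Kernel parameters as seen at boot
cat /proc/cmdline
# Confirm the 1G hugepages were actually allocated
grep -i hugepages /proc/meminfo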


[root@compute-lxb-3 ~]# ovs-appctl dpif/show
netdev at ovs-netdev: hit:605860720 missed:2129
  br-int:
    br-int 65534/3: (tap)
    int-br-vlan 1/none: (patch: peer=phy-br-vlan)
    patch-tun 2/none: (patch: peer=patch-int)
    vhu1d64ea7d-d9 5/6: (dpdkvhostuserclient: configured_rx_queues=8, configured_tx_queues=8, mtu=1500, requested_rx_queues=8, requested_tx_queues=8)
    vhu9c32faf6-ac 6/7: (dpdkvhostuserclient: configured_rx_queues=8, configured_tx_queues=8, mtu=1500, requested_rx_queues=8, requested_tx_queues=8)
  br-tun:
    br-tun 65534/4: (tap)
    patch-int 1/none: (patch: peer=patch-tun)
    vxlan-0a410071 2/5: (vxlan: egress_pkt_mark=0, key=flow, local_ip=10.65.0.114, remote_ip=10.65.0.113)
  br-vlan:
    br-vlan 65534/1: (tap)
    dpdk-1 2/2: (dpdk: configured_rx_queues=4, configured_rxq_descriptors=2048, configured_tx_queues=5, configured_txq_descriptors=2048, lsc_interrupt_mode=false, mtu=1500, requested_rx_queues=4, requested_rxq_descriptors=2048, requested_tx_queues=5, requested_txq_descriptors=2048, rx_csum_offload=true, tx_tso_offload=false)
    phy-br-vlan 1/none: (patch: peer=int-br-vlan)


[root@compute-lxb-3 ~]# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 1:
  isolated : false
  port: dpdk-1            queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhu1d64ea7d-d9    queue-id:  3 (enabled)   pmd usage:  0 %
  port: vhu1d64ea7d-d9    queue-id:  4 (enabled)   pmd usage:  0 %
  port: vhu9c32faf6-ac    queue-id:  3 (enabled)   pmd usage:  0 %
  port: vhu9c32faf6-ac    queue-id:  4 (enabled)   pmd usage:  0 %
pmd thread numa_id 0 core_id 9:
  isolated : false
  port: dpdk-1            queue-id:  1 (enabled)   pmd usage:  0 %
  port: vhu1d64ea7d-d9    queue-id:  2 (enabled)   pmd usage:  0 %
  port: vhu1d64ea7d-d9    queue-id:  5 (enabled)   pmd usage:  0 %
  port: vhu9c32faf6-ac    queue-id:  2 (enabled)   pmd usage:  0 %
  port: vhu9c32faf6-ac    queue-id:  5 (enabled)   pmd usage:  0 %
pmd thread numa_id 0 core_id 25:
  isolated : false
  port: dpdk-1            queue-id:  3 (enabled)   pmd usage:  0 %
  port: vhu1d64ea7d-d9    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhu1d64ea7d-d9    queue-id:  7 (enabled)   pmd usage:  0 %
  port: vhu9c32faf6-ac    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhu9c32faf6-ac    queue-id:  7 (enabled)   pmd usage:  0 %
pmd thread numa_id 0 core_id 33:
  isolated : false
  port: dpdk-1            queue-id:  2 (enabled)   pmd usage:  0 %
  port: vhu1d64ea7d-d9    queue-id:  1 (enabled)   pmd usage:  0 %
  port: vhu1d64ea7d-d9    queue-id:  6 (enabled)   pmd usage:  0 %
  port: vhu9c32faf6-ac    queue-id:  1 (enabled)   pmd usage:  0 %
  port: vhu9c32faf6-ac    queue-id:  6 (enabled)   pmd usage:  0 %
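
Since every PMD shows isolated:false above, one experiment I am
considering (the queue-to-core mapping below is just an assumption
matching my PMD cores) is to pin the physical port's rx queues
explicitly:

# Pin dpdk-1 rx queues 0-3 to PMD cores 1, 9, 25, 33; OVS then marks
# those cores isolated so other ports' queues stay off them
ovs-vsctl set interface dpdk-1 other_config:pmd-rxq-affinity="0:1,1:9,2:25,3:33"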


