DPDK Poor performance

Satish Patel satish.txt at gmail.com
Wed Sep 29 16:33:14 UTC 2021


Folks,

I have deployed DPDK on one of my compute nodes to replace my SR-IOV
deployment, which doesn't support bonding.

Here is how I am running a performance benchmark: I spun up a VM with
8 vCPUs, 8 GB of memory, and a single virtual NIC, then installed the
nuttcp tool and started nuttcp -S (server mode). From a physical
server outside my lab, I then used the following command to blast some
traffic at it:

nuttcp -T 30 -i -u -w4m -R 2G 10.69.7.130
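
(Flags: -u selects UDP, -R 2G caps the offered rate at 2 Gbps, -w4m
sets a 4 MB window, -T 30 runs the test for 30 seconds, and -i enables
interval reports.)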

The rate is fine up to about 120 kpps, but beyond that I see packet
drops in nuttcp.

Here are the full details of my deployment and the load-test results -
https://paste.opendev.org/show/809675/
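
In case it helps pinpoint where the drops happen, I can also pull the
PMD and port counters from OVS-DPDK on the compute node, e.g. (dpdk0
below is a placeholder for my actual DPDK port name):

# per-PMD thread packet/cycle statistics
ovs-appctl dpif-netdev/pmd-stats-show

# rx/tx and drop counters on the DPDK port
ovs-vsctl get Interface dpdk0 statistics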

A small point of confusion related to hugepages: I am seeing the
following in /dev/hugepages/.

I believe rtemap_0 is created by DPDK:

root@ovn-lab-comp-dpdk-1:~# ls -l /dev/hugepages/rtemap_0
-rw------- 1 root root 1073741824 Sep 29 02:28 /dev/hugepages/rtemap_0
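
The host-level hugepage accounting can also be cross-checked in
/proc/meminfo (the 1073741824-byte rtemap_0 suggests 1 GiB pages here):

grep -i huge /proc/meminfo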

Why is the qemu instance directory below empty? Does that mean the
instance is not using hugepages for DPDK performance?

root@ovn-lab-comp-dpdk-1:~# ls -l /dev/hugepages/libvirt/qemu/1-instance-00000045/
total 0

I have hw:mem_page_size='large' set in the flavor, so the instance
should use hugepages. Am I missing something?
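
As a sanity check, my understanding is that if the flavor property took
effect, the libvirt domain XML should contain a <memoryBacking> section
with <hugepages> (domain name taken from the directory above):

virsh dumpxml instance-00000045 | grep -A4 memoryBacking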


