Hi,

Sometimes disabling offloading on the host helps:

```
ethtool -K <interface> tso off
```

Cheers

On Wed, 2025-07-30 at 12:57 +0700, Nguyễn Hữu Khôi wrote:
Hello. I have a similar case: communication between VMs over the provider network works fine, but not over the self-service network. Using jumbo frames improves performance significantly.
Nguyen Huu Khoi
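(For reference, jumbo frames are typically enabled by raising the network MTU. The names and values below are only examples; the usable tenant MTU depends on your physical fabric MTU and the overlay encapsulation overhead.)

```shell
# On the controllers, advertise the physical fabric MTU in neutron.conf (example value):
#   [DEFAULT]
#   global_physnet_mtu = 9000

# Raise the MTU of an existing self-service network (example network name and value;
# OVN's Geneve encapsulation consumes roughly 58 bytes per frame, so the tenant
# MTU must stay below the fabric MTU by at least that much)
openstack network set --mtu 8942 selfservice-net

# Instances need to renew their DHCP lease (or be rebooted) to pick up the new MTU
```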
On Wed, Jul 30, 2025 at 10:36 AM engineer2024 <engineerlinux2024@gmail.com> wrote:
Did this work better with the Linux bridge (LXB) plugin earlier?
On Wed, 30 Jul 2025, 09:01 Jeff Yang, <yjf1970231893@gmail.com> wrote:
Hi all,
I'm running OpenStack Zed with OVN as the Neutron backend. I've encountered a severe VM network performance degradation issue and would appreciate any insights from the community.
🔹 **Environment**

- OpenStack version: Zed
- Neutron backend: OVN
- Two VMs (each 4 vCPUs, 8 GB RAM) launched on two different compute nodes
- Multi-queue is enabled on the VM vNICs
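(In case it helps, multi-queue can be confirmed from inside the guest like this; eth0 is an example interface name:)

```shell
# Inside the VM: show how many queues the vNIC supports vs. how many are active
ethtool -l eth0

# If fewer queues are active than vCPUs, enable them (example: 4 queues for 4 vCPUs)
ethtool -L eth0 combined 4
```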
🔹 **Issue**

The network performance inside the VMs is significantly worse than on the host nodes, with high packet retransmission counts observed.
🔹 **Test results**
**Host-to-Host (VTEP IPs)**
```
$ iperf3 -c 192.168.152.152
Connecting to host 192.168.152.152, port 5201
[  5] local 192.168.152.153 port 45352 connected to 192.168.152.152 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.38 GBytes  11.8 Gbits/sec    0   3.10 MBytes
[  5]   1.00-2.00   sec  1.37 GBytes  11.8 Gbits/sec    0   3.10 MBytes
[  5]   2.00-3.00   sec  1.42 GBytes  12.2 Gbits/sec    0   3.10 MBytes
[  5]   3.00-4.00   sec  1.39 GBytes  11.9 Gbits/sec    0   3.10 MBytes
[  5]   4.00-5.00   sec  1.38 GBytes  11.8 Gbits/sec    0   3.10 MBytes
[  5]   5.00-6.00   sec  1.43 GBytes  12.3 Gbits/sec    0   3.10 MBytes
[  5]   6.00-7.00   sec  1.41 GBytes  12.1 Gbits/sec    0   3.10 MBytes
[  5]   7.00-8.00   sec  1.41 GBytes  12.1 Gbits/sec    0   3.10 MBytes
[  5]   8.00-9.00   sec  1.41 GBytes  12.1 Gbits/sec    0   3.10 MBytes
[  5]   9.00-10.00  sec  1.42 GBytes  12.2 Gbits/sec    0   3.10 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  14.0 GBytes  12.0 Gbits/sec    0            sender
[  5]   0.00-10.04  sec  14.0 GBytes  12.0 Gbits/sec                 receiver

iperf Done.
```
**VM-to-VM (through overlay network)**
```
$ iperf3 -c 10.0.6.10
Connecting to host 10.0.6.10, port 5201
[  5] local 10.0.6.37 port 56710 connected to 10.0.6.10 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   499 MBytes  4.19 Gbits/sec  263    463 KBytes
[  5]   1.00-2.00   sec   483 MBytes  4.05 Gbits/sec  467    367 KBytes
[  5]   2.00-3.00   sec   482 MBytes  4.05 Gbits/sec  491    386 KBytes
[  5]   3.00-4.00   sec   483 MBytes  4.05 Gbits/sec  661    381 KBytes
[  5]   4.00-5.00   sec   472 MBytes  3.95 Gbits/sec  430    391 KBytes
[  5]   5.00-6.00   sec   480 MBytes  4.03 Gbits/sec  474    350 KBytes
[  5]   6.00-7.00   sec   510 MBytes  4.28 Gbits/sec  567    474 KBytes
[  5]   7.00-8.00   sec   521 MBytes  4.37 Gbits/sec  565    387 KBytes
[  5]   8.00-9.00   sec   509 MBytes  4.27 Gbits/sec  632    483 KBytes
[  5]   9.00-10.00  sec   514 MBytes  4.30 Gbits/sec  555    495 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  4.84 GBytes  4.15 Gbits/sec  5105          sender
[  5]   0.00-10.05  sec  4.84 GBytes  4.14 Gbits/sec                receiver

iperf Done.
```
As you can see, there's a sharp drop in performance inside the VMs (from ~12 Gbps down to ~4 Gbps), along with over 5,000 retransmissions in just 10 seconds.
Has anyone encountered a similar issue with OVN and VM networking performance? What could be the potential causes?
Any suggestions or debugging tips are welcome.
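(A sketch of the first checks I could run, in case anyone wants to point me in a particular direction; interface names are examples, and the peer IP is the VM from the test above:)

```shell
# Inside the VM: check the vNIC MTU (OVN self-service networks often default
# to a reduced MTU such as 1442 to account for Geneve overhead)
ip link show eth0

# Probe the path MTU to the other VM with fragmentation disallowed
ping -M do -s 1400 -c 3 10.0.6.10

# On the compute node: inspect offload settings on the VM's tap device (example tap name)
ethtool -k tap01234567-89 | grep -E 'tcp-segmentation-offload|generic-receive-offload|tx-checksumming'
```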
Thanks in advance!
Best regards,