[openstack][neutron][openvswitch] Openvswitch Packet loss when high throughput (pps)

Satish Patel satish.txt at gmail.com
Wed Sep 6 15:43:42 UTC 2023


Damn! We have noticed the same issue around 40k to 55k PPS. Trust me,
nothing is wrong with your config. This is just a limitation of the software
stack and the kernel itself.
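
If you want to see where the drops happen, one quick check on the compute
node (assuming the kernel OVS datapath; the tap interface name below is just
a placeholder) is to watch the softnet and datapath counters while the test
is running:

  # per-CPU backlog drops: the 2nd column counts packets dropped because
  # the input queue (net.core.netdev_max_backlog) was full
  cat /proc/net/softnet_stat

  # RX/TX drop counters on the VM's tap interface (name is an example)
  ip -s link show tap12345678-ab

  # kernel datapath stats: a growing "lost" counter means flow-miss
  # upcalls were dropped before ovs-vswitchd could process them
  ovs-appctl dpctl/show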

On Wed, Sep 6, 2023 at 9:21 AM Ha Noi <hanoi952022 at gmail.com> wrote:

> Hi Satish,
>
> Actually, our customer hits this issue when the tx/rx rate is only just above 40k pps.
> So what is the PPS threshold for OVS?
>
>
> Thanks and regards
>
> On Wed, 6 Sep 2023 at 20:19 Satish Patel <satish.txt at gmail.com> wrote:
>
>> Hi,
>>
>> This is normal because OVS and LinuxBridge wire up VMs using TAP interfaces,
>> which live in kernel space; that generates a high interrupt rate and keeps the
>> kernel busy handling packets. Standard OVS/LinuxBridge are not meant for
>> high PPS workloads.
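>>
>> One thing that sometimes helps before leaving the kernel datapath is
>> virtio-net multiqueue, so the interrupt/softirq load is spread over the
>> guest's vCPUs instead of landing on a single queue (image name, guest NIC
>> name and queue count below are only examples):
>>
>>   openstack image set --property hw_vif_multiqueue_enabled=true my-image
>>   # then, inside a guest booted from that image:
>>   ethtool -L eth0 combined 4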
>>
>> If you need to handle higher PPS, look at a DPDK or SR-IOV deployment.
>> (We run everything on SR-IOV because of our high PPS requirements.)
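>>
>> For reference, the SR-IOV path is typically wired up by creating the Neutron
>> port with vnic_type=direct and attaching it to the instance (network, image
>> and flavor names here are placeholders, and the computes still need the
>> PCI passthrough / sriov-nic-agent configuration):
>>
>>   openstack port create --network private --vnic-type direct sriov-port-1
>>   openstack server create --flavor m1.large --image my-image \
>>     --nic port-id=sriov-port-1 vm-sriov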
>>
>> On Tue, Sep 5, 2023 at 11:11 AM Ha Noi <hanoi952022 at gmail.com> wrote:
>>
>>> Hi everyone,
>>>
>>> I'm using OpenStack Train with Open vSwitch as the ML2 driver and GRE as the
>>> tunnel type. I tested network performance between two VMs and see packet
>>> loss as described below.
>>>
>>> VM1: IP: 10.20.1.206
>>>
>>> VM2: IP: 10.20.1.154
>>>
>>> VM3: IP: 10.20.1.72
>>>
>>>
>>> I used iperf3 to test performance between VM1 and VM2.
>>>
>>> An iperf3 client and server run on both VMs:
>>>
>>> On VM2: iperf3 -t 10000 -b 130M -l 442 -P 6 -u -c 10.20.1.206
>>>
>>> On VM1: iperf3 -t 10000 -b 130M -l 442 -P 6 -u -c 10.20.1.154
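>>>
>>> (For reference: assuming the -b target applies per UDP stream, each stream is
>>> roughly 130 Mbit/s / (442 bytes * 8) ≈ 36.8k packets/s, so the 6 parallel
>>> streams push on the order of 220k pps in each direction.)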
>>>
>>>
>>> Pinging VM1 from VM3 during the test, packets are lost and the latency is
>>> quite high.
>>>
>>>
>>> ping -i 0.1 10.20.1.206
>>>
>>> PING 10.20.1.206 (10.20.1.206) 56(84) bytes of data.
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=1 ttl=64 time=7.70 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=2 ttl=64 time=6.90 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=3 ttl=64 time=7.71 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=4 ttl=64 time=7.98 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=6 ttl=64 time=8.58 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=7 ttl=64 time=8.34 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=8 ttl=64 time=8.09 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=10 ttl=64 time=4.57 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=11 ttl=64 time=8.74 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=12 ttl=64 time=9.37 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=14 ttl=64 time=9.59 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=15 ttl=64 time=7.97 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=16 ttl=64 time=8.72 ms
>>>
>>> 64 bytes from 10.20.1.206: icmp_seq=17 ttl=64 time=9.23 ms
>>>
>>> ^C
>>>
>>> --- 10.20.1.206 ping statistics ---
>>>
>>> 34 packets transmitted, 28 received, 17.6471% packet loss, time 3328ms
>>>
>>> rtt min/avg/max/mdev = 1.396/6.266/9.590/2.805 ms
>>>
>>>
>>>
>>> Does anyone get this issue?
>>>
>>> Please help me. Thanks
>>>
>>

