[neutron] Why is network performance extremely bad and linearly related to the number of VMs?

Yi Yang (杨燚) - Cloud Service Group  yangyi01 at inspur.com
Fri Feb 21 00:38:36 UTC 2020


Hi, All

Has anybody noticed that network performance between VMs is extremely bad,
and essentially inversely proportional to the number of VMs on the same
compute node? In my case, if I launch one VM per compute node and run iperf3
TCP and UDP tests, performance is good: about 4 Gbps (TCP) and 1.7 Gbps
(UDP), and with 16-byte UDP packets it can reach 180000 pps (packets per
second). But if I launch two VMs per compute node (note: they are in the
same subnet) and run only the pps test case, the rate drops to about 90000
pps; with three VMs per compute node it is about 50000 pps. Rough versions
of the test commands are sketched below.

While looking for the root cause, I found that the other VMs in this subnet
(on the same compute node as the iperf3 client) receive every packet the
iperf3 client VM sends out, even though the destination MAC is neither a
broadcast nor a multicast MAC; it is actually the MAC of the iperf3 server
VM on another compute node. On further checking, the qemu instances of
these VMs show higher CPU utilization, and so do the corresponding vhost
kernel threads. More importantly, OVS is flooding these packets because
none of the OVS bridges has learned the destination MAC. I tried this on
Queens and Rocky; the same issue is there in both. By the way, we're using
a Linux bridge for security groups, so each VM tap interface is attached to
a Linux bridge that is connected to br-int by a veth pair.
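
For reference, the tests are approximately of the following shape ("-l 16"
matches the 16-byte UDP payload above; the other flags and <server-ip> are
illustrative placeholders, not verbatim from my runs):

$ iperf3 -s                                  # in the server VM
$ iperf3 -c <server-ip> -t 30                # TCP throughput test
$ iperf3 -c <server-ip> -u -b 0 -l 16 -t 30  # 16-byte UDP pps test

The per-thread CPU utilization of the qemu processes and the vhost kernel
threads can be seen with something like:

$ ps -eLo pid,tid,comm,pcpu --sort=-pcpu | egrep 'qemu|vhost' | head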

Here is the output of "ovs-appctl dpif/dump-flows br-int" after I launched
many VMs:

recirc_id(0),in_port(12),eth(src=fa:16:3e:49:26:51,dst=fa:16:3e:a7:0a:3a),eth_type(0x0800),ipv4(tos=0/0x3,frag=no), packets:11012944, bytes:726983412, used:0.000s, flags:SP., actions:push_vlan(vid=1,pcp=0),2,set(tunnel(tun_id=0x49,src=10.3.2.17,dst=10.3.2.16,ttl=64,tp_dst=4789,flags(df|key))),pop_vlan,9,8,11,13,14,15,16,17,18,19
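
To see which interfaces the datapath port numbers in the actions above (2,
plus the flooded set 9, 8, 11, ...) map to, the standard datapath/bridge
port listing can be dumped:

$ sudo ovs-appctl dpif/show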

$ sudo ovs-appctl fdb/show br-floating | grep fa:16:3e:49:26:51
$ sudo ovs-appctl fdb/show br-tun | grep fa:16:3e:49:26:51
$ sudo ovs-appctl fdb/show br-bond1 | grep fa:16:3e:49:26:51
$ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:49:26:51

None of the bridges has learned this MAC; all four greps come back empty.
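
In case aging or table size is a factor: the bridges' MAC-learning settings
live in the standard other_config keys (mac-aging-time, mac-table-size) and
can be checked like this (an empty map means the defaults are in use):

$ sudo ovs-vsctl get Bridge br-int other_config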

My question is: why can't the OVS bridges learn the MACs of VMs on other
compute nodes? Is this a common issue across all OpenStack versions, and is
there any known way to fix it? I look forward to hearing your insights and
solutions. Thank you in advance, and have a good day.
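
P.S. If the fix turns out to involve the l2population mechanism driver (so
that forwarding entries are pushed to the agents instead of relying on
flood-and-learn), I would guess the configuration looks roughly like the
following; this is an assumption on my part, not something I have verified
fixes the problem:

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,l2population

# OVS agent configuration (e.g. openvswitch_agent.ini)
[agent]
l2_population = True
arp_responder = True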