Re: [neutron] Why network performance is extremely bad and linearly related with number of VMs?

Yi Yang (杨燚)-云服务集团 yangyi01 at inspur.com
Tue Feb 25 01:30:12 UTC 2020


Satish, do you know how to get the max age time with the brctl command? Is it setageing or setmaxage that sets the max age?
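For reference, a quick sketch of how the two brctl knobs differ (assuming a bridge named br0; brctl has no explicit "get" subcommand, so current values are read from sysfs or from brctl showstp):

```shell
# FDB ageing time: how long a learned MAC stays in the forwarding table.
# Read it from sysfs (value is in hundredths of a second; 30000 = 300 s):
cat /sys/class/net/br0/bridge/ageing_time

# Set it with setageing (argument in seconds):
brctl setageing br0 300

# setmaxage is a different knob: it sets the STP "max message age",
# i.e. how long received STP BPDU information is considered valid.
brctl setmaxage br0 20

# showstp prints both values ("ageing time" and "max age"):
brctl showstp br0
```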


From: Satish Patel [mailto:satish.txt at gmail.com] 
Sent: February 24, 2020, 7:10
To: Donny Davis <donny at fortnebula.com>
Cc: Yi Yang (杨燚)-云服务集团 <yangyi01 at inspur.com>; openstack-discuss at lists.openstack.org
Subject: Re: [neutron] Why network performance is extremely bad and linearly related with number of VMs?

 

What is the max age time in the Linux bridge? If it is zero, the bridge won't learn MACs and will flush its forwarding (MAC) table. 

Sent from my iPhone





On Feb 23, 2020, at 12:52 AM, Donny Davis <donny at fortnebula.com <mailto:donny at fortnebula.com> > wrote:



So I am curious what your question is. Are you asking why the OVS bridges aren't learning the MACs of other compute nodes, or why network performance degrades when you run more than one instance per node? 

 

I have not observed this behaviour in my experience. 

Could you tell us more about the configuration of your deployment?

I understand you are currently using Linux bridges connected to the Open vSwitch bridges? Why not just use OVS? OVS can handle security groups natively.
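For context, switching Neutron's security groups to the OVS native firewall is a config change on the L2 agent; a minimal sketch, assuming the ML2/OVS agent and its openvswitch_agent.ini:

```ini
# openvswitch_agent.ini on each compute node (hypothetical minimal excerpt).
# The native driver implements security groups as OpenFlow rules on br-int,
# removing the need for the intermediate Linux bridge + veth pair per VM.
[securitygroup]
firewall_driver = openvswitch
```

After changing the driver the agent must be restarted, and existing ports generally need to be re-plugged for the new wiring to take effect.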

 

 

 

On Fri, Feb 21, 2020 at 9:48 AM Yi Yang (杨燚)-云服务集团 <yangyi01 at inspur.com <mailto:yangyi01 at inspur.com> > wrote:

Hi, All

Has anybody noticed that network performance between VMs is extremely bad, and basically linearly related to the number of VMs on the same compute node?

In my case, if I launch one VM per compute node and run iperf3, TCP and UDP performance is good: about 4 Gbps and 1.7 Gbps respectively, and 16-byte UDP packets can reach 180000 pps (packets per second). But if I launch two VMs per compute node (note: they are in the same subnet) and run only the pps test case, the rate drops to about 90000 pps; with 3 VMs per compute node it is about 50000 pps.

While looking for the root cause, I found that the other VMs in this subnet (on the same compute node as the iperf3 client) receive all the packets the iperf3 client VM sends out, even though the destination MAC is neither a broadcast nor a multicast MAC; it is actually the MAC of the iperf3 server VM on another compute node. On further inspection, the qemu processes of these VMs and their corresponding vhost kernel threads show elevated CPU utilization. More importantly, I found that OVS was flooding these packets because none of the OVS bridges had learned the destination MAC. I tried this on Queens and Rocky; the same issue is there.

By the way, we're using a Linux bridge for security groups, so each VM tap interface is attached to a Linux bridge which is connected to br-int by a veth pair.
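A minimal way to confirm this kind of flooding (hypothetical interface names; tapXXXX here stands for the tap of a bystander VM on the client's compute node):

```shell
# On the iperf3 server VM: listen for incoming traffic.
iperf3 -s

# On the iperf3 client VM: 16-byte UDP packets at an unlimited rate.
iperf3 -c <server-ip> -u -l 16 -b 0

# On the compute node hosting the client, capture on the tap of an
# uninvolved VM in the same subnet. If unicast packets addressed to the
# server VM's MAC show up here, the switch is flooding them.
tcpdump -eni tapXXXX udp
```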

Here is output of “ovs-appctl dpif/dump-flows br-int” after I launched
many VMs:

recirc_id(0),in_port(12),eth(src=fa:16:3e:49:26:51,dst=fa:16:3e:a7:0a:3a),eth_type(0x0800),ipv4(tos=0/0x3,frag=no),
packets:11012944, bytes:726983412, used:0.000s, flags:SP.,
actions:push_vlan(vid=1,pcp=0),2,set(tunnel(tun_id=0x49,src=10.3.2.17,dst=10.3.2.16,ttl=64,tp_dst=4789,flags(df|key))),pop_vlan,9,8,11,13,14,15,16,17,18,19

$ sudo ovs-appctl fdb/show br-floating | grep fa:16:3e:49:26:51
$ sudo ovs-appctl fdb/show br-tun | grep fa:16:3e:49:26:51
$ sudo ovs-appctl fdb/show br-bond1 | grep fa:16:3e:49:26:51
$ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:49:26:51

None of the bridges has learned this MAC (all four commands return no output).

My question is: why can't the OVS bridges learn the MACs of VMs on other compute nodes? Is this a common issue across all OpenStack versions? Is there a known way to fix it? I look forward to hearing your insights and solutions; thank you in advance, and have a good day.
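For anyone digging into this, OVS does expose its MAC-learning state and ageing knob; a diagnostic sketch, assuming the standard ovs-vsctl/ovs-appctl tools and the br-int bridge from above (the 10-second loop interval is arbitrary):

```shell
# Current FDB ageing time on br-int (OVS defaults to 300 seconds if unset).
ovs-vsctl get Bridge br-int other_config:mac-aging-time 2>/dev/null

# Raise it, e.g. to 600 s, so learned entries survive longer idle periods.
ovs-vsctl set Bridge br-int other_config:mac-aging-time=600

# Watch whether the remote VM's MAC ever appears, and on which port/VLAN.
while sleep 10; do
    ovs-appctl fdb/show br-int | grep fa:16:3e:49:26:51
done
```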



-- 

~/DonnyD

C: 805 814 6800

"No mission too difficult. No sacrifice too great. Duty First"
