<div dir="ltr">Hi Darragh,<div><br></div><div>Yes, Instances are getting MTU 1400.</div><div><br></div><div>I'm using LibvirtHybridOVSBridgeDriver at my Compute Nodes. I'll check BG 1223267 right now! </div><div><br>
The LibvirtOpenVswitchDriver doesn't work; look:

http://paste.openstack.org/show/49709/

http://paste.openstack.org/show/49710/
My NICs are "RTL8111/8168/8411 PCI Express Gigabit Ethernet", and the hypervisors' motherboards are MSI-890FXA-GD70.
<div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">The command "ethtool -K eth1 gro off" did not had any effect on the communication between instances on different hypervisors, still poor, around 248Mbit/sec, when its physical path reach 1Gbit/s (where GRE is built).</font></div>
<div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">My Linux version is "Linux hypervisor-1 3.8.0-32-generic #47~precise1-Ubuntu", same kernel on Network Node" and others nodes too (Ubuntu 12.04.3 installed from scratch for this Havana deployment).</font></div>
<div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">The only difference I can see right now, between my two hypervisors, is that my second is just a spare machine, with a slow CPU but, I don't think it will have a negative impact at the network throughput, since I have only 1 Instance running into it (plus a qemu-nbd process eating 90% of its CPU). I'll replace this CPU tomorrow, to redo this tests again but, I don't think that this is the source of my problem. The MOBOs of two hypervisors are identical, 1 3Com (manageable) switch connecting the two.</font></div>
<div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">Thanks!</font></div><div><font face="arial, sans-serif">Thiago</font></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">
On 25 October 2013 07:15, Darragh O'Reilly <dara2002-openstack@yahoo.com> wrote:
Hi Thiago,

you have configured DHCP to push out an MTU of 1400. Can you confirm that the 1400 MTU is actually getting out to the instances by running 'ip link' on them?

There is an open problem where the veth used to connect the OVS and Linux bridges causes a performance drop on some kernels - https://bugs.launchpad.net/nova-project/+bug/1223267 . If you are using the LibvirtHybridOVSBridgeDriver VIF driver, can you try changing to LibvirtOpenVswitchDriver and repeat the iperf test between instances on different compute nodes?

What NICs (maker+model) are you using? You could try disabling any offload functionality - 'ethtool -k <iface-used-for-gre>'.

What kernel are you using: 'uname -a'?

Re, Darragh.
<div class="im HOEnZb"><br>
> Hi Daniel,<br>
<br>
><br>
> I followed that page, my Instances MTU is lowered by DHCP Agent but, same<br>
> result: poor network performance (internal between Instances and when<br>
> trying to reach the Internet).<br>
><br>
> No matter if I use "dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf +<br>
> "dhcp-option-force=26,1400"" for my Neutron DHCP agent, or not (i.e. MTU =<br>
> 1500), the result is almost the same.<br>
><br>
> I'll try VXLAN (or just VLANs) this weekend to see if I can get better<br>
> results...<br>
><br>
> Thanks!<br>
> Thiago<br>
<br>
</div><div class="HOEnZb"><div class="h5">_______________________________________________<br>
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
Post to : <a href="mailto:openstack@lists.openstack.org">openstack@lists.openstack.org</a><br>
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
</div></div></blockquote></div><br></div>