<div dir="ltr"><div>(these comments are from work we did with nova-network a while ago. These comments are only focused on the underlying kvm performance, not the gymnastics to get neutron to build out the right configuration)</div>
<div><br></div>We've seen similar performance (around 3 Gbps) for GRE tunnels on machines that can easily flatten 10GE in more efficient configurations, and the right tuning. With vlan or untagged bridges, we could easily saturate a 10GE link from a single VM with multiple streams. The main difference that we saw with this was a drop in single stream TCP performance, as you'd expect. We were only able to get about 4 gbit out of one stream, where we could get upwards of 9 on bare metal. <div>
<br></div><div>I get that GRE is easy to test with, and it is probably easier to setup, but I don't think it makes sense to be a default configuration choice. The performance implications of that choice are pretty serious.</div>
<div><br></div><div>Incidentally, you'll need to do more than just tuning the MTU if you want good performance; you'll need to increase your buffers, window size, etc. Full details for what we did are are:</div><div>
- <a href="http://buriedlede.blogspot.com/2012/11/driving-100-gigabit-network-with.html">http://buriedlede.blogspot.com/2012/11/driving-100-gigabit-network-with.html</a></div><div>Much of the tuning was cribbed from here:</div>
<div> - <a href="http://fasterdata.es.net/host-tuning/linux/">http://fasterdata.es.net/host-tuning/linux/</a></div><div><br></div><div>hth</div><div> -nld</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">

On Mon, Jan 27, 2014 at 1:39 AM, Li, Chen <chen.li@intel.com> wrote:
<div lang="EN-US" link="blue" vlink="purple">
<div>
<p class="MsoNormal">Hi list,<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">I’m working under CentOS 6.4 + Havana + Neutron + OVS + gre.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">I’m testing performance for gre.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">I have a 10Gb/s NIC for compute Node.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">While, the max bandwidth I can get is small then 3Gb/s, even I have enough instances.<u></u><u></u></p>
<p class="MsoNormal">I noticed the reason the bandwidth can’t reach higher is due to the utilization for one CPU core is already 100%.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">So, I want to try if I can get higher bandwidth if I have bigger MTU, because the default MTU = 1500.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">But, after I set <b><span style="color:red">network_device_mtu=8500</span></b><span style="color:red">
</span>in "/etc/nova/nova.conf", and restart openstack-nova-compute service and re-create a new instance, the MTU for devices is still 1500:<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal" style="margin-left:.5in">202: qbr053ac004-d6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
<b><span style="color:red">1500</span></b><span style="color:red"> </span>qdisc noqueue state UNKNOWN<u></u><u></u></p>
<p class="MsoNormal" style="margin-left:.5in"> link/ether da:c0:8d:c2:d5:1c brd ff:ff:ff:ff:ff:ff<u></u><u></u></p>
<p class="MsoNormal" style="margin-left:.5in">203: qvo053ac004-d6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu
<b><span style="color:red">1500</span></b><span style="color:red"> </span>qdisc pfifo_fast state UP qlen 1000<u></u><u></u></p>
<p class="MsoNormal" style="margin-left:.5in"> link/ether f6:0b:04:3f:9d:41 brd ff:ff:ff:ff:ff:ff<u></u><u></u></p>
<p class="MsoNormal" style="margin-left:.5in">204: qvb053ac004-d6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu
<b><span style="color:red">1500</span></b><span style="color:red"> </span>qdisc pfifo_fast state UP qlen 1000<u></u><u></u></p>
<p class="MsoNormal" style="margin-left:.5in"> link/ether da:c0:8d:c2:d5:1c brd ff:ff:ff:ff:ff:ff<u></u><u></u></p>
<p class="MsoNormal" style="margin-left:.5in">205: tap053ac004-d6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
<b><span style="color:red">1500</span></b><span style="color:red"> </span>qdisc htb state UNKNOWN qlen 500<u></u><u></u></p>
<p class="MsoNormal" style="margin-left:.5in"> link/ether fe:18:3e:c2:e9:84 brd ff:ff:ff:ff:ff:ff<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Anyone know why is this happen ?<u></u><u></u></p>
<p class="MsoNormal">How can I solve it ??<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Thanks.<span class="HOEnZb"><font color="#888888"><u></u><u></u></font></span></p><span class="HOEnZb"><font color="#888888">
<p class="MsoNormal">-chen<u></u><u></u></p>
</font></span></div>
</div>
<br>_______________________________________________<br>
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
Post to : <a href="mailto:openstack@lists.openstack.org">openstack@lists.openstack.org</a><br>
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
<br></blockquote></div><br></div>