<div dir="ltr">my company just got an environment which is running 10Gbps, but I need to wait for my time window to use it.<div><br></div><div>thanks for the data.</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">
On Wed, Dec 4, 2013 at 11:18 PM, Édouard Thuleau <edouard.thuleau@gmail.com> wrote:
Just an update on the subject.

Any ideas?
Édouard.

On Tue, Dec 3, 2013 at 3:49 PM, Édouard Thuleau
<edouard.thuleau@gmail.com> wrote:
> Hi all,
>
> I'm trying to set up a platform with the ML2 plugin, the openvswitch
> mechanism driver (VXLAN) and l2-pop, on top of a jumbo-frame physical network.
> I've got a 10 Gb/s Cisco Nexus fabric and the jumbo frame size is 9216 bytes.
>
> I set the MTU of the physical interface used for VXLAN tunneling to 9216 bytes.
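>
> i.e. something along these lines (eth1 below is just a placeholder for
> whatever physical interface carries the VXLAN traffic on my nodes):
>
>   ip link set dev eth1 mtu 9216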
>
> The VIF driver is the OVS hybrid one, so that security groups can be used.
> My test case is:
>
> - 2 physical compute nodes: CN#1 and CN#2
> - one virtual network: NET#1 with subnet 10.0.0.0/24
> - 2 VMs plugged into the virtual network NET#1:
>   - VM#1 (10.0.0.3) on CN#1
>   - VM#2 (10.0.0.4) on CN#2
>
> After the VMs are created, I manually set the MTU of the veth interfaces
> (qvo and qvb) to 9166 (9216 minus the 50-byte VXLAN overhead) on both
> compute nodes, as sketched below.
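>
> Roughly this, on each compute node (qvo<port-id>/qvb<port-id> stand for
> the actual Neutron port IDs of the VM):
>
>   ip link set dev qvo<port-id> mtu 9166
>   ip link set dev qvb<port-id> mtu 9166
>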
> I don't change anything in the interface offloading configuration [1],
> and inside the guest OS the MTU stays at 1500.
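>
> For reference, the offloading settings in question are the ones reported
> by ethtool, e.g. (eth1 is again a placeholder):
>
>   ethtool -k eth1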
>
> 1/ ICMP test OK
> When I ping with the largest packet size ('ping -s 9138 10.0.0.4' from
> VM#1 to VM#2, i.e. 9138 + 8 ICMP + 20 IP = 9166), I see ICMP packets
> going into the VXLAN tunnel at the maximum frame size
> (9234 = 9216 + 14 ETH + 4 802.1q).
> The packets are fragmented by the VM (frames of 1514 = 1500 + 14 ETH
> captured on the tap interface), but the qbr reassembles them (frames of
> 9180 = 9166 + 14 ETH captured on the qbr, qvb or qvo interfaces), and
> the packets are then encapsulated into the VXLAN tunnel on the wire
> (frames of 9234 = 9180 + 50 VXLAN (14 ETH + 20 IP + 8 UDP + 8 VXLAN)
> + 4 802.1q, captured on the physical interface of the VXLAN tunnel).
>
> VM#1 tap <-- ~7 x 1514 --> qbr <-- 9180 --> qvb <-- 9180 --> qvo
>   <-- OVS flows + tunneling --> ethx <-- 9234 --> wire
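>
> The frame sizes above were observed with packet captures on each
> interface, something like this (interface names are placeholders for my
> actual tap/veth/NIC names):
>
>   tcpdump -e -n -i tap<port-id>
>   tcpdump -e -n -i qvb<port-id>
>   tcpdump -e -n -i eth1 udp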
>
> 2/ TCP test KO
> With the same configuration as above, if I run a TCP perf test (iperf),
> the packets on the wire still have the classic MTU size (1500 + VXLAN
> overhead).
>
> VM#1 tap <-- ~44000 --> qbr <-- ~44000 --> qvb <-- 1514 --> qvo
>   <-- OVS flows + tunneling --> ethx <-- 1568 --> wire
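>
> The TCP numbers come from a plain iperf run, roughly:
>
>   # on VM#2 (10.0.0.4)
>   iperf -s
>   # on VM#1
>   iperf -c 10.0.0.4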
>
> 3/ UDP test KO
>
> VM#1 tap <-- 1512 ??? --> qbr <-- ~1512 --> qvb <-- 1512 --> qvo
>   <-- OVS flows + tunneling --> ethx <-- 1566 --> wire
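>
> The UDP numbers come from iperf in UDP mode, roughly:
>
>   # on VM#2 (10.0.0.4)
>   iperf -s -u
>   # on VM#1 (the bandwidth target is just an example)
>   iperf -c 10.0.0.4 -u -b 1000M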
>
> It looks like an offloading problem. I use Intel NIC cards with the
> ixgbe driver [2].
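>
> I suppose the next step would be to toggle the offload features on the
> veth/physical interfaces and re-run the tests, e.g. (feature names as
> listed by 'ethtool -k'):
>
>   ethtool -K qvo<port-id> gro off gso off tso off
>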
> Any thoughts?
>
> [1] http://paste.openstack.org/show/54347/
> [2] http://paste.openstack.org/show/54357/
>
> Regards,
> Édouard.