[Openstack] [Neutron] OVS and jumbo frame

Yongsheng Gong gongysh at unitedstack.com
Thu Dec 5 02:57:10 UTC 2013


My company just got an environment running at 10 Gbps, but I need to
wait for my time window to use it.

Thanks for the data.


On Wed, Dec 4, 2013 at 11:18 PM, Édouard Thuleau
<edouard.thuleau at gmail.com> wrote:

> Just an update of the subject.
>
> Any ideas ?
> Édouard.
>
> On Tue, Dec 3, 2013 at 3:49 PM, Édouard Thuleau
> <edouard.thuleau at gmail.com> wrote:
> > Hi all,
> >
> > I'm trying to set up a platform with the ML2 plugin, the openvswitch
> > mechanism driver (VXLAN) and l2-pop on a jumbo-frame physical network.
> > I've got a 10 Gb/s Cisco Nexus fabric and the jumbo frame size is
> > 9216 octets.
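> >
> > The relevant ML2 configuration is the usual one, roughly along these
> > lines in ml2_conf.ini (values shown only for illustration):
> >
> >     [ml2]
> >     type_drivers = vxlan
> >     tenant_network_types = vxlan
> >     mechanism_drivers = openvswitch,l2population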
> >
> > I set the MTU on the physical interface used for VXLAN tunneling to
> > 9216 octets.
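> >
> > Concretely that is just something like the following on each compute
> > node, eth1 being only a placeholder for the real interface name:
> >
> >     ip link set dev eth1 mtu 9216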
> >
> > The VIF driver is the OVS hybrid one, so that security groups can be applied.
> > My test case is:
> >
> > - 2 physical compute nodes: CN#1 and CN#2
> > - One virtual network: NET#1 with subnet 10.0.0.0/24
> > - 2 VMs plugged onto the virtual network NET#1:
> >     - VM#1 (10.0.0.3) on CN#1
> >     - VM#2 (10.0.0.4) on CN#2
> >
> > After the VM creation, I manually set the MTU on the veth interfaces
> > (qvo and qvb) to 9166 (9216 minus the 50-octet VXLAN overhead) on both
> > compute nodes.
> > I don't change anything in the interface offloading configuration [1],
> > and inside the guest OS the VM MTU stays at 1500.
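> >
> > I.e. something like the following for each VM port, the qvb/qvo names
> > below being placeholders for the real per-port veth names:
> >
> >     ip link set dev qvbXXXXXXXX-XX mtu 9166
> >     ip link set dev qvoXXXXXXXX-XX mtu 9166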
> >
> > 1/ ICMP test OK
> > When I do a ping with the largest packet size ('ping -s 9138
> > 10.0.0.4', pinging VM#2 from VM#1 (9138 + 8 ICMP + 20 IP = 9166)), I
> > see ICMP packets going into the VXLAN tunnel at the maximum MTU size
> > (9234: 9216 + 14 ETH + 4 802.1q).
> > The packets are fragmented by the VM (packets of 1514 (1500 + 14 ETH)
> > captured on the tap interface) but the qbr bridge reassembles them
> > (packets of 9180 (9166 + 14 ETH) captured on the qbr, qvb or qvo
> > interfaces) and the packets are then encapsulated into the VXLAN
> > tunnel on the wire (packets of 9234 (9180 + 50 VXLAN (14 ETH + 20 IP +
> > 8 UDP + 8 VXLAN) + 4 802.1q) captured on the physical interface of the
> > VXLAN tunnel).
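> >
> > The sizes above come from packet captures along the path, i.e.
> > something like the following (eth1 and the VXLAN UDP port 4789 are
> > placeholders to adjust to the deployment):
> >
> >     tcpdump -e -n -i qvoXXXXXXXX-XX icmp
> >     tcpdump -e -n -i eth1 udp port 4789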
> >
> > VM#1 tap <-- ~7 x 1514 --> qbr <-- 9180 --> qvb <-- 9180 --> qvo <--
> > OVS flows + tunneling --> ethx <-- 9234 --> wire
> >
> > 2/ TCP test KO
> > With the same configuration as above, if I run a TCP performance test
> > (iperf), the packet size on the wire stays at the classic MTU value
> > (1500 + VXLAN overhead).
> >
> > VM#1 tap <-- ~44000 --> qbr <-- ~44000 --> qvb <-- 1514 --> qvo <--
> > OVS flows + tunneling --> ethx <-- 1568 --> wire
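> >
> > The TCP test is a plain iperf run, roughly:
> >
> >     # on VM#2 (10.0.0.4)
> >     iperf -s
> >     # on VM#1
> >     iperf -c 10.0.0.4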
> >
> > 3/ UDP test KO
> >
> > VM#1 tap <-- 1512 ??? --> qbr <-- ~1512 --> qvb <-- 1512 --> qvo <--
> > OVS flows + tunneling --> ethx <-- 1566 --> wire
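> >
> > The UDP run is the same kind of thing, e.g.:
> >
> >     # on VM#2
> >     iperf -s -u
> >     # on VM#1 (bandwidth value only as an example)
> >     iperf -c 10.0.0.4 -u -b 500M
> >
> > The 1512 seen on the tap would be consistent with iperf's default
> > 1470-byte UDP payload (1470 + 8 UDP + 20 IP + 14 ETH = 1512).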
> >
> >
> > It seems like an offloading problem. I use an Intel NIC with the
> > ixgbe driver [2].
> > Any thoughts?
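> >
> > For what it's worth, the offload settings referenced in [1] can be
> > inspected and toggled per interface with ethtool, e.g. (interface
> > names again being placeholders):
> >
> >     ethtool -k qvoXXXXXXXX-XX
> >     ethtool -K eth1 gro off gso off tso off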
> >
> > [1] http://paste.openstack.org/show/54347/
> > [2] http://paste.openstack.org/show/54357/
> >
> > Regards,
> > Édouard.
>

