[Openstack-operators] Increase MTU from VM to VM, pass through physical network device
Dustin Lundquist
dustin at null-ptr.net
Fri Aug 29 04:01:20 UTC 2014
The tap interfaces from QEMU and the veth pair from the qbr Linux bridge
to the br-int OVS integration bridge also need to be configured for jumbo
frames. This article may help you get the lay of the land:
http://openstack.redhat.com/Networking_in_too_much_detail
It's a good idea to configure all interfaces on an L2 segment with the same
MTU (at a minimum, give all interfaces with IP addresses the same MTU, and
make sure any L2-only interfaces have the same or a larger MTU); otherwise
path MTU discovery doesn't work.
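As a concrete sketch, assuming the device names from the netstat output quoted below (yours will differ per port, and the MTU values are illustrative):

```shell
# Sketch: raise the MTU on every hop between the VM and br-int.
# Device names are taken from the quoted netstat output; substitute
# the names of your own ports.
for dev in tap480003c8-57 qbr480003c8-57 qvb480003c8-57 qvo480003c8-57 \
           tap59db81a4-93 qbr59db81a4-93 qvb59db81a4-93 qvo59db81a4-93; do
    ip link set dev "$dev" mtu 1600
done

# The physical interface carrying the GRE tunnel traffic (assumed to be
# eth1 here) needs the tenant MTU plus the encapsulation overhead, e.g.
# 1600 + 28 bytes for the outer IPv4 + GRE (with key) headers:
ip link set dev eth1 mtu 1628
```

Note these settings don't survive a reboot or port re-plugging; they are for verifying where the path is clamped.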
Dustin Lundquist
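For reference, the GRE tunnels used here wrap each tenant packet in an outer IPv4 header plus a GRE header, so the physical path has to carry frames larger than the tenant MTU. A rough calculation (a sketch; the header sizes assume a 20-byte outer IPv4 header and an 8-byte GRE header with the key field, as OVS tunnels typically use):

```python
# Back-of-the-envelope GRE encapsulation overhead (header sizes are
# assumptions; verify against your deployment).
OUTER_IPV4 = 20    # outer IPv4 header added by the tunnel
GRE_HEADER = 8     # 4-byte GRE base header + 4-byte key field

overhead = OUTER_IPV4 + GRE_HEADER      # bytes added per packet
tenant_mtu = 1500                       # what the application needs
physical_min = tenant_mtu + overhead    # minimum IP MTU on the physical path

print(overhead)        # 28
print(physical_min)    # 1528
```

So a 1500-byte tenant MTU needs at least 1528 bytes of IP MTU end to end on the physical path (the 14-byte outer Ethernet header doesn't count toward the interface MTU).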
On Thu, Aug 28, 2014 at 7:47 PM, Tong Manh Cuong <cuongtm at vdc.com.vn> wrote:
> Dear experts,
>
>
>
> My issue is that:
>
> 1. I set up OpenStack in a multi-node deployment, with the ML2 plugin in
> GRE mode.
>
> 2. I created 2 VMs on the same compute node (one web, one DB).
>
> 3. My app needs full support for 1500-byte packets.
>
>
>
> How can I configure OpenStack to do that?
>
>
>
> ----
>
> I already configured:
>
> 1. Set the MTU of the Ethernet interface in each VM
>
>
>
> VM1# ifconfig eth0 mtu 1600
>
> VM2# ifconfig eth0 mtu 1600
>
>
>
> root at cuong-vm-01:~# netstat -i
>
> Kernel Interface table
> Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
> eth0   1600 0     15958      0      0      0   10317      0      0      0 BMRU
> lo    65536 0         0      0      0      0       0      0      0      0 LRU
>
> root at cuong-vm-02:~# netstat -i
>
> Kernel Interface table
> Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
> eth0   1600 0     15936      0      0      0   10009      0      0      0 BMRU
> lo    65536 0         0      0      0      0       0      0      0      0 LRU
>
>
>
> 2. Set the MTU of every port on br-int to 1600 bytes
>
> root at controller:~# netstat -i
>
> Kernel Interface table
> Iface             MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
> br-ex            1500 0     21680      0      0      0    2928      0      0      0 BRU
> br-int           1600 0       308      0      0      0       6      0      0      0 BRU
> br-tun           1500 0         0      0      0      0       6      0      0      0 BRU
> eth0             1500 0    483582      0      2      0   22859      0      0      0 BMPRU
> eth1             1500 0       220      0      0      0       6      0      0      0 BMRU
> lo              65536 0    174389      0      0      0  174389      0      0      0 LRU
> qbr480003c8-57   1500 0        29      0      0      0       6      0      0      0 BMRU
> qbr59db81a4-93   1500 0        46      0      0      0       6      0      0      0 BMRU
> qvb480003c8-57   1500 0     15907      0      0      0    9877      0      0      0 BMPRU
> qvb59db81a4-93   1500 0     15773      0      0      0   10087      0      0      0 BMPRU
> qvo480003c8-57   1600 0      9877      0      0      0   15907      0      0      0 BMPRU
> qvo59db81a4-93   1600 0     10087      0      0      0   15773      0      0      0 BMPRU
> tap480003c8-57   1500 0      9995      0      0      0   15912      0      0      0 BMRU
> tap59db81a4-93   1500 0     10169      0      0      0   15764      0      0      0 BMRU
> virbr0           1500 0         0      0      0      0       0      0      0      0 BMU
>
>
>
> However, I couldn't ping from VM01 to VM02 without fragmentation:
>
> root at cuong-vm-01:~# traceroute --mtu 172.16.10.13
>
> traceroute to 172.16.10.13 (172.16.10.13), 30 hops max, 65000 byte packets
>
> 1 * F=1600 * *
>
> 2 * * *
>
> 3 * * *
>
> 4 * *^C
>
> ---
>
> root at cuong-vm-02:~# ping -s 1500 -M do 172.16.10.12
>
> PING 172.16.10.12 (172.16.10.12) 1500(1528) bytes of data.
>
>
>
> Thank you very much.
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>