<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body style="background-color: rgb(255, 255, 255); color: rgb(0, 0,
0); font-family: Tahoma; font-size: 16px;" bgcolor="#FFFFFF"
text="#000000">
Hi all,<br>
<br>
I'm running OpenStack Ocata (deployed with openstack-ansible), with
the following configuration:<br>
<br>
* Compute nodes running nova and the neutron agent<br>
* 2 x controllers running the neutron server/agents in LXC containers
(as deployed by the openstack-ansible playbooks)<br>
* Underlying hosts have a single NIC (MTU 9000) with multiple VLAN
subinterfaces, which in turn are connected to the bridges br-vxlan,
br-vlan, and br-management<br>
<br>
I've encountered the following problem:<br>
<br>
1. When I create an instance in a VXLAN tenant network, without
changing any configuration files, the instance (Linux default)
assumes an MTU of 1500, but in reality only has an MTU of 1450
(because of the VXLAN overhead). Instances cannot ping each other or
their gateway (a neutron router) with &gt; 1450 MTU.<br>
<br>
2. While I _could_ push an MTU of 1450 to my instances via DHCP, this
is (a) not always reliable depending on the guest OS, and (b) breaks
Docker on instances, which defaults to an MTU of 1500 for docker0.<br>
<br>
3. So, I attempted the configuration changes described at
<a class="moz-txt-link-freetext" href="http://serverascode.com/2017/06/06/neutron-vxlan-tenant-mtu-1500.html">http://serverascode.com/2017/06/06/neutron-vxlan-tenant-mtu-1500.html</a>,
increasing my global MTU to 1550 in neutron.conf / ml2_conf.ini on
the compute nodes and in the neutron client &amp; server LXC containers
on the controllers, so that a default MTU of 1500 in my instances
would always work.<br>
<br>
4.
The effect of step #3 above is that now my instances can communicate
with _each other_ at up to 1500 MTU, _but_ they still can't ping
their gateway (the neutron router) at anything over 1450 MTU.<br>
<br>
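For what it's worth, the 1450/1550 figures above come straight from the VXLAN-over-IPv4 encapsulation overhead. A quick sketch of the arithmetic (the conventional 50-byte figure, assuming no 802.1Q tag on the outer frame and no IP options):

```python
# VXLAN-over-IPv4 encapsulation overhead per packet:
# outer Ethernet + outer IPv4 + outer UDP + VXLAN header.
OUTER_ETH = 14   # outer Ethernet header
OUTER_IP = 20    # outer IPv4 header (no options)
OUTER_UDP = 8    # outer UDP header
VXLAN_HDR = 8    # VXLAN header

overhead = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR
print(overhead)          # 50
print(1500 - overhead)   # 1450: usable tenant MTU on a 1500-byte underlay
print(1500 + overhead)   # 1550: underlay MTU needed for a 1500-byte tenant MTU
```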
5. When I examine my compute nodes (underlying host OS), I note that
the bridge "br-vxlan" contains the VLAN subinterface (MTU 9000) plus
a veth interface for connectivity to the neutron-agents LXC
container (e.g. "04063403_eth10"). The veth interface has an MTU of
1500. The corresponding interface within the neutron-agents LXC
container (eth10) also has an MTU of 1500.<br>
<br>
6. Assuming that #5 is the cause of my MTU fault (i.e. a 1500-byte
packet from the instance over the tenant network becomes 1500 + 50 =
1550 bytes on the wire, and so can't pass through the veth
interface), I manually changed the veth interface (and the
corresponding interface within the LXC container) to MTU 1550.<br>
<br>
7. Now I can pass packets from my instances to the neutron router as
large as 1468 bytes (previous limit was 1448), but still not the
1500 bytes I expected.<br>
<br>
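An aside on how I'm reading the measurements: if the 1468/1448 figures are `ping -s` payload sizes (an assumption about how they were taken), the on-wire IPv4 packet is 28 bytes larger, since ping adds an 8-byte ICMP header and a 20-byte IP header:

```python
# IPv4 ping accounting: on-wire IP packet = ICMP payload
# + 8 (ICMP header) + 20 (IPv4 header).
# So "ping -s N" needs a path MTU of at least N + 28.
ICMP_HDR = 8
IPV4_HDR = 20

def wire_size(payload):
    """IP packet size on the wire for `ping -s payload`."""
    return payload + ICMP_HDR + IPV4_HDR

print(wire_size(1472))  # 1500: largest payload through a 1500-byte MTU
print(wire_size(1468))  # 1496: the post-change limit observed in step 7
print(wire_size(1448))  # 1476: the pre-change limit observed in step 7
```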
8. Increasing the MTU again (per #6 above) to 1600 makes no
difference to the result in #7 above.<br>
<br>
<br>
So, I'm thinking I've missed something, and the most likely issue is
the definition of the LXC container (and its veth interfaces) for
neutron-agents on the controller. I thought it was a simple fix
(manually changing the MTU per #6), but I'm baffled as to why raising
the MTU on the veth interfaces by 50 bytes only bought me 20 more
bytes of headroom (1468), and even if this _were_ the fix, it's
obviously only temporary, so what is the correct way to address the
MTU issue under openstack-ansible?<br>
<br>
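For reference, the step-3 change amounts to something like the following (a sketch based on the linked post; I'm assuming the standard neutron option names here, and 1550 is the value I used):

```ini
# neutron.conf (server and agents)
[DEFAULT]
global_physnet_mtu = 1550

# ml2_conf.ini
[ml2]
path_mtu = 1550
```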
Can anybody shed some light on this?<br>
<br>
Thanks!<br>
David<br>
<br>
<br>
</body>
</html>