[openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

Clark Boylan cboylan at sapwetik.org
Thu Feb 18 18:07:51 UTC 2016

On Wed, Feb 10, 2016, at 09:52 AM, Sean M. Collins wrote:
> Ihar Hrachyshka wrote:
> > Also, I added some interface state dumps to worlddump, and here is what the
> > main node networking setup looks like:
> > 
> > http://logs.openstack.org/59/265759/20/experimental/gate-grenade-dsvm-neutron-multinode/d64a6e6/logs/worlddump-2016-01-30-164508.txt.gz
> > 
> > br-ex: mtu = 1450
> > inside router: qg mtu = 1450, qr = 1450
> > 
> > So it should be fine in this regard. I also set up devstack locally with
> > network_device_mtu enforced, and it seems to pass 1450-byte packets through.
> > So it’s probably the tunneling of packets to the subnode that fails for us,
> > not the local router-to-tap bits.
> Yeah! That's right. So is it the case that we need to do 1500 less the
> GRE overhead less the VXLAN overhead? So 1446? Since the traffic gets
> encapsulated in VXLAN then encapsulated in GRE (yo dawg, I heard u like
> tunneling).

Looks like you made progress further debugging the problems here, and the
metadata service is the culprit. But I want to point out that we
shouldn't be nesting tunnels here (at least not in a way that is exposed
to us; the underlying cloud could be doing whatever). br-int carries the
neutron-managed VXLAN tunnel, and that is the only layer of tunneling on
br-int. br-ex is part of the devstack-gate-managed VXLAN tunnel
(formerly GRE, until new clouds started rejecting GRE packets), but only
on the DVR jobs, not the normal multinode or grenade jobs, because the
DVR job is the only one with more than one router.

All that to say 1450 should be a sufficiently small MTU.
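For reference, the arithmetic behind that number can be sketched as follows.
This is a hypothetical illustration, not code from the thread; the header
sizes assume IPv4 outer headers and no optional VXLAN/GRE fields:

```python
# Standard per-layer encapsulation overheads (assumption: IPv4 outer
# headers, no GRE key/sequence options, no VLAN tags).
ETH_HEADER = 14      # inner Ethernet frame carried inside the tunnel
IPV4_HEADER = 20     # outer IPv4 header
UDP_HEADER = 8       # VXLAN runs over UDP
VXLAN_HEADER = 8     # VXLAN header proper
GRE_HEADER = 4       # base GRE header

VXLAN_OVERHEAD = IPV4_HEADER + UDP_HEADER + VXLAN_HEADER + ETH_HEADER  # 50
GRE_OVERHEAD = IPV4_HEADER + GRE_HEADER + ETH_HEADER                   # 38

def inner_mtu(physical_mtu, overheads):
    """Largest inner-packet MTU after subtracting each tunnel layer."""
    mtu = physical_mtu
    for overhead in overheads:
        mtu -= overhead
    return mtu

# Single VXLAN layer (the neutron br-int tunnel): 1500 - 50 = 1450.
print(inner_mtu(1500, [VXLAN_OVERHEAD]))                 # 1450

# Hypothetical nested tunnels, which the thread rules out for these jobs:
print(inner_mtu(1500, [VXLAN_OVERHEAD, VXLAN_OVERHEAD])) # 1400
```

Since only one tunnel layer applies here, 1500 minus the 50-byte VXLAN
overhead gives the 1450 figure above; nesting would require going smaller.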
