[Openstack-operators] Setting default MTU for project networks?

Kevin Benton kevin at benton.pub
Mon Nov 7 22:12:14 UTC 2016


Which version of Neutron are you on now? In Mitaka, changing the config
options has no impact on existing networks; only networks created after the
config update will pick up the new value. You will need an SQL query to
update the MTUs of existing networks.
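
For example, something like this against the neutron database (untested, so
take a DB backup first; 9092 here is 9134 minus the 42-byte GRE overhead,
i.e. what new networks will get with your path_mtu - adjust to taste):

  mysql neutron -e "UPDATE networks SET mtu = 9092 WHERE mtu = 1458;"

You may also need to poke the DHCP agents afterwards so the existing
interfaces pick up the new value.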

This was changed in Newton (https://review.openstack.org/#/c/336805/) but
we couldn't back-port it because of the behavior change.



On Fri, Nov 4, 2016 at 10:34 AM, Jonathan Proulx <jon at csail.mit.edu> wrote:

> Hi All,
>
>
> So, long story short: how do I get my ml2/ovs GRE tenant networks to
> default to MTU 9000 in Mitaka - or - get the dhcp agents on the network
> node to give out different MTUs to different networks?
>
>
> It seems that between Kilo (my last release) and Mitaka (my current
> production world) Neutron got a lot cleverer about MTUs, and the simple
> workarounds I had used to make jumbo frames go are now causing some
> issues for newly created project networks.
>
> Because I'm setting 'dhcp-option=26,9000' in /etc/neutron/dnsmasq.conf,
> everything gets an MTU of 9000 inside the guest OS. I only *really*
> care about this for our provider vlans; for project networks I only
> care that they work. (See the snippet below for how this is wired up.)
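>
> For reference, the wiring looks roughly like this (the dhcp_agent.ini
> part is assumed here, since dnsmasq_config_file is how such a file
> normally reaches dnsmasq):
>
>   # dhcp_agent.ini
>   [DEFAULT]
>   dnsmasq_config_file = /etc/neutron/dnsmasq.conf
>
>   # /etc/neutron/dnsmasq.conf
>   dhcp-option=26,9000
>
> which pushes interface MTU 9000 to guests on every network, jumbo or not.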
>
> Currently, when a new project network is created it gets an MTU of 1458
> (1500 less GRE overhead). This is reflected in the neutron DB and on the
> various virtual interfaces on the hypervisor and network node, but DHCP
> configures the interface inside the guest to be 9000, and hilarity ensues.
>
> I tried setting DEFAULT/global_physnet_mtu=9134 in neutron.conf and
> ml2/path_mtu=9134 in ml2.ini (9134 being the actual MTU of the L2
> links); agent/veth_mtu=9134 was previously set. I thought this would
> result in virtual devices large enough to pass the 9000 traffic, but it
> seems to have made no difference.
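>
> For reference, the relevant stanzas now look like this:
>
>   # neutron.conf
>   [DEFAULT]
>   global_physnet_mtu = 9134
>
>   # ml2.ini
>   [ml2]
>   path_mtu = 9134
>
>   [agent]
>   veth_mtu = 9134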
>
> I can kludge around this by specifying the MTU at network creation (or
> with some post facto DB hackery), but that isn't doable through my
> Horizon UI so my users won't do it.
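>
> (The net-create kludge, for the record, is something like:
>
>   neutron net-create jumbo-net --mtu 9000
>
> relying on neutronclient passing the unrecognized --mtu option through
> as an API attribute - fine for me at the CLI, but invisible to Horizon
> users. The network name here is just an example.)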
>
> Thanks,
> -Jon
>
>