[Openstack-operators] Setting default MTU for project networks?
Jonathan D. Proulx
jon at csail.mit.edu
Mon Nov 21 15:28:15 UTC 2016
On Sun, Nov 20, 2016 at 02:59:14AM -0800, Kevin Benton wrote:
:Sorry about the delay, a couple of questions.
No worries "working" was the important bit (which I got). Working
correctly, well we can take our time :)
:You're not setting network_device_mtu, right?
no though maybe I should read what that is.
:Also, when you see the 1458 MTU, is that in the API response from neutron
:on a 'neutron net-show', Or is that just what you are seeing in the
:interfaces on the compute nodes?
this is how the interfaces are getting created and what the instances
are getting from DHCP (now anyway; fixing that was my pressing issue).
:Setting the following in your server config (not agent), should be enough
:for VXLAN networks to use a jumbo MTU.
:global_physnet_mtu = 9000
Got that one
:path_mtu = 9000
So I don't have an [ml2] section in neutron.conf referenced by the
neutron-server processes. I do have that in the agent.ini referenced
by neutron-openvswitch-agent on the network node.
I additionally have:
in agent.ini on network node.
On hypervisors I have:
path_mtu = 0
Obviously the hypervisor config won't affect how networks are created,
but I don't want that to start biting me in a different way once I get
the server side doing what I want.
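For reference, a minimal sketch of the server-side settings being discussed (values are this thread's; path_mtu normally lives in the ml2 plugin config file read by neutron-server):

```ini
# neutron.conf (or the ml2 config file) read by neutron-server
[DEFAULT]
global_physnet_mtu = 9000   # MTU of untagged provider/physnet interfaces

[ml2]
path_mtu = 9000             # underlay MTU used to size overlay (GRE/VXLAN) networks
```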
Note 9004 is the physical interface MTU in this example. We have
provider networks that are VLAN based, so their MTU should be (and is)
9000. The pre-existing provider networks are properly set through
manual hackery; I've only added one since the cloud was initially
created 4 years ago, so it's not a common action. Am I right in setting
9004 above, or should I still lie a little and provide the untagged MTU
of 9000?
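The arithmetic behind these numbers can be sketched as follows. The per-encapsulation overheads here are my assumptions: 42 matches the 1500 -> 1458 GRE behaviour seen in this thread, and 50 is the usual VXLAN-over-IPv4 figure (14 eth + 20 IP + 8 UDP + 8 VXLAN); the exact values neutron applies vary by release and driver.

```python
# Sketch of the MTU arithmetic in this thread. Overheads are assumptions,
# not neutron's authoritative constants.
ENCAP_OVERHEAD = {"gre": 42, "vxlan": 50}

def tenant_mtu(path_mtu: int, net_type: str) -> int:
    """MTU a tenant overlay network ends up with for a given underlay path MTU."""
    return path_mtu - ENCAP_OVERHEAD[net_type]

print(tenant_mtu(1500, "gre"))   # -> 1458, the value seen in the thread
print(tenant_mtu(9000, "gre"))   # -> 8958, what a path_mtu = 9000 underlay yields
```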
:On Tue, Nov 8, 2016 at 8:31 AM, Jonathan Proulx <jon at csail.mit.edu> wrote:
:> On Mon, Nov 07, 2016 at 02:12:14PM -0800, Kevin Benton wrote:
:> :Which version of Neutron are you on now? Changing the config options had
:> :impact on existing networks in Mitaka. After updating the config, only new
:> :networks will be affected. You will need to use an SQL query to update the
:> :existing network MTUs.
:> I understand that old MTUs won't change, but new overlays are getting
:> created with 1458 MTU despite the configs I think should tell it the
:> jumbo underlay size, so I'm probably missing something :)
:> I did discover that since neutron is now MTU aware I can simply drop
:> the dhcp-option=26,9000 and (after poking the DB for the existing
:> jumbo networks which had 'Null' MTUs) the old stuff and new stuff both
:> work, just the new stuff has an overly restrictive MTU.
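The DB poke mentioned above might look like the following. This is a hypothetical sketch assuming the Mitaka schema, where the neutron database's networks table has an mtu column; back up the database and narrow the WHERE clause to your actual networks before running anything like it:

```sql
-- Illustrative only: give pre-existing jumbo networks with a NULL MTU a
-- real value (9000 here, matching the provider-network MTU in the thread).
UPDATE networks SET mtu = 9000 WHERE mtu IS NULL;
```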
:> :This was changed in Newton (https://review.openstack.org/#/c/336805/) but
:> :we couldn't back-port it because of the behavior change.
:> Neat, I didn't know support for changing MTU was even planned, but I
:> guess it's here (well, not quite *here*, but...)
:> :On Fri, Nov 4, 2016 at 10:34 AM, Jonathan Proulx <jon at csail.mit.edu>
:> :> Hi All,
:> :> So long story short, how do I get my ml2/ovs GRE tenant networks to
:> :> MTU 9000 in Mitaka - or - get dhcp agents on the network node to give
:> :> out different MTUs to different networks?
:> :> Seems between Kilo (my last release) and Mitaka (my current production
:> :> world) Neutron got a lot cleverer about MTUs, and the simple
:> :> workarounds I had made to get jumbo frames going are now causing some
:> :> issues for newly created project networks.
:> :> Because I'm setting 'dhcp-option=26,9000' in /etc/neutron/dnsmasq.conf
:> :> everything get an MTU of 9000 inside the guest OS. I only *really*
:> :> care about this for our provider vlans, for project networks I only
:> :> care that they work.
:> :> Currently when a new project network is created it gets an MTU of 1458
:> :> (1500 less GRE overhead); this is reflected in the neutron DB and the
:> :> various virtual interfaces on the hypervisor and network node, but
:> :> DHCP configures 9000 inside the guest and hilarity ensues.
:> :> I tried setting DEFAULT/global_physnet_mtu=9134 in neutron.conf and
:> :> ml2/path_mtu=9134 in ml2.ini (which is the actual MTU of the L2
:> :> links); agent/veth_mtu=9134 was previously set. I thought this would
:> :> result in virtual devices large enough to pass the 9000 traffic but
:> :> it seems to have made no difference.
:> :> I can kludge around it by specifying the MTU at network creation (or
:> :> some post facto DB hackery) but this isn't doable through my Horizon
:> :> UI so my users won't do it.
:> :> Thanks,
:> :> -Jon
:> :> _______________________________________________
:> :> OpenStack-operators mailing list
:> :> OpenStack-operators at lists.openstack.org
:> :> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators