[Openstack-operators] Setting default MTU for project networks?

Kostyantyn Volenbovskyi volenbovsky at yandex.ru
Mon Nov 21 20:31:04 UTC 2016


Hi,

It really sounds like path_mtu = 9000 was set in the 'wrong place'; see inline below.
The fact that you mention ml2.ini and not ml2_conf.ini is a bit 'suspicious'.

Reading the recent messages, it sounds like in your exact case
global_physnet_mtu should be equal to path_mtu, and both should be 9000. That will result in an MTU of 8950 for VXLAN networks and 9000 for VLAN networks.

So I don't see a reason to use 9134 or 9004 in your environment.
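
For example, a minimal sketch of what I mean, assuming the usual file locations on your distro:

  # /etc/neutron/neutron.conf (read by neutron-server)
  [DEFAULT]
  global_physnet_mtu = 9000

  # /etc/neutron/plugins/ml2/ml2_conf.ini (the file plugin.ini points at)
  [ml2]
  path_mtu = 9000

After restarting neutron-server, newly created overlay networks should get the underlay MTU minus the tunnel overhead (8950 for VXLAN, 8958 for GRE), while VLAN/flat networks get 9000; existing networks keep whatever MTU is already stored in the database.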

All in all I think that [1] and [2] will be helpful.

(and yes, it should be noted that my description is for Mitaka onwards)
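
To sanity-check after the change, something like this should do (hypothetical test network name; expected values per the overheads above):

  neutron net-create mtu-test
  neutron net-show mtu-test -F mtu

and once the dhcp-option=26,9000 override is gone, the interface inside a guest booted on that network should pick up the same value via DHCP.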

BR, 
Konstantin

[1] http://docs.openstack.org/draft/networking-guide/config-mtu.html
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1374795

> 
> :Setting the following in your server config (not agent), should be enough
> :for VXLAN networks to use a jumbo MTU.
> :
> :[DEFAULT]
> :global_physnet_mtu = 9000
> 
> Got that one
> 
> :[ml2]
> :path_mtu = 9000
> 
> So I don't have an [ml2] section in neutron.conf referenced by the
> neutron-server processes.  I do have that in the agent.ini referenced
> by neutron-openvswitch-agent on the network node.
Well, typically you would have /etc/neutron/plugins/ml2/ml2_conf.ini, symlinked
from /etc/neutron/plugin.ini, and neutron-server should use that.
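
A quick way to check what your neutron-server actually reads (a sketch; exact paths and service wrapper differ per distro):

  ls -l /etc/neutron/plugin.ini
  # typically: /etc/neutron/plugin.ini -> plugins/ml2/ml2_conf.ini
  ps -ef | grep neutron-server
  # typically shows ... --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini

The [ml2] path_mtu setting only matters in the file(s) neutron-server loads; the agent-side ini files don't influence the MTU that gets recorded for a new network.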


> I additionally have:
> 
> [agent]
> veth_mtu=9004
> 
> in agent.ini on network node.
> 
> On hypervisors I have:
> 
> [ml2]
> path_mtu = 0
> physical_network_mtus =trunk:9004
> 
> [agent]
> veth_mtu=9004
> 
> Obviously the hypervisor stuff won't affect how networks are created,
> but I don't want it to start biting me in a different way once I get
> the server side doing what I want.
> 
> 
> Note 9004 is the physical interface MTU in this example.  We have
> provider networks that are VLAN based, so their MTU should be (is)
> 9000.  The pre-existing provider networks are properly set through
> manual hackery; I've only added one since the cloud was initially created
> 4 years ago, so it's not a common action.  Am I right in setting 9004 above,
> or should I still lie a little and provide the untagged MTU of 9000?
> 
> Thanks,
> -Jon
> 
> :
> :On Tue, Nov 8, 2016 at 8:31 AM, Jonathan Proulx <jon at csail.mit.edu> wrote:
> :
> :> On Mon, Nov 07, 2016 at 02:12:14PM -0800, Kevin Benton wrote:
> :> :Which version of Neutron are you on now? Changing the config options had
> :> no
> :> :impact on existing networks in Mitaka. After updating the config, only new
> :> :networks will be affected. You will need to use an SQL query to update the
> :> :existing network MTUs.
> :>
> :> Mitaka
> :>
> :> I understand that old MTUs won't change, but new overlays are getting
> :> created with 1458 MTU despite the configs I think should tell it the
> :> jumbo underlay size, so I'm probably missing something :)
> :>
> :> I did discover that since neutron is now MTU aware I can simply drop the
> :> dhcp-option=26,9000 and (after poking the DB for the existing jumbo
> :> networks which had NULL MTUs) the old stuff and new stuff both work;
> :> the new stuff just has an overly restrictive MTU.
> :>
> :> :This was changed in Newton (https://review.openstack.org/#/c/336805/) but
> :> :we couldn't back-port it because of the behavior change.
> :>
> :> Neat, I didn't know support for changing MTU was even planned, but I
> :> guess it's here (well, not quite *here*, but...)
> :>
> :> -Jon
> :>
> :> :
> :> :
> :> :On Fri, Nov 4, 2016 at 10:34 AM, Jonathan Proulx <jon at csail.mit.edu>
> :> wrote:
> :> :
> :> :> Hi All,
> :> :>
> :> :>
> :> :> So long story short, how do I get my ml2/ovs GRE tenant networks to
> :> :> default to MTU 9000 in Mitaka - or - get the dhcp agents on the network
> :> :> node to give out different MTUs to different networks?
> :> :>
> :> :>
> :> :> Seems between Kilo (my last release) and Mitaka (my current production
> :> :> world) Neutron got a lot cleverer about MTUs, and the simple
> :> :> workarounds I had to make jumbo frames go are now causing some
> :> :> issues for newly created project networks.
> :> :>
> :> :> Because I'm setting 'dhcp-option=26,9000' in /etc/neutron/dnsmasq.conf,
> :> :> everything gets an MTU of 9000 inside the guest OS. I only *really*
> :> :> care about this for our provider VLANs; for project networks I only
> :> :> care that they work.
> :> :>
> :> :> Currently when a new project network is created it gets an MTU of 1458
> :> :> (1500 less GRE overhead); this is reflected in the neutron DB and the
> :> :> various virtual interfaces on the hypervisor and network node, but
> :> :> DHCP configures the guest interface to 9000 and hilarity ensues.
> :> :>
> :> :> I tried setting DEFAULT/global_physnet_mtu=9134 in neutron.conf and
> :> :> ml2/path_mtu=9134 in ml2.ini (which is the actual MTU of the L2 links);
> :> :> agent/veth_mtu=9134 was previously set. I thought this would result in
> :> :> virtual devices large enough to pass the 9000 traffic, but it seems to have
> :> :> made no difference.
> :> :>
> :> :> I can kludge around it by specifying the MTU on network creation (or some
> :> :> post-facto DB hackery), but this isn't doable through my Horizon UI so
> :> :> my users won't do it.
> :> :>
> :> :> Thanks,
> :> :> -Jon
> :> :>
> :> :>
> :> :> _______________________________________________
> :> :> OpenStack-operators mailing list
> :> :> OpenStack-operators at lists.openstack.org
> :> :> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> :> :>
> :>
> :> --
> :>
> 
> -- 
> 
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators