Network MTU setting in OVN/OVS
Hi,

I have a question about increasing the MTU for tenant networks. It is currently set to 1442 by default on our Geneve-backed tenant networks. Are there any drawbacks, or is it considered bad practice, to increase this MTU to 1500 bytes or higher?

We had an issue with an instance migrated from an older version of OpenStack (which was configured with Linux bridge): there the MTU was set to 1500, so settings inside the VM (e.g. for Docker) had been tuned to that higher value.

-- Best regards Lukasz Chrustek
With this setting in neutron.conf, instances will get an MTU of 1500:

[DEFAULT]
global_physnet_mtu = 1558

Thanks!
Tony
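For reference, the arithmetic behind that value, assuming an IPv4 underlay and Neutron's default Geneve header allowance of 30 bytes ([ml2_type_geneve] max_header_size; some OVN guides suggest 38), works out as a sketch like this:

    tenant network MTU = global_physnet_mtu - (20 IPv4 + 8 UDP + 30 Geneve)
                       = 1558 - 58
                       = 1500

    # the stock default of global_physnet_mtu = 1500 gives the 1442 seen above:
    # 1500 - 58 = 1442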
Hi Tony, yes, I'm aware of that and had already found it, thanks anyway :) - but is there any reason why this setting isn't the default?
-- Regards Lukasz Chrustek
On 2024-09-23 23:07:53 +0200 (+0200), Łukasz Chrustek wrote:
yes, I'm aware of that and had already found it, thanks anyway :) - but is there any reason why this setting isn't the default? [...]
Not everyone's equipment is set for (or even supports) "jumbo frames", though I expect it's a lot more common than it was years ago. -- Jeremy Stanley
Hi Tony,

For our clouds, we always use jumbo frames (9000) for the tenant networks. We have one cloud with a lot of Oracle RAC workloads, and the large MTU helps a lot; using 1500 during the build-out of it was a drawback. Our switches are 25Gb by default, if that matters. As for our management and tooling networks, we stick with 1500, and likewise for our Ceph networks. In a past life, when I was doing a lot of VMware, it was always jumbo frames. Layered tenant networks tend to benefit from a larger MTU.

It's not bad practice to go to a 9000 MTU; you will just need to do some validation on your fabric to make sure the changes result in an outcome that works for your cloud's workloads and are supported by the switching hardware. Also check whether the switch vendor counts MTU as exactly 1500 or 9000 (we have one Cisco ACI setup where it is a bit over 9000 due to Cisco padding).

Hope that helps.

Cheers
Michael
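One rough way to do that fabric validation (host address below is a placeholder): send non-fragmentable pings between hypervisors across the underlay, sized to the target MTU minus the ICMP and IPv4 headers:

    # 8972 + 8 (ICMP) + 20 (IPv4) = 9000; succeeds only if every hop accepts 9000-byte frames
    ping -M do -s 8972 <peer-hypervisor-ip>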
On 2024-09-23 18:42:43 -0400 (-0400), Michael Knox wrote:
For our clouds, we always use jumbo frames (9000) for the tenant networks. [...]
My point was, the guest interfaces can't be 1500 MTU when their Ethernet frames are tunneled over another protocol, unless the underlying network on which they reside supports frame sizes sufficiently *larger* than that in order to accommodate the additional protocol headers such tunneling implies. When I was last doing these things, it was typically handled by setting the physical switching hardware for jumbo frames, and that way the outermost frames could be plenty large enough to handle the additional overhead of tunneling 1500 MTU Ethernet inside the inner protocol(s). 9000-byte frame sizes are pretty huge for this particular purpose, but back then a lot of switch gear either did "traditional" frame sizes or "jumbo" (with nothing in between), and so later on we tended to call any larger nontraditional frame size settings "jumbo frame Ethernet" even in cases where they weren't necessarily a full 9000 bytes. -- Jeremy Stanley
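To put rough, illustrative numbers on that overhead (assuming an IPv4 underlay and no VLAN tags), a full-size frame from a guest with a 1500-byte MTU becomes, once encapsulated in Geneve:

    inner IP packet (guest MTU)             1500 bytes
    + inner Ethernet header                   14 bytes
    + Geneve header (8 bytes + options)     8-38 bytes
    + outer UDP header                         8 bytes
    + outer IPv4 header                       20 bytes
    ----------------------------------------------------
    outer IP packet                   ~1550-1580 bytes

So the underlay links need an MTU comfortably above 1500 (or simply jumbo frames) to carry 1500-byte guest frames without dropping or fragmenting them.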
Hi Tony,

The Oracle RAC environment is 9000 at the VM. I thought I had made that clear; if not, I am sorry. We use a 9000 MTU at the VM level in some of our clouds, with no ill effect.

Cheers.

On Mon, Sep 23, 2024 at 7:07 PM Tony Liu <tonyliu0592@hotmail.com> wrote:
This is about instance (VM) interface, not hypervisor physical interface.
Thanks! Tony
On 2024-09-23 16:05:40 -0700 (-0700), Tony Liu wrote:
Default or not, depends on how you see it. 1500 is the most common MTU, so that setting is 1500 by default. Makes perfect sense to me. [...]
I think we're talking past one another. The point I was trying to make is that if you are tunneling the traffic to your guests but force them to 1500 MTU, things will *break* unless your network equipment has been configured to handle the additional overhead of the tunneling, because the tunneling protocol adds outer headers which make full-size tunneled packets longer than your Ethernet frame size, and then they won't pass through your switches (or, depending on the tunneling implementation, they might simply get fragmented and then your performance will just be terrible instead). -- Jeremy Stanley
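A quick way to spot that symptom from inside a guest (the destination below is a placeholder): send a non-fragmentable ping that fills a 1500-byte packet; if the tunnel path only carries 1442, it will fail or report that fragmentation is needed:

    # 1472 + 8 (ICMP) + 20 (IPv4) = 1500
    ping -M do -s 1472 <some-reachable-host>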
Hi,

Thank you for the answers. To sum up the thread: there are no drawbacks or problems with a higher MTU *if* the lower layers of network equipment can handle it.

For the record (obvious to network people, but worth mentioning once again): I confirmed this in our test environment, and all devices in the path need the higher MTU: physical servers, controllers (especially network controllers), and also the switches.

— Regards Lukasz
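As a rough checklist of the pieces that have to agree, with purely illustrative values and names (interface, network, and the 9000/8942 figures are assumptions, not a prescription):

    # switch fabric: enable jumbo frames (hardware limits are often 9216)
    # hypervisor/controller underlay interfaces:
    ip link set dev eno1 mtu 9000
    # neutron.conf on the nodes running neutron-server:
    [DEFAULT]
    global_physnet_mtu = 9000
    # new Geneve networks then get 9000 - 58 = 8942; existing networks may keep
    # their stored MTU, which an admin can adjust per network:
    openstack network set --mtu 8942 my-tenant-net
    # guests typically pick up the new value via DHCP/RA on lease renewal or reboot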
Yes, we are on the same page. That's why 1500 is the default for that setting.
An MTU of 1500 for the VM requires a customized setting and proper support from the underlay.
Thanks!
Tony
participants (4):
- Jeremy Stanley
- Michael Knox
- Tony Liu
- Łukasz Chrustek