[openstack-dev] [neutron][kilo] - vxlan's max bandwidth

Ihar Hrachyshka ihrachys at redhat.com
Mon Apr 18 11:33:59 UTC 2016


Akihiro Motoki <amotoki at gmail.com> wrote:

> 2016-04-18 15:58 GMT+09:00 Ihar Hrachyshka <ihrachys at redhat.com>:
>> Sławek Kapłoński <slawek at kaplonski.pl> wrote:
>>
>>> Hello,
>>>
>>> What MTU do you have configured on the VMs? I had a performance issue on a
>>> VXLAN network with the standard MTU (1500), but when I configured Jumbo
>>> frames on the VMs and on the hosts it was much better.
>>
>>
>> Right. Note that custom MTU works out of the box only starting from  
>> Mitaka.
>> You can find details on how to configure Neutron for Jumbo frames in the
>> official docs:
>>
>> http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html
>
> If you want to advertise the MTU via DHCP in releases before Mitaka,
> you can prepare a custom dnsmasq config file like the one below and point
> the DHCP agent's dnsmasq_config_file option at it.
> You also need to set the network_device_mtu config parameter appropriately.
>
> sample dnsmasq config file:
> --
> dhcp-option-force=26,8950
> --
> DHCP option 26 specifies the interface MTU.
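
To spell that out, a pre-Mitaka setup could look roughly like this (the file
path and the MTU value are only examples; 8950 assumes a 9000-byte physical
network minus the VXLAN overhead):

dhcp_agent.ini:
--
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
--

neutron.conf on the nodes running the agents:
--
[DEFAULT]
network_device_mtu = 8950
--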

Several notes:

- In Liberty, the above can be achieved by setting advertise_mtu in
neutron.conf on the nodes hosting DHCP agents.
- You should set [ml2] segment_mtu on the controller nodes to the MTU of the
underlying physical networks. After that, DHCP agents will advertise the
correct MTU for all networks created after the configuration is applied
(sample config snippets follow these notes).
- It won't work in an OVS hybrid setup, where the intermediate devices (qbr)
will still have mtu = 1500, which results in Jumbo frames being dropped. We
have backports that fix this in Liberty at: https://review.openstack.org/305782
and https://review.openstack.org/#/c/285710/
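
As a rough sketch of the Liberty configuration (the 9000 value is only an
example and should match what your physical network actually supports):

neutron.conf on nodes hosting DHCP agents:
--
[DEFAULT]
advertise_mtu = True
--

ml2_conf.ini on controller nodes:
--
[ml2]
segment_mtu = 9000
--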

Ihar


