[neutron] Mellanox Connectx5 ASAP2+LAG over VF+vxlan
Zoltan Langi
zoltan.langi at namecheap.com
Fri Jun 7 19:44:42 UTC 2019
Hello Moshe,
OS is Ubuntu 18.04.2 LTS, kernel 4.18.0-21-generic.
According to Mellanox, this OS is definitely supported.
Zoltan
On 07.06.19 21:05, Moshe Levi wrote:
> Hi Zoltan,
>
> What OS and kernel are you using?
>
> -----Original Message-----
> From: Zoltan Langi <zoltan.langi at namecheap.com>
> Sent: Friday, June 7, 2019 3:54 PM
> To: openstack-discuss at lists.openstack.org
> Subject: [neutron] Mellanox Connectx5 ASAP2+LAG over VF+vxlan
>
> Hello everyone, I hope someone more experienced can help me with a problem I've been struggling with for a while now.
>
> I'm trying to set up ASAP2 OVS VXLAN offload on a dual-port Mellanox
> ConnectX-5 card between two hosts, using LACP link aggregation on the Rocky release.
>
> When the LAG is not there and only one PF is used, the offload works just fine and I get line speed out of the VFs.
>
> (I initially followed this ASAP2 guide, which works well:
> https://community.mellanox.com/s/article/getting-started-with-mellanox-asap-2)
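>
> For reference, the single-port setup from that guide boils down to
> roughly the following (PCI address and interface name are placeholders,
> not my real ones):
>
>      # create the VFs on the PF
>      echo 2 > /sys/class/net/enp3s0f0/device/sriov_numvfs
>      # (unbind the new VFs from mlx5_core before the mode change;
>      #  their PCI addresses depend on the host)
>      # switch the PF e-switch to switchdev mode
>      devlink dev eswitch set pci/0000:03:00.0 mode switchdev
>      # enable TC hardware offload on the PF
>      ethtool -K enp3s0f0 hw-tc-offload on
>      # enable hardware offload in OVS and restart it
>      ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
>      systemctl restart openvswitch-switch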
>
> To provide HA for the VFs, I decided I may as well use LACP, since it's supported for ASAP2 according to Mellanox:
>
> https://www.mellanox.com/related-docs/prod_software/ASAP2_Hardware_Offloading_for_vSwitches_User_Manual_v4.4.pdf
> (page 15)
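>
> The bond itself is just a plain 802.3ad (LACP) bond; on Ubuntu 18.04
> with netplan that looks roughly like this (interface names and the
> address are placeholders):
>
>      # /etc/netplan/01-bond0.yaml
>      network:
>        version: 2
>        ethernets:
>          enp3s0f0: {}
>          enp3s0f1: {}
>        bonds:
>          bond0:
>            interfaces: [enp3s0f0, enp3s0f1]
>            parameters:
>              mode: 802.3ad
>              lacp-rate: fast
>              mii-monitor-interval: 100
>            addresses: [192.168.100.11/24]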
>
> So what I've done is create a systemd script that puts the e-switch into switchdev mode on both ports before networking starts at boot time; bond0 comes up after the mode has been changed, just like in the docs.
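>
> Roughly what that unit and script look like (unit name, script path and
> PCI addresses are placeholders):
>
>      # /etc/systemd/system/switchdev.service
>      [Unit]
>      Description=Set ConnectX-5 PFs to switchdev mode before networking
>      DefaultDependencies=no
>      Before=network-pre.target
>      Wants=network-pre.target
>
>      [Service]
>      Type=oneshot
>      ExecStart=/usr/local/sbin/set-switchdev.sh
>
>      [Install]
>      WantedBy=multi-user.target
>
> with the script simply looping over both ports of the card:
>
>      #!/bin/sh
>      # both PFs of the ConnectX-5 (placeholder PCI addresses)
>      for pf in 0000:03:00.0 0000:03:00.1; do
>          devlink dev eswitch set pci/$pf mode switchdev
>      done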
>
> As the doc recommends, the bond0 interface wasn't added to OVS; it only carries the VXLAN tunnel. Only the VF representor is in OVS after OpenStack creates the VM.
>
> The problem is that only one direction of the traffic is offloaded when the LAG is in use.
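>
> To see which flows actually land in the hardware, something like the
> following can be used (the representor name is just a placeholder):
>
>      # flows pushed down to the NIC
>      ovs-appctl dpctl/dump-flows type=offloaded
>      # flows still handled by the kernel datapath in software
>      ovs-appctl dpctl/dump-flows type=ovs
>      # per-representor view, look for in_hw / not_in_hw
>      tc -s filter show dev enp3s0f0_0 ingress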
>
> I opened a Mellanox case and they recommended installing the latest OVS version, which I did:
>
> https://community.mellanox.com/s/question/0D51T00006kXkRzSAK/connectx5-asap2-vxlan-offload-bond-openstack-problem
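>
> "Latest OVS" here means a build from the upstream master branch, done
> with the usual autotools steps and default install paths:
>
>      git clone https://github.com/openvswitch/ovs.git
>      cd ovs
>      ./boot.sh
>      ./configure
>      make
>      make install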
>
> After using the latest OVS from master the problem still exists: the offload simply doesn't work properly. The speed I am getting is far below what I get when only a single port is used.
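>
> For anyone comparing setups, the running OVS version and the hw-offload
> switch can be double-checked with:
>
>      ovs-vsctl --version
>      ovs-vsctl get Open_vSwitch . other_config:hw-offload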
>
> Does anyone have any experience with this, or any idea what I should look out for or check?
>
>
> Thank you very much, anything is appreciated!
>
> Zoltan
>