On Thu, 2021-06-24 at 08:07 +0200, Laszlo Angyal wrote:
Hi,
we share the same interface between OVS tunnels and VLAN-based provider networks like this:

bondA - management / Ceph frontend traffic (not interesting for now)
bondB - plugged into br-ex, no IP, provider VLANs
br-ex - we configured the IP here and use it in the VXLAN overlay configuration as local_ip
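(For concreteness, a minimal sketch of this layout with ovs-vsctl and ip; the bond names match the list above, while the address and the agent config values are only illustrative:)

    # external bridge; the provider bond is an OVS port and carries no IP itself
    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex bondB

    # the VXLAN endpoint address lives on the bridge device
    ip addr add 192.0.2.10/24 dev br-ex
    ip link set br-ex up

    # the neutron OVS agent then points at that address, e.g. in ml2_conf.ini:
    #   [ovs]
    #   local_ip = 192.0.2.10
    #   bridge_mappings = provider:br-ex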
Yep, this is a pretty standard and more or less optimal configuration for kernel OVS where you want to share one interface for both VLAN and VXLAN networks. If you have only one interface or bond available, you would create a macvlan or VLAN interface for management and Ceph and add the bond/interface to OVS directly.
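(A minimal sketch of that single-bond variant, assuming the bond is called bond0; the interface names, VLAN ID and address are placeholders:)

    # carve a management/Ceph interface out of the bond in the host...
    ip link add link bond0 name mgmt0 type macvlan mode bridge
    ip addr add 198.51.100.10/24 dev mgmt0
    ip link set mgmt0 up

    # ...or use a VLAN subinterface instead of macvlan
    ip link add link bond0 name bond0.100 type vlan id 100

    # ...and hand the bond itself to OVS
    ovs-vsctl add-port br-ex bond0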
Laci
On Wed, Jun 23, 2021 at 10:14 AM Krzysztof Klimonda <kklimonda@syntaxhighlighted.com> wrote:
Hi All,
What is the best practice for sharing the same interface between OVS tunnels and VLAN-based provider networks? For provider networks to work, I must "bind" the entire interface to vswitchd so that it can handle the VLAN bits, but that leaves me with the question of how to plug in the OVS tunnel interface (and the OS-internal interface used for control<->compute communication, if shared). I have two ideas:
1) I can bind the entire interface to ovs-vswitchd (in ip link output it's marked with "master ovs-system") and create VLAN interfaces on top of that interface *in the system*. This seems to work correctly in my lab tests. (Both ideas are sketched in commands after the second one below.)
2) I can create internal ports in vswitchd and plug them into the OVS bridge - this makes the interface show up in the system, and I can configure it afterwards. In this setup I'm concerned with how packets from VMs to other computes will flow through the system - will they leave Open vSwitch for the host system just to come back again to be sent through a tunnel?
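(Roughly, in commands - the interface name eno1, the VLAN ID and the addresses are placeholders:)

    # idea 1: the whole interface is an OVS port ("master ovs-system"),
    # with a VLAN interface for tunnels created on top of it in the host
    ovs-vsctl add-port br-ex eno1
    ip link add link eno1 name eno1.100 type vlan id 100
    ip addr add 203.0.113.10/24 dev eno1.100
    ip link set eno1.100 up

    # idea 2: an OVS internal port with an access tag instead; it shows up
    # as a regular netdev in the host and can carry the tunnel endpoint IP
    ovs-vsctl add-port br-ex tun0 tag=100 -- set interface tun0 type=internal
    ip addr add 203.0.113.10/24 dev tun0
    ip link set tun0 up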
I've tried looking for some documentation regarding that, but came up empty - are there some links I could look at to get a better understanding of packet flow and best practices?
Best Regards,
--
Krzysztof Klimonda
kklimonda@syntaxhighlighted.com