[neutron] OVS tunnels and VLAN provider networks on the same interface
Hi All,

What is the best practice for sharing the same interface between OVS tunnels and VLAN-based provider networks? For provider networks to work, I must "bind" the entire interface to vswitchd so that it can handle the VLAN bits, but this leaves me with the question of how to plug the OVS tunnel interface (and the OS-internal interface used for control<->compute communication, if shared). I have two ideas:

1) I can bind the entire interface to ovs-vswitchd (in ip link output it's marked with "master ovs-system") and create VLAN interfaces on top of that interface *in the system*. This seems to be working correctly in my lab tests.

2) I can create internal ports in vswitchd and plug them into an OVS bridge - this will make the interface show up in the system, and I can configure it afterwards. In this setup I'm concerned with how packets from VMs to other computes will flow through the system - will they leave Open vSwitch for the host system just to go back again to be sent through a tunnel?

I've tried looking for some documentation regarding that, but came up empty - are there some links I could look at to get a better understanding of packet flow and best practices?

Best Regards,
--
Krzysztof Klimonda
kklimonda@syntaxhighlighted.com
Hello Krzysztof:

If I understand correctly, what you need is to share a single interface to handle VLAN and tunneled traffic. IMO, you can replicate the same scenario as with OVS-DPDK: https://docs.openvswitch.org/en/latest/howto/userspace-tunneling/
- The VLAN traffic exits the host using the physical bridge that is connected to the external interface.
- The tunneled traffic is sent to br-tun. There the traffic is tagged and sent to the physical bridge and then through the physical interface.

Regards.
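For reference, the layout described above would look roughly like this with kernel OVS (a minimal sketch; the interface name eth2 and the addresses are placeholders, not taken from the thread):

    # create the physical bridge and attach the shared NIC
    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex eth2

    # move the host/tunnel endpoint IP from the NIC to the bridge local port
    ip addr flush dev eth2
    ip addr add 192.0.2.10/24 dev br-ex
    ip link set br-ex up

The Neutron OVS agent can then use that address as local_ip for the tunnels and br-ex as the bridge for the VLAN physnet via bridge_mappings.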
On Wed, 2021-06-23 at 10:10 +0200, Krzysztof Klimonda wrote:
Hi All,
What is the best practice for sharing the same interface between OVS tunnels and VLAN-based provider networks? For provider networks to work, I must "bind" the entire interface to vswitchd so that it can handle the VLAN bits, but this leaves me with the question of how to plug the OVS tunnel interface (and the OS-internal interface used for control<->compute communication, if shared). I have two ideas:
You assign the OVS tunnel endpoint IP to the bridge that holds the physical interface. This is standard practice when using OVS-DPDK, for example, as otherwise the tunnel traffic will not be DPDK-accelerated; I suspect the same requirement exists for hardware-offloaded OVS.

The bridge local port, e.g. br-ex, is an interface of type internal. OVS uses a cache of the host routing table to determine which interface to send the (VXLAN, GRE, Geneve) encapsulated packet to, based on the next-hop interface in the routing table. If you assign the tunnel local endpoint IP to an OVS bridge, that enables an internal optimisation using a special out_port action that enqueues the encapsulated packet on the bridge port's receive queue; simple MAC learning then forwards the packet via the physical interface.

That is the OpenFlow view. At the datapath level, with ovs-dpctl (or ovs-appctl for DPDK) you will see that the actual datapath flow just encapsulates the packet and transmits it via the physical interface, although for this to happen there must be a path between br-tun and br-ex via br-int, interconnected via patch ports.

Creating a patch port pair between br-ex and br-int, and another pair between br-tun and br-int, can be done automatically by the L2 agent with the correct configuration; that allows OVS to collapse the bridges into a single datapath instance and execute this optimisation.

This has been implemented in the networking-ovs-dpdk devstack plugin and was later ported to Fuel and TripleO, so depending on your installer it may already support this optimisation, but it is perfectly valid for kernel OVS as well.
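As an illustration of the patch-port wiring described above (the L2 agent normally creates these automatically when bridge_mappings is configured, so the commands below are only a sketch and the int-br-ex/phy-br-ex names simply follow the common convention):

    # patch pair between br-int and the physical bridge br-ex
    ovs-vsctl add-port br-int int-br-ex -- set interface int-br-ex type=patch options:peer=phy-br-ex
    ovs-vsctl add-port br-ex phy-br-ex -- set interface phy-br-ex type=patch options:peer=int-br-ex

br-int and br-tun are connected by a similar patch pair, which is what lets OVS collapse the bridges into a single datapath instance.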
1) I can bind the entire interface to ovs-vswitchd (in ip link output it's marked with "master ovs-system") and create VLAN interfaces on top of that interface *in the system*. This seems to be working correctly in my lab tests.
That is inefficient, since it requires the packet to be processed by OVS, then sent to the kernel networking stack, to finally be sent out via the VLAN interface.
2) I can create internal ports in vswitchd and plug them into an OVS bridge - this will make the interface show up in the system, and I can configure it afterwards. In this setup I'm concerned with how packets from VMs to other computes will flow through the system - will they leave Open vSwitch for the host system just to go back again to be sent through a tunnel?
This would also work, similar to what I suggested above, but it's simpler to just use the bridge local port instead. The packets should not leave OVS and re-enter in this case, and you can verify that by looking at the datapath flows.
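To check whether the tunneled traffic really stays inside OVS, one way (a sketch; exact output depends on the OVS version and the datapath in use) is to inspect the bridge wiring and the datapath flows:

    # show the bridges, patch ports and the physical interface
    ovs-vsctl show

    # dump the datapath flows; for VM-to-VM traffic over VXLAN you want to see flows
    # whose actions encapsulate the packet and output it directly to the physical
    # interface, rather than sending it up to the host networking stack
    ovs-dpctl dump-flows
    # or, equivalently, via ovs-vswitchd (this also covers the userspace/DPDK datapath)
    ovs-appctl dpctl/dump-flows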
Thanks,

Does this assume that the OVS tunnel traffic is untagged, and there are no other tagged VLANs that we want to direct to the host instead of OVS? What if I want OVS to handle only a subset of VLANs and have others directed to the host?

That would probably work with my second option (modulo possible loss of connectivity if vswitchd goes down?) but I'm not sure how to do that with OVS bridges - with a normal bridge, I can make it VLAN-aware, but I'm not sure how this would work with OVS.

Best Regards,
Krzysztof
On Wed, 2021-06-23 at 15:50 +0200, Krzysztof Klimonda wrote:
Thanks,
Does this assume that the OVS tunnel traffic is untagged, and there are no other tagged VLANs that we want to direct to the host instead of OVS?
You can do the tagging with OpenFlow rules or by tagging the interface in OVS.
The L2 agent does not manage flows on br-ex or your physical bridge, so you as an operator are allowed to tag them.
What if I want OVS to handle only a subset of VLANs and have others directed to the host?
You can do that with a VLAN subport on the OVS port, but you should ensure that it is outside of the VLAN range configured for that physnet in the ML2 driver config.
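For example (a sketch; VLAN 100, the mgmt0 name and the address are placeholders, and 100 is assumed to be outside the ml2 network_vlan_ranges for the physnet), a tagged internal port on the physical bridge gives the host its own interface on that VLAN:

    # create an OVS internal port tagged with VLAN 100 for host traffic
    ovs-vsctl add-port br-ex mgmt0 tag=100 -- set interface mgmt0 type=internal
    ip addr add 198.51.100.10/24 dev mgmt0
    ip link set mgmt0 up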
That would probably work with my second option (modulo possible loss of connectivity if vswitchd goes down?) but I'm not sure how to do that with OVS bridges - with a normal bridge, I can make it VLAN-aware, but I'm not sure how this would work with OVS.
Hi,

On Wed, Jun 23, 2021, at 16:04, Sean Mooney wrote:
Does this assume that the OVS tunnel traffic is untagged, and there are no other tagged VLANs that we want to direct to the host instead of OVS?
You can do the tagging with OpenFlow rules or by tagging the interface in OVS.
In this case, I'd no longer set an IP on the bridge, but instead create and tag internal interfaces in vswitchd (basically my second scenario), or can the bridge be somehow tagged from the OVS side?
The L2 agent does not manage flows on br-ex or your physical bridge, so you as an operator are allowed to tag them.
What if I want OVS to handle only a subset of VLANs and have others directed to the host?
You can do that with a VLAN subport on the OVS port, but you should ensure that it is outside of the VLAN range configured for that physnet in the ML2 driver config.
Right, that's something I have control over, so it shouldn't be a problem. Thanks.
-- Krzysztof Klimonda kklimonda@syntaxhighlighted.com
On Wed, 2021-06-23 at 16:54 +0200, Krzysztof Klimonda wrote:
In this case, I'd no longer set an IP on the bridge, but instead create and tag internal interfaces in vswitchd (basically my second scenario), or can the bridge be somehow tagged from the OVS side?
I would still assign the IP to the bridge, and yes, you can tag on the OVS side, although I would not.
I route all my tenant traffic over a VLAN sub-interface created on a Linux bond and add it as the only interface to my OVS. This means I can't really use VLAN networks for my guests, as they would be double-tagged, but VXLAN is confined in my case to VLAN 4 by the VLAN sub-interface. If I was not using a kernel bond I could also VLAN-tag inside OVS, but since I want the bond to be on the host I can't use a macvlan or ipvlan, since that will not work for ARP reasons: all responses for the cloud will go to the bond, since the macvlan MAC is different from the VM/router MAC. You can just add the port or bond to OVS and then create a macvlan or VLAN interface for the host if you want to. That works, but for ARP to work for your VMs, as I said, the bond has to be attached to OVS directly and the subport used for host networking.
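To make the two approaches above concrete (a sketch; br-ex, bond0, VLAN 4 and the address are placeholders loosely based on the description above, not an exact copy of that setup):

    # tag the bridge local port from the OVS side, so traffic using the IP on br-ex
    # (e.g. the tunnel endpoint) leaves on that VLAN
    ovs-vsctl set port br-ex tag=4

    # or: keep the bond attached to OVS and create a kernel VLAN sub-interface on it
    # for host networking, leaving the tunnel endpoint IP on br-ex
    ip link add link bond0 name bond0.4 type vlan id 4
    ip addr add 203.0.113.10/24 dev bond0.4
    ip link set bond0.4 up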
Krzysztof;

You've gotten a number of very good answers to your question, but I think we have a similar network to yours. Our network is heavily VLANed, and we wanted tenant networks to be VXLAN-tunneled (over a VLAN). Most of our OpenStack hosts need access to several VLANs. Here's how we did it:

We started out by not assigning an IP address to the physical port. We defined VLAN ports in the OS for the VLANs that the host needs (OpenStack management & service, and Ceph public, plus the tunneling VLAN), and assigned them IP addresses.

Then, in /etc/neutron/plugins/ml2_config.ini:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
extension_drivers = port_security

[ml2_type_vxlan]
vni_ranges = 1:1000

[ml2_type_vlan]
network_vlan_ranges = provider_core:<VLAN#>:<VLAN#>{, provider_core:<VLAN#>:<VLAN#>}

And, in /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[agent]
tunnel_types = vxlan

[ovs]
local_ip = <tunnel network IP>
bridge_mappings = provider_core:<physical network name>

I don't know if this works better for you than previous answers, but it's what we decided to do.

Thank you,

Dominic L. Hilsbos, MBA
Vice President - Information Technology
Perform Air International Inc.
DHilsbos@PerformAir.com
www.PerformAir.com
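With a configuration like the one above, a VLAN provider network on the provider_core physnet would then be created along these lines (a sketch; the network name and segment ID are placeholders):

    openstack network create provider-vlan-123 \
      --provider-network-type vlan \
      --provider-physical-network provider_core \
      --provider-segment 123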
Hi,

we share the same interface between OVS tunnels and VLAN-based provider networks like this:
- bondA - management / Ceph frontend traffic (not interesting for now)
- bondB - plugged into br-ex, no IP, provider VLANs
- br-ex - we configured the IP here and use it in the VXLAN overlay configuration as local_ip

Laci
On Thu, 2021-06-24 at 08:07 +0200, Laszlo Angyal wrote:
Hi,
we share the same interface between OVS tunnels and VLAN-based provider networks like this:
- bondA - management / Ceph frontend traffic (not interesting for now)
- bondB - plugged into br-ex, no IP, provider VLANs
- br-ex - we configured the IP here and use it in the VXLAN overlay configuration as local_ip
Yep, this is a pretty standard and more or less optimal configuration for kernel OVS where you want to share one interface for both VLAN and VXLAN networks. If you have only one interface or bond available, you would create a macvlan or VLAN interface for management and Ceph and add the bond/interface to OVS directly.
Participants (5)
- DHilsbos@performair.com
- Krzysztof Klimonda
- Laszlo Angyal
- Rodolfo Alonso Hernandez
- Sean Mooney