[neutron] OVS tunnels and VLAN provider networks on the same interface

DHilsbos at performair.com
Wed Jun 23 16:25:11 UTC 2021


Krzysztof;

You've gotten a number of very good answers to your question, but I think we have a similar network to yours.

Our network is heavily VLANed, and we wanted tenant networks to be VxLAN tunneled (over a VLAN).  Most of our OpenStack hosts need access to several VLANs.

Here's how we did it:
We started out by not assigning an IP address to the physical port.
We defined VLAN ports in the OS for the VLANs the host needs (OpenStack management and service, Ceph public, plus the tunneling VLAN) and assigned them IP addresses.
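
For reference, a minimal sketch of that host-side layout; all interface
names, bridge names, VLAN IDs, and addresses below are placeholders, and
it assumes the host VLAN ports are created as tagged OVS internal ports
on the provider bridge (Linux VLAN sub-interfaces on the physical port
are the alternative discussed further down in this thread):

# Physical NIC carries no IP address and is attached to the provider bridge.
ovs-vsctl add-br br-provider
ovs-vsctl add-port br-provider eno1
# Host-facing VLANs (management, Ceph public, tunnel, ...) as tagged internal ports.
ovs-vsctl add-port br-provider mgmt0 tag=101 -- set Interface mgmt0 type=internal
ovs-vsctl add-port br-provider tun0 tag=102 -- set Interface tun0 type=internal
ip addr add 10.0.101.11/24 dev mgmt0
ip addr add 10.0.102.11/24 dev tun0
ip link set mgmt0 up
ip link set tun0 up

The address on the tunneling VLAN port (tun0 here) is the one that later
shows up as local_ip in the OVS agent configuration below.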

Then, in /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
extension_drivers = port_security

[ml2_type_vxlan]
vni_ranges = 1:1000

[ml2_type_vlan]
network_vlan_ranges = provider_core:<VLAN#>:<VLAN#>{, provider_core:<VLAN#>:<VLAN#>}
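
For example, if Neutron were allowed to use VLANs 200 and 300 through 310
on that physical network (hypothetical IDs), that line would read:

network_vlan_ranges = provider_core:200:200,provider_core:300:310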

And, in /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
tunnel_types = vxlan

[ovs]
local_ip = <tunnel network IP>
bridge_mappings = provider_core:<provider bridge name>
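
With the hypothetical names and addresses from the sketch earlier in this
mail, those two lines would read, for example:

local_ip = 10.0.102.11
bridge_mappings = provider_core:br-provider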

I don't know if this works better for you than previous answers, but it's what we decided to do.

Thank you,

Dominic L. Hilsbos, MBA 
Vice President - Information Technology 
Perform Air International Inc.
DHilsbos at PerformAir.com 
www.PerformAir.com


-----Original Message-----
From: Krzysztof Klimonda [mailto:kklimonda at syntaxhighlighted.com] 
Sent: Wednesday, June 23, 2021 6:50 AM
To: openstack-discuss at lists.openstack.org
Subject: Re: [neutron] OVS tunnels and VLAN provider networks on the same interface

Thanks,

Does this assume that the OVS tunnel traffic is untagged, and that there are no other tagged VLANs that we want to direct to the host instead of OVS?

What if I want OVS to handle only a subset of the VLANs and have the others directed to the host? That would probably work with my second option (modulo a possible loss of connectivity if vswitchd goes down?), but I'm not sure how to do that with OVS bridges - with a normal Linux bridge I can make it VLAN-aware, but I don't know how that would work with OVS.

Best Regards,
Krzysztof 

On Wed, Jun 23, 2021, at 12:45, Sean Mooney wrote:
> On Wed, 2021-06-23 at 10:10 +0200, Krzysztof Klimonda wrote:
> > Hi All,
> > 
> > What is the best practice for sharing the same interface between OVS
> > tunnels and VLAN-based provider networks? For provider networks to
> > work, I must "bind" the entire interface to vswitchd so that it can
> > handle the VLAN bits, but this leaves me with the question of how to
> > plug the OVS tunnel interface (and the OS-internal interface used for
> > control<->compute communication, if shared). I have two ideas:
> 
> You assign the OVS tunnel endpoint IP to the bridge that contains the
> physical interfaces. This is standard practice when using OVS-DPDK, for
> example, as otherwise the tunnel traffic will not be DPDK accelerated. I
> suspect the same requirement exists for hardware-offloaded OVS.
> 
> The bridge local port, e.g. br-ex, is an interface of type internal.
> OVS uses a cache of the host routing table to determine which interface
> to send the (VXLAN, GRE, Geneve) encapsulated packet to, based on the
> next-hop interface in the routing table. If you assign the tunnel
> local endpoint IP to an OVS bridge, it enables an internal optimisation
> that uses a special output action to enqueue the encapsulated packet
> on the bridge port's receive queue; simple MAC learning then enables it
> to forward the packet via the physical interface.
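
(As a concrete illustration of that last point - the bridge name and
address here are hypothetical:

ip addr add 10.0.50.11/24 dev br-ex
ip link set br-ex up

The host routing table then resolves the tunnel peers via br-ex, which is
what lets OVS keep the encapsulated packets inside the datapath as
described below.)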
> 
> That is the OpenFlow view. At the datapath level, with ovs-dpctl (or
> ovs-appctl for DPDK), you will see that the actual datapath flow will
> just encapsulate the packet and transmit it via the physical interface,
> although for this to happen there must be a path between br-tun and
> br-ex via br-int, interconnected via patch ports.
> 
> Creating a patch port pair between br-ex and br-int, and another pair
> between br-tun and br-int, can be done automatically by the L2 agent
> with the correct configuration, and that allows OVS to collapse the
> bridges into a single datapath instance and apply this optimisation.
>
> This has been implemented in the networking-ovs-dpdk devstack plugin and
> later ported to Fuel and TripleO, so depending on your installer it may
> already support this optimisation, but it is perfectly valid for kernel
> OVS as well.
> 
> 
> > 
> > 1) I can bind the entire interface to ovs-vswitchd (in ip link output
> > it's marked with "master ovs-system") and create VLAN interfaces on
> > top of that interface *in the system*. This seems to be working
> > correctly in my lab tests.
> That is inefficient, since it requires the packet to be processed by
> OVS, then sent to the kernel networking stack, to finally be sent via
> the VLAN interface.
> > 
> > 2) I can create internal ports in vswitchd and plug them into the OVS
> > bridge - this will make the interface show up in the system, and I can
> > configure it afterwards. In this setup I'm concerned with how packets
> > from VMs to other computes will flow through the system - will they
> > leave Open vSwitch for the host system just to go back again to be
> > sent through a tunnel?
> This would also work, similar to what I suggested above, but it is
> simpler to just use the bridge local port instead. The packets should
> not leave OVS and re-enter in this case, and you can verify that by
> looking at the dataplane flows.
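
(For reference, a minimal way to look at those dataplane flows - these
are standard OVS commands and no particular output is assumed:

ovs-dpctl dump-flows
ovs-appctl dpctl/dump-flows

Traffic from VMs to other computes should show up as datapath flows that
add the tunnel encapsulation directly, rather than leaving OVS and
re-entering through a kernel interface.)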
> > 
> > I've tried looking for some documentation regarding that, but came up
> > empty - are there some links I could look at to get a better
> > understanding of packet flow and best practices?
> > 
> > Best Regards,
> > 
> 
> 
> 
> 


-- 
  Krzysztof Klimonda
  kklimonda at syntaxhighlighted.com




