[neutron] OVS tunnels and VLAN provider networks on the same interface

Sean Mooney smooney at redhat.com
Wed Jun 23 17:02:12 UTC 2021


On Wed, 2021-06-23 at 16:54 +0200, Krzysztof Klimonda wrote:
> Hi,
> 
> On Wed, Jun 23, 2021, at 16:04, Sean Mooney wrote:
> > On Wed, 2021-06-23 at 15:50 +0200, Krzysztof Klimonda wrote:
> > > Thanks,
> > > 
> > > Does this assume that the ovs tunnel traffic is untagged, and there
> > > are no other tagged vlans that we want to direct to the host instead
> > > of ovs?
> > you can do tagging with OpenFlow rules or by tagging the interface in
> > ovs.
> 
> In this case, I'd no longer set IP on the bridge, but instead create
> and tag internal interfaces in vswitchd (basically my second
> scenario), or can the bridge be somehow tagged from the ovs side?
I would still assign the IP to the bridge, and yes, you can tag on the
OVS side, although I would not.
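
For illustration, tagging on the OVS side looks roughly like this (bridge
name, VLAN ID and address are only placeholders):

    # tag the bridge-local (internal) port so traffic to/from the host IP
    # leaves the uplink tagged with VLAN 4
    ovs-vsctl set Port br-ex tag=4
    # the tunnel endpoint IP still lives on the bridge-local port
    ip addr add 192.0.2.10/24 dev br-ex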

I route all my tenant traffic over a VLAN sub-interface created on a
Linux bond and add it as the only interface to my OVS bridge.

This means I can't really use VLAN networks for my guests, as they would
be double tagged, but VXLAN traffic is confined in my case to VLAN 4 by
the VLAN sub-interface.
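
Concretely, that layout is roughly the following (interface names, VLAN
ID and addresses are just examples):

    # the VLAN 4 sub-interface on the kernel bond is the only uplink in OVS
    ip link add link bond0 name bond0.4 type vlan id 4
    ip link set bond0.4 up
    ovs-vsctl add-port br-ex bond0.4
    # the VXLAN local endpoint IP is assigned to the bridge-local port
    ip addr add 192.0.2.10/24 dev br-ex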

If I was not using a kernel bond I could also VLAN tag inside OVS, but
since I want the bond to be on the host I can't use a macvlan or ipvlan,
as that will not work for ARP reasons: all responses for the cloud would
go to the bond, because the macvlan MAC is different from the VM/router
MAC.

You can just add the port or bond to OVS and then create a macvlan or
VLAN sub-interface for the host if you want to. That works, but for ARP
to work for your VMs, as I said, the bond has to be attached to OVS
directly and the subport used for host networking.
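
A sketch of that variant, again with placeholder names:

    # the bond itself is the OVS uplink
    ovs-vsctl add-port br-ex bond0
    # a VLAN sub-interface of the bond (it keeps the bond's MAC, unlike a
    # macvlan) carries the host traffic
    ip link add link bond0 name bond0.10 type vlan id 10
    ip link set bond0.10 up
    ip addr add 198.51.100.10/24 dev bond0.10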


> > 
> > the l2 agent does not manage flows on br-ex or your physical bridge,
> > so you as an operator are allowed to tag them
> > > 
> > > What if I want ovs to handle only a subset of VLANs and have
> > > others directed to the host?
> > you can do that with a vlan subport on the ovs port, but you should
> > ensure that it's outside of the range in the ml2 driver config for the
> > available vlans on the physnet.
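
For example, if the host VLAN is 10, keep it out of the tenant range in
ml2_conf.ini (physnet name and range are only illustrative):

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:2000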
> 
> Right, that's something I have control over, so it shouldn't be a
> problem.
> 
> Thanks.
> 
> > > That would probably work with my second option (modulo possible
> > > loss of connectivity if vswitchd goes down?) but I'm not sure how to
> > > do that with ovs bridges - with a normal bridge, I can make it
> > > vlan-aware, but I'm not sure how this would work with ovs.
> > > 
> > > Best Regards,
> > > Krzysztof 
> > > 
> > > On Wed, Jun 23, 2021, at 12:45, Sean Mooney wrote:
> > > > On Wed, 2021-06-23 at 10:10 +0200, Krzysztof Klimonda wrote:
> > > > > Hi All,
> > > > > 
> > > > > What is the best practice for sharing the same interface between
> > > > > OVS tunnels and VLAN-based provider networks? For provider
> > > > > networks to work, I must "bind" the entire interface to vswitchd,
> > > > > so that it can handle the vlan bits, but this leaves me with a
> > > > > question of how to plug the ovs tunnel interface (and the os
> > > > > internal one used for control<->compute communication, if
> > > > > shared). I have two ideas:
> > > > 
> > > > You assign the ovs tunnel interface IP to the bridge with the
> > > > physical interfaces. This is standard practice when using ovs-dpdk,
> > > > for example, as otherwise the tunnel traffic will not be DPDK
> > > > accelerated. I suspect the same requirement exists for hardware
> > > > offloaded ovs.
> > > > 
> > > > The bridge local port, e.g. br-ex, is an interface of type internal.
> > > > OVS uses a cache of the host routing table to determine which
> > > > interface to send the (vxlan, gre, geneve) encapsulated packet to,
> > > > based on the next-hop interface in the routing table. If you assign
> > > > the tunnel local endpoint IP to an OVS bridge, it enables an internal
> > > > optimisation that uses a special out_port action that enqueues the
> > > > encapped packet on the bridge port's receive queue; simple MAC
> > > > learning then enables it to forward the packet via the physical
> > > > interface.
> > > > 
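(For reference, a minimal sketch of that wiring, with placeholder names
and addresses: the tunnel endpoint IP sits on the bridge-local port and
the agent is pointed at it.)

    # the tunnel local endpoint lives on the bridge-local (internal) port
    ip addr add 192.0.2.10/24 dev br-ex

and in openvswitch_agent.ini:

    [ovs]
    local_ip = 192.0.2.10
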
> > > > That is the OpenFlow view; at the dataplane level, with ovs-dpctl
> > > > (or ovs-appctl for DPDK) you will see that the actual datapath flow
> > > > will just encap the packet and transmit it via the physical
> > > > interface, although for this to happen there must be a path between
> > > > the br-tun and the br-ex via the br-int that is interconnected via
> > > > patch ports.
> > > > 
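(To check this, dump the datapath flows; the exact output of course
depends on the setup.)

    # kernel OVS datapath
    ovs-dpctl dump-flows
    # or via ovs-vswitchd (works for the DPDK datapath as well)
    ovs-appctl dpctl/dump-flows
    # a tunnelled flow should show the tunnel encapsulation followed by an
    # output directly to the physical port, with no detour through the
    # host networking stack
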
> > > > Creating a patch port pair between the br-ex and br-int, and
> > > > another pair between the br-tun and br-int, can be done
> > > > automatically by the l2 agent with the correct configuration, and
> > > > that allows OVS to collapse the bridges into a single datapath
> > > > instance and execute this optimisation.
> > > > 
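(Roughly the agent options involved, with example values; with a mapping
like this the agent creates the br-int/br-tun patch ports and, on recent
releases, also wires br-int to br-ex with patch ports.)

    # openvswitch_agent.ini
    [ovs]
    bridge_mappings = physnet1:br-ex
    local_ip = 192.0.2.10
    [agent]
    tunnel_types = vxlan
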
> > > > This has been implemented in the networking-ovs-dpdk devstack
> > > > plugin, and then we had it ported to Fuel and TripleO; depending on
> > > > your installer it may already support this optimisation, but it is
> > > > perfectly valid for kernel OVS also.
> > > > 
> > > > 
> > > > > 
> > > > > 1) I can bind the entire interface to ovs-vswitchd (in ip link
> > > > > output it's marked with "master ovs-system") and create vlan
> > > > > interfaces on top of that interface *in the system*. This seems
> > > > > to be working correctly in my lab tests.
> > > > That is inefficient, since it requires the packet to be processed
> > > > by OVS and then sent to the kernel networking stack to finally be
> > > > sent out via the vlan interface.
> > > > > 
> > > > > 2) I can create internal ports in vswitchd and plug them into
> > > > > the ovs bridge - this will make the interface show up in the
> > > > > system, and I can configure it afterwards. In this setup I'm
> > > > > concerned with how packets from VMs to other computes will flow
> > > > > through the system - will they leave openvswitch for the host
> > > > > system just to go back again to be sent through a tunnel?
> > > > This would also work, similar to what I suggested above, but it is
> > > > simpler to just use the bridge local port instead. The packets
> > > > should not leave OVS and re-enter in this case, and you can verify
> > > > that by looking at the dataplane flows.
> > > > > 
> > > > > I've tried looking for some documentation regarding that, but
> > > > > came up
> > > > > empty - are there some links I could look at to get a better
> > > > > understanding of packet flow and best practices?
> > > > > 
> > > > > Best Regards,
> > > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > 
> > > 
> > 
> > 
> > 
> > 
> 
> 




