[openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

Mooney, Sean K sean.k.mooney at intel.com
Wed Jun 15 18:44:52 UTC 2016



> -----Original Message-----
> From: Peters, Rawlin [mailto:rawlin.peters at hpe.com]
> Sent: Wednesday, June 15, 2016 7:02 PM
> To: Kevin Benton <kevin at benton.pub>
> Cc: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability
> for wiring trunk ports
> 
> On Tuesday, June 14, 2016 6:27 PM, Kevin Benton (kevin at benton.pub)
> wrote:
> > >which generates an arbitrary name
> >
> > I'm not a fan of this approach because it requires coordinated
> assumptions.
> > With the OVS hybrid plug strategy we have to make guesses on the agent
> > side about the presence of bridges with specific names that we never
> > explicitly requested and that we were never explicitly told about. So
> > we end up with code like [1] that is looking for a particular end of a
> > veth pair it just hopes is there so the rules have an effect.
[Mooney, Sean K] I really would like to avoid encoding knowledge to
generate the names the same way in both neutron and os-vif/nova, or having
any other special casing to figure out the bridge or interface names.

> 
> I don't think this should be viewed as a downside of Strategy 1 because,
> at least when we use patch port pairs, we can easily get the peer name
> from the port on br-int, then use the equivalent of
> "ovs-vsctl iface-to-br <peer name>" to get the name of the bridge. If we
> allow supporting veth pairs to
> implement the subports, then getting the arbitrary trunk bridge/veth
> names isn't as trivial.
> 
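For reference, when patch ports are used that lookup really is only two
ovs-vsctl calls. A minimal Python sketch (the br-int port name passed in is
hypothetical):

    import subprocess

    def trunk_bridge_for(patch_int_side):
        # options:peer on the br-int end of the patch pair names the
        # trunk-side interface
        peer = subprocess.check_output(
            ['ovs-vsctl', 'get', 'Interface', patch_int_side,
             'options:peer']).decode().strip().strip('"')
        # iface-to-br maps that interface back to the bridge that owns it
        return subprocess.check_output(
            ['ovs-vsctl', 'iface-to-br', peer]).decode().strip()
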
> This also brings up the question: do we even need to support veth pairs
> over patch port pairs anymore? Are there any distros out there that
> support openstack but not OVS patch ports?
[Mooney, Sean K] That is a separate discussion.
In general I'm in favor of deprecating support for the veth interconnect
with ovs and removing it in Ocata.
I believe it was originally added in Juno for CentOS and SUSE, as they did
not support OVS 2.0 or their kernel ovs module did not support patch ports.
As far as I am aware there is no major linux distribution that lacks patch
port support in ovs while also meeting the minimum python version of 2.7
required by OpenStack, so this functionality could safely be removed.

> 
> >
> > >it seems that the LinuxBridge implementation can simply use an L2
> > >agent extension for creating the vlan interfaces for the subports
> >
> > LinuxBridge implementation is the same regardless of the strategy for
> > OVS. The whole reason we have to come up with these alternative
> > approaches for OVS is because we can't use the obvious architecture of
> > letting it plug into the integration bridge due to VLANs already being
> > used for network isolation. I'm not sure pushing complexity out to
> > os-vif to deal with this is a great long-term strategy.
> 
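(For comparison, the LinuxBridge-side work such an L2 agent extension would
do amounts to one vlan subinterface per subport. A rough sketch, with all
device names hypothetical:

    import subprocess

    def plug_subport(trunk_dev, subport_bridge, vlan_id):
        sub = '%s.%d' % (trunk_dev, vlan_id)
        # create a vlan subinterface on the trunk device for this subport
        subprocess.check_call(['ip', 'link', 'add', 'link', trunk_dev,
                               'name', sub, 'type', 'vlan',
                               'id', str(vlan_id)])
        subprocess.check_call(['ip', 'link', 'set', sub, 'up'])
        # attach it to the linux bridge of the subport's network
        subprocess.check_call(['brctl', 'addif', subport_bridge, sub])
)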
> The complexity we'd be pushing out to os-vif is not much worse than the
> current complexity of the hybrid_ovs strategy already in place today.
[Mooney, Sean K] I don't think strategy 1 is the correct course
of action long-term with the trunk bridge approach. I honestly think that
the patch port creation should be the responsibility of the ovs agent alone.

I think the DRY principle applies in this respect also. The ovs agent will
be required to add or remove patch ports after the vm is booted if subports
are added/removed from the trunk port. I don't think it makes sense to
write the code to do that both in the ovs agent and separately in os-vif.
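A rough sketch of that agent-side wiring when a subport is added (bridge
and port names and the local vlan tag are hypothetical):

    import subprocess

    def wire_subport(trunk_br, vlan_tag):
        # one patch pair per subport: trunk bridge <-> br-int
        subprocess.check_call(
            ['ovs-vsctl', 'add-port', trunk_br, 'spt-1', '--',
             'set', 'Interface', 'spt-1', 'type=patch',
             'options:peer=spi-1'])
        # the br-int end carries the subport's local vlan tag
        subprocess.check_call(
            ['ovs-vsctl', 'add-port', 'br-int', 'spi-1',
             'tag=%d' % vlan_tag, '--',
             'set', 'Interface', 'spi-1', 'type=patch',
             'options:peer=spt-1'])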

Having os-vif simply create the bridge if it does not exist and
add the port to it is a much simpler solution in that respect, as you can
reuse the patch port code that is already in neutron and not duplicate it
in os-vif:
https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L368-L371
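
With that split, the os-vif side could stay this small. A minimal sketch
(names hypothetical; shelling out to ovs-vsctl here rather than reusing the
ovs_lib helper linked above):

    import subprocess

    def plug(trunk_bridge, vif_name):
        # create the trunk bridge only if it does not already exist
        subprocess.check_call(['ovs-vsctl', '--may-exist', 'add-br',
                               trunk_bridge])
        # attach the VM port; all patch-port wiring is left to the ovs agent
        subprocess.check_call(['ovs-vsctl', '--may-exist', 'add-port',
                               trunk_bridge, vif_name])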


> 
> >
> > >Also, we didn’t make the OVS agent monitor for new linux bridges in
> > >the hybrid_ovs strategy so that Neutron could be responsible for
> > >creating the veth pair.
> >
> > Linux Bridges are outside of the domain of OVS and even its agent. The
> > L2 agent doesn't actually do anything with the bridge itself, it just
> > needs a veth device it can put iptables rules on. That's in contrast
> > to these new OVS bridges that we will be managing rules for, creating
> > additional patch ports, etc.
> 
> I wouldn't say linux bridges are totally outside of its domain because
> it relies on them for security groups. Rather than relying on an
> arbitrary naming convention between Neutron and Nova, we could've
> implemented monitoring for new linux bridges to create veth pairs and
> firewall rules on. I'm glad we didn't, because that logic is specific to
> that particular firewall driver, similar to how this trunk bridge
> monitoring would be specific to only vlan-aware-vms. I think the logic
> lives best within an L2 agent extension, outside of the core of the OVS
> agent.
[Mooney, Sean K]
Is this assuming option A from https://review.openstack.org/#/c/318317/,
i.e. the vlan support approach? If so, that will not work for ovs with dpdk
or ovs on windows, which would mean implementing option A for linux bridge
and kernel ovs and option C for the dpdk and windows ovs datapaths.
Alternatively we can use option A for the linux bridge agent only and
option C for all versions of ovs; I think that is a much better approach.
One thing I think we have to accept is that we will need at least two
implementations, as you cannot use the same approach for every datapath.


> 
> >
> > >Why shouldn't we use the tools that are already available to us?
> >
> > Because we're trying to build a house and all we have are paint
> > brushes. :)
> 
> To me it seems like we already have a house that just needs a little
> paint :)
> 
> >
> >
> > 1.
> > https://github.com/openstack/neutron/blob/f78e5b4ec812cfcf5ab8b50fca62d1ae0dd7741d/neutron/agent/linux/iptables_firewall.py#L919-L923

