[openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]
cboylan at sapwetik.org
Fri Apr 10 17:58:48 UTC 2015
On Fri, Apr 10, 2015, at 10:46 AM, Assaf Muller wrote:
> ----- Original Message -----
> > On Fri, Apr 10, 2015, at 12:31 AM, Kevin Benton wrote:
> > > I mentioned this in my email in the previous thread, but I am also
> > > concerned about the use of the Linux bridge plugin as the default for
> > > devstack.
> > >
> > > It will reflect poorly on the Neutron project because we are
> > > defaulting to something that gets almost no development effort and
> > > that is not even gated on (other than unit tests). This is a risky
> > > move that can damage first-time users' opinions of the viability of
> > > OpenStack. I wouldn't feel confident about something that has
> > > defaults that could be broken at any time... even during a release.
> > >
> > > Can someone point me to the list of complaints about OVS? I would rather
> > > invest time in addressing those issues rather than ignoring everything a
> > > good chunk of the neutron community has spent significant time on.
> > As someone who just spent a large chunk of this week debugging OVS in
> > order to get multinode testing of Neutron with DVR running without
> > breaking the existing multinode testing of nova-network with multihost,
> > here is what I have learned.
> > OVS documentation is terrible. There is no cohesive documentation that
> > explains the operation and use of OVS to the user. There are dev docs,
> > there are man pages for a large number of ovs-* commands, there are
> > "cookbooks", and there are blog posts from luckier souls than me who
> > actually got OVS to work. What I would really like to see is a cohesive
> > set of docs that explains the operation of OVS (please do point me at
> > them if they exist; I just haven't found them).
> > There are a large number of commands; so far I have found ovs-appctl,
> > ovs-dpctl, ovs-ofctl, and ovs-vsctl. Unfortunately, as a user I am
> > unsure what specifically each command exists for. I know they are
> > OVS-related and they control things, but even reading the man pages I
> > am not quite sure what I would use each of them for. I think this is
> > largely due to the lack of cohesive documentation that explains things
> > like datapaths and how ovs-dpctl might be useful to control them.
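For reference, a rough division of labor among those four tools can be sketched as follows (the bridge name br-int is an assumption, and the commands need root on a host with OVS installed):

```shell
# Each tool talks to a different layer of OVS (br-int is an assumed bridge name):
ovs-vsctl show                # ovsdb-server: bridge/port/interface configuration
ovs-ofctl dump-flows br-int   # OpenFlow: the flow tables programmed on a bridge
ovs-appctl fdb/show br-int    # ovs-vswitchd runtime queries: here, the MAC learning table
ovs-dpctl dump-flows          # kernel datapath: the cached flows traffic actually hits
```

In short: ovs-vsctl edits configuration, ovs-ofctl works at the OpenFlow level, ovs-appctl talks to the running daemon, and ovs-dpctl inspects the kernel datapath.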
> > tcpdump does not work on OVS interfaces. I just about gave up at this
> > point. To get packet captures I have to set up a veth pair, add one end
> > to the OVS bridge, mirror the OVS port of interest onto this new port,
> > and tcpdump the other end. This is non-trivial, and given the lack of
> > docs I wouldn't expect users to just know how to do this. Also, this
> > isn't a physical switch in a datacenter. I shouldn't have to pretend I
> > am moving cables around and setting up a sniffer device. I am running
> > Linux; I should be able to just tcpdump a device.
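The veth-plus-mirror workaround described above looks roughly like this (the bridge name br-int and the monitored port tap0 are assumptions; the mirror syntax follows the ovs-vsctl man page):

```shell
# Create a veth pair; one end goes into the bridge, the other is for tcpdump
ip link add snooper0 type veth peer name snooper1
ip link set snooper0 up
ip link set snooper1 up
ovs-vsctl add-port br-int snooper0

# Mirror traffic to/from port tap0 onto snooper0, all in one transaction
ovs-vsctl -- --id=@src get port tap0 \
          -- --id=@out get port snooper0 \
          -- --id=@m create mirror name=snoop \
                select-src-port=@src select-dst-port=@src output-port=@out \
          -- set bridge br-int mirrors=@m

# Capture on the free end of the veth pair
tcpdump -n -i snooper1
```

Tearing it down means clearing the bridge's mirrors column (`ovs-vsctl clear bridge br-int mirrors`) and deleting the veth pair.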
> > ARP stops working. OVS appears to be intentionally dropping ARP
> > who-has requests. A switch without ARP is not useful. (The setup here
> > is an OVS bridge on one VM connected to an OVS bridge on another VM
> > via GRE; one bridge has IP address 172.24.4.1/23 and the other has
> > 172.24.4.2/23. Pinging 172.24.4.2 from 172.24.4.1 does not work
> > because ARP does not work.)
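That failing setup can be reproduced with something like the following on each VM (192.0.2.2 stands in for the peer VM's address; the bridge and port names are assumptions):

```shell
PEER_IP=192.0.2.2   # assumed address of the other VM

# On VM 1 (VM 2 is identical, with 172.24.4.2/23 and VM 1's address as the peer):
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex gre0 -- set interface gre0 type=gre options:remote_ip=$PEER_IP
ip addr add 172.24.4.1/23 dev br-ex
ip link set br-ex up

# This ping fails: the ARP who-has for 172.24.4.2 never gets a reply
ping -c 3 172.24.4.2
```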
> The only ARP-drop flow I can think of is the one DVR installs to drop
> ARP requests to the DVR router's interfaces on br-tun. This is because
> the local router is supposed to answer ARP requests to its interfaces.
> You shouldn't have any other flows that drop ARP messages. That being
> said, I just got devstack multinode DVR working; the configuration is
> detailed elsewhere. I did not run into any issues apart from a bug I
> reported.
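Assuming the in-tree OVS agent, the ARP-related flows described here can be inspected directly on a compute node; any rule matching arp with a drop action is a candidate (br-tun is the agent's default tunnel bridge):

```shell
# Dump only the ARP-matching flows on the tunnel bridge
ovs-ofctl dump-flows br-tun | grep -i arp

# Count how many of those flows drop the packet
ovs-ofctl dump-flows br-tun | grep -i arp | grep -ci drop
```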
The specific issue here is that we are testing upstream on cloud VMs that
do not share L2 networking, and we have no control over routing. This
means that for a controller + compute pair running tempest on the
controller, we must have a network overlay that gives us routing to the
floating IP range.
The existing solution for this with nova-network testing is a Linux
bridge with GRE interfaces. It works very well. We cannot seem to use
this setup with DVR (we tried adding the GRE port to the br-ex OVS
bridge, but Neutron wasn't happy with that for some reason). Next we
tried using OVS bridges with GRE tunnels between them, the goal being
that the existing nova-net test would continue to just work and Neutron
with DVR would be happy. Unfortunately, nova-net breaks with the ARP
issues above (note there is no br-tun or Neutron in this case).
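For comparison, the working nova-network overlay amounts to an ordinary Linux bridge with an L2 GRE (gretap) interface attached; a sketch, with 192.0.2.1/192.0.2.2 standing in for the node addresses and the device names being assumptions:

```shell
PEER_IP=192.0.2.2    # assumed address of the other test node
LOCAL_IP=192.0.2.1   # assumed local address

# Plain Linux bridge plus an L2 GRE tunnel enslaved to it
ip link add br_pub type bridge
ip link add gre_pub type gretap remote $PEER_IP local $LOCAL_IP
ip link set gre_pub master br_pub
ip link set gre_pub up
ip addr add 172.24.4.1/23 dev br_pub
ip link set br_pub up

# tcpdump and ARP behave normally on these devices
tcpdump -n -i br_pub arp
```

Note that gretap (not plain gre) is required here: gre is a point-to-point L3 tunnel, while gretap carries Ethernet frames and can therefore be added to a bridge.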
So my rant is specifically targeted at general OVS use and actually has
little to do with Neutron and DVR, other than that they are what got me
looking at using OVS to solve this overlay network problem.