[Openstack] Intermittent DHCP in neutron over GRE

Matt Davis mattd5574 at gmail.com
Fri Oct 17 18:43:42 UTC 2014

Hi all,

I'm having some trouble with a Neutron (icehouse on Ubuntu 14.04)
configuration and was hoping somebody could shed some light on it.  My
network node is seeing DHCP requests and trying to respond to them, but
they're getting lost in my openvswitch network.  This behavior is
inconsistent (some VMs get addresses and some don't), but it persists
across VM reboots (that is, if a VM gets an IP once, it appears to get one
every time it reboots and if it fails the first time, it will fail after
every reboot).

The setup is as follows:

External network:
Internal network:
GRE tunnels
Four compute nodes: [4,5,6,7]
Three control/network nodes (HA using Pacemaker): [0,2,3]
A MySQL cluster for the database (Percona XtraDB)

On my compute node, ovs-vsctl show gives:

    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qvoec5db497-3b"
            tag: 1
            Interface "qvoec5db497-3b"
        Port "em2"
            Interface "em2"
        Port "snoop0"
            Interface "snoop0"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "em1"
            Interface "em1"
        Port phy-br-ex
            Interface phy-br-ex
    Bridge br-tun
        Port "gre-c0a86369"
            Interface "gre-c0a86369"
                type: gre
                options: {in_key=flow, local_ip="",
out_key=flow, remote_ip=""}
        Port "gre-c0a86364"
            Interface "gre-c0a86364"
                type: gre
                options: {in_key=flow, local_ip="",
out_key=flow, remote_ip=""}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-c0a86366"
            Interface "gre-c0a86366"
                type: gre
                options: {in_key=flow, local_ip="",
out_key=flow, remote_ip=""}
        Port "gre-c0a86367"
            Interface "gre-c0a86367"
                type: gre
                options: {in_key=flow, local_ip="",
out_key=flow, remote_ip=""}
        Port "gre-c0a86368"
            Interface "gre-c0a86368"
                type: gre
                options: {in_key=flow, local_ip="",
out_key=flow, remote_ip=""}
        Port "gre-c0a8636a"
            Interface "gre-c0a8636a"
                type: gre
                options: {in_key=flow, local_ip="",
out_key=flow, remote_ip=""}
    ovs_version: "2.0.2"
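(Side note: the gre-* port names appear to just be the tunnel remote_ip
encoded as four hex octets, which makes it possible to match ports to
peers even with the addresses scrubbed above. A quick sh check, assuming
that naming convention holds:)

```shell
#!/bin/sh
# Decode an OVS GRE port name like "gre-c0a86364" into the remote_ip it
# (apparently) encodes: four hex octets after the "gre-" prefix.
gre_port_to_ip() {
    hex=${1#gre-}
    printf '%d.%d.%d.%d\n' \
        "0x$(printf %s "$hex" | cut -c1-2)" \
        "0x$(printf %s "$hex" | cut -c3-4)" \
        "0x$(printf %s "$hex" | cut -c5-6)" \
        "0x$(printf %s "$hex" | cut -c7-8)"
}

gre_port_to_ip gre-c0a86364   # -> 192.168.99.100
```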

With a VM running, I can tap into the various ports on the
compute node and make the following observations:

1)  Requests are coming out of qvoec5db497-3b.
2)  Replies are coming back over gre-c0a86364 (sensible, as the DHCP server
is on
3)  I don't see requests on gre-c0a86364, and I don't see replies on
qvoec5db497-3b.  Clearly the request is going out over the GRE tunnel,
though, because a reply is coming back.  I'm using the following script to
observe the ports, so there may be a problem with my observations here:

#!/bin/sh -x
# Mirror $port on $bridge onto a dummy interface $ifname for tcpdump.
# (The variable assignments here are my reconstruction; the originals
# were lost when the message was scrubbed.)
bridge=$1
port=$2
ifname=$3

ovs-vsctl clear Bridge $bridge mirrors
ovs-vsctl --if-exists del-port $ifname
ip link add name $ifname type dummy
ip link set dev $ifname up
ovs-vsctl add-port $bridge $ifname

ovs-vsctl -- set Bridge $bridge mirrors=@m  \
-- --id=@$ifname get Port $ifname  \
-- --id=@$port get Port $port  \
-- --id=@m create Mirror name=m-$ifname select-dst-port=@$port  \
select-src-port=@$port output-port=@$ifname
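For reference, this is roughly how I drive it and then watch the snoop
interface (mirror-port.sh is a filename I'm making up here for the script
above; the actual commands are commented out since they need the live box):

```shell
#!/bin/sh
# Example invocation: mirror the GRE port on br-tun to snoop0, then
# capture DHCP traffic on the snoop interface.
bridge=br-tun
port=gre-c0a86364
ifname=snoop0

# sh mirror-port.sh "$bridge" "$port" "$ifname"
# tcpdump -eni "$ifname" 'udp and (port 67 or port 68)'
echo "would mirror $port on $bridge to $ifname"
```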

On the compute node, iptables -S is:

root at compute3:/home/mdavis# iptables -S
-N neutron-filter-top
-N neutron-openvswi-FORWARD
-N neutron-openvswi-INPUT
-N neutron-openvswi-OUTPUT
-N neutron-openvswi-iec5db497-3
-N neutron-openvswi-local
-N neutron-openvswi-oec5db497-3
-N neutron-openvswi-sec5db497-3
-N neutron-openvswi-sg-chain
-N neutron-openvswi-sg-fallback
-A INPUT -j neutron-openvswi-INPUT
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -j neutron-filter-top
-A FORWARD -j neutron-openvswi-FORWARD
-A FORWARD -d -o virbr0 -m conntrack --ctstate
-A FORWARD -s -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -j neutron-filter-top
-A OUTPUT -j neutron-openvswi-OUTPUT
-A OUTPUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A neutron-filter-top -j neutron-openvswi-local
-A neutron-openvswi-FORWARD -m physdev --physdev-out tapec5db497-3b
--physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-FORWARD -m physdev --physdev-in tapec5db497-3b
--physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-INPUT -m physdev --physdev-in tapec5db497-3b
--physdev-is-bridged -j neutron-openvswi-oec5db497-3
-A neutron-openvswi-iec5db497-3 -m state --state INVALID -j DROP
-A neutron-openvswi-iec5db497-3 -m state --state RELATED,ESTABLISHED -j
-A neutron-openvswi-iec5db497-3 -p tcp -m tcp --dport 443 -j RETURN
-A neutron-openvswi-iec5db497-3 -p tcp -m tcp --dport 22 -j RETURN
-A neutron-openvswi-iec5db497-3 -p tcp -m tcp --dport 80 -j RETURN
-A neutron-openvswi-iec5db497-3 -p icmp -j RETURN
-A neutron-openvswi-iec5db497-3 -s -p udp -m udp --sport 67
--dport 68 -j RETURN
-A neutron-openvswi-iec5db497-3 -j neutron-openvswi-sg-fallback
-A neutron-openvswi-oec5db497-3 -p udp -m udp --sport 68 --dport 67 -j
-A neutron-openvswi-oec5db497-3 -j neutron-openvswi-sec5db497-3
-A neutron-openvswi-oec5db497-3 -p udp -m udp --sport 67 --dport 68 -j DROP
-A neutron-openvswi-oec5db497-3 -m state --state INVALID -j DROP
-A neutron-openvswi-oec5db497-3 -m state --state RELATED,ESTABLISHED -j
-A neutron-openvswi-oec5db497-3 -j RETURN
-A neutron-openvswi-oec5db497-3 -j neutron-openvswi-sg-fallback
-A neutron-openvswi-sec5db497-3 -s -m mac --mac-source
FA:16:3E:AD:45:3A -j RETURN
-A neutron-openvswi-sec5db497-3 -j DROP
-A neutron-openvswi-sg-chain -m physdev --physdev-out tapec5db497-3b
--physdev-is-bridged -j neutron-openvswi-iec5db497-3
-A neutron-openvswi-sg-chain -m physdev --physdev-in tapec5db497-3b
--physdev-is-bridged -j neutron-openvswi-oec5db497-3
-A neutron-openvswi-sg-chain -j ACCEPT
-A neutron-openvswi-sg-fallback -j DROP

Open vSwitch flows are:

root at compute3:/home/mdavis# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6721.745s, table=0, n_packets=29791, n_bytes=5249622,
idle_age=1, priority=2,in_port=4 actions=drop
 cookie=0x0, duration=6722.49s, table=0, n_packets=1620378120,
n_bytes=487782754336, idle_age=0, priority=1 actions=NORMAL
 cookie=0x0, duration=6722.445s, table=22, n_packets=0, n_bytes=0,
idle_age=6722, priority=0 actions=drop
root at compute3:/home/mdavis# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6722.853s, table=0, n_packets=41, n_bytes=11814,
idle_age=3754, priority=1,in_port=7 actions=resubmit(,2)
 cookie=0x0, duration=6722.997s, table=0, n_packets=0, n_bytes=0,
idle_age=6722, priority=1,in_port=3 actions=resubmit(,2)
 cookie=0x0, duration=6722.566s, table=0, n_packets=8488479,
n_bytes=2950926206, idle_age=3786, priority=1,in_port=6 actions=resubmit(,2)
 cookie=0x0, duration=6722.707s, table=0, n_packets=1399703,
n_bytes=336420637, idle_age=3717, priority=1,in_port=5 actions=resubmit(,2)
 cookie=0x0, duration=6723.95s, table=0, n_packets=186932608,
n_bytes=46508961076, idle_age=0, priority=1,in_port=1 actions=resubmit(,1)
 cookie=0x0, duration=6723.132s, table=0, n_packets=0, n_bytes=0,
idle_age=6723, priority=1,in_port=4 actions=resubmit(,2)
 cookie=0x0, duration=6723.273s, table=0, n_packets=197016168,
n_bytes=69098476729, idle_age=0, priority=1,in_port=2 actions=resubmit(,2)
 cookie=0x0, duration=6723.905s, table=0, n_packets=0, n_bytes=0,
idle_age=6723, priority=0 actions=drop
 cookie=0x0, duration=6723.861s, table=1, n_packets=2074, n_bytes=737325,
idle_age=390, priority=1,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00
 cookie=0x0, duration=6723.814s, table=1, n_packets=186930534,
n_bytes=46508223751, idle_age=0,
priority=1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=3901.865s, table=2, n_packets=189487661,
n_bytes=66331091024, idle_age=0, priority=1,tun_id=0x1
 cookie=0x0, duration=6723.766s, table=2, n_packets=17416730, n_bytes=
6054744362, idle_age=4100, priority=0 actions=drop
 cookie=0x0, duration=6723.724s, table=3, n_packets=0, n_bytes=0,
idle_age=6723, priority=0 actions=drop
 cookie=0x0, duration=6723.683s, table=10, n_packets=189487661,
n_bytes=66331091024, idle_age=0, priority=1
 cookie=0x0, duration=3897.748s, table=20, n_packets=0, n_bytes=0,
hard_timeout=300, idle_age=3897, hard_age=0,
 cookie=0x0, duration=6723.643s, table=20, n_packets=1, n_bytes=384,
idle_age=3897, priority=0 actions=resubmit(,21)
 cookie=0x0, duration=3901.911s, table=21, n_packets=166399667,
n_bytes=39541368007, idle_age=0, dl_vlan=1
 cookie=0x0, duration=6723.594s, table=21, n_packets=20530868,
n_bytes=6966856128, idle_age=3456, priority=0 actions=drop
root at compute3:/home/mdavis# ovs-ofctl dump-flows br-ex
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6730.347s, table=0, n_packets=180379513,
n_bytes=43740118754, idle_age=1, priority=2,in_port=3 actions=drop
 cookie=0x0, duration=6730.867s, table=0, n_packets=470149,
n_bytes=81654388, idle_age=0, priority=1 actions=NORMAL
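To figure out where a DHCP broadcast actually goes in these tables, I've
also been meaning to try ofproto/trace with a synthetic DHCP DISCOVER.
A sketch, where in_port=1 on br-tun is assumed to be patch-int, vlan 1 is
the tenant network's local vlan, and the source MAC is the VM's
fa:16:3e:ad:45:3a taken from the iptables rules above:

```shell
#!/bin/sh
# Trace a simulated DHCP DISCOVER through br-tun's flow tables.
# Assumptions: in_port=1 is patch-int, local vlan 1, MAC is the VM's.
flow='in_port=1,dl_vlan=1,dl_src=fa:16:3e:ad:45:3a,dl_dst=ff:ff:ff:ff:ff:ff,udp,nw_src=0.0.0.0,nw_dst=255.255.255.255,tp_src=68,tp_dst=67'

# Only runs where OVS is installed:
if command -v ovs-appctl >/dev/null 2>&1; then
    ovs-appctl ofproto/trace br-tun "$flow"
fi
```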

My ml2_conf.ini for the node is:

type_drivers = flat,gre

tenant_network_types = flat,gre

mechanism_drivers = openvswitch


network_vlan_ranges = phys_int:1000:1023,phys_ex:2000:2023

tunnel_id_ranges = 1:1000

firewall_driver =
enable_security_group = True
enable_ipset = True

bridge_mappings = external:br-ex
local_ip =
tunnel_type = gre
enable_tunneling = True

Any thoughts as to what I may be doing wrong?


-Matt Davis