[Openstack-operators] ovs->ml2 migration issues during icehouse upgrade
Jonathan Proulx
jon at jonproulx.com
Wed Jul 9 23:29:59 UTC 2014
I can't seem to find an understandable explanation of how to
translate my existing OVS config to ML2, so I suspect this is a simple
(to someone) config issue.
Following the upgrade steps at:
http://docs.openstack.org/openstack-ops/content/upgrades_havana-icehouse-ubuntu.html
all seems to go well until I try upgrading my compute node, at which
point running instances lose network and new instances can't get a
network at all.
My networking is mostly based on provider VLANs with some GRE-based
tenant overlays. Everything below relates to the main provider VLAN;
I haven't gotten to the GRE stuff yet.
The proximal cause seems to be a missing flow on br-eth1. On a
working Havana system I see:
# ovs-ofctl dump-flows br-eth1
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=185347.841s, table=0, n_packets=113939229,
n_bytes=12298686848, idle_age=0, hard_age=65534,
priority=4,in_port=9,dl_vlan=1 actions=mod_vlan_vid:2113,NORMAL
cookie=0x0, duration=185393.285s, table=0, n_packets=18,
n_bytes=3384, idle_age=65534, hard_age=65534, priority=2,in_port=9
actions=drop
cookie=0x0, duration=185394.258s, table=0, n_packets=277410868,
n_bytes=1295102884039, idle_age=0, hard_age=65534, priority=1
actions=NORMAL
but on the broken Icehouse system that first flow, the one that
translates the internal VLAN tag (1) to the external VLAN (2113), is
missing, so all traffic from my test nodes dies there (at phy-br-eth1,
actually) and never makes it off the hypervisor.
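As a stopgap while debugging, that flow can be re-added by hand on the
broken node. This assumes phy-br-eth1 is still port 9 there and that
the agent still picked internal VLAN 1, both of which need checking
first (qvoXXXXXXXX below is a placeholder for the test instance's port
on br-int):

# ovs-ofctl show br-eth1
    (confirm the ofport number of phy-br-eth1; it's 9 on the working node)
# ovs-vsctl get Port qvoXXXXXXXX tag
    (confirm which internal VLAN the agent assigned, 1 in the dump above)
# ovs-ofctl add-flow br-eth1 "priority=4,in_port=9,dl_vlan=1,actions=mod_vlan_vid:2113,NORMAL"

That should get traffic moving again, but it obviously isn't a fix:
the OVS agent is supposed to program that flow itself when it wires up
the port, and a hand-added flow won't survive an agent restart.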
compute node ml2_conf.ini:
# grep -v -e ^$ -e ^# /etc/neutron/plugins/ml2/ml2_conf.ini
[ovs]
network_vlan_ranges=trunk:2112:2114
local_ip=${my_public_ip}
enable_tunneling=True
integration_bridge=br-int
tunnel_id_ranges=1:1000
tunnel_bridge=br-tun
tenant_network_type=gre
bridge_mappings=trunk:br-eth1
[agent]
tunnel_types=gre
l2_population=true
polling_interval=30
veth_mtu=9134
[securitygroup]
enable_security_group=true
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ml2]
type_drivers=vlan,gre
tenant_network_types=gre
mechanisim_drivers=openvswitch
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges=trunk:2112:2114
[ml2_type_gre]
tunnel_id_ranges=1:1000
[ml2_type_vxlan]
This is mostly copied and pasted from the OVS plugin ini with a few
suggestions I've found online, so it is highly suspect. The config on
the controller/network node is identical modulo local_ip.
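In case it helps, here is roughly what I'm checking next (log paths
assume the stock Ubuntu packaging):

# neutron agent-list
    (is the Open vSwitch agent on the compute node reporting as alive?)
# grep -iE 'mapping|bridge|error' /var/log/neutron/openvswitch-agent.log | tail
    (on the compute node: did the agent pick up the bridge_mappings?)
# grep -iE 'mechanism|error' /var/log/neutron/server.log | tail
    (on the controller: did ML2 actually load the openvswitch mechanism driver?)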
Anyone see what I got wrong there?
-Jon