[Openstack] Open vSwitch not working as expected...?

Erich Weiler weiler at soe.ucsc.edu
Fri Apr 25 18:23:05 UTC 2014


Hi Y'all,

I recently began rebuilding my OpenStack installation under the latest 
Icehouse release.  Everything is almost working, but I'm having issues 
with Open vSwitch, at least on the compute nodes.

I'm using the ML2 plugin with VLAN tenant isolation.  I have this in my 
/etc/neutron/plugin.ini file:

----------
[ovs]
bridge_mappings = physnet1:br-eth1

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers  = openvswitch

# Example: mechanism_drivers = linuxbridge,brocade

[ml2_type_flat]

[ml2_type_vlan]
network_vlan_ranges = physnet1:200:209
----------
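
For what it's worth, here is roughly how I've been sanity-checking the 
OVS wiring on the compute node (bridge and interface names are from my 
config above; your layout may differ):

----------
# Is the OVS agent running, and which config file is it reading?
ps -ef | grep neutron-openvswitch-agent

# Is eth1 actually a port on br-eth1?
ovs-vsctl list-ports br-eth1

# Are br-int and br-eth1 connected to each other (patch or veth ports)?
ovs-vsctl show
----------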

My switchports that the nodes connect to are configured as trunks, 
allowing VLANs 200-209 to flow over them.
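
(For illustration, the switchport config is something like this 
Cisco-style snippet; the interface name here is made up:)

----------
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 200-209
----------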

My network that the VMs should be connecting to is:

# neutron net-show cbse-net
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 23028b15-fb12-4a9f-9fba-02f165a52d44 |
| name                      | cbse-net                             |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 200                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | dd25433a-b21d-475d-91e4-156b00f25047 |
| tenant_id                 | 7c1980078e044cb08250f628cbe73d29     |
+---------------------------+--------------------------------------+
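
For reference, I created that network with something like the following 
(reconstructed from the net-show output above rather than copied from 
my shell history):

----------
neutron net-create cbse-net \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 200
----------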

# neutron subnet-show dd25433a-b21d-475d-91e4-156b00f25047
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "10.200.0.2", "end": "10.200.255.254"} |
| cidr             | 10.200.0.0/16                                    |
| dns_nameservers  | 128.114.48.44                                    |
| enable_dhcp      | True                                             |
| gateway_ip       | 10.200.0.1                                       |
| host_routes      |                                                  |
| id               | dd25433a-b21d-475d-91e4-156b00f25047             |
| ip_version       | 4                                                |
| name             |                                                  |
| network_id       | 23028b15-fb12-4a9f-9fba-02f165a52d44             |
| tenant_id        | 7c1980078e044cb08250f628cbe73d29                 |
+------------------+--------------------------------------------------+
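
And the subnet, roughly (again reconstructed from the output above):

----------
neutron subnet-create cbse-net 10.200.0.0/16 \
    --gateway 10.200.0.1 \
    --dns-nameserver 128.114.48.44 \
    --allocation-pool start=10.200.0.2,end=10.200.255.254
----------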

So packets from VMs on that network should be tagged with VLAN 200 as 
they leave the compute node.

I launch an instance, then look at the compute node hosting it.  The 
instance doesn't get a DHCP address, so it can't reach the neutron node 
running the dnsmasq server.  I configure the VM's interface with a 
static IP on VLAN 200 (10.200.0.30, netmask 255.255.0.0).  I also have 
a real bare-metal server set up on VLAN 200 on my switch to test 
against (10.200.0.50).
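
(Inside the guest I just did the equivalent of the following; eth0 is 
the guest's NIC name:)

----------
ip addr add 10.200.0.30/16 dev eth0
ip link set eth0 up
----------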

I can't ping the bare-metal server.  I see the packets reach eth1 on 
the compute node, but they stop there.  Then I discover that the 
packets are *not being tagged* with VLAN 200 as they leave the compute 
node!!  So the switch is dropping them.  As a test I configure the 
switchport with "native vlan 200", and voila, the ping works.

So Open vSwitch doesn't realize it needs to tag the packets with VLAN 
200.  Some diagnostics on the compute node:

# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=966.803s, table=0, n_packets=0, n_bytes=0, idle_age=966, priority=0 actions=NORMAL

Shouldn't that show some VLAN tagging?
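
If I understand the OVS agent correctly, the rewrite from the local 
VLAN on br-int to the real segmentation ID (an actions=mod_vlan_vid 
flow) should live on the physical bridge, so that's probably worth 
dumping too:

----------
# The VLAN rewrite flows, if the agent had installed them, should be
# on the physical bridge:
ovs-ofctl dump-flows br-eth1

# The VM's qvo/tap port on br-int should also carry a local "tag":
ovs-vsctl show
----------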

And a tcpdump on eth1 on the compute node:

# tcpdump -e -n -vv -i eth1 | grep -i arp
tcpdump: WARNING: eth1: no IPv4 address assigned
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
11:21:50.462447 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
11:21:51.462968 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
11:21:52.462330 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
11:21:53.462311 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
11:21:54.463169 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28

That tcpdump also confirms the ARP packets are not being tagged with 
VLAN 200 as they leave the physical interface.
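
Next I'm going to grep the OVS agent log on the compute node for 
errors (assuming the usual log location on my install):

----------
grep -i error /var/log/neutron/openvswitch-agent.log
----------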

This worked before, when I was testing the Icehouse RC1; I don't know 
what changed with Open vSwitch...  Does anyone have any ideas?

Thanks as always!!  This list has been very helpful.

cheers,
erich




