[Openstack] Open vSwitch not working as expected...?

Erich Weiler weiler at soe.ucsc.edu
Fri Apr 25 19:22:39 UTC 2014


Sure!  From the network node:

# ovs-vsctl show
52702cef-6433-4627-ade8-51561b4e8126
     Bridge "br-eth2"
         Port "eth2"
             Interface "eth2"
         Port "br-eth2"
             Interface "br-eth2"
                 type: internal
         Port "phy-br-eth2"
             Interface "phy-br-eth2"
     Bridge br-int
         Port "qr-03f142ea-8d"
             Interface "qr-03f142ea-8d"
                 type: internal
         Port "tap2e4f052b-07"
             Interface "tap2e4f052b-07"
                 type: internal
         Port "qr-8fc9f5b9-9b"
             tag: 5
             Interface "qr-8fc9f5b9-9b"
                 type: internal
         Port br-int
             Interface br-int
                 type: internal
         Port int-br-ex
             Interface int-br-ex
         Port "qr-a102d5a4-10"
             tag: 4
             Interface "qr-a102d5a4-10"
                 type: internal
         Port "tap9c60db42-50"
             Interface "tap9c60db42-50"
                 type: internal
         Port "qr-fd62983b-9f"
             Interface "qr-fd62983b-9f"
                 type: internal
         Port "qr-2f4f2f9c-65"
             tag: 4095
             Interface "qr-2f4f2f9c-65"
                 type: internal
         Port "int-br-eth2"
             Interface "int-br-eth2"
     Bridge br-ex
         Port "qg-bd78c919-06"
             Interface "qg-bd78c919-06"
                 type: internal
         Port "qg-3911e599-0b"
             Interface "qg-3911e599-0b"
                 type: internal
         Port "qg-c3b05150-9e"
             Interface "qg-c3b05150-9e"
                 type: internal
         Port "eth1"
             Interface "eth1"
         Port br-ex
             Interface br-ex
                 type: internal
         Port phy-br-ex
             Interface phy-br-ex
     ovs_version: "1.11.0"

From the compute node:

# ovs-vsctl show
bf0df6fb-b602-4a58-ac81-342b7bb17464
     Bridge "br-eth1"
         Port "phy-br-eth1"
             Interface "phy-br-eth1"
         Port "br-eth1"
             Interface "br-eth1"
                 type: internal
         Port "eth1"
             Interface "eth1"
     Bridge br-int
         Port "qvo297feecc-84"
             Interface "qvo297feecc-84"
         Port br-int
             Interface br-int
                 type: internal
         Port "int-br-eth1"
             Interface "int-br-eth1"
     ovs_version: "1.11.0"
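
In case it helps with diagnosis: as far as I know, a quick way to check whether the agent has assigned a local VLAN tag to the instance's port is to query it directly, using the qvo name from the output above:

# ovs-vsctl get Port qvo297feecc-84 tag

On a working setup that should return a small integer; an empty value would mean the agent never tagged the port.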


On 04/25/14 12:15, Aaron Knister wrote:
> Can you send the output of ovs-vsctl show from both compute and network nodes?
>
> Sent from my iPhone
>
>> On Apr 25, 2014, at 2:23 PM, Erich Weiler <weiler at soe.ucsc.edu> wrote:
>>
>> Hi Y'all,
>>
>> I recently began rebuilding my OpenStack installation on the latest Icehouse release, and everything is almost working, but I'm having issues with Open vSwitch, at least on the compute nodes.
>>
>> I'm using the ML2 plugin with VLAN tenant isolation.  I have this in my /etc/neutron/plugin.ini file:
>>
>> ----------
>> [ovs]
>> bridge_mappings = physnet1:br-eth1
>>
>> [ml2]
>> type_drivers = vlan
>> tenant_network_types = vlan
>> mechanism_drivers  = openvswitch
>>
>> # Example: mechanism_drivers = linuxbridge,brocade
>>
>> [ml2_type_flat]
>>
>> [ml2_type_vlan]
>> network_vlan_ranges = physnet1:200:209
>> ----------
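>>
>> For reference, with that config a tenant network should land on physnet1 with a segmentation ID in the 200-209 range; an admin can also pin one explicitly, roughly like this (names purely illustrative):
>>
>> # neutron net-create cbse-net --provider:network_type vlan \
>>     --provider:physical_network physnet1 --provider:segmentation_id 200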
>>
>> The switchports that the nodes connect to are configured as trunks, allowing VLANs 200-209 to flow over them.
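>>
>> On a Cisco-IOS-style switch that looks roughly like this (illustrative only, exact syntax depends on the switch):
>>
>> interface GigabitEthernet0/1
>>   switchport mode trunk
>>   switchport trunk allowed vlan 200-209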
>>
>> My network that the VMs should be connecting to is:
>>
>> # neutron net-show cbse-net
>> +---------------------------+--------------------------------------+
>> | Field                     | Value                                |
>> +---------------------------+--------------------------------------+
>> | admin_state_up            | True                                 |
>> | id                        | 23028b15-fb12-4a9f-9fba-02f165a52d44 |
>> | name                      | cbse-net                             |
>> | provider:network_type     | vlan                                 |
>> | provider:physical_network | physnet1                             |
>> | provider:segmentation_id  | 200                                  |
>> | router:external           | False                                |
>> | shared                    | False                                |
>> | status                    | ACTIVE                               |
>> | subnets                   | dd25433a-b21d-475d-91e4-156b00f25047 |
>> | tenant_id                 | 7c1980078e044cb08250f628cbe73d29     |
>> +---------------------------+--------------------------------------+
>>
>> # neutron subnet-show dd25433a-b21d-475d-91e4-156b00f25047
>> +------------------+--------------------------------------------------+
>> | Field            | Value                                            |
>> +------------------+--------------------------------------------------+
>> | allocation_pools | {"start": "10.200.0.2", "end": "10.200.255.254"} |
>> | cidr             | 10.200.0.0/16                                    |
>> | dns_nameservers  | 128.114.48.44                                    |
>> | enable_dhcp      | True                                             |
>> | gateway_ip       | 10.200.0.1                                       |
>> | host_routes      |                                                  |
>> | id               | dd25433a-b21d-475d-91e4-156b00f25047             |
>> | ip_version       | 4                                                |
>> | name             |                                                  |
>> | network_id       | 23028b15-fb12-4a9f-9fba-02f165a52d44             |
>> | tenant_id        | 7c1980078e044cb08250f628cbe73d29                 |
>> +------------------+--------------------------------------------------+
>>
>> So VMs on that network should send packets tagged with VLAN 200.
>>
>> I launch an instance, then look at the compute node hosting it.  The instance doesn't get a DHCP address, so it can't reach the neutron node running the dnsmasq server.  I configure the VM's interface with a static IP on VLAN 200 (10.200.0.30, netmask 255.255.0.0).  I have another node set up on VLAN 200 on my switch to test against (10.200.0.50), which is a real bare-metal server.
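>>
>> The static config inside the VM was just the usual thing, something along the lines of:
>>
>> # ip addr add 10.200.0.30/16 dev eth0
>> # ip link set eth0 up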
>>
>> I can't ping the bare-metal server.  I see the packets getting to eth1 on my compute node, but they stop there.  Then I figure out that the packets are *not being tagged* for VLAN 200 as they leave the compute node!!  So the switch is dropping them.  As a test I configure the switchport with "native vlan 200", and voila, the ping works.
>>
>> So Open vSwitch isn't realizing that it needs to tag the packets for VLAN 200.  A bit of diagnostics on the compute node:
>>
>> # ovs-ofctl dump-flows br-int
>> NXST_FLOW reply (xid=0x4):
>> cookie=0x0, duration=966.803s, table=0, n_packets=0, n_bytes=0, idle_age=966, priority=0 actions=NORMAL
>>
>> Shouldn't that show some VLAN tagging?
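>>
>> My understanding is that when the agent has wired things up properly, the physical bridge carries translation flows roughly like the following (port numbers and the local VLAN ID here are just placeholders):
>>
>> # ovs-ofctl dump-flows br-eth1
>>  cookie=0x0, ... priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:200,NORMAL
>>  cookie=0x0, ... priority=2,in_port=2 actions=drop
>>  cookie=0x0, ... priority=1 actions=NORMAL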
>>
>> And here's a tcpdump on eth1 on the compute node:
>>
>> # tcpdump -e -n -vv -i eth1 | grep -i arp
>> tcpdump: WARNING: eth1: no IPv4 address assigned
>> tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
>> 11:21:50.462447 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
>> 11:21:51.462968 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
>> 11:21:52.462330 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
>> 11:21:53.462311 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
>> 11:21:54.463169 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
>>
>> That tcpdump also confirms the ARP packets are not being tagged with VLAN 200 as they leave the physical interface.
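>>
>> If it's useful, tcpdump can also filter on tagged frames directly, so on a working setup something like this should show those ARPs with their 802.1Q header:
>>
>> # tcpdump -e -n -i eth1 vlan 200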
>>
>> This worked before, when I was testing the Icehouse RC1; I don't know what changed with Open vSwitch...  Anyone have any ideas?
>>
>> Thanks as always for the help!!  This list has been very helpful.
>>
>> cheers,
>> erich
>>
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



