[Openstack] Rebooted, now can't ping my guest

The King in Yellow yellowking@gmail.com
Fri Mar 15 20:57:38 UTC 2013


Perhaps somebody could give me the contents of their quantum node's "ovs-ofctl
dump-flows br-tun" and I could figure out what mine *should* look like?
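In case anyone wants to compare tables without eyeballing the counters, here is a rough stdlib-only sketch (my own throwaway helper, not anything official) that strips the volatile fields out of "ovs-ofctl dump-flows" output so two nodes' flow tables can be diffed directly:

```python
# Sketch: normalize "ovs-ofctl dump-flows" output so flow tables from two
# nodes can be compared.  Drops the per-flow counters (cookie, duration,
# n_packets, n_bytes) that always differ between hosts.

VOLATILE = ("cookie=", "duration=", "n_packets=", "n_bytes=")

def normalize(dump):
    """Return a sorted list of flows with volatile fields removed."""
    flows = []
    for line in dump.splitlines():
        line = line.strip()
        if not line.startswith("cookie="):
            continue  # skip the "NXST_FLOW reply" header line
        kept = [f for f in line.split(", ") if not f.startswith(VOLATILE)]
        flows.append(", ".join(kept))
    return sorted(flows)

# Example with two flows pasted from my network node's br-tun:
sample = """NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=5331.934s, table=0, n_packets=1111, n_bytes=171598, priority=3,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=mod_vlan_vid:1,output:1
 cookie=0x0, duration=5332.499s, table=0, n_packets=3502, n_bytes=286312, priority=1 actions=drop
"""

for flow in normalize(sample):
    print(flow)
```

Running it over both nodes' dumps and diffing the two sorted lists shows exactly which match/action entries differ.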


On Tue, Mar 12, 2013 at 4:24 PM, The King in Yellow <yellowking@gmail.com> wrote:

> Okay, I have worked around my problem-- but I don't quite understand it,
> and hope somebody can help me.  It appears to be a problem with the Open
> vSwitch flows in br-tun on both compute and network.  Here are the flows as
> they are now, working.  I have manually added the priority=5 lines.
> Without those added manually on both sides, traffic from the guests doesn't
> work properly.
>
> root@os-network:~# ovs-ofctl dump-flows br-tun
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=5331.934s, table=0, n_packets=1111, n_bytes=171598,
> priority=3,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
> actions=mod_vlan_vid:1,output:1
>  cookie=0x0, duration=3.119s, table=0, n_packets=6, n_bytes=496,
> priority=5,dl_vlan=1 actions=NORMAL
>  cookie=0x0, duration=10.759s, table=0, n_packets=0, n_bytes=0,
> priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x1,NORMAL
>  cookie=0x0, duration=5331.725s, table=0, n_packets=0, n_bytes=0,
> priority=3,tun_id=0x1,dl_dst=fa:16:3e:36:2e:54 actions=mod_vlan_vid:1,NORMAL
>  cookie=0x0, duration=5331.898s, table=0, n_packets=0, n_bytes=0,
> priority=3,tun_id=0x1,dl_dst=fa:16:3e:e2:38:da actions=mod_vlan_vid:1,NORMAL
>  cookie=0x0, duration=5332.499s, table=0, n_packets=3502, n_bytes=286312,
> priority=1 actions=drop
> root@os-network:~#
>
> root@os-compute-01:~# ovs-ofctl dump-flows br-tun
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=22348.618s, table=0, n_packets=20165,
> n_bytes=991767,
> priority=3,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
> actions=mod_vlan_vid:1,output:1
>  cookie=0x0, duration=177.949s, table=0, n_packets=151, n_bytes=21830,
> priority=5,dl_vlan=1 actions=NORMAL
>  cookie=0x0, duration=411.826s, table=0, n_packets=80, n_bytes=9566,
> priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x1,NORMAL
>  cookie=0x0, duration=22348.567s, table=0, n_packets=1111, n_bytes=252718,
> priority=3,tun_id=0x1,dl_dst=fa:16:3e:ee:9e:b2 actions=mod_vlan_vid:1,NORMAL
>  cookie=0x0, duration=22348.128s, table=0, n_packets=1107, n_bytes=123234,
> priority=3,tun_id=0x1,dl_dst=fa:16:3e:8d:6d:13 actions=mod_vlan_vid:1,NORMAL
>  cookie=0x0, duration=22348.353s, table=0, n_packets=1494, n_bytes=124036,
> priority=3,tun_id=0x1,dl_dst=fa:16:3e:95:94:9c actions=mod_vlan_vid:1,NORMAL
>  cookie=0x0, duration=22347.912s, table=0, n_packets=3334, n_bytes=425776,
> priority=3,tun_id=0x1,dl_dst=fa:16:3e:7b:e3:ee actions=mod_vlan_vid:1,NORMAL
>  cookie=0x0, duration=22349.47s, table=0, n_packets=879, n_bytes=75279,
> priority=1 actions=drop
> root@os-compute-01:~#
>
> Here is a sample packet that would have been blocked, sniffed in GRE.  The
> yellow background (if you can see the color) is the GRE header.  Inside the
> GRE payload, the MAC in red (1272.590f.cf56) and the MAC in orange
> (fa16.3e95.949c) are exchanging ping packets.
>
> 0000   00 50 56 81 44 e7 00 50 56 81 25 73 08 00 45 00  .PV.D..PV.%s..E.
> 0010   00 82 64 93 40 00 40 2f ad a3 0a 0a 0a 02 0a 0a  ..d.@.@/........
> 0020   0a 01 20 00 65 58 00 00 00 00 12 72 59 0f cf 56  .. .eX.....rY..V
> 0030   fa 16 3e 95 94 9c 81 00 00 01 08 00 45 00 00 54  ..>.........E..T
> 0040   00 00 40 00 40 01 1d 00 0a 05 05 04 0a 2a 04 77  ..@.@........*.w
> 0050   08 00 00 9f 0e 17 00 08 17 87 3f 51 00 00 00 00  ..........?Q....
> 0060   d3 96 00 00 00 00 00 00 10 11 12 13 14 15 16 17  ................
> 0070   18 19 1a 1b 1c 1d 1e 1f 20 21 22 23 24 25 26 27  ........ !"#$%&'
> 0080   28 29 2a 2b 2c 2d 2e 2f 30 31 32 33 34 35 36 37  ()*+,-./01234567
>
> 0000   00 50 56 81 25 73 00 50 56 81 44 e7 08 00 45 00  .PV.%s.PV.D...E.
> 0010   00 82 95 e3 40 00 40 2f 7c 53 0a 0a 0a 01 0a 0a  ....@.@/|S......
> 0020   0a 02 20 00 65 58 00 00 00 00 fa 16 3e 95 94 9c  .. .eX......>...
> 0030   12 72 59 0f cf 56 81 00 00 01 08 00 45 00 00 54  .rY..V......E..T
> 0040   1e 24 00 00 3e 01 40 dc 0a 2a 04 77 0a 05 05 04  .$..>.@..*.w....
> 0050   00 00 08 9f 0e 17 00 08 17 87 3f 51 00 00 00 00  ..........?Q....
> 0060   d3 96 00 00 00 00 00 00 10 11 12 13 14 15 16 17  ................
> 0070   18 19 1a 1b 1c 1d 1e 1f 20 21 22 23 24 25 26 27  ........ !"#$%&'
> 0080   28 29 2a 2b 2c 2d 2e 2f 30 31 32 33 34 35 36 37  ()*+,-./01234567
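For anyone checking my reading of the dump, the first packet decodes like this (a quick Python sketch; offsets assume a plain 14-byte Ethernet header and a 20-byte IP header with no options, which matches the 0x45 version/IHL byte):

```python
# Sketch: decode the outer IP/GRE header and the inner Ethernet frame of
# the first packet in the hex dump above.
pkt = bytes.fromhex(
    "00 50 56 81 44 e7 00 50 56 81 25 73 08 00"        # outer Ethernet
    "45 00 00 82 64 93 40 00 40 2f ad a3"              # outer IP, proto 0x2f = GRE
    "0a 0a 0a 02 0a 0a 0a 01"                          # 10.10.10.2 -> 10.10.10.1
    "20 00 65 58 00 00 00 00"                          # GRE header
    "12 72 59 0f cf 56 fa 16 3e 95 94 9c 81 00 00 01"  # inner Ethernet + 802.1Q tag
)

gre = pkt[14 + 20:]
flags_ver = int.from_bytes(gre[0:2], "big")  # 0x2000: the K (key present) bit
proto     = int.from_bytes(gre[2:4], "big")  # 0x6558: Transparent Ethernet Bridging
key       = int.from_bytes(gre[4:8], "big")  # GRE key exactly as sniffed

inner = gre[8:]
dst  = ":".join("%02x" % b for b in inner[0:6])
src  = ":".join("%02x" % b for b in inner[6:12])
vlan = int.from_bytes(inner[14:16], "big") & 0x0FFF  # VID from the 802.1Q TCI

print("flags=%#06x proto=%#06x key=%#x" % (flags_ver, proto, key))
print("inner %s -> %s, vlan %d" % (src, dst, vlan))
```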
>
> The source MAC does match the MAC for his gateway, 10.5.5.1:
>
> root@os-network:~# ovs-ofctl show br-int
> OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:0000862cf391d546
> n_tables:255, n_buffers:256
> features: capabilities:0xc7, actions:0xfff
>  1(qr-9f9041ce-65): addr:12:72:59:0f:cf:56
>      config:     0
>      state:      0
> :
>
> ...which makes the problem understandable: that MAC address is not
> specifically entered in the OVS bridge br-tun.  Any clue why?  This
> persists across reboots, service restarts, etc...  I guess the
> /etc/openvswitch/conf.db is corrupted?
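For what it's worth, the workaround for the missing router MAC can also be written as a single per-MAC flow in the same shape as the agent's existing priority=3 entries (a sketch only; the tun_id and VLAN values are the ones from my dumps, substitute your own):

```shell
# Re-add a unicast flow for the router's MAC by hand, mirroring the
# agent's other priority=3 entries (tun_id/VLAN taken from the dumps above).
ovs-ofctl add-flow br-tun \
  'priority=3,tun_id=0x1,dl_dst=12:72:59:0f:cf:56 actions=mod_vlan_vid:1,NORMAL'
```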
>
> What I don't understand at this point is why removing the priority 5 flow
> on the compute br-tun stops everything as well.  fa16.3e95.949c *IS*
> specified in the compute br-tun's flows.  Here, you can see the flows on
> compute's br-tun.  It did not begin to allow packets until I added the
> priority 5 flow back in.
>
> root@os-compute-01:~# ovs-ofctl del-flows br-tun --strict
> 'priority=5,dl_vlan=1'
> root@os-compute-01:~# ovs-ofctl dump-flows br-tun
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=22143.277s, table=0, n_packets=20165,
> n_bytes=991767,
> priority=3,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
> actions=mod_vlan_vid:1,output:1
>  cookie=0x0, duration=206.485s, table=0, n_packets=50, n_bytes=6130,
> priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x1,NORMAL
>  cookie=0x0, duration=22143.226s, table=0, n_packets=1111, n_bytes=252718,
> priority=3,tun_id=0x1,dl_dst=fa:16:3e:ee:9e:b2 actions=mod_vlan_vid:1,NORMAL
>  cookie=0x0, duration=22142.787s, table=0, n_packets=1107, n_bytes=123234,
> priority=3,tun_id=0x1,dl_dst=fa:16:3e:8d:6d:13 actions=mod_vlan_vid:1,NORMAL
>  cookie=0x0, duration=22143.012s, table=0, n_packets=1494, n_bytes=124036,
> priority=3,tun_id=0x1,dl_dst=fa:16:3e:95:94:9c actions=mod_vlan_vid:1,NORMAL
>  cookie=0x0, duration=22142.571s, table=0, n_packets=3334, n_bytes=425776,
> priority=3,tun_id=0x1,dl_dst=fa:16:3e:7b:e3:ee actions=mod_vlan_vid:1,NORMAL
>  cookie=0x0, duration=22144.129s, table=0, n_packets=846, n_bytes=73761,
> priority=1 actions=drop
> root@os-compute-01:~# ovs-ofctl add-flow br-tun 'priority=5,dl_vlan=1
> actions=NORMAL'
> root@os-compute-01:~#
>
> Return traffic on the compute node should hit this flow, correct?  So, it
> looks to me like both sides are good... I'm not sure what I'm missing there.
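If the vswitchd is new enough to have it, ofproto/trace might show exactly which flow the return traffic is hitting (a sketch; the in_port number and MACs here are guesses from my own dumps, adjust for your setup):

```shell
# Trace a synthetic return frame through br-tun's flow table to see which
# rule it matches (in_port assumed to be the GRE port; MACs from the dumps).
ovs-appctl ofproto/trace br-tun \
  'in_port=2,tun_id=0x1,dl_src=12:72:59:0f:cf:56,dl_dst=fa:16:3e:95:94:9c'
```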
>
> Thirdly, why is br-tun locked down with such specific flows?  Especially
> when br-int is acting as a normal switch:
>
> root@os-network:~# ovs-ofctl dump-flows br-int
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=4509.278s, table=0, n_packets=9128, n_bytes=977139,
> priority=1 actions=NORMAL
> root@os-network:~#
>
> Is this some requirement with the OVS GRE tunnels?
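And in case the GRE tunnel configuration itself matters here, this is roughly how I'd dump what OVS has for the tunnel ports (output elided):

```shell
# Show the bridge/port layout, then the tunnel Interface options
# (remote_ip, key, etc.) for every interface.
ovs-vsctl show
ovs-vsctl --columns=name,type,options list Interface
```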
>
>
> Would these questions be better suited for another list?
>
>