[Openstack] Duplicate ICMP due to public interface bridge being placed in promiscuous mode

Vladimir Popovski vladimir at zadarastorage.com
Fri Oct 14 18:51:04 UTC 2011


Vish,



We are not sure whether this particular issue causes any real problem, but we
wanted to understand what is wrong.



To provide a bit more data about this particular environment:



- Dedicated server config at RAX

- FlatDHCP mode with the primary NIC (eth0) bridged and used for the
fixed_range as well

- Secondary NIC (eth1) used for RabbitMQ/Glance/etc.

- Nova-network running on one node only



If we disable promiscuous mode, everything works fine and there are no DUPs.
But in that case VMs running on other nodes are unable to reach the outside
world (which is the expected behavior in this case).
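
For reference, the toggle we used when testing this is just the standard
iproute2 command against the bridge (shown for illustration; br100 is the
bridge from the config below):

    # disable promiscuous mode on the bridge -- stops the DUPs, but VMs on
    # other compute nodes lose external connectivity
    ip link set dev br100 promisc off

    # restore the mode that nova-network configures
    ip link set dev br100 promisc on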



Here are some details of this config:



nova.conf:



--s3_host=10.240.107.128

--rabbit_host=10.240.107.128

--ec2_host=10.240.107.128

--glance_api_servers=10.240.107.128:9292

--sql_connection=mysql://root:123@10.240.107.128/nova



--routing_source_ip=172.31.252.152

--fixed_range=172.31.252.0/22



--network_manager=nova.network.manager.FlatDHCPManager

--public_interface=br100



--multi_host=False

…



root@web1:/# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet 169.254.169.254/32 scope link lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen
1000

    link/ether 84:2b:2b:5a:49:a0 brd ff:ff:ff:ff:ff:ff

    inet6 fe80::862b:2bff:fe5a:49a0/64 scope link

       valid_lft forever preferred_lft forever

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen
1000

    link/ether 84:2b:2b:5a:49:a2 brd ff:ff:ff:ff:ff:ff

    inet 10.240.107.128/24 brd 10.240.107.255 scope global eth1

    inet6 fe80::862b:2bff:fe5a:49a2/64 scope link

       valid_lft forever preferred_lft forever

4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000

    link/ether 84:2b:2b:5a:49:a4 brd ff:ff:ff:ff:ff:ff

5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000

    link/ether 84:2b:2b:5a:49:a6 brd ff:ff:ff:ff:ff:ff

6: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN

    link/ether 02:bb:dd:bd:e6:ed brd ff:ff:ff:ff:ff:ff

    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

8: br100: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UNKNOWN

    link/ether 84:2b:2b:5a:49:a0 brd ff:ff:ff:ff:ff:ff

    inet 172.31.252.152/22 brd 172.31.255.255 scope global br100

    inet 172.31.252.1/22 brd 172.31.255.255 scope global secondary br100

    inet6 fe80::90db:ceff:fe33:450c/64 scope link

       valid_lft forever preferred_lft forever

9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
UNKNOWN qlen 500

    link/ether fe:16:3e:0e:11:6e brd ff:ff:ff:ff:ff:ff

    inet6 fe80::fc16:3eff:fe0e:116e/64 scope link

       valid_lft forever preferred_lft forever



root@web1:/# brctl show

bridge name     bridge id               STP enabled     interfaces

br100           8000.842b2b5a49a0       no              eth0

                                                        vnet0

virbr0          8000.000000000000       yes





root@web1:/# ifconfig -a

br100     Link encap:Ethernet  HWaddr 84:2b:2b:5a:49:a0

          inet addr:172.31.252.152  Bcast:172.31.255.255  Mask:255.255.252.0

          inet6 addr: fe80::90db:ceff:fe33:450c/64 Scope:Link

          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1

          RX packets:6909 errors:0 dropped:621 overruns:0 frame:0

          TX packets:2634 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:533738 (533.7 KB)  TX bytes:419886 (419.8 KB)



eth0      Link encap:Ethernet  HWaddr 84:2b:2b:5a:49:a0

          inet6 addr: fe80::862b:2bff:fe5a:49a0/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:6705 errors:0 dropped:0 overruns:0 frame:0

          TX packets:2667 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:598182 (598.1 KB)  TX bytes:447933 (447.9 KB)

          Interrupt:36 Memory:d6000000-d6012800



eth1      Link encap:Ethernet  HWaddr 84:2b:2b:5a:49:a2

          inet addr:10.240.107.128  Bcast:10.240.107.255  Mask:255.255.255.0

          inet6 addr: fe80::862b:2bff:fe5a:49a2/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:557 errors:0 dropped:0 overruns:0 frame:0

          TX packets:491 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:214973 (214.9 KB)  TX bytes:267663 (267.6 KB)

         Interrupt:48 Memory:d8000000-d8012800



eth2      Link encap:Ethernet  HWaddr 84:2b:2b:5a:49:a4

          BROADCAST MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

          Interrupt:32 Memory:da000000-da012800



eth3      Link encap:Ethernet  HWaddr 84:2b:2b:5a:49:a6

          BROADCAST MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

          Interrupt:42 Memory:dc000000-dc012800



lo        Link encap:Local Loopback

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:16436  Metric:1

          RX packets:62789 errors:0 dropped:0 overruns:0 frame:0

          TX packets:62789 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:46351343 (46.3 MB)  TX bytes:46351343 (46.3 MB)



virbr0    Link encap:Ethernet  HWaddr 02:bb:dd:bd:e6:ed

          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)



vnet0     Link encap:Ethernet  HWaddr fe:16:3e:0e:11:6e

          inet6 addr: fe80::fc16:3eff:fe0e:116e/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:646 errors:0 dropped:0 overruns:0 frame:0

          TX packets:3970 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:500

          RX bytes:119572 (119.5 KB)  TX bytes:317664 (317.6 KB)





Thanks,

-Vladimir





From: openstack-bounces+vladimir=zadarastorage.com at lists.launchpad.net
[mailto:openstack-bounces+vladimir=zadarastorage.com at lists.launchpad.net]
On Behalf Of Vishvananda Ishaya
Sent: Friday, October 14, 2011 9:02 AM
To: Shyam Kaushik
Cc: openstack at lists.launchpad.net
Subject: Re: [Openstack] Duplicate ICMP due to public interface bridge
being placed in promiscuous mode



Strange, I haven't seen this.  It sounds like maybe you have a MAC address
conflict somehow.  The only time I've seen duplicate ICMP responses is when
I was running inside VirtualBox and, due to the way vagrant was setting up
VMs, I had multiple VMs with the same MAC.
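
A quick way to check for that kind of conflict (just a sketch, using the
interface names from this thread) is to look at what the bridge has learned
and probe for duplicate addresses from the network node:

    # list the MACs the bridge currently associates with each port
    brctl showmacs br100

    # duplicate-address-detection probe for a given VM address
    # (replace 172.31.252.x with the VM's fixed IP)
    arping -D -I br100 -c 3 172.31.252.x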



Vish



On Oct 14, 2011, at 5:59 AM, Shyam Kaushik wrote:



Hi Vish,



In our OpenStack deployment we observe this:



linux_net.py/initialize_gateway_device() does this:

    # NOTE(vish): If the public interface is the same as the

    #             bridge, then the bridge has to be in promiscuous

    #             to forward packets properly.

    if(FLAGS.public_interface == dev):

        _execute('ip', 'link', 'set',

                     'dev', dev, 'promisc', 'on', run_as_root=True)
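
When the public interface is the same device as the bridge, that branch
boils down to the equivalent of running (using br100 as the bridge name,
purely for illustration), which is why the bridge ends up flagged PROMISC:

    ip link set dev br100 promisc on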





Any VM spawned on the cloud controller node that sends an ICMP ping to an
external network gets duplicate replies (i.e. there are two replies for the
same ICMP request). For VMs spawned on any other, non-cloud-controller node
this doesn't happen.
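
One way to narrow down where the second reply is generated (purely a
debugging sketch) is to capture ICMP on the physical NIC and on the bridge
at the same time while pinging from the VM:

    # run on the cloud controller, in two shells
    tcpdump -ni eth0 icmp
    tcpdump -ni br100 icmp

Comparing the two captures shows whether the duplicate reply already
arrives on eth0 or only shows up on the bridge side.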



If we turn off promiscuous mode on the bridge, the VM on the cloud controller
no longer sees the duplicate replies, but VMs on non-cloud-controller nodes
cannot reach the external network.



The question to you is: are these duplicate ICMP replies expected for VMs
running on the cloud controller, given the logic above?



--Shyam
