[openstack-dev] [octavia] amphora fails to send request to members

Yipei Niu newypei at gmail.com
Tue Nov 14 08:29:31 UTC 2017


Hi, Michael,

Please ignore my last two mails. Sorry about that.

The results of the two commands are as follows.

ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns exec
amphora-haproxy ip route show table all
sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
default via 10.0.1.1 dev eth1  table 1 onlink
default via 10.0.1.10 dev eth1
10.0.1.0/24 dev eth1  proto kernel  scope link  src 10.0.1.8
broadcast 10.0.1.0 dev eth1  table local  proto kernel  scope link  src
10.0.1.8
local 10.0.1.4 dev eth1  table local  proto kernel  scope host  src
10.0.1.8
local 10.0.1.8 dev eth1  table local  proto kernel  scope host  src
10.0.1.8
broadcast 10.0.1.255 dev eth1  table local  proto kernel  scope link  src
10.0.1.8
fe80::/64 dev eth1  proto kernel  metric 256  pref medium
unreachable default dev lo  table unspec  proto kernel  metric 4294967295  error -101 pref medium
local fe80::f816:3eff:febe:5ad5 dev lo  table local  proto none  metric 0
pref medium
ff00::/8 dev eth1  table local  metric 256  pref medium
unreachable default dev lo  table unspec  proto kernel  metric 4294967295  error -101 pref medium

ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns exec
amphora-haproxy ip rule show
sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
0: from all lookup local
100: from 10.0.1.4 lookup 1
32766: from all lookup main
32767: from all lookup default

I think I have found the source of the problem. When haproxy receives the
packets sent by curl, it replies with the VIP as the source IP to complete the
3-way handshake. Before the rule "100: from 10.0.1.4 lookup 1" is added, those
replies are routed based on the main table. After the rule is added, haproxy's
replies are routed by "default via 10.0.1.1 dev eth1  table 1 onlink". However,
if there is no router, the gateway IP does not exist, so haproxy fails to
establish the TCP connection.
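
A quick way to confirm this (just a sketch, using the addresses from the output
above: VIP 10.0.1.4 and the curl client 10.0.1.2) is to ask the kernel which
route a VIP-sourced reply would take:

sudo ip netns exec amphora-haproxy ip route get 10.0.1.2 from 10.0.1.4
# If the diagnosis is right, this should return something like
#   10.0.1.2 from 10.0.1.4 via 10.0.1.1 dev eth1 table 1 ... onlink
# i.e. the reply is forced through the non-existent gateway 10.0.1.1 instead
# of using the connected route 10.0.1.0/24 from the main table.

Without the "100: from 10.0.1.4 lookup 1" rule, the same command should return
the connected route, and no gateway would be needed.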

Best regards,
Yipei


On Tue, Nov 14, 2017 at 11:04 AM, Yipei Niu <newypei at gmail.com> wrote:

> Hi, Michael,
>
> Sorry about the typo in the last mail. Please just ignore the last mail.
>
> In the environment where octavia and tricircle are installed together, I
> created a router and attached subnet1 to it. Then I bound the MAC address of
> 10.0.1.10 (the real gateway) to the IP 10.0.1.1 in the amphora ARP cache,
> manually telling the amphora the MAC address of 10.0.1.1 (actually the MAC of
> 10.0.1.10, since 10.0.1.1 does not exist), and it works.
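>
> For reference, the binding described above can be done with something like
> the following (just a sketch; the MAC below is a placeholder for whichever
> port actually owns 10.0.1.10):
>
> # placeholder MAC of the real gateway port (10.0.1.10)
> GW_MAC=fa:16:3e:00:00:00
> sudo ip netns exec amphora-haproxy ip neigh replace 10.0.1.1 lladdr $GW_MAC dev eth1 nud permanent
>
> With that neighbour entry in place, the route "default via 10.0.1.1 dev eth1
> table 1 onlink" can resolve a next hop and haproxy's replies go out.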
>
> I also run the commands in this environment.
>
> ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns exec
> amphora-haproxy ip route show table all
> sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
> default via 10.0.1.1 dev eth1  table 1 onlink
> default via 10.0.1.10 dev eth1
> 10.0.1.0/24 dev eth1  proto kernel  scope link  src 10.0.1.8
> broadcast 10.0.1.0 dev eth1  table local  proto kernel  scope link  src
> 10.0.1.8
> local 10.0.1.4 dev eth1  table local  proto kernel  scope host  src
> 10.0.1.8
> local 10.0.1.8 dev eth1  table local  proto kernel  scope host  src
> 10.0.1.8
> broadcast 10.0.1.255 dev eth1  table local  proto kernel  scope link  src
> 10.0.1.8
> fe80::/64 dev eth1  proto kernel  metric 256  pref medium
> unreachable default dev lo  table unspec  proto kernel  metric 4294967295  error -101 pref medium
> local fe80::f816:3eff:febe:5ad5 dev lo  table local  proto none  metric 0
> pref medium
> ff00::/8 dev eth1  table local  metric 256  pref medium
> unreachable default dev lo  table unspec  proto kernel  metric 4294967295  error -101 pref medium
> ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns exec
> amphora-haproxy ip rule show
> sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
> 0: from all lookup local
> 100: from 10.0.1.4 lookup 1
> 32766: from all lookup main
> 32767: from all lookup default
>
>
> To make the situation clear, I also ran the commands in the environment where
> octavia is installed alone. Please note that in this environment there is no
> router. The results are as follows.
>
> stack at stack-VirtualBox:~$ neutron lbaas-loadbalancer-list
> neutron CLI is deprecated and will be removed in the future. Use openstack
> CLI instead.
> +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
> | id                                   | name | tenant_id                        | vip_address | provisioning_status | provider |
> +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
> | d087d3b4-afbe-4af6-8b31-5e86fc97da1b | lb1  | e59bb8f3bf9342aba02f9ba5804ed2fb | 10.0.1.9    | ACTIVE              | octavia  |
> +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
>
> ubuntu at amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns exec
> amphora-haproxy ip addr
> sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
> 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast
> state UP group default qlen 1000
>     link/ether fa:16:3e:dc:9e:61 brd ff:ff:ff:ff:ff:ff
>     inet 10.0.1.11/24 brd 10.0.1.255 scope global eth1
>        valid_lft forever preferred_lft forever
>     inet 10.0.1.9/24 brd 10.0.1.255 scope global secondary eth1:0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::f816:3eff:fedc:9e61/64 scope link
>        valid_lft forever preferred_lft forever
>
> ubuntu at amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns exec
> amphora-haproxy ip route show table all
> sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
> default via 10.0.1.1 dev eth1  table 1 onlink
> default via 10.0.1.1 dev eth1 onlink
> 10.0.1.0/24 dev eth1  proto kernel  scope link  src 10.0.1.11
> broadcast 10.0.1.0 dev eth1  table local  proto kernel  scope link  src
> 10.0.1.11
> local 10.0.1.9 dev eth1  table local  proto kernel  scope host  src
> 10.0.1.11
> local 10.0.1.11 dev eth1  table local  proto kernel  scope host  src
> 10.0.1.11
> broadcast 10.0.1.255 dev eth1  table local  proto kernel  scope link  src
> 10.0.1.11
> fe80::/64 dev eth1  proto kernel  metric 256  pref medium
> unreachable default dev lo  table unspec  proto kernel  metric 4294967295  error -101 pref medium
> local fe80::f816:3eff:fedc:9e61 dev lo  table local  proto none  metric 0
> pref medium
> ff00::/8 dev eth1  table local  metric 256  pref medium
> unreachable default dev lo  table unspec  proto kernel  metric 4294967295  error -101 pref medium
>
> ubuntu at amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns exec
> amphora-haproxy ip rule show
> sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
> 0: from all lookup local
> 100: from 10.0.1.9 lookup 1
> 32766: from all lookup main
> 32767: from all lookup default
>
> If there is no router, the packets are supposed to be forwarded at layer 2,
> as the amphora is plugged into a port on every member's subnet. However, the
> rule "from 10.0.1.9 lookup 1" has a higher priority than "from all lookup
> main", so VIP-sourced traffic never hits the connected route in the main
> table. Maybe this patch (https://review.openstack.org/#/c/501915/) affects
> the l2 traffic.
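>
> If that is the case, one manual test (just a sketch, not a proper fix) would
> be to give table 1 the connected route as well, so that VIP-sourced traffic
> to the local subnet stays at layer 2:
>
> sudo ip netns exec amphora-haproxy ip route add 10.0.1.0/24 dev eth1 scope link src 10.0.1.9 table 1
>
> With that route in table 1, the rule "100: from 10.0.1.9 lookup 1" still
> matches, but replies to on-subnet clients no longer need the 10.0.1.1 gateway.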
>
> Best regards,
> Yipei
>
> On Tue, Nov 14, 2017 at 10:40 AM, Yipei Niu <newypei at gmail.com> wrote:
>
>> Hi, Michael,
>>
>> Thanks a lot for your comments.
>>
>> In the environment where octavia and tricircle are installed together, I
>> created a router and attached subnet1 to it. Then I bound the MAC address of
>> 10.0.1.9 (the real gateway) to the IP 10.0.1.1 in the amphora ARP cache,
>> manually telling the amphora the MAC address of 10.0.1.1 (actually the MAC of
>> 10.0.1.9, since 10.0.1.1 does not exist), and it works.
>>
>> To make the situation clear, I also ran the commands in the environment where
>> octavia is installed alone. Please note that in this environment there is no
>> router. The results are as follows.
>>
>> stack at stack-VirtualBox:~$ neutron lbaas-loadbalancer-list
>> neutron CLI is deprecated and will be removed in the future. Use
>> openstack CLI instead.
>> +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
>> | id                                   | name | tenant_id                        | vip_address | provisioning_status | provider |
>> +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
>> | d087d3b4-afbe-4af6-8b31-5e86fc97da1b | lb1  | e59bb8f3bf9342aba02f9ba5804ed2fb | 10.0.1.9    | ACTIVE              | octavia  |
>> +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
>>
>> ubuntu at amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns
>> exec amphora-haproxy ip addr
>> sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
>> 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast
>> state UP group default qlen 1000
>>     link/ether fa:16:3e:dc:9e:61 brd ff:ff:ff:ff:ff:ff
>>     inet 10.0.1.11/24 brd 10.0.1.255 scope global eth1
>>        valid_lft forever preferred_lft forever
>>     inet 10.0.1.9/24 brd 10.0.1.255 scope global secondary eth1:0
>>        valid_lft forever preferred_lft forever
>>     inet6 fe80::f816:3eff:fedc:9e61/64 scope link
>>        valid_lft forever preferred_lft forever
>>
>> ubuntu at amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns
>> exec amphora-haproxy ip route show table all
>> sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
>> default via 10.0.1.1 dev eth1  table 1 onlink
>> default via 10.0.1.1 dev eth1 onlink
>> 10.0.1.0/24 dev eth1  proto kernel  scope link  src 10.0.1.11
>> broadcast 10.0.1.0 dev eth1  table local  proto kernel  scope link  src
>> 10.0.1.11
>> local 10.0.1.9 dev eth1  table local  proto kernel  scope host  src
>> 10.0.1.11
>> local 10.0.1.11 dev eth1  table local  proto kernel  scope host  src
>> 10.0.1.11
>> broadcast 10.0.1.255 dev eth1  table local  proto kernel  scope link  src
>> 10.0.1.11
>> fe80::/64 dev eth1  proto kernel  metric 256  pref medium
>> unreachable default dev lo  table unspec  proto kernel  metric 4294967295  error -101 pref medium
>> local fe80::f816:3eff:fedc:9e61 dev lo  table local  proto none  metric
>> 0  pref medium
>> ff00::/8 dev eth1  table local  metric 256  pref medium
>> unreachable default dev lo  table unspec  proto kernel  metric 4294967295  error -101 pref medium
>>
>> ubuntu at amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns
>> exec amphora-haproxy ip rule show
>> sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
>> 0: from all lookup local
>> 100: from 10.0.1.9 lookup 1
>> 32766: from all lookup main
>> 32767: from all lookup default
>>
>> If there is no router, the packets are supposed to be forwarded at layer 2,
>> as the amphora is plugged into a port on every member's subnet. However, the
>> rule "from 10.0.1.9 lookup 1" has a higher priority than "from all lookup
>> main", so VIP-sourced traffic never hits the connected route in the main
>> table. Maybe this patch (https://review.openstack.org/#/c/501915/) affects
>> the l2 traffic.
>>
>> Best regards,
>> Yipei
>>
>> On Sat, Nov 11, 2017 at 11:24 AM, Yipei Niu <newypei at gmail.com> wrote:
>>
>>> Hi, Michael,
>>>
>>> I tried to run the commands, and I think the amphora can connect to the
>>> member (10.0.1.3). The results are as follows.
>>>
>>> ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns
>>> exec amphora-haproxy ping 10.0.1.3
>>> sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
>>> PING 10.0.1.3 (10.0.1.3) 56(84) bytes of data.
>>> 64 bytes from 10.0.1.3: icmp_seq=1 ttl=64 time=189 ms
>>> 64 bytes from 10.0.1.3: icmp_seq=2 ttl=64 time=1.72 ms
>>> ^C
>>> --- 10.0.1.3 ping statistics ---
>>> 2 packets transmitted, 2 received, 0% packet loss, time 1006ms
>>> rtt min/avg/max/mdev = 1.722/95.855/189.989/94.134 ms
>>>
>>> ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns
>>> exec amphora-haproxy curl 10.0.1.3
>>> sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
>>> Welcome to 10.0.1.3
>>>
>>> stack at devstack-1:~$ sudo ip netns exec qdhcp-310fea4b-36ae-4617-b499-5936e8eda842
>>> curl 10.0.1.3
>>> Welcome to 10.0.1.3
>>>
>>> As mentioned in my previous mail, I also have an environment where octavia
>>> is installed alone and where the error reproduces. In that environment, I
>>> also tried the above commands and got the same results. The member can be
>>> reached from both the host and the amphora.
>>>
>>> ubuntu at amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns
>>> exec amphora-haproxy curl 10.0.1.5
>>> sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
>>> Welcome to 10.0.1.5
>>>
>>> stack at stack-VirtualBox:~$ sudo ip netns exec
>>> qdhcp-13185eec-0996-4a08-b353-6775d5926b4c curl 10.0.1.5
>>> Welcome to 10.0.1.5
>>>
>>> In this environment, haproxy also tries to find the gateway IP 10.0.1.1.
>>>
>>> ubuntu at amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns
>>> exec amphora-haproxy tcpdump -i eth1 -nn
>>> sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol
>>> decode
>>> listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
>>> ^C08:58:53.997948 IP 10.0.1.2.55110 > 10.0.1.9.80: Flags [S], seq
>>> 4080898035, win 28200, options [mss 1410,sackOK,TS val 3910330593 ecr
>>> 0,nop,wscale 7], length 0
>>> 08:58:54.011771 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
>>> 08:58:54.991600 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
>>> 08:58:55.018542 IP 10.0.1.2.55110 > 10.0.1.9.80: Flags [S], seq
>>> 4080898035, win 28200, options [mss 1410,sackOK,TS val 3910330850 ecr
>>> 0,nop,wscale 7], length 0
>>> 08:58:55.991703 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
>>> 08:58:57.015187 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
>>> 08:58:57.034507 IP 10.0.1.2.55110 > 10.0.1.9.80: Flags [S], seq
>>> 4080898035, win 28200, options [mss 1410,sackOK,TS val 3910331354 ecr
>>> 0,nop,wscale 7], length 0
>>> 08:58:58.016438 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
>>> 08:58:59.017650 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
>>> 08:58:59.115960 ARP, Request who-has 10.0.1.9 tell 10.0.1.2, length 28
>>> 08:58:59.116293 ARP, Reply 10.0.1.9 is-at fa:16:3e:dc:9e:61, length 28
>>> 08:59:01.031434 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
>>> 08:59:01.162845 IP 10.0.1.2.55110 > 10.0.1.9.80: Flags [S], seq
>>> 4080898035, win 28200, options [mss 1410,sackOK,TS val 3910332386 ecr
>>> 0,nop,wscale 7], length 0
>>> 08:59:02.031095 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
>>> 08:59:03.035527 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
>>>
>>> 15 packets captured
>>> 15 packets received by filter
>>> 0 packets dropped by kernel
>>>
>>> And the gateway IP requested by haproxy is the same as the one configured in the subnet:
>>> +-------------------+--------------------------------------------+
>>> | Field             | Value                                      |
>>> +-------------------+--------------------------------------------+
>>> | allocation_pools  | {"start": "10.0.1.2", "end": "10.0.1.254"} |
>>> | cidr              | 10.0.1.0/24                                |
>>> | created_at        | 2017-10-08T06:33:09Z                       |
>>> | description       |                                            |
>>> | dns_nameservers   |                                            |
>>> | enable_dhcp       | True                                       |
>>> | gateway_ip        | 10.0.1.1                                   |
>>> | host_routes       |                                            |
>>> | id                | 37023e56-a8bf-4070-8022-f6b6bb7b8e82       |
>>> | ip_version        | 4                                          |
>>> | ipv6_address_mode |                                            |
>>> | ipv6_ra_mode      |                                            |
>>> | name              | subnet1                                    |
>>> | network_id        | 13185eec-0996-4a08-b353-6775d5926b4c       |
>>> | project_id        | e59bb8f3bf9342aba02f9ba5804ed2fb           |
>>> | revision_number   | 0                                          |
>>> | service_types     |                                            |
>>> | subnetpool_id     |                                            |
>>> | tags              |                                            |
>>> | tenant_id         | e59bb8f3bf9342aba02f9ba5804ed2fb           |
>>> | updated_at        | 2017-10-08T06:33:09Z                       |
>>> +-------------------+--------------------------------------------+
>>>
>>> Some other info in this octavia env is as follows.
>>>
>>> The info of load balancer:
>>> +---------------------+------------------------------------------------+
>>> | Field               | Value                                          |
>>> +---------------------+------------------------------------------------+
>>> | admin_state_up      | True                                           |
>>> | description         |                                                |
>>> | id                  | d087d3b4-afbe-4af6-8b31-5e86fc97da1b           |
>>> | listeners           | {"id": "d22644d2-cc40-44f1-b37f-2bb4f555f9b9"} |
>>> | name                | lb1                                            |
>>> | operating_status    | ONLINE                                         |
>>> | pools               | {"id": "bc95c8e0-8475-4d97-9606-76c431e78ef7"} |
>>> | provider            | octavia                                        |
>>> | provisioning_status | ACTIVE                                         |
>>> | tenant_id           | e59bb8f3bf9342aba02f9ba5804ed2fb               |
>>> | vip_address         | 10.0.1.9                                       |
>>> | vip_port_id         | 902a78e7-a618-455b-91c7-cd36595475cc           |
>>> | vip_subnet_id       | 37023e56-a8bf-4070-8022-f6b6bb7b8e82           |
>>> +---------------------+------------------------------------------------+
>>>
>>> The info of VMs:
>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+-------------------------------------------+
>>> | ID                                   | Name                                         | Status | Task State | Power State | Networks                                  |
>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+-------------------------------------------+
>>> | 50ae3581-94fc-43ce-b53c-728715cd5597 | amphora-dcbff58a-d418-4271-9374-9de2fd063ce9 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.10; net1=10.0.1.11  |
>>> | d19b565d-14aa-4679-9d98-ff51461cd625 | vm1                                          | ACTIVE | -          | Running     | net1=10.0.1.5                             |
>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+-------------------------------------------+
>>>
>>> The info of listener:
>>> +---------------------------+------------------------------------------------+
>>> | Field                     | Value                                          |
>>> +---------------------------+------------------------------------------------+
>>> | admin_state_up            | True                                           |
>>> | connection_limit          | -1                                             |
>>> | default_pool_id           | bc95c8e0-8475-4d97-9606-76c431e78ef7           |
>>> | default_tls_container_ref |                                                |
>>> | description               |                                                |
>>> | id                        | d22644d2-cc40-44f1-b37f-2bb4f555f9b9           |
>>> | loadbalancers             | {"id": "d087d3b4-afbe-4af6-8b31-5e86fc97da1b"} |
>>> | name                      | listener1                                      |
>>> | protocol                  | HTTP                                           |
>>> | protocol_port             | 80                                             |
>>> | sni_container_refs        |                                                |
>>> | tenant_id                 | e59bb8f3bf9342aba02f9ba5804ed2fb               |
>>> +---------------------------+------------------------------------------------+
>>>
>>> So I think the amphora can reach the member; it just cannot respond to
>>> curl, and hence fails to balance load across members. Maybe there is
>>> something wrong with the amphora image. When I replaced the latest amphora
>>> image with an old one (built on 2017-07-24), it worked, in an otherwise
>>> identical environment.
>>>
>>> Best regards,
>>> Yipei
>>>
>>> On Fri, Nov 10, 2017 at 5:24 PM, Yipei Niu <newypei at gmail.com> wrote:
>>>
>>>> Hi, Michael,
>>>>
>>>> Thanks a lot for your reply.
>>>>
>>>> I can confirm that there is neither a router nor multiple DHCP services in
>>>> my environment.
>>>>
>>>> As shown in my first mail, the haproxy in the amphora tries to find the
>>>> gateway IP 10.0.1.1, which does not exist in the environment.
>>>>
>>>> ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns
>>>> exec amphora-haproxy tcpdump -i eth1 -nn
>>>> sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol
>>>> decode
>>>> listening on eth1, link-type EN10MB (Ethernet), capture size 262144
>>>> bytes
>>>> ^C07:25:24.225614 IP 10.0.1.2.55294 > 10.0.1.4.80: Flags [S], seq
>>>> 1637781601, win 28200, options [mss 1410,sackOK,TS val 30692602 ecr
>>>> 0,nop,wscale 7], length 0
>>>> 07:25:24.237854 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
>>>> 07:25:25.224801 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
>>>> 07:25:25.228610 IP 10.0.1.2.55294 > 10.0.1.4.80: Flags [S], seq
>>>> 1637781601, win 28200, options [mss 1410,sackOK,TS val 30692853 ecr
>>>> 0,nop,wscale 7], length 0
>>>> 07:25:26.224533 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
>>>> 07:25:27.230911 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
>>>> 07:25:27.250858 IP 10.0.1.2.55294 > 10.0.1.4.80: Flags [S], seq
>>>> 1637781601, win 28200, options [mss 1410,sackOK,TS val 30693359 ecr
>>>> 0,nop,wscale 7], length 0
>>>> 07:25:28.228796 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
>>>> 07:25:29.228697 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
>>>> 07:25:29.290551 ARP, Request who-has 10.0.1.4 tell 10.0.1.2, length 28
>>>> 07:25:29.290985 ARP, Reply 10.0.1.4 is-at fa:16:3e:be:5a:d5, length 28
>>>> 07:25:31.251122 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
>>>> 07:25:32.248737 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
>>>> 07:25:33.250309 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
>>>>
>>>> *So if the subnet is not attached to any router, why does haproxy try to
>>>> find a gateway IP that does not exist at all? Maybe that is the reason why
>>>> haproxy receives the packets from curl but fails to respond.*
>>>>
>>>> I think the gateway IP (10.0.1.10) may be confusing. Actually, in my
>>>> environment octavia and tricircle (https://wiki.openstack.org/wiki/Tricircle)
>>>> are installed together. Because of tricircle's cross-neutron mechanism, the
>>>> gateway IP of the subnet in that region is 10.0.1.10. But I can confirm that
>>>> the gateway IP (10.0.1.1 or 10.0.1.10) does not exist in the network, since
>>>> there is no router at all. This error also happens in my other environment
>>>> where octavia is installed alone. That environment was installed on Oct. 6,
>>>> and all the repos were the latest at that time.
>>>>
>>>> Best regards,
>>>> Yipei
>>>>
>>>>
>>>> On Thu, Nov 9, 2017 at 2:50 PM, Yipei Niu <newypei at gmail.com> wrote:
>>>>
>>>>> Hi, Michael,
>>>>>
>>>>> Based on your mail, the information is as follows.
>>>>>
>>>>> 1. The version of Octavia I used is Queens, and the latest commit
>>>>> message is
>>>>> commit 2ab2836d0ebdd0fd5bc32d3adcc44a92557c8c1d
>>>>> Author: OpenStack Proposal Bot <openstack-infra at lists.openstack.org>
>>>>> Date:   Fri Nov 3 17:58:59 2017 +0000
>>>>>
>>>>>     Updated from global requirements
>>>>>
>>>>>     Change-Id: I9047e289b8a3c931156da480b3f9f676c54a8358
>>>>>
>>>>> 2. The info of the amphora and other VMs is as follows.
>>>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+
>>>>> | ID                                   | Name                                         | Status | Task State | Power State | Networks                                 |
>>>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+
>>>>> | 33bd02cb-f853-404d-a705-99bc1b04a178 | amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f | ACTIVE | -          | Running     | lb-mgmt-net1=192.168.1.4; net1=10.0.1.8  |
>>>>> | dd046fc9-e2bf-437d-8c51-c397bccc3dc1 | client1                                      | ACTIVE | -          | Running     | net1=10.0.1.3                            |
>>>>> | 50446c75-7cb7-43eb-b057-4b6b89a926bc | client3                                      | ACTIVE | -          | Running     | net4=10.0.4.3                            |
>>>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+
>>>>>
>>>>> 3. The info of the load balancer is as follows.
>>>>> +---------------------+------------------------------------------------+
>>>>> | Field               | Value                                          |
>>>>> +---------------------+------------------------------------------------+
>>>>> | admin_state_up      | True                                           |
>>>>> | description         |                                                |
>>>>> | id                  | 51cba1d5-cc3c-48ff-b41e-839619093334           |
>>>>> | listeners           | {"id": "b20ad920-c6cd-4e71-a9b9-c134e57ecd20"} |
>>>>> | name                | lb1                                            |
>>>>> | operating_status    | ONLINE                                         |
>>>>> | pools               | {"id": "d0042605-da50-4048-b298-660420b0a1d2"} |
>>>>> | provider            | octavia                                        |
>>>>> | provisioning_status | ACTIVE                                         |
>>>>> | tenant_id           | c2a97a04cb6d4f25bdcb8b3f263c869e               |
>>>>> | vip_address         | 10.0.1.4                                       |
>>>>> | vip_port_id         | 2209a819-0ac8-4211-b878-f0b41ac4727b           |
>>>>> | vip_subnet_id       | cbcf4f04-da6d-4800-8b40-4b141972c2bf           |
>>>>> +---------------------+------------------------------------------------+
>>>>>
>>>>> 4. The info of the listener is as follows.
>>>>> +---------------------------+------------------------------------------------+
>>>>> | Field                     | Value                                          |
>>>>> +---------------------------+------------------------------------------------+
>>>>> | admin_state_up            | True                                           |
>>>>> | connection_limit          | -1                                             |
>>>>> | default_pool_id           | d0042605-da50-4048-b298-660420b0a1d2           |
>>>>> | default_tls_container_ref |                                                |
>>>>> | description               |                                                |
>>>>> | id                        | b20ad920-c6cd-4e71-a9b9-c134e57ecd20           |
>>>>> | loadbalancers             | {"id": "51cba1d5-cc3c-48ff-b41e-839619093334"} |
>>>>> | name                      | listener1                                      |
>>>>> | protocol                  | HTTP                                           |
>>>>> | protocol_port             | 80                                             |
>>>>> | sni_container_refs        |                                                |
>>>>> | tenant_id                 | c2a97a04cb6d4f25bdcb8b3f263c869e               |
>>>>> +---------------------------+------------------------------------------------+
>>>>>
>>>>> 5. The members of the load balancer lb1 are as follows.
>>>>> +--------------------------------------+------+----------------------------------+----------+---------------+--------+--------------------------------------+----------------+
>>>>> | id                                   | name | tenant_id                        | address  | protocol_port | weight | subnet_id                            | admin_state_up |
>>>>> +--------------------------------------+------+----------------------------------+----------+---------------+--------+--------------------------------------+----------------+
>>>>> | 420c905c-1077-46c9-8b04-526a59d93376 |      | c2a97a04cb6d4f25bdcb8b3f263c869e | 10.0.1.3 |            80 |      1 | cbcf4f04-da6d-4800-8b40-4b141972c2bf | True           |
>>>>> +--------------------------------------+------+----------------------------------+----------+---------------+--------+--------------------------------------+----------------+
>>>>>
>>>>> 6. Since the VIP and the members reside in the same subnet, only two
>>>>> subnets are listed as follows.
>>>>> +--------------------------------------+-----------------+----------------------------------+----------------+----------------------------------------------------+
>>>>> | id                                   | name            | tenant_id                        | cidr           | allocation_pools                                   |
>>>>> +--------------------------------------+-----------------+----------------------------------+----------------+----------------------------------------------------+
>>>>> | 752f865d-89e4-4284-9e91-8617a5a21da1 | lb-mgmt-subnet1 | c2a97a04cb6d4f25bdcb8b3f263c869e | 192.168.1.0/24 | {"start": "192.168.1.10", "end": "192.168.1.254"}  |
>>>>> |                                      |                 |                                  |                | {"start": "192.168.1.1", "end": "192.168.1.8"}     |
>>>>> | cbcf4f04-da6d-4800-8b40-4b141972c2bf | subnet1         | c2a97a04cb6d4f25bdcb8b3f263c869e | 10.0.1.0/24    | {"start": "10.0.1.1", "end": "10.0.1.9"}           |
>>>>> |                                      |                 |                                  |                | {"start": "10.0.1.11", "end": "10.0.1.254"}        |
>>>>> +--------------------------------------+-----------------+----------------------------------+----------------+----------------------------------------------------+
>>>>>
>>>>> 7. The detailed info of subnet1 and lb-mgmt-subnet is listed as
>>>>> follows, respectively.
>>>>> lb-mgmt-subnet1
>>>>> +-------------------+----------------------------------------------------+
>>>>> | Field             | Value                                              |
>>>>> +-------------------+----------------------------------------------------+
>>>>> | allocation_pools  | {"start": "192.168.1.1", "end": "192.168.1.8"}     |
>>>>> |                   | {"start": "192.168.1.10", "end": "192.168.1.254"}  |
>>>>> | cidr              | 192.168.1.0/24                                     |
>>>>> | created_at        | 2017-11-05T12:14:45Z                               |
>>>>> | description       |                                                    |
>>>>> | dns_nameservers   |                                                    |
>>>>> | enable_dhcp       | True                                               |
>>>>> | gateway_ip        | 192.168.1.9                                        |
>>>>> | host_routes       |                                                    |
>>>>> | id                | 752f865d-89e4-4284-9e91-8617a5a21da1               |
>>>>> | ip_version        | 4                                                  |
>>>>> | ipv6_address_mode |                                                    |
>>>>> | ipv6_ra_mode      |                                                    |
>>>>> | name              | lb-mgmt-subnet1                                    |
>>>>> | network_id        | b4261144-3342-4605-8ca6-146e5b84c4ea               |
>>>>> | project_id        | c2a97a04cb6d4f25bdcb8b3f263c869e                   |
>>>>> | revision_number   | 0                                                  |
>>>>> | service_types     |                                                    |
>>>>> | subnetpool_id     |                                                    |
>>>>> | tags              |                                                    |
>>>>> | tenant_id         | c2a97a04cb6d4f25bdcb8b3f263c869e                   |
>>>>> | updated_at        | 2017-11-05T12:14:45Z                               |
>>>>> +-------------------+----------------------------------------------------+
>>>>>
>>>>> subnet1
>>>>> +-------------------+---------------------------------------------+
>>>>> | Field             | Value                                       |
>>>>> +-------------------+---------------------------------------------+
>>>>> | allocation_pools  | {"start": "10.0.1.1", "end": "10.0.1.9"}    |
>>>>> |                   | {"start": "10.0.1.11", "end": "10.0.1.254"} |
>>>>> | cidr              | 10.0.1.0/24                                 |
>>>>> | created_at        | 2017-11-05T12:37:56Z                        |
>>>>> | description       |                                             |
>>>>> | dns_nameservers   |                                             |
>>>>> | enable_dhcp       | True                                        |
>>>>> | gateway_ip        | 10.0.1.10                                   |
>>>>> | host_routes       |                                             |
>>>>> | id                | cbcf4f04-da6d-4800-8b40-4b141972c2bf        |
>>>>> | ip_version        | 4                                           |
>>>>> | ipv6_address_mode |                                             |
>>>>> | ipv6_ra_mode      |                                             |
>>>>> | name              | subnet1                                     |
>>>>> | network_id        | 310fea4b-36ae-4617-b499-5936e8eda842        |
>>>>> | project_id        | c2a97a04cb6d4f25bdcb8b3f263c869e            |
>>>>> | revision_number   | 0                                           |
>>>>> | service_types     |                                             |
>>>>> | subnetpool_id     |                                             |
>>>>> | tags              |                                             |
>>>>> | tenant_id         | c2a97a04cb6d4f25bdcb8b3f263c869e            |
>>>>> | updated_at        | 2017-11-05T12:37:56Z                        |
>>>>> +-------------------+---------------------------------------------+
>>>>>
>>>>> 8. The info of the interfaces in the default and amphora-haproxy network
>>>>> namespaces of the amphora is as follows.
>>>>> ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ ifconfig
>>>>> ens3      Link encap:Ethernet  HWaddr fa:16:3e:9e:6b:77
>>>>>           inet addr:192.168.1.4  Bcast:192.168.1.255
>>>>> Mask:255.255.255.0
>>>>>           inet6 addr: fe80::f816:3eff:fe9e:6b77/64 Scope:Link
>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
>>>>>           RX packets:13112 errors:0 dropped:0 overruns:0 frame:0
>>>>>           TX packets:41491 errors:0 dropped:0 overruns:0 carrier:0
>>>>>           collisions:0 txqueuelen:1000
>>>>>           RX bytes:775372 (775.3 KB)  TX bytes:9653389 (9.6 MB)
>>>>>
>>>>> lo        Link encap:Local Loopback
>>>>>           inet addr:127.0.0.1  Mask:255.0.0.0
>>>>>           inet6 addr: ::1/128 Scope:Host
>>>>>           UP LOOPBACK RUNNING  MTU:65536  Metric:1
>>>>>           RX packets:128 errors:0 dropped:0 overruns:0 frame:0
>>>>>           TX packets:128 errors:0 dropped:0 overruns:0 carrier:0
>>>>>           collisions:0 txqueuelen:1
>>>>>           RX bytes:11424 (11.4 KB)  TX bytes:11424 (11.4 KB)
>>>>>
>>>>> ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns
>>>>> exec amphora-haproxy ifconfig
>>>>> sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
>>>>> eth1      Link encap:Ethernet  HWaddr fa:16:3e:be:5a:d5
>>>>>           inet addr:10.0.1.8  Bcast:10.0.1.255  Mask:255.255.255.0
>>>>>           inet6 addr: fe80::f816:3eff:febe:5ad5/64 Scope:Link
>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>           RX packets:107 errors:0 dropped:0 overruns:0 frame:0
>>>>>           TX packets:218 errors:0 dropped:0 overruns:0 carrier:0
>>>>>           collisions:0 txqueuelen:1000
>>>>>           RX bytes:6574 (6.5 KB)  TX bytes:9468 (9.4 KB)
>>>>>
>>>>> eth1:0    Link encap:Ethernet  HWaddr fa:16:3e:be:5a:d5
>>>>>           inet addr:10.0.1.4  Bcast:10.0.1.255  Mask:255.255.255.0
>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>
>>>>> 9. When curling the VIP from the host, it does not respond and finally
>>>>> returns a timeout error.
>>>>> stack at devstack-1:/opt/stack/octavia$ sudo ip netns exec
>>>>> qdhcp-310fea4b-36ae-4617-b499-5936e8eda842 curl 10.0.1.4
>>>>> curl: (7) Failed to connect to 10.0.1.4 port 80: Connection timed out
>>>>>
>>>>> 10. Results of running "netstat -rn" on the host are as follows.
>>>>> Kernel IP routing table
>>>>> Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
>>>>> 0.0.0.0         192.168.1.9     0.0.0.0         UG        0 0          0 o-hm0
>>>>> 0.0.0.0         10.0.2.2        0.0.0.0         UG        0 0          0 enp0s3
>>>>> 10.0.2.0        0.0.0.0         255.255.255.0   U         0 0          0 enp0s3
>>>>> 169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 enp0s10
>>>>> 192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 o-hm0
>>>>> 192.168.56.0    0.0.0.0         255.255.255.0   U         0 0          0 enp0s10
>>>>> 192.168.122.0   0.0.0.0         255.255.255.0   U         0 0          0 virbr0
>>>>>
>>>>> 11. In the amphora, the first two commit messages of amphora-agent are
>>>>> as follows.
>>>>>
>>>>> commit 2ab2836d0ebdd0fd5bc32d3adcc44a92557c8c1d
>>>>> Author: OpenStack Proposal Bot <openstack-infra at lists.openstack.org>
>>>>> Date:   Fri Nov 3 17:58:59 2017 +0000
>>>>>
>>>>>     Updated from global requirements
>>>>>
>>>>>     Change-Id: I9047e289b8a3c931156da480b3f9f676c54a8358
>>>>>
>>>>> commit 504cb6c682e4779b5889c0eb68705d0ab12e2c81
>>>>> Merge: e983508 b8ebbe9
>>>>> Author: Zuul <zuul at review.openstack.org>
>>>>> Date:   Wed Nov 1 19:46:39 2017 +0000
>>>>>
>>>>>     Merge "Add cached_zone to the amphora record"
>>>>>
>>>>> Best regards,
>>>>> Yipei
>>>>>
>>>>>
>>>>
>>>
>>
>