<div dir="ltr">Hi, Michael,<div><br></div><div>Thanks a lot for your comments. </div><div><br></div><div>In the environment where octavia and tricircle are installed together, I created a router and attached subnet1 to it. Then I bind the mac address of 10.0.1.9 (real gateway) to ip of 10.0.1.1 in the amphora arp cache, manually making amphora knows the mac address of 10.0.1.1 (actually it is the mac of 10.0.1.9, since 10.0.1.1 does not exist), it works. <br></div><div><br></div><div>To make the situation clear, I run the commands in the environment installed octavia alone. Please note that in this environment, there is no router. The results are as follows.</div><div><br></div><div><div>stack@stack-VirtualBox:~$ neutron lbaas-loadbalancer-list</div><div>neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.</div><div>+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+</div><div>| id | name | tenant_id | vip_address | provisioning_status | provider |</div><div>+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+</div><div>| d087d3b4-afbe-4af6-8b31-5e86fc97da1b | lb1 | e59bb8f3bf9342aba02f9ba5804ed2fb | 10.0.1.9 | ACTIVE | octavia |</div><div>+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+</div></div><div><br></div><div><div>ubuntu@amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns exec amphora-haproxy ip addr</div><div>sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9</div><div>1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1</div><div> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00</div><div>3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000</div><div> link/ether fa:16:3e:dc:9e:61 brd ff:ff:ff:ff:ff:ff</div><div> inet <a href="http://10.0.1.11/24">10.0.1.11/24</a> brd 10.0.1.255 scope global eth1</div><div> valid_lft forever preferred_lft forever</div><div> inet <a href="http://10.0.1.9/24">10.0.1.9/24</a> brd 10.0.1.255 scope global secondary eth1:0</div><div> valid_lft forever preferred_lft forever</div><div> inet6 fe80::f816:3eff:fedc:9e61/64 scope link</div><div> valid_lft forever preferred_lft forever</div></div><div><br></div><div><div>ubuntu@amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns exec amphora-haproxy ip route show table all</div><div>sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9</div><div>default via 10.0.1.1 dev eth1 table 1 onlink</div><div>default via 10.0.1.1 dev eth1 onlink</div><div><a href="http://10.0.1.0/24">10.0.1.0/24</a> dev eth1 proto kernel scope link src 10.0.1.11</div><div>broadcast 10.0.1.0 dev eth1 table local proto kernel scope link src 10.0.1.11</div><div>local 10.0.1.9 dev eth1 table local proto kernel scope host src 10.0.1.11</div><div>local 10.0.1.11 dev eth1 table local proto kernel scope host src 10.0.1.11</div><div>broadcast 10.0.1.255 dev eth1 table local proto kernel scope link src 10.0.1.11</div><div>fe80::/64 dev eth1 proto kernel metric 256 pref medium</div><div>unreachable default dev lo table unspec proto kernel metric 4294967295 error -101 pref medium</div><div>local fe80::f816:3eff:fedc:9e61 dev lo table local proto none metric 0 pref medium</div><div>ff00::/8 dev eth1 table local metric 256 pref 
To make the situation clearer, I ran the commands in the environment where Octavia is installed alone. Please note that in this environment there is no router. The results are as follows.

stack@stack-VirtualBox:~$ neutron lbaas-loadbalancer-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
| id | name | tenant_id | vip_address | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
| d087d3b4-afbe-4af6-8b31-5e86fc97da1b | lb1 | e59bb8f3bf9342aba02f9ba5804ed2fb | 10.0.1.9 | ACTIVE | octavia |
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+

ubuntu@amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns exec amphora-haproxy ip addr
sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:dc:9e:61 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.11/24 brd 10.0.1.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.0.1.9/24 brd 10.0.1.255 scope global secondary eth1:0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fedc:9e61/64 scope link
       valid_lft forever preferred_lft forever

ubuntu@amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns exec amphora-haproxy ip route show table all
sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
default via 10.0.1.1 dev eth1 table 1 onlink
default via 10.0.1.1 dev eth1 onlink
10.0.1.0/24 dev eth1 proto kernel scope link src 10.0.1.11
broadcast 10.0.1.0 dev eth1 table local proto kernel scope link src 10.0.1.11
local 10.0.1.9 dev eth1 table local proto kernel scope host src 10.0.1.11
local 10.0.1.11 dev eth1 table local proto kernel scope host src 10.0.1.11
broadcast 10.0.1.255 dev eth1 table local proto kernel scope link src 10.0.1.11
fe80::/64 dev eth1 proto kernel metric 256 pref medium
unreachable default dev lo table unspec proto kernel metric 4294967295 error -101 pref medium
local fe80::f816:3eff:fedc:9e61 dev lo table local proto none metric 0 pref medium
ff00::/8 dev eth1 table local metric 256 pref medium
unreachable default dev lo table unspec proto kernel metric 4294967295 error -101 pref medium

ubuntu@amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns exec amphora-haproxy ip rule show
sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
0:      from all lookup local
100:    from 10.0.1.9 lookup 1
32766:  from all lookup main
32767:  from all lookup default

If there is no router, the packets are supposed to be forwarded at layer 2, since the amphora is plugged into a port on every member's subnet. However, the rule "from 10.0.1.9 lookup 1" (priority 100) is consulted before the main table lookup (priority 32766), so traffic sourced from the VIP follows the table 1 default route via 10.0.1.1 instead of the directly connected 10.0.1.0/24 route. Maybe this patch (https://review.openstack.org/#/c/501915/) affects the L2 traffic.
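If that rule is indeed the culprit, a quick (debug-only) way to confirm could be to delete it inside the namespace and retry curl, so that replies sourced from the VIP fall back to the connected route, e.g.:

sudo ip netns exec amphora-haproxy ip rule del from 10.0.1.9 lookup 1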
Best regards,
Yipei

On Sat, Nov 11, 2017 at 11:24 AM, Yipei Niu <newypei@gmail.com> wrote:

Hi, Michael,

I tried to run the command, and I think the amphora can connect to the member (10.0.1.3). The results are as follows.

ubuntu@amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns exec amphora-haproxy ping 10.0.1.3
sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
PING 10.0.1.3 (10.0.1.3) 56(84) bytes of data.
64 bytes from 10.0.1.3: icmp_seq=1 ttl=64 time=189 ms
64 bytes from 10.0.1.3: icmp_seq=2 ttl=64 time=1.72 ms
^C
--- 10.0.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1006ms
rtt min/avg/max/mdev = 1.722/95.855/189.989/94.134 ms

ubuntu@amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns exec amphora-haproxy curl 10.0.1.3
sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
Welcome to 10.0.1.3

stack@devstack-1:~$ sudo ip netns exec qdhcp-310fea4b-36ae-4617-b499-5936e8eda842 curl 10.0.1.3
Welcome to 10.0.1.3

As mentioned in my previous mail, I also have an environment with Octavia installed alone, where the error reproduces. In that environment I tried the same commands and got the same results: the member can be reached from both the host and the amphora.

ubuntu@amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns exec amphora-haproxy curl 10.0.1.5
sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
Welcome to 10.0.1.5

stack@stack-VirtualBox:~$ sudo ip netns exec qdhcp-13185eec-0996-4a08-b353-6775d5926b4c curl 10.0.1.5
Welcome to 10.0.1.5

In this environment, haproxy also tries to find the gateway IP 10.0.1.1.

ubuntu@amphora-dcbff58a-d418-4271-9374-9de2fd063ce9:~$ sudo ip netns exec amphora-haproxy tcpdump -i eth1 -nn
sudo: unable to resolve host amphora-dcbff58a-d418-4271-9374-9de2fd063ce9
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
^C08:58:53.997948 IP 10.0.1.2.55110 > 10.0.1.9.80: Flags [S], seq 4080898035, win 28200, options [mss 1410,sackOK,TS val 3910330593 ecr 0,nop,wscale 7], length 0
08:58:54.011771 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
08:58:54.991600 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
08:58:55.018542 IP 10.0.1.2.55110 > 10.0.1.9.80: Flags [S], seq 4080898035, win 28200, options [mss 1410,sackOK,TS val 3910330850 ecr 0,nop,wscale 7], length 0
08:58:55.991703 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
08:58:57.015187 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
08:58:57.034507 IP 10.0.1.2.55110 > 10.0.1.9.80: Flags [S], seq 4080898035, win 28200, options [mss 1410,sackOK,TS val 3910331354 ecr 0,nop,wscale 7], length 0
08:58:58.016438 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
08:58:59.017650 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
08:58:59.115960 ARP, Request who-has 10.0.1.9 tell 10.0.1.2, length 28
08:58:59.116293 ARP, Reply 10.0.1.9 is-at fa:16:3e:dc:9e:61, length 28
08:59:01.031434 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
08:59:01.162845 IP 10.0.1.2.55110 > 10.0.1.9.80: Flags [S], seq 4080898035, win 28200, options [mss 1410,sackOK,TS val 3910332386 ecr 0,nop,wscale 7], length 0
08:59:02.031095 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28
08:59:03.035527 ARP, Request who-has 10.0.1.1 tell 10.0.1.9, length 28

15 packets captured
15 packets received by filter
0 packets dropped by kernel

And the gateway IP being ARPed for is the same as the gateway_ip of the subnet:

+-------------------+--------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------+
| allocation_pools | {"start": "10.0.1.2", "end": "10.0.1.254"} |
| cidr | 10.0.1.0/24 |
| created_at | 2017-10-08T06:33:09Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.0.1.1 |
| host_routes | |
| id | 37023e56-a8bf-4070-8022-f6b6bb7b8e82 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | subnet1 |
| network_id | 13185eec-0996-4a08-b353-6775d5926b4c |
| project_id | e59bb8f3bf9342aba02f9ba5804ed2fb |
| revision_number | 0 |
| service_types | |
| subnetpool_id | |
| tags | |
| tenant_id | e59bb8f3bf9342aba02f9ba5804ed2fb |
| updated_at | 2017-10-08T06:33:09Z |
+-------------------+--------------------------------------------+

Some other info about this Octavia environment is as follows.

The info of the load balancer:
+---------------------+------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------+
| admin_state_up | True |
| description | |
| id | d087d3b4-afbe-4af6-8b31-5e86fc97da1b |
| listeners | {"id": "d22644d2-cc40-44f1-b37f-2bb4f555f9b9"} |
| name | lb1 |
| operating_status | ONLINE |
| pools | {"id": "bc95c8e0-8475-4d97-9606-76c431e78ef7"} |
| provider | octavia |
| provisioning_status | ACTIVE |
| tenant_id | e59bb8f3bf9342aba02f9ba5804ed2fb |
| vip_address | 10.0.1.9 |
| vip_port_id | 902a78e7-a618-455b-91c7-cd36595475cc |
| vip_subnet_id | 37023e56-a8bf-4070-8022-f6b6bb7b8e82 |
+---------------------+------------------------------------------------+

The info of the VMs:
+--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+
| 50ae3581-94fc-43ce-b53c-728715cd5597 | amphora-dcbff58a-d418-4271-9374-9de2fd063ce9 | ACTIVE | - | Running | lb-mgmt-net=192.168.0.10; net1=10.0.1.11 |
| d19b565d-14aa-4679-9d98-ff51461cd625 | vm1 | ACTIVE | - | Running | net1=10.0.1.5 |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+

The info of the listener:
+---------------------------+------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------+
| admin_state_up | True |
| connection_limit | -1 |
| default_pool_id | bc95c8e0-8475-4d97-9606-76c431e78ef7 |
| default_tls_container_ref | |
| description | |
| id | d22644d2-cc40-44f1-b37f-2bb4f555f9b9 |
| loadbalancers | {"id": "d087d3b4-afbe-4af6-8b31-5e86fc97da1b"} |
| name | listener1 |
| protocol | HTTP |
| protocol_port | 80 |
| sni_container_refs | |
| tenant_id | e59bb8f3bf9342aba02f9ba5804ed2fb |
+---------------------------+------------------------------------------------+

So I think the amphora can reach the member; it just cannot respond to curl, and hence fails to balance load across the members. Maybe there is something wrong with the amphora image, because after I replaced the latest amphora image with an old one (built on 2017-07-24) it works, in an otherwise identical environment.
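A quick way to narrow down the image difference might be to boot one amphora from each image and compare the routing configuration inside the namespace, e.g.:

sudo ip netns exec amphora-haproxy ip rule show
sudo ip netns exec amphora-haproxy ip route show table all

to see which policy rules or routes only the new image installs.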
Best regards,
Yipei

On Fri, Nov 10, 2017 at 5:24 PM, Yipei Niu <newypei@gmail.com> wrote:

Hi, Michael,

Thanks a lot for your reply.

I can make sure that there is no router and no duplicate DHCP service in my environment.

As shown in my first mail, the haproxy in the amphora tries to find the gateway IP 10.0.1.1, which does not exist in the environment.

ubuntu@amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns exec amphora-haproxy tcpdump -i eth1 -nn
sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
^C07:25:24.225614 IP 10.0.1.2.55294 > 10.0.1.4.80: Flags [S], seq 1637781601, win 28200, options [mss 1410,sackOK,TS val 30692602 ecr 0,nop,wscale 7], length 0
07:25:24.237854 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
07:25:25.224801 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
07:25:25.228610 IP 10.0.1.2.55294 > 10.0.1.4.80: Flags [S], seq 1637781601, win 28200, options [mss 1410,sackOK,TS val 30692853 ecr 0,nop,wscale 7], length 0
07:25:26.224533 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
07:25:27.230911 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
07:25:27.250858 IP 10.0.1.2.55294 > 10.0.1.4.80: Flags [S], seq 1637781601, win 28200, options [mss 1410,sackOK,TS val 30693359 ecr 0,nop,wscale 7], length 0
07:25:28.228796 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
07:25:29.228697 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
07:25:29.290551 ARP, Request who-has 10.0.1.4 tell 10.0.1.2, length 28
07:25:29.290985 ARP, Reply 10.0.1.4 is-at fa:16:3e:be:5a:d5, length 28
07:25:31.251122 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
07:25:32.248737 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28
07:25:33.250309 ARP, Request who-has 10.0.1.1 tell 10.0.1.4, length 28

So if the subnet is not attached to any router, why does haproxy try to find a gateway IP that does not exist at all? Maybe that is the reason why haproxy receives the packet from curl but fails to respond.

I think the gateway IP (10.0.1.10) confuses you. Actually, in my environment Octavia and Tricircle (https://wiki.openstack.org/wiki/Tricircle) are installed together. Because of the cross-Neutron mechanism of Tricircle, the gateway IP of the subnet in that region is 10.0.1.10. But I can make sure that neither gateway IP (10.0.1.1 nor 10.0.1.10) exists in the network, since there is no router at all. The same error also happens in another environment of mine where Octavia is installed alone. That environment was installed on Oct. 6, and all the repos were the latest at that time.
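If the member subnet really never needs a router, one experiment I may try is recreating it without a gateway at all, e.g.:

openstack subnet create --network net1 --subnet-range 10.0.1.0/24 --gateway none subnet1

on the assumption that the amphora then has no gateway_ip from which to program a default route.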
Best regards,
Yipei

On Thu, Nov 9, 2017 at 2:50 PM, Yipei Niu <newypei@gmail.com> wrote:

Hi, Michael,

Based on your mail, the information is as follows.

1. The version of Octavia I used is Queens, and the latest commit message is

commit 2ab2836d0ebdd0fd5bc32d3adcc44a92557c8c1d
Author: OpenStack Proposal Bot <openstack-infra@lists.openstack.org>
Date: Fri Nov 3 17:58:59 2017 +0000

    Updated from global requirements

    Change-Id: I9047e289b8a3c931156da480b3f9f676c54a8358

2. The info of the amphora and other VMs is as follows.
+--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+
| 33bd02cb-f853-404d-a705-99bc1b04a178 | amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f | ACTIVE | - | Running | lb-mgmt-net1=192.168.1.4; net1=10.0.1.8 |
| dd046fc9-e2bf-437d-8c51-c397bccc3dc1 | client1 | ACTIVE | - | Running | net1=10.0.1.3 |
| 50446c75-7cb7-43eb-b057-4b6b89a926bc | client3 | ACTIVE | - | Running | net4=10.0.4.3 |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+

3. The info of the load balancer is as follows.
+---------------------+------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------+
| admin_state_up | True |
| description | |
| id | 51cba1d5-cc3c-48ff-b41e-839619093334 |
| listeners | {"id": "b20ad920-c6cd-4e71-a9b9-c134e57ecd20"} |
| name | lb1 |
| operating_status | ONLINE |
| pools | {"id": "d0042605-da50-4048-b298-660420b0a1d2"} |
| provider | octavia |
| provisioning_status | ACTIVE |
| tenant_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
| vip_address | 10.0.1.4 |
| vip_port_id | 2209a819-0ac8-4211-b878-f0b41ac4727b |
| vip_subnet_id | cbcf4f04-da6d-4800-8b40-4b141972c2bf |
+---------------------+------------------------------------------------+

4. The info of the listener is as follows.
+---------------------------+------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------+
| admin_state_up | True |
| connection_limit | -1 |
| default_pool_id | d0042605-da50-4048-b298-660420b0a1d2 |
| default_tls_container_ref | |
| description | |
| id | b20ad920-c6cd-4e71-a9b9-c134e57ecd20 |
| loadbalancers | {"id": "51cba1d5-cc3c-48ff-b41e-839619093334"} |
| name | listener1 |
| protocol | HTTP |
| protocol_port | 80 |
| sni_container_refs | |
| tenant_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
+---------------------------+------------------------------------------------+

5. The members of the load balancer lb1 are as follows.
+--------------------------------------+------+----------------------------------+----------+---------------+--------+--------------------------------------+----------------+
| id | name | tenant_id | address | protocol_port | weight | subnet_id | admin_state_up |
+--------------------------------------+------+----------------------------------+----------+---------------+--------+--------------------------------------+----------------+
| 420c905c-1077-46c9-8b04-526a59d93376 | | c2a97a04cb6d4f25bdcb8b3f263c869e | 10.0.1.3 | 80 | 1 | cbcf4f04-da6d-4800-8b40-4b141972c2bf | True |
+--------------------------------------+------+----------------------------------+----------+---------------+--------+--------------------------------------+----------------+
6. Since the VIP and the members reside in the same subnet, only two subnets are listed as follows.
+--------------------------------------+-----------------+----------------------------------+----------------+----------------------------------------------------+
| id | name | tenant_id | cidr | allocation_pools |
+--------------------------------------+-----------------+----------------------------------+----------------+----------------------------------------------------+
| 752f865d-89e4-4284-9e91-8617a5a21da1 | lb-mgmt-subnet1 | c2a97a04cb6d4f25bdcb8b3f263c869e | 192.168.1.0/24 | {"start": "192.168.1.10", "end": "192.168.1.254"} |
| | | | | {"start": "192.168.1.1", "end": "192.168.1.8"} |
| cbcf4f04-da6d-4800-8b40-4b141972c2bf | subnet1 | c2a97a04cb6d4f25bdcb8b3f263c869e | 10.0.1.0/24 | {"start": "10.0.1.1", "end": "10.0.1.9"} |
| | | | | {"start": "10.0.1.11", "end": "10.0.1.254"} |
+--------------------------------------+-----------------+----------------------------------+----------------+----------------------------------------------------+

7. The detailed info of lb-mgmt-subnet1 and subnet1 is listed as follows, respectively.

lb-mgmt-subnet1
+-------------------+---------------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------------+
| allocation_pools | {"start": "192.168.1.1", "end": "192.168.1.8"} |
| | {"start": "192.168.1.10", "end": "192.168.1.254"} |
| cidr | 192.168.1.0/24 |
| created_at | 2017-11-05T12:14:45Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.1.9 |
| host_routes | |
| id | 752f865d-89e4-4284-9e91-8617a5a21da1 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | lb-mgmt-subnet1 |
| network_id | b4261144-3342-4605-8ca6-146e5b84c4ea |
| project_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
| revision_number | 0 |
| service_types | |
| subnetpool_id | |
| tags | |
| tenant_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
| updated_at | 2017-11-05T12:14:45Z |
+-------------------+---------------------------------------------------+

subnet1
+-------------------+------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------+
| allocation_pools | {"start": "10.0.1.1", "end": "10.0.1.9"} |
| | {"start": "10.0.1.11", "end": "10.0.1.254"} |
| cidr | 10.0.1.0/24 |
| created_at | 2017-11-05T12:37:56Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.0.1.10 |
| host_routes | |
| id | cbcf4f04-da6d-4800-8b40-4b141972c2bf |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | subnet1 |
| network_id | 310fea4b-36ae-4617-b499-5936e8eda842 |
| project_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
| revision_number | 0 |
| service_types | |
| subnetpool_id | |
| tags | |
| tenant_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
| updated_at | 2017-11-05T12:37:56Z |
+-------------------+------------------------------------------------+

8. The info of the interfaces in the default and amphora-haproxy network namespaces of the amphora is as follows.

ubuntu@amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ ifconfig
ens3      Link encap:Ethernet  HWaddr fa:16:3e:9e:6b:77
          inet addr:192.168.1.4  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe9e:6b77/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:13112 errors:0 dropped:0 overruns:0 frame:0
          TX packets:41491 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:775372 (775.3 KB)  TX bytes:9653389 (9.6 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:128 errors:0 dropped:0 overruns:0 frame:0
          TX packets:128 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:11424 (11.4 KB)  TX bytes:11424 (11.4 KB)

ubuntu@amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns exec amphora-haproxy ifconfig
sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
eth1      Link encap:Ethernet  HWaddr fa:16:3e:be:5a:d5
          inet addr:10.0.1.8  Bcast:10.0.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:febe:5ad5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:107 errors:0 dropped:0 overruns:0 frame:0
          TX packets:218 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6574 (6.5 KB)  TX bytes:9468 (9.4 KB)

eth1:0    Link encap:Ethernet  HWaddr fa:16:3e:be:5a:d5
          inet addr:10.0.1.4  Bcast:10.0.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

9. When I curl the VIP from the host, it does not respond and finally returns a timeout error.

stack@devstack-1:/opt/stack/octavia$ sudo ip netns exec qdhcp-310fea4b-36ae-4617-b499-5936e8eda842 curl 10.0.1.4
curl: (7) Failed to connect to 10.0.1.4 port 80: Connection timed out
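As a further sanity check that haproxy is up and bound to the VIP, listing the listening sockets inside the namespace, e.g.:

sudo ip netns exec amphora-haproxy ss -lnt

should show a listener on port 80.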
Results of running "netstat -rn" on the host are as follows.</div><div>Kernel IP routing table</div><div>Destination Gateway Genmask Flags MSS Window irtt Iface</div><div>0.0.0.0 192.168.1.9 0.0.0.0 UG 0 0 0 o-hm0</div><div>0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 enp0s3</div><div>10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s3</div><div>169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 enp0s10</div><div>192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 o-hm0</div><div>192.168.56.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s10</div><div>192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0</div><div> <br></div><div>11. In the amphora, the first two commit message of amphora-agent are as follows.</div><div><div><br></div><div>commit 2ab2836d0ebdd0fd5bc32d3adcc44a<wbr>92557c8c1d</div><div>Author: OpenStack Proposal Bot <<a href="mailto:openstack-infra@lists.openstack.org" target="_blank">openstack-infra@lists.opensta<wbr>ck.org</a>></div><div>Date: Fri Nov 3 17:58:59 2017 +0000</div><div><br></div><div> Updated from global requirements</div><div> </div><div> Change-Id: I9047e289b8a3c931156da480b3f9f<wbr>676c54a8358</div><div><br></div><div>commit 504cb6c682e4779b5889c0eb68705d<wbr>0ab12e2c81</div><div>Merge: e983508 b8ebbe9</div><div>Author: Zuul <<a href="mailto:zuul@review.openstack.org" target="_blank">zuul@review.openstack.org</a>></div><div>Date: Wed Nov 1 19:46:39 2017 +0000</div><div><br></div><div> Merge "Add cached_zone to the amphora record"</div></div><div><br></div><div>Best regards,</div><div>Yipei</div><div><br></div></div>