[openstack-dev] [octavia] amphora fails to send request to members
Michael Johnson
johnsomor at gmail.com
Fri Nov 10 06:26:21 UTC 2017
Hi Yipei,
I see a few things that are odd:
stack at devstack-1:/opt/stack/octavia$ sudo ip netns exec
qdhcp-310fea4b-36ae-4617-b499-5936e8eda842 curl 10.0.1.4
curl: (7) Failed to connect to 10.0.1.4 port 80: Connection timed out
This means the connection is failing between curl and the HAProxy
process itself. If HAProxy were running but simply unable to reach the
member, you would get a 503 error instead.
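A quick way to narrow that down (just a suggestion, adjust the namespace,
addresses, and tools to what your amphora image actually provides) is to
check from inside the amphora whether HAProxy is listening on the VIP at
all, and whether the member is reachable from the amphora-haproxy
namespace:

# on the amphora, as the ubuntu user
sudo ip netns exec amphora-haproxy netstat -nlt    # expect a listener on 10.0.1.4:80
sudo ip netns exec amphora-haproxy curl 10.0.1.3   # assumes curl is in the image

If nothing is listening on port 80, the listener configuration never made
it into HAProxy; if the curl to the member also hangs, the problem is
networking in the namespace rather than HAProxy.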
The other thing that does not make sense is the routing table in the
amphora-haproxy namespace.
It has a route of:
default 10.0.1.1 0.0.0.0 UG 0 0 0 eth1
But your subnet has a gateway:
|gateway_ip | 10.0.1.10 |
Are there multiple routers on the VIP network, or multiple DHCP
services? Since neutron is configured for DHCP on that subnet, the
amphora gets its default gateway from the DHCP response it uses to
configure the base interface eth1.
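One way to check this (a sketch, using the network ID from your subnet1
output; the exact flags depend on your client version) is to list the
ports that could be answering DHCP or acting as a router on that network,
and to look at the route the amphora actually picked up:

# on the devstack host
openstack port list --network 310fea4b-36ae-4617-b499-5936e8eda842 --device-owner network:dhcp
openstack port list --network 310fea4b-36ae-4617-b499-5936e8eda842 --device-owner network:router_interface

# on the amphora
sudo ip netns exec amphora-haproxy ip route

If the default route inside the namespace points at 10.0.1.1 while the
subnet gateway is 10.0.1.10, whatever answered DHCP is handing out the
wrong router option, and that is what needs to be tracked down.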
Michael
On Wed, Nov 8, 2017 at 10:50 PM, Yipei Niu <newypei at gmail.com> wrote:
> Hi, Michael,
>
> Based on your mail, the information is as follows.
>
> 1. The version of Octavia I used is Queens, and the latest commit message is
> commit 2ab2836d0ebdd0fd5bc32d3adcc44a92557c8c1d
> Author: OpenStack Proposal Bot <openstack-infra at lists.openstack.org>
> Date: Fri Nov 3 17:58:59 2017 +0000
>
> Updated from global requirements
>
> Change-Id: I9047e289b8a3c931156da480b3f9f676c54a8358
>
> 2. The info of the amphora and other VMs is as follows.
> +--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+
> | ID                                   | Name                                         | Status | Task State | Power State | Networks                                 |
> +--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+
> | 33bd02cb-f853-404d-a705-99bc1b04a178 | amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f | ACTIVE | -          | Running     | lb-mgmt-net1=192.168.1.4; net1=10.0.1.8  |
> | dd046fc9-e2bf-437d-8c51-c397bccc3dc1 | client1                                      | ACTIVE | -          | Running     | net1=10.0.1.3                            |
> | 50446c75-7cb7-43eb-b057-4b6b89a926bc | client3                                      | ACTIVE | -          | Running     | net4=10.0.4.3                            |
> +--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------------------------+
>
> 3. The info of the load balancer is as follows.
> +---------------------+------------------------------------------------+
> | Field | Value |
> +---------------------+------------------------------------------------+
> | admin_state_up | True |
> | description | |
> | id | 51cba1d5-cc3c-48ff-b41e-839619093334 |
> | listeners | {"id": "b20ad920-c6cd-4e71-a9b9-c134e57ecd20"} |
> | name | lb1 |
> | operating_status | ONLINE |
> | pools | {"id": "d0042605-da50-4048-b298-660420b0a1d2"} |
> | provider | octavia |
> | provisioning_status | ACTIVE |
> | tenant_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
> | vip_address | 10.0.1.4 |
> | vip_port_id | 2209a819-0ac8-4211-b878-f0b41ac4727b |
> | vip_subnet_id | cbcf4f04-da6d-4800-8b40-4b141972c2bf |
> +---------------------+------------------------------------------------+
>
> 4. The info of the listener is as follows.
> +---------------------------+------------------------------------------------+
> | Field                     | Value                                          |
> +---------------------------+------------------------------------------------+
> | admin_state_up            | True                                           |
> | connection_limit          | -1                                             |
> | default_pool_id           | d0042605-da50-4048-b298-660420b0a1d2           |
> | default_tls_container_ref |                                                |
> | description               |                                                |
> | id                        | b20ad920-c6cd-4e71-a9b9-c134e57ecd20           |
> | loadbalancers             | {"id": "51cba1d5-cc3c-48ff-b41e-839619093334"} |
> | name                      | listener1                                      |
> | protocol                  | HTTP                                           |
> | protocol_port             | 80                                             |
> | sni_container_refs        |                                                |
> | tenant_id                 | c2a97a04cb6d4f25bdcb8b3f263c869e               |
> +---------------------------+------------------------------------------------+
>
> 5. The members of the load balancer lb1 are as follows.
> +--------------------------------------+------+----------------------------------+----------+---------------+--------+--------------------------------------+----------------+
> | id                                   | name | tenant_id                        | address  | protocol_port | weight | subnet_id                            | admin_state_up |
> +--------------------------------------+------+----------------------------------+----------+---------------+--------+--------------------------------------+----------------+
> | 420c905c-1077-46c9-8b04-526a59d93376 |      | c2a97a04cb6d4f25bdcb8b3f263c869e | 10.0.1.3 | 80            | 1      | cbcf4f04-da6d-4800-8b40-4b141972c2bf | True           |
> +--------------------------------------+------+----------------------------------+----------+---------------+--------+--------------------------------------+----------------+
>
> 6. Since the VIP and the members reside in the same subnet, only two subnets
> are listed as follows.
> +--------------------------------------+-----------------+----------------------------------+----------------+---------------------------------------------------+
> | id                                   | name            | tenant_id                        | cidr           | allocation_pools                                  |
> +--------------------------------------+-----------------+----------------------------------+----------------+---------------------------------------------------+
> | 752f865d-89e4-4284-9e91-8617a5a21da1 | lb-mgmt-subnet1 | c2a97a04cb6d4f25bdcb8b3f263c869e | 192.168.1.0/24 | {"start": "192.168.1.10", "end": "192.168.1.254"} |
> |                                      |                 |                                  |                | {"start": "192.168.1.1", "end": "192.168.1.8"}    |
> | cbcf4f04-da6d-4800-8b40-4b141972c2bf | subnet1         | c2a97a04cb6d4f25bdcb8b3f263c869e | 10.0.1.0/24    | {"start": "10.0.1.1", "end": "10.0.1.9"}          |
> |                                      |                 |                                  |                | {"start": "10.0.1.11", "end": "10.0.1.254"}       |
> +--------------------------------------+-----------------+----------------------------------+----------------+---------------------------------------------------+
>
> 7. The detailed info of lb-mgmt-subnet1 and subnet1 is listed as follows.
> lb-mgmt-subnet1
> +-------------------+---------------------------------------------------+
> | Field | Value |
> +-------------------+---------------------------------------------------+
> | allocation_pools | {"start": "192.168.1.1", "end": "192.168.1.8"} |
> | | {"start": "192.168.1.10", "end": "192.168.1.254"} |
> | cidr | 192.168.1.0/24 |
> | created_at | 2017-11-05T12:14:45Z |
> | description | |
> | dns_nameservers | |
> | enable_dhcp | True |
> | gateway_ip | 192.168.1.9 |
> | host_routes | |
> | id | 752f865d-89e4-4284-9e91-8617a5a21da1 |
> | ip_version | 4 |
> | ipv6_address_mode | |
> | ipv6_ra_mode | |
> | name | lb-mgmt-subnet1 |
> | network_id | b4261144-3342-4605-8ca6-146e5b84c4ea |
> | project_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
> | revision_number | 0 |
> | service_types | |
> | subnetpool_id | |
> | tags | |
> | tenant_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
> | updated_at | 2017-11-05T12:14:45Z |
> +-------------------+---------------------------------------------------+
>
> subnet1
> +-------------------+---------------------------------------------+
> | Field | Value |
> +-------------------+---------------------------------------------+
> | allocation_pools | {"start": "10.0.1.1", "end": "10.0.1.9"} |
> | | {"start": "10.0.1.11", "end": "10.0.1.254"} |
> | cidr | 10.0.1.0/24 |
> | created_at | 2017-11-05T12:37:56Z |
> | description | |
> | dns_nameservers | |
> | enable_dhcp | True |
> | gateway_ip | 10.0.1.10 |
> | host_routes | |
> | id | cbcf4f04-da6d-4800-8b40-4b141972c2bf |
> | ip_version | 4 |
> | ipv6_address_mode | |
> | ipv6_ra_mode | |
> | name | subnet1 |
> | network_id | 310fea4b-36ae-4617-b499-5936e8eda842 |
> | project_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
> | revision_number | 0 |
> | service_types | |
> | subnetpool_id | |
> | tags | |
> | tenant_id | c2a97a04cb6d4f25bdcb8b3f263c869e |
> | updated_at | 2017-11-05T12:37:56Z |
> +-------------------+---------------------------------------------+
>
> 8. The info of the interfaces in the default and amphora-haproxy network
> namespaces of the amphora is as follows.
> ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ ifconfig
> ens3 Link encap:Ethernet HWaddr fa:16:3e:9e:6b:77
> inet addr:192.168.1.4 Bcast:192.168.1.255 Mask:255.255.255.0
> inet6 addr: fe80::f816:3eff:fe9e:6b77/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
> RX packets:13112 errors:0 dropped:0 overruns:0 frame:0
> TX packets:41491 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:775372 (775.3 KB) TX bytes:9653389 (9.6 MB)
>
> lo Link encap:Local Loopback
> inet addr:127.0.0.1 Mask:255.0.0.0
> inet6 addr: ::1/128 Scope:Host
> UP LOOPBACK RUNNING MTU:65536 Metric:1
> RX packets:128 errors:0 dropped:0 overruns:0 frame:0
> TX packets:128 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1
> RX bytes:11424 (11.4 KB) TX bytes:11424 (11.4 KB)
>
> ubuntu at amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f:~$ sudo ip netns exec
> amphora-haproxy ifconfig
> sudo: unable to resolve host amphora-a0621f0e-d27f-4f22-a4ee-05b695e2b71f
> eth1 Link encap:Ethernet HWaddr fa:16:3e:be:5a:d5
> inet addr:10.0.1.8 Bcast:10.0.1.255 Mask:255.255.255.0
> inet6 addr: fe80::f816:3eff:febe:5ad5/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:107 errors:0 dropped:0 overruns:0 frame:0
> TX packets:218 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:6574 (6.5 KB) TX bytes:9468 (9.4 KB)
>
> eth1:0 Link encap:Ethernet HWaddr fa:16:3e:be:5a:d5
> inet addr:10.0.1.4 Bcast:10.0.1.255 Mask:255.255.255.0
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>
> 9. When curling the VIP from the host, it does not respond and finally returns
> a timeout error.
> stack at devstack-1:/opt/stack/octavia$ sudo ip netns exec
> qdhcp-310fea4b-36ae-4617-b499-5936e8eda842 curl 10.0.1.4
> curl: (7) Failed to connect to 10.0.1.4 port 80: Connection timed out
>
> 10. Results of running "netstat -rn" on the host are as follows.
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
> 0.0.0.0         192.168.1.9     0.0.0.0         UG        0 0          0 o-hm0
> 0.0.0.0         10.0.2.2        0.0.0.0         UG        0 0          0 enp0s3
> 10.0.2.0        0.0.0.0         255.255.255.0   U         0 0          0 enp0s3
> 169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 enp0s10
> 192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 o-hm0
> 192.168.56.0    0.0.0.0         255.255.255.0   U         0 0          0 enp0s10
> 192.168.122.0   0.0.0.0         255.255.255.0   U         0 0          0 virbr0
>
> 11. In the amphora, the first two commit messages of amphora-agent are as
> follows.
>
> commit 2ab2836d0ebdd0fd5bc32d3adcc44a92557c8c1d
> Author: OpenStack Proposal Bot <openstack-infra at lists.openstack.org>
> Date: Fri Nov 3 17:58:59 2017 +0000
>
> Updated from global requirements
>
> Change-Id: I9047e289b8a3c931156da480b3f9f676c54a8358
>
> commit 504cb6c682e4779b5889c0eb68705d0ab12e2c81
> Merge: e983508 b8ebbe9
> Author: Zuul <zuul at review.openstack.org>
> Date: Wed Nov 1 19:46:39 2017 +0000
>
> Merge "Add cached_zone to the amphora record"
>
> Best regards,
> Yipei
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>