[octavia] Amphora agent returned unexpected result code 500
Dear colleagues,

while trying to launch an amphora, I get the following error in the logs:

Amphora agent returned unexpected result code 500 with response {'message': 'Error plugging VIP', 'details': 'SIOCADDRT: Network is unreachable\nFailed to bring up eth1.\n'}

The details are below; my questions are:
- is it enough to assign the roles, as explained below, to the special project for Octavia?
- could this be an issue with the image created by diskimage-create.sh?
- any recommendations on where to look for the problem?

Thank you.

My environment is:
- OpenStack Rocky
- Octavia 4.0
- the amphora instance runs in a special project "octavia", where the users octavia, nova and neutron have the admin role
- the amphora image was prepared using the original git repo process and elements, without modification:
  * git clone <octavia>
  * cd octavia
  * diskimage-create/diskimage-create.sh
  * openstack image create [ ... ] --tag amphora

After creation, the amphora instance successfully connects to the management network and can be accessed by the controller:

2019-05-05 20:46:06.851 18234 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connected to amphora. Response: <Response [200]> request /opt/openstack/lib/python3.6/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:486
2019-05-05 20:46:06.852 18234 DEBUG octavia.controller.worker.tasks.amphora_driver_tasks [-] Successfuly connected to amphora 5bec4c09-a209-4e73-a66e-e4fc0fb8ded5: {'ipvsadm_version': '1:1.28-3', 'api_version': '0.5', 'haproxy_version': '1.6.3-1ubuntu0.2', 'hostname': 'amphora-5bec4c09-a209-4e73-a66e-e4fc0fb8ded5', 'keepalived_version': '1:1.2.24-1ubuntu0.16.04.1'} execute /opt/openstack/lib/python3.6/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py:372
[ ... ]
2019-05-05 20:46:06.990 18234 DEBUG octavia.controller.worker.tasks.network_tasks [-] Plumbing VIP for amphora id: 5bec4c09-a209-4e73-a66e-e4fc0fb8ded5 execute /opt/openstack/lib/python3.6/site-packages/octavia/controller/worker/tasks/network_tasks.py:382
2019-05-05 20:46:07.003 18234 DEBUG octavia.network.drivers.neutron.base [-] Neutron extension security-group found enabled _check_extension_enabled /opt/openstack/lib/python3.6/site-packages/octavia/network/drivers/neutron/base.py:66
2019-05-05 20:46:07.013 18234 DEBUG octavia.network.drivers.neutron.base [-] Neutron extension dns-integration found enabled _check_extension_enabled /opt/openstack/lib/python3.6/site-packages/octavia/network/drivers/neutron/base.py:66
2019-05-05 20:46:07.025 18234 DEBUG octavia.network.drivers.neutron.base [-] Neutron extension qos found enabled _check_extension_enabled /opt/openstack/lib/python3.6/site-packages/octavia/network/drivers/neutron/base.py:66
2019-05-05 20:46:07.044 18234 DEBUG octavia.network.drivers.neutron.base [-] Neutron extension allowed-address-pairs found enabled _check_extension_enabled /opt/openstack/lib/python3.6/site-packages/octavia/network/drivers/neutron/base.py:66
2019-05-05 20:46:08.406 18234 DEBUG octavia.network.drivers.neutron.allowed_address_pairs [-] Created vip port: b0398cc8-6d52-4f12-9f1f-1141b0f10751 for amphora: 5bec4c09-a209-4e73-a66e-e4fc0fb8ded5 _plug_amphora_vip /opt/openstack/lib/python3.6/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py:97
[ ... ]
2019-05-05 20:46:15.405 18234 DEBUG octavia.network.drivers.neutron.allowed_address_pairs [-] Retrieving network details for amphora 5bec4c09-a209-4e73-a66e-e4fc0fb8ded5 _get_amp_net_configs /opt/openstack/lib/python3.6/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py:596
[ ... ]
2019-05-05 20:46:15.837 18234 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] Post-VIP-Plugging with vrrp_ip 10.0.2.13 vrrp_port b0398cc8-6d52-4f12-9f1f-1141b0f10751 post_vip_plug /opt/openstack/lib/python3.6/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:233
2019-05-05 20:46:15.838 18234 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url plug/vip/10.0.2.24 request /opt/openstack/lib/python3.6/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:462
2019-05-05 20:46:15.838 18234 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url https://172.16.252.35:9443/0.5/plug/vip/10.0.2.24 request /opt/openstack/lib/python3.6/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:465
2019-05-05 20:46:16.089 18234 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connected to amphora. Response: <Response [500]> request /opt/openstack/lib/python3.6/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:486
2019-05-05 20:46:16.090 18234 ERROR octavia.amphorae.drivers.haproxy.exceptions [-] Amphora agent returned unexpected result code 500 with response {'message': 'Error plugging VIP', 'details': 'SIOCADDRT: Network is unreachable\nFailed to bring up eth1.\n'}

During this process, the Neutron logs contain the following records (note "status=DOWN" in the neutron-dhcp-agent record; later, immediately before the port is deleted, it will show status=ACTIVE):

May 5 20:46:13 ardbeg neutron-dhcp-agent: 2019-05-05 20:46:13.857 1804 INFO neutron.agent.dhcp.agent [req-07833602-9579-403b-a264-76fd3ee408ee a18f38c780074c6280dde5edad159666 41a02a69918849509f4102b04f8a7de9 - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[{u'ip_address': u'10.0.2.24', u'mac_address': u'72:d0:1c:4c:94:91'}], binding:host_id=ardbeg, binding:profile=, binding:vif_details=datapath_type=system, ovs_hybrid_plug=False, port_filter=True, binding:vif_type=ovs, binding:vnic_type=normal, created_at=2019-05-05T20:46:07Z, description=, device_id=f1bce6e9-be5b-464b-8f64-686f36e9de1f, device_owner=compute:nova, dns_assignment=[{u'hostname': u'amphora-5bec4c09-a209-4e73-a66e-e4fc0fb8ded5', u'ip_address': u'10.0.2.13', u'fqdn': u'amphora-5bec4c09-a209-4e73-a66e-e4fc0fb8ded5.loqal.'}], dns_domain=, dns_name=amphora-5bec4c09-a209-4e73-a66e-e4fc0fb8ded5, extra_dhcp_opts=[], fixed_ips=[{u'subnet_id': u'24b10886-3d53-4aee-bdc6-f165b242ae4f', u'ip_address': u'10.0.2.13'}], id=b0398cc8-6d52-4f12-9f1f-1141b0f10751, mac_address=72:d0:1c:4c:94:91, name=octavia-lb-vrrp-5bec4c09-a209-4e73-a66e-e4fc0fb8ded5, network_id=b24d2830-eec6-4abd-82f2-ac71c8ecbf40, port_security_enabled=True, project_id=41a02a69918849509f4102b04f8a7de9, qos_policy_id=None, revision_number=5, security_groups=[u'6df53a15-6afc-4c99-b464-03de4f546b4f'], status=DOWN, tags=[], tenant_id=41a02a69918849509f4102b04f8a7de9, updated_at=2019-05-05T20:46:13Z
May 5 20:46:14 ardbeg neutron-openvswitch-agent: 2019-05-05 20:46:14.185 31542 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-a4425cdb-afc1-4f6a-9ef9-c8706e3285d6 - - - - -] Port b0398cc8-6d52-4f12-9f1f-1141b0f10751 updated. Details: {'profile': {}, 'network_qos_policy_id': None, 'qos_policy_id': None, 'allowed_address_pairs': [{'ip_address': AuthenticIPNetwork('10.0.2.24'), 'mac_address': EUI('72:d0:1c:4c:94:91')}], 'admin_state_up': True, 'network_id': 'b24d2830-eec6-4abd-82f2-ac71c8ecbf40', 'segmentation_id': 437, 'fixed_ips': [{'subnet_id': '24b10886-3d53-4aee-bdc6-f165b242ae4f', 'ip_address': '10.0.2.13'}], 'device_owner': u'compute:nova', 'physical_network': None, 'mac_address': '72:d0:1c:4c:94:91', 'device': u'b0398cc8-6d52-4f12-9f1f-1141b0f10751', 'port_security_enabled': True, 'port_id': 'b0398cc8-6d52-4f12-9f1f-1141b0f10751', 'network_type': u'vxlan', 'security_groups': [u'6df53a15-6afc-4c99-b464-03de4f546b4f']}
May 5 20:46:14 ardbeg neutron-openvswitch-agent: 2019-05-05 20:46:14.197 31542 INFO neutron.agent.securitygroups_rpc [req-a4425cdb-afc1-4f6a-9ef9-c8706e3285d6 - - - - -] Preparing filters for devices set([u'b0398cc8-6d52-4f12-9f1f-1141b0f10751'])

Note that Nova returns response 200/completed:

May 5 20:46:14 controller-l neutron-server: 2019-05-05 20:46:14.326 20981 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'b0398cc8-6d52-4f12-9f1f-1141b0f10751', u'name': u'network-changed', u'server_uuid': u'f1bce6e9-be5b-464b-8f64-686f36e9de1f', u'code': 200}

and "openstack server show" shows both NICs attached to the amphora:

$ openstack server show f1bce6e9-be5b-464b-8f64-686f36e9de1f
+-------------------------------------+------------------------------------------------------------+
| Field                               | Value                                                      |
+-------------------------------------+------------------------------------------------------------+
[ ... ]
| addresses                           | octavia-net=172.16.252.35; u1000-p1000-xbone=10.0.2.13     |
+-------------------------------------+------------------------------------------------------------+

Later the Octavia worker reports the following:

2019-05-05 20:46:16.124 18234 DEBUG octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-plug-net-subflow-octavia-amp-post-vip-plug' (f105ced1-72c6-4116-b582-599a21cdee36) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /opt/openstack/lib/python3.6/site-packages/taskflow/listeners/logging.py:194
2019-05-05 20:46:16.127 18234 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-plug-net-subflow-octavia-amp-post-vip-plug' (f105ced1-72c6-4116-b582-599a21cdee36) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2019-05-05 20:46:16.141 18234 DEBUG octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-plug-net-subflow-reload-amp-after-plug-vip' (c4d6222e-2508-4a9c-9514-e7f9bcf84e31) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /opt/openstack/lib/python3.6/site-packages/taskflow/listeners/logging.py:194
2019-05-05 20:46:16.142 18234 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-plug-net-subflow-reload-amp-after-plug-vip' (c4d6222e-2508-4a9c-9514-e7f9bcf84e31) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2019-05-05 20:46:16.146 18234 DEBUG octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-plug-net-subflow-ocatvia-amp-update-vip-data' (2e1d1a04-282d-43b7-8c4f-fe31e75804ea) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /opt/openstack/lib/python3.6/site-packages/taskflow/listeners/logging.py:194
2019-05-05 20:46:16.148 18234 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-plug-net-subflow-ocatvia-amp-update-vip-data' (2e1d1a04-282d-43b7-8c4f-fe31e75804ea) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2019-05-05 20:46:16.173 18234 DEBUG octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-plug-net-subflow-octavia-amp-plug-vip' (c63a5bed-f531-4ed3-83d2-bce72e835932) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /opt/openstack/lib/python3.6/site-packages/taskflow/listeners/logging.py:194
2019-05-05 20:46:16.174 18234 WARNING octavia.controller.worker.tasks.network_tasks [-] Unable to plug VIP for amphora id 5bec4c09-a209-4e73-a66e-e4fc0fb8ded5 load balancer id e01c6ff5-179a-4ed5-ae5d-1d00d6c584b8

and Neutron then deletes the port, but NOTE that immediately before deletion the port is reported by neutron-dhcp-agent as ACTIVE:

May 5 20:46:17 ardbeg neutron-dhcp-agent: 2019-05-05 20:46:17.080 1804 INFO neutron.agent.dhcp.agent [req-835e5b91-28e5-44b9-a463-d04a0323294f a18f38c780074c6280dde5edad159666 41a02a69918849509f4102b04f8a7de9 - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=ardbeg, binding:profile=, binding:vif_details=datapath_type=system, ovs_hybrid_plug=False, port_filter=True, binding:vif_type=ovs, binding:vnic_type=normal, created_at=2019-05-05T20:46:07Z, description=, device_id=f1bce6e9-be5b-464b-8f64-686f36e9de1f, device_owner=compute:nova, dns_assignment=[{u'hostname': u'amphora-5bec4c09-a209-4e73-a66e-e4fc0fb8ded5', u'ip_address': u'10.0.2.13', u'fqdn': u'amphora-5bec4c09-a209-4e73-a66e-e4fc0fb8ded5.loqal.'}], dns_domain=, dns_name=amphora-5bec4c09-a209-4e73-a66e-e4fc0fb8ded5, extra_dhcp_opts=[], fixed_ips=[{u'subnet_id': u'24b10886-3d53-4aee-bdc6-f165b242ae4f', u'ip_address': u'10.0.2.13'}], id=b0398cc8-6d52-4f12-9f1f-1141b0f10751, mac_address=72:d0:1c:4c:94:91, name=octavia-lb-vrrp-5bec4c09-a209-4e73-a66e-e4fc0fb8ded5, network_id=b24d2830-eec6-4abd-82f2-ac71c8ecbf40, port_security_enabled=True, project_id=41a02a69918849509f4102b04f8a7de9, qos_policy_id=None, revision_number=8, security_groups=[u'ba20352e-95b9-4c97-a688-59d44e3aa8cf'], status=ACTIVE, tags=[], tenant_id=41a02a69918849509f4102b04f8a7de9, updated_at=2019-05-05T20:46:16Z
May 5 20:46:17 controller-l neutron-server: 2019-05-05 20:46:17.086 20981 INFO neutron.wsgi [req-835e5b91-28e5-44b9-a463-d04a0323294f a18f38c780074c6280dde5edad159666 41a02a69918849509f4102b04f8a7de9 - default default] 10.0.10.31 "PUT /v2.0/ports/b0398cc8-6d52-4f12-9f1f-1141b0f10751 HTTP/1.1" status: 200 len: 1395 time: 0.6318841
May 5 20:46:17 controller-l neutron-server: 2019-05-05 20:46:17.153 20981 INFO neutron.wsgi [req-37ee0da3-8dcc-4fb8-9cd3-91c5a8dcedef a18f38c780074c6280dde5edad159666 41a02a69918849509f4102b04f8a7de9 - default default] 10.0.10.31 "GET /v2.0/ports/b0398cc8-6d52-4f12-9f1f-1141b0f10751 HTTP/1.1" status: 200 len: 1395 time: 0.0616651
May 5 20:46:18 controller-l neutron-server: 2019-05-05 20:46:18.179 20981 INFO neutron.wsgi [req-8896542e-5dcb-4e6d-9379-04cd88c4035b a18f38c780074c6280dde5edad159666 41a02a69918849509f4102b04f8a7de9 - default default] 10.0.10.31 "DELETE /v2.0/ports/b0398cc8-6d52-4f12-9f1f-1141b0f10751 HTTP/1.1" status: 204 len: 149 time: 1.0199890

Thank you.

--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison
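[Editor's note, not part of the original mail: "SIOCADDRT: Network is unreachable" is the error net-tools' route/ifup prints when a route is added via a gateway that is not covered by any on-link prefix of the interface. The sketch below is not Octavia code; it only models the kernel's acceptance check, to show which subnet/gateway mismatch on the vip-subnet reproduces the failure. Checking the vip-subnet's gateway_ip and CIDR with "openstack subnet show 24b10886-3d53-4aee-bdc6-f165b242ae4f" against this logic is a reasonable first step.]

```python
# Sketch of the check behind "SIOCADDRT: Network is unreachable":
# a gateway must fall inside an on-link prefix of the interface,
# otherwise the route add (and hence `ifup eth1`) fails.
import ipaddress

def gateway_reachable(gateway, onlink_prefixes):
    """True if `gateway` is covered by one of the interface's prefixes."""
    gw = ipaddress.ip_address(gateway)
    return any(gw in ipaddress.ip_network(p, strict=False) for p in onlink_prefixes)

# eth1 received 10.0.2.13; with a /24 mask a gateway of 10.0.2.1 is accepted ...
print(gateway_reachable("10.0.2.1", ["10.0.2.13/24"]))    # True
# ... while a gateway outside the prefix (or a wrong netmask handed to the
# amphora) is refused with exactly this "Network is unreachable" error.
print(gateway_reachable("172.31.0.1", ["10.0.2.13/24"]))  # False
```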
Hi,

I did some additional tests (outside of Octavia, reproducing Octavia's model) to check whether the granted roles are sufficient. It seems they are:

1) I have a "customer" project with plenty of VMs connected to the project's local (not shared) network (b24d2...) and subnet (24b10...), which is supposed to be the vip-subnet:

# openstack subnet show 24b10...
project_id: ec62f...

2) I have an "octavia" project, where the users octavia, nova and neutron have the "admin" role

3) under the user "octavia" I create a port in the project "octavia", connected to the "customer" subnet, and bind it to a VM:

octavia@octavia$ openstack port create --network b24d2... --fixed-ip subnet=24b10... --disable-port-security tport
port id: 1c883...
project_id: 41a02...
octavia@octavia$ openstack server create --image cirros-0.4 --flavor B1 --nic port-id=1c883... tserv
project_id: 41a02...

4) finally, I am able to ping the test server from the customer project's VMs, despite the fact that they are in different projects.

So it seems that the roles needed to reproduce Octavia's model are sufficient and OpenStack is configured the right way.

On 5/6/19 12:34 AM, Volodymyr Litovka wrote:
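[Editor's note, not part of the original mail: one difference between the manual cross-project port test and what Octavia actually does is the allowed-address-pair carrying the VIP. The amphora agent expects the VIP (10.0.2.24 here) to live in the same subnet as the port's fixed IP (10.0.2.13). A small consistency check over the Neutron port data, assuming the vip-subnet is 10.0.2.0/24 (its real CIDR is not shown in the logs):]

```python
# Sketch: verify that both the fixed IP and the allowed-address-pair VIP of
# the VRRP port fall inside the vip-subnet's CIDR. Port data is trimmed from
# the neutron-dhcp-agent record in the first mail; the /24 CIDR is an assumption.
import ipaddress

def vip_consistent(port, subnet_cidr):
    net = ipaddress.ip_network(subnet_cidr)
    fixed_ok = all(ipaddress.ip_address(f["ip_address"]) in net
                   for f in port["fixed_ips"])
    vip_ok = all(ipaddress.ip_address(p["ip_address"]) in net
                 for p in port["allowed_address_pairs"])
    return fixed_ok and vip_ok

port = {
    "fixed_ips": [{"ip_address": "10.0.2.13"}],
    "allowed_address_pairs": [{"ip_address": "10.0.2.24"}],
}
print(vip_consistent(port, "10.0.2.0/24"))  # True -- addressing itself looks sane
```

If this holds for the real CIDR too, the addressing is consistent and the failed route add inside the amphora points more at the subnet's gateway_ip or the image's interface configuration than at roles or RBAC.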
--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison