[openstack-dev] [octavia] fail to plug vip to amphora
Michael Johnson
johnsomor at gmail.com
Wed Jun 28 17:00:51 UTC 2017
Hi Yipei,
I have been meaning to add this as a config option, but in the interim you can disable the automatic cleanup by disabling the revert flow in taskflow:
In octavia/common/base_taskflow.py, at line 37, add “never_resolve=True,” to the engine load parameters.
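The effect of that change can be illustrated with a toy sketch (plain Python, not taskflow itself, and not Octavia's actual code): when a task in a flow fails and the revert path is skipped, whatever earlier tasks created survives for inspection instead of being torn down.

```python
# Toy illustration of why automatic revert destroys debugging evidence:
# each task's revert() undoes its execute(), so a failure mid-flow tears
# down everything already built -- which is what deletes the amphora
# before you can inspect it. Task names here are illustrative only.

class Task:
    def __init__(self, name, fail=False):
        self.name, self.fail = name, fail

    def execute(self, state):
        if self.fail:
            raise RuntimeError("%s failed" % self.name)
        state.append(self.name)

    def revert(self, state):
        if self.name in state:
            state.remove(self.name)

def run_flow(tasks, state, auto_revert=True):
    done = []
    for task in tasks:
        try:
            task.execute(state)
            done.append(task)
        except RuntimeError:
            if auto_revert:            # default behaviour: undo completed tasks
                for t in reversed(done):
                    t.revert(state)
            return False               # with revert disabled, state survives
    return True

state = []
tasks = [Task("boot-amphora"), Task("plug-vip", fail=True)]
run_flow(tasks, state, auto_revert=False)
# state still contains "boot-amphora": the evidence is preserved
```

With `auto_revert=True` the same failure would leave `state` empty, mirroring the amphora being deleted as soon as the VIP plug fails.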
Michael
From: Yipei Niu [mailto:newypei at gmail.com]
Sent: Monday, June 26, 2017 11:34 PM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [octavia] fail to plug vip to amphora
Hi, Michael,
Thanks a lot for your help, but I still have one question.
In Octavia, once the controller worker fails to plug the VIP into the amphora, the amphora is deleted immediately, making it impossible to trace the error. How can I prevent Octavia from stopping and deleting the amphora?
Best regards,
Yipei
On Mon, Jun 26, 2017 at 11:21 AM, Yipei Niu <newypei at gmail.com <mailto:newypei at gmail.com> > wrote:
Hi, all,
I am trying to create a load balancer in Octavia. The amphora boots successfully and can be reached via ICMP. However, Octavia fails to plug the VIP into the amphora through the amphora client API: the call returns a 500 status code, causing the following errors.
|__Flow 'octavia-create-loadbalancer-flow': InternalServerError: Internal Server Error
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker Traceback (most recent call last):
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker File "/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker result = task.execute(**arguments)
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker File "/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 240, in execute
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker amphorae_network_config)
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker File "/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 219, in execute
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker amphora, loadbalancer, amphorae_network_config)
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 137, in post_vip_plug
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker net_info)
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 378, in plug_vip
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker return exc.check_exception(r)
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/exceptions.py", line 32, in check_exception
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker raise responses[status_code]()
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker InternalServerError: Internal Server Error
2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker
To investigate the problem, I logged into the amphora and found an HTTP server process listening on port 9443, so the amphora API service appears to be active. However, I do not know how to further investigate what error happens inside the amphora API service, or how to solve it. I look forward to your valuable comments.
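The kind of in-amphora checks involved can be sketched as follows. This is a hedged sketch: the log locations and the service unit name are assumptions that vary by amphora image and Octavia version, so adjust them to your environment.

```shell
# Hedged diagnostic sketch -- paths and unit names are assumptions.
# Each command is guarded so a missing file or tool is not fatal.

# Confirm the amphora agent is the process listening on 9443:
sudo ss -tlnp 2>/dev/null | grep 9443 || true

# Look for the agent's 500 traceback in likely log destinations:
sudo journalctl -u amphora-agent --no-pager 2>/dev/null | tail -n 50 || true
sudo tail -n 50 /var/log/amphora-agent.log 2>/dev/null || true
sudo grep amphora-agent /var/log/syslog 2>/dev/null | tail -n 50 || true
```

The agent-side traceback found this way usually pinpoints the cause of the Internal Server Error that the controller worker only sees as a bare 500.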
Best regards,
Yipei