[openstack-dev] [octavia] fail to plug vip to amphora
Yipei Niu
newypei at gmail.com
Wed Jun 28 09:49:38 UTC 2017
Hi, Michael,
Thanks for your help. I can now create a load balancer successfully, but
creating a listener fails. The detailed errors from the amphora-agent log and
from syslog inside the amphora are as follows.
In amphora-agent.log:
[2017-06-28 08:54:12 +0000] [1209] [INFO] Starting gunicorn 19.7.0
[2017-06-28 08:54:13 +0000] [1209] [DEBUG] Arbiter booted
[2017-06-28 08:54:13 +0000] [1209] [INFO] Listening at: http://[::]:9443
(1209)
[2017-06-28 08:54:13 +0000] [1209] [INFO] Using worker: sync
[2017-06-28 08:54:13 +0000] [1209] [DEBUG] 1 workers
[2017-06-28 08:54:13 +0000] [1816] [INFO] Booting worker with pid: 1816
[2017-06-28 08:54:15 +0000] [1816] [DEBUG] POST /0.5/plug/vip/10.0.1.8
::ffff:192.168.0.12 - - [28/Jun/2017:08:54:59 +0000] "POST /0.5/plug/vip/10.0.1.8 HTTP/1.1" 202 78 "-" "Octavia HaProxy Rest Client/0.5 (https://wiki.openstack.org/wiki/Octavia)"
[2017-06-28 08:59:18 +0000] [1816] [DEBUG] PUT /0.5/listeners/9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4/bca2c985-471a-4477-8217-92fa71d04cb7/haproxy
::ffff:192.168.0.12 - - [28/Jun/2017:08:59:19 +0000] "PUT /0.5/listeners/9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4/bca2c985-471a-4477-8217-92fa71d04cb7/haproxy HTTP/1.1" 400 414 "-" "Octavia HaProxy Rest Client/0.5 (https://wiki.openstack.org/wiki/Octavia)"
Note that the PUT of the haproxy configuration returns 400. In syslog, the corresponding error appears at 08:59:19:
Jun 28 08:57:14 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: #############################################################
Jun 28 08:57:14 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jun 28 08:57:14 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 1024 SHA256:qDQcKq2Je/CzlpPndccMf0aR0u/KPJEEIAl4RraAgVc root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (DSA)
Jun 28 08:57:15 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 256 SHA256:n+5tCCdJwASMaD/kJ6fm0kVNvXDh4aO0si2Uls4MXkI root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (ECDSA)
Jun 28 08:57:15 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 256 SHA256:7RWMBOW+QKzeolI6BDSpav9dVZuon58weIQJ9/peVxE root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (ED25519)
Jun 28 08:57:16 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 2048 SHA256:9z+EcAAUyTENKJRctKCzPslK6Yf4c7s9R8sEflDITIU root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (RSA)
Jun 28 08:57:16 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: -----END SSH HOST KEY FINGERPRINTS-----
Jun 28 08:57:16 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: #############################################################
Jun 28 08:57:17 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 cloud-init[2092]: Cloud-init v. 0.7.9 running 'modules:final' at Wed, 28 Jun 2017 08:57:03 +0000. Up 713.82 seconds.
Jun 28 08:57:17 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 cloud-init[2092]: Cloud-init v. 0.7.9 finished at Wed, 28 Jun 2017 08:57:16 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 727.30 seconds
Jun 28 08:57:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]: Started Execute cloud user/final scripts.
Jun 28 08:57:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]: Reached target Cloud-init target.
Jun 28 08:57:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]: Startup finished in 52.054s (kernel) + 11min 17.647s (userspace) = 12min 9.702s.
Jun 28 08:59:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 amphora-agent[1209]: 2017-06-28 08:59:19.243 1816 ERROR octavia.amphorae.backends.agent.api_server.listener [-] Failed to verify haproxy file: Command '['haproxy', '-c', '-L', 'NK20KVuD6oi5NrRP7KOVflM3MsQ', '-f', '/var/lib/octavia/bca2c985-471a-4477-8217-92fa71d04cb7/haproxy.cfg.new']' returned non-zero exit status 1
Jun 28 09:00:11 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]: Starting Cleanup of Temporary Directories...
Jun 28 09:00:12 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd-tmpfiles[3040]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
Jun 28 09:00:15 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]: Started Cleanup of Temporary Directories.
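The agent log only records the non-zero exit status, so to see haproxy's own
complaint I think the same check can be re-run inside the amphora. A minimal
sketch in Python (the peer name and config path are copied verbatim from the
syslog entry above):

import subprocess

# Re-run the exact validation command the agent logged, but capture
# stdout/stderr so haproxy's actual parse error becomes visible.
cmd = ['haproxy', '-c',
       '-L', 'NK20KVuD6oi5NrRP7KOVflM3MsQ',
       '-f', '/var/lib/octavia/bca2c985-471a-4477-8217-92fa71d04cb7/haproxy.cfg.new']
try:
    print(subprocess.check_output(cmd, stderr=subprocess.STDOUT))
except subprocess.CalledProcessError as e:
    print('haproxy config check failed with status %d:' % e.returncode)
    print(e.output)

Is that the right way to get more detail out of this failure?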
I look forward to your valuable comments.
Best regards,
Yipei
On Tue, Jun 27, 2017 at 2:33 PM, Yipei Niu <newypei at gmail.com> wrote:
> Hi, Michael,
>
> Thanks a lot for your help, but I still have one question.
>
> In Octavia, once the controller worker fails to plug the VIP into the
> amphora, the amphora is deleted immediately, making it impossible to trace
> the error. How can I prevent Octavia from stopping and deleting the amphora?
>
> Best regards,
> Yipei
>
> On Mon, Jun 26, 2017 at 11:21 AM, Yipei Niu <newypei at gmail.com> wrote:
>
>> Hi, all,
>>
>> I am trying to create a load balancer in Octavia. The amphora boots
>> successfully and can be reached via ICMP. However, Octavia fails to plug
>> the VIP into the amphora through the amphora client API, which returns a
>> 500 status code and causes the following errors.
>>
>> |__Flow 'octavia-create-loadbalancer-flow': InternalServerError: Internal Server Error
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker Traceback (most recent call last):
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker   File "/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker     result = task.execute(**arguments)
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 240, in execute
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker     amphorae_network_config)
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 219, in execute
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker     amphora, loadbalancer, amphorae_network_config)
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 137, in post_vip_plug
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker     net_info)
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 378, in plug_vip
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker     return exc.check_exception(r)
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker   File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/exceptions.py", line 32, in check_exception
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker     raise responses[status_code]()
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker InternalServerError: Internal Server Error
>> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker
>>
>> To fix the problem, I logged into the amphora and found an HTTP server
>> process listening on port 9443, so I think the amphora API service is
>> active. However, I do not know how to investigate further what error
>> happens inside the amphora API service, or how to solve it.
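>> For example, would it be reasonable to call the agent API directly from
>> the controller host? A minimal sketch with requests (the info endpoint
>> under the /0.5 prefix, the client certificate path, and AMP_MGMT_IP are
>> assumptions based on a devstack setup and may differ in other
>> deployments):
>>
>> import requests
>>
>> # Probe the amphora agent API over its two-way-TLS port 9443.
>> # AMP_MGMT_IP is a placeholder for the amphora's lb-mgmt-net address;
>> # client.pem is assumed to hold both the client cert and key.
>> AMP_MGMT_IP = '192.168.0.x'
>> r = requests.get('https://%s:9443/0.5/info' % AMP_MGMT_IP,
>>                  cert='/etc/octavia/certs/client.pem',
>>                  verify=False)  # skip server cert verification while debugging
>> print(r.status_code, r.text)
>>
>> I look forward to your valuable comments.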
>>
>> Best regards,
>> Yipei
>>
>
>