Octavia
Bernd Bausch
berndbausch at gmail.com
Tue Aug 10 06:38:55 UTC 2021
The stack trace shows that Octavia can't reach an amphora. I suppose you
get this log when trying to create a load balancer? If so, the most likely
cause is that the amphora management network is not set up correctly.
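A quick way to confirm, if at least one amphora was created: the amphora
agent listens on TCP port 9443 on its lb-mgmt-net address, so try reaching
that address from a controller (the IP below is hypothetical; take it from
the amphora list). Without a client certificate the request is rejected,
but an immediate TLS error still proves connectivity, whereas a hang
reproduces the timeout.

openstack loadbalancer amphora list -c id -c lb_network_ip -c status
curl -k https://172.16.0.57:9443/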
The difficulty is that the Octavia workers, which are /processes/ running
on the controllers, need access to the same management network as the
amphora /instances/. If you implement the management network as a normal
tenant network, some non-trivial manual Open vSwitch configuration is
required. See
https://docs.openstack.org/octavia/latest/install/install-ubuntu.html#install-and-configure-components
for instructions. In production settings, a VLAN is usually used, which
is easy to access from controller processes.
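For illustration, such a provider network could be created along these
lines (a sketch; the physical network name and VLAN segment are examples
that depend on your deployment):

openstack network create lb-mgmt-net --provider-network-type vlan \
  --provider-physical-network physnet1 --provider-segment 107
openstack subnet create lb-mgmt-subnet --network lb-mgmt-net \
  --subnet-range 172.16.0.0/24 \
  --allocation-pool start=172.16.0.50,end=172.16.0.99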
I succeeded in running Octavia on a Kolla cluster with three controllers,
using a tenant network (VXLAN-based) for amphora management.
My notes are attached (they are tailored to my configuration, of course).
Bernd.
On 2021/08/09 10:57 AM, Chris Lyons wrote:
Well… it gets a lot further…. I see this error now…. I'm looking around
to see if there is a missing security group or some other setting I
missed. I'm not seeing any scripts to prep the env… usually there is
something like that if it's a security group… anyone know?
...
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last):
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker   File "/usr/lib/python3.6/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker     result = task.execute(**arguments)
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker   File "/usr/lib/python3.6/site-packages/octavia/controller/worker/v1/tasks/amphora_driver_tasks.py", line 424, in execute
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker     amp_info = self.amphora_driver.get_info(amphora)
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker   File "/usr/lib/python3.6/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 373, in get_info
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker     amphora, raise_retry_exception=raise_retry_exception)
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker   File "/usr/lib/python3.6/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 106, in _populate_amphora_api_version
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker     raise_retry_exception=raise_retry_exception)['api_version']
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker   File "/usr/lib/python3.6/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 744, in get_api_version
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker     raise_retry_exception=raise_retry_exception)
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker   File "/usr/lib/python3.6/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 738, in request
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker     raise driver_except.TimeOutException()
2021-08-08 21:22:35.965 27 ERROR octavia.controller.worker.v1.controller_worker octavia.amphorae.driver_exceptions.exceptions.TimeOutException: contacting the amphora timed out
...
===========================================================================
Make the load balancer management network accessible to controllers
===========================================================================
Assumptions
-----------------------------------------------------
By default, kolla-ansible creates a VXLAN-based network named lb-mgmt-net.
These instructions assume the following subnet characteristics:
$ openstack subnet show lb-mgmt-subnet -c allocation_pools -c cidr -c gateway_ip -c enable_dhcp
+------------------+-------------------------+
| Field | Value |
+------------------+-------------------------+
| allocation_pools | 172.16.0.50-172.16.0.99 |
| cidr | 172.16.0.0/24 |
| enable_dhcp | True |
| gateway_ip | 172.16.0.166 |
+------------------+-------------------------+
Create a NIC that facilitates access to lb-mgmt-net
--------------------------------------------------------
Get the DHCP server's port ID and the host it runs on:
openstack port list --network lb-mgmt-net
openstack network agent list --network lb-mgmt-net
We assume the host is k1 for the rest of this document.
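For scripting, the DHCP port ID can be captured in one step (a sketch; the
--device-owner filter assumes Neutron's standard DHCP agent):

PORT_ID=$(openstack port list --network lb-mgmt-net \
  --device-owner network:dhcp -f value -c ID)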
Find the VLAN ID (tag) that Open vSwitch associated with the DHCP server's
NIC. The NIC's name starts with "tap", followed by the first 11 characters
of the port ID.
ssh k1 sudo docker exec openvswitch_vswitchd ovsdb-client \
dump unix:/var/run/openvswitch/db.sock Open_vSwitch Port name tag
Port table
name tag
-------------- ---
br-ex []
...
tap4df0c7a2-27 1 <<< DHCP server's port ID starts with 4df0c7a2-27
Here the VLAN ID (tag) is 1.
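Alternatively, once the tap device's name is known, the tag can be read
directly (port name taken from the example above):

ssh k1 sudo docker exec openvswitch_vswitchd ovs-vsctl get Port \
  tap4df0c7a2-27 tag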
Create an internal port on br-int and give it the same VLAN ID (tag) as before:
ssh k1 sudo docker exec openvswitch_vswitchd ovs-vsctl add-port br-int o-hm0 tag=1 -- \
set interface o-hm0 type=internal
These are two commands in one, separated by a double dash.
Double-check success with
ssh k1 ip a show dev o-hm0
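At this stage the interface exists but is DOWN and has no address yet;
expect output roughly like this (hypothetical details):

$ ssh k1 ip a show dev o-hm0
4: o-hm0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN ...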
Configure the lb-mgmt-net gateway address on o-hm0
---------------------------------------------------------
On k1, add an o-hm0 entry (two lines) under the existing ethernets: key of
/etc/netplan/00-installer-config.yaml:

network:
  ethernets:
    o-hm0:
      addresses: [ 172.16.0.166/24 ]
  version: 2
then
ssh k1 sudo netplan apply
Double-check success. o-hm0 should be up and have its IP address,
and there should be a route to 172.16.0.0/24 via o-hm0.
ssh k1 ip a show dev o-hm0
ssh k1 ip r
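The route should look roughly like this (addresses from the assumptions
above):

$ ssh k1 ip r | grep o-hm0
172.16.0.0/24 dev o-hm0 proto kernel scope link src 172.16.0.166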
Add routes from k2 and k3 to o-hm0
------------------------------------------------------------
At this point, the network is accessible from k1. Add routes on the other
controllers via k1 (192.168.122.201 is k1's address):
ssh k2 sudo ip r add 172.16.0.0/24 via 192.168.122.201
ssh k3 sudo ip r add 172.16.0.0/24 via 192.168.122.201
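A quick check from k2 and k3 (assuming ICMP is not filtered):
ssh k2 ping -c 1 172.16.0.166
ssh k3 ping -c 1 172.16.0.166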
Make routes persistent on k2 and k3.
$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens3:
      ....
      routes:
        - to: 172.16.0.0/24
          via: 192.168.122.201
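Netplan only reads this file at boot or on "netplan apply"; the ip r
commands above already added the routes for the current boot, but applying
now validates the file:
ssh k2 sudo netplan apply
ssh k3 sudo netplan apply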
Relax the firewall on k1
-------------------------------------------------------------
The firewall must allow traffic from and to lb-mgmt-net.
Create /etc/rc.local with this content:
#!/bin/bash
iptables -A FORWARD -s 172.16.0.0/24 -j ACCEPT
iptables -A FORWARD -d 172.16.0.0/24 -j ACCEPT
Make it executable, then enable and start the rc-local service.
sudo chmod +x /etc/rc.local
sudo systemctl enable rc-local --now
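On k1, double-check that the rules are active:
sudo iptables -S FORWARD | grep 172.16.0.0/24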