[Octavia][Victoria] No service listening on port 9443 in the amphora instance

Luke Camilleri luke.camilleri at zylacomputing.com
Thu May 6 19:30:05 UTC 2021


Hi Michael, and thanks a lot for your help on this. After following your 
steps, the agent got deployed successfully in the amphora image.

I have some other queries, mainly related to the health-manager / 
load-balancer network setup and IP assignment. First of all, let me 
point out that I am using a manual installation process; spelling it 
out here might help others understand the underlying infrastructure 
required to make this component work as expected.

1- The installation procedure contains this step:

$ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia

which is later used to assign an IP to the o-hm0 interface, which is 
connected to the lb-management network, as shown below:

$ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia

Apart from the fact that a DHCP config for a single IP seems a bit of an 
overkill, these steps inject an additional default route into the 
default namespace, as shown below in my case:

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.0.1      0.0.0.0         UG    0 0        0 o-hm0
0.0.0.0         10.X.X.1        0.0.0.0         UG    100 0        0 ensX
10.X.X.0        0.0.0.0         255.255.255.0   U     100 0        0 ensX
169.254.169.254 172.16.0.100    255.255.255.255 UGH   0 0        0 o-hm0
172.16.0.0      0.0.0.0         255.240.0.0     U     0 0        0 o-hm0
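As a side note, the 255.240.0.0 genmask on that last route is a /12, so 
the whole 172.16.0.0 - 172.31.255.255 range is on-link via o-hm0. A 
quick way to double-check that (assuming python3 is available on the 
controller):

```shell
# Netmask, first usable host and last usable host of the lb-management supernet
python3 -c 'import ipaddress; n = ipaddress.ip_network("172.16.0.0/12"); print(n.netmask, n[1], n[-2])'
```

so any amphora in that range is reachable from o-hm0 without going 
through a gateway at all.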

Since the load-balancer management network does not need any external 
connectivity (only communication between the health-manager service and 
the amphora-agent), why is a gateway required, and why isn't the IP 
address allocated as part of the interface creation script that is 
called when the service is started or stopped (example below)?

---

#!/bin/bash

set -ex

MAC=$MGMT_PORT_MAC
BRNAME=$BRNAME

if [ "$1" == "start" ]; then
  ip link add o-hm0 type veth peer name o-bhm0
  brctl addif $BRNAME o-bhm0
  ip link set o-bhm0 up
  ip link set dev o-hm0 address $MAC
  ip addr add 172.16.0.2/12 dev o-hm0   # <-- static IP assigned here instead of dhclient
  ip link set o-hm0 mtu 1500            # <--
  ip link set o-hm0 up
  iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT
elif [ "$1" == "stop" ]; then
  ip link del o-hm0
else
  brctl show $BRNAME
  ip a s dev o-hm0
fi

---
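For context, I currently start and stop the script above from a oneshot 
systemd unit along these lines (a sketch of my own setup; the unit 
description, script path and environment values are examples from my 
deployment, not anything shipped by Octavia):

```ini
[Unit]
Description=Create the o-hm0 interface for the Octavia health-manager
After=network.target

[Service]
Type=oneshot
RemainAfterExit=true
# Example values only; use the real MAC of the lb-mgmt port and your bridge name
Environment=MGMT_PORT_MAC=fa:16:3e:00:00:00
Environment=BRNAME=br-int
ExecStart=/opt/octavia-interface.sh start
ExecStop=/opt/octavia-interface.sh stop

[Install]
WantedBy=multi-user.target
```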

2- Is it possible to specify a fixed VLAN, outside of the tenant range, 
for the load-balancer management network?

3- Are the configuration changes required only in neutron.conf, or also 
in additional config files such as neutron_lbaas.conf and 
services_lbaas.conf, similar to the vpnaas configuration?

Thanks in advance for any assistance; it's like putting together a 
puzzle of information :-)

On 05/05/2021 20:25, Michael Johnson wrote:
> Hi Luke.
>
> Yes, the amphora-agent will listen on 9443 in the amphorae instances.
> It uses TLS mutual authentication, so you can get a TLS response, but
> it will not let you into the API without a valid certificate. A simple
> "openssl s_client" is usually enough to prove that it is listening and
> requesting the client certificate.
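For anyone following along, that check is simply (with 172.16.4.46 
standing in for the amphora's lb-mgmt address from the logs below; a 
TLS handshake that ends with a client-certificate request means the 
agent is up and listening):

```shell
openssl s_client -connect 172.16.4.46:9443
```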
>
> I can't talk to the "openstack-octavia-diskimage-create" package you
> found in centos, but I can discuss how to build an amphora image using
> the OpenStack tools.
>
> If you get Octavia from git or via a release tarball, we provide a
> script to build the amphora image. This is how we build our images for
> the testing gates, etc. and is the recommended way (at least from the
> OpenStack Octavia community) to create amphora images.
>
> https://opendev.org/openstack/octavia/src/branch/master/diskimage-create
>
> For CentOS 8, the command would be:
>
> diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3
> is the minimum disk size for centos images, you may want more if you
> are not offloading logs)
>
> I just did a run on a fresh centos 8 instance:
> git clone https://opendev.org/openstack/octavia
> python3 -m venv dib
> source dib/bin/activate
> pip3 install diskimage-builder PyYAML six
> sudo dnf install yum-utils
> ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3
>
> This built an image.
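If you want to sanity-check that the agent actually landed in the 
resulting image before uploading it to Glance, a recursive file listing 
avoids guessing distro-specific paths (assuming libguestfs-tools is 
installed; the filename here is the builder's default output name and 
may differ in your run):

```shell
virt-ls -R -a amphora-x64-haproxy.qcow2 / | grep -i amphora-agent
```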
>
> Off and on we have had issues building CentOS images due to issues in
> the tools we rely on. If you run into issues with this image, drop us
> a note back.
>
> Michael
>
> On Wed, May 5, 2021 at 9:37 AM Luke Camilleri
> <luke.camilleri at zylacomputing.com> wrote:
>> Hi there, I am trying to get Octavia running on a Victoria deployment on
>> CentOS 8. It was a bit rough getting to the point of launching an instance,
>> mainly due to the load-balancer management network and the lack of
>> documentation
>> (https://docs.openstack.org/octavia/victoria/install/install.html) to
>> deploy this on CentOS. I will try to fix this once I have my deployment
>> up and running, to help others on the way installing and configuring this :-)
>>
>> At this point a LB can be launched by the tenant and the instance is
>> spawned in the Octavia project and I can ping and SSH into the amphora
>> instance from the Octavia node where the octavia-health-manager service
>> is running using the IP within the same subnet of the amphoras
>> (172.16.0.0/12).
>>
>> Unfortunately I keep on getting these errors in the log file of the
>> worker log (/var/log/octavia/worker.log):
>>
>> 2021-05-05 01:54:49.368 14521 WARNING
>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect
>> to instance. Retrying.: requests.exceptions.ConnectionError:
>> HTTPSConnectionPool(host='172.16.4.46', port=9443): Max retries
>> exceeded with url: // (Caused by
>> NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object
>> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111]
>> Connection refused',))
>>
>> 2021-05-05 01:54:54.374 14521 ERROR
>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries
>> (currently set to 120) exhausted. The amphora is unavailable. Reason:
>> HTTPSConnectionPool(host='172.16.4.46', port=9443): Max retries
>> exceeded with url: // (Caused by
>> NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object
>> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111]
>> Connection refused',))
>>
>> 2021-05-05 01:54:54.374 14521 ERROR
>> octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora
>> compute instance failed to become reachable. This either means the
>> compute driver failed to fully boot the
>> instance inside the timeout interval or the instance is not reachable
>> via the lb-mgmt-net.:
>> octavia.amphorae.driver_exceptions.exceptions.TimeOutException:
>> contacting the amphora timed out
>>
>> Obviously the instance is then deleted, and the task fails from the
>> tenant's perspective.
>>
>> The main issue here is that there is no service running on port 9443 on
>> the amphora instance. I am assuming that this is in fact the
>> amphora-agent service running on the instance, which should be
>> listening on port 9443, but the service does not seem to be up, or is
>> not installed at all.
>>
>> To create the image I have installed the CentOS package
>> "openstack-octavia-diskimage-create", which provides the utility
>> disk-image-create, but from what I can conclude the amphora-agent is not
>> being installed (I thought this was done automatically by default :-( )
>>
>> Can anyone let me know whether the amphora-agent is what gets queried
>> on port 9443?
>>
>> Whether the agent is installed/injected by default when building the
>> amphora image?
>>
>> And which command injects the amphora-agent into the amphora image
>> when using the disk-image-create command?
>>
>> Thanks in advance for any assistance
>>
>>


