<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Hi Michael and thanks a lot for the detailed answer below.</p>
<p>I believe I have most of this sorted out, apart from a few
small issues below:</p>
<ol>
<li>If the o-hm0 interface gets its IP information from the DHCP
server that Neutron sets up for the lb-mgmt-net, the management
node will always have two default gateways, which causes routing
problems. The same DHCP settings deployed to the amphora do not
have this issue, since the amphora has only one IP assigned, on
the lb-mgmt-net. Can you please confirm this?<br>
</li>
<li>How does the amphora know where to find the worker and
housekeeping processes, or does that traffic originate from the
services instead? Maybe the addresses are "injected" via the
config file?<br>
</li>
<li>Can you please confirm whether the same floating IP concept
applies from the public (external) network to the private
(tenant) network, and from the private network to the
lb-mgmt-net?</li>
</ol>
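On point 2, my current understanding (please correct me if wrong) is that the controller endpoints are rendered into the amphora-agent's configuration from the controller side. An illustrative excerpt of the relevant octavia.conf section; the address and key below are made-up examples, not values from this thread:

```ini
# Illustrative octavia.conf excerpt -- all values are example placeholders.
[health_manager]
# Health manager endpoints the amphorae send UDP 5555 heartbeats to;
# this list is rendered into each amphora's agent configuration.
controller_ip_port_list = 172.16.0.2:5555
heartbeat_key = insecure-example-key
```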
<p>Thanks in advance for any feedback.<br>
</p>
<div class="moz-cite-prefix">On 06/05/2021 22:46, Michael Johnson
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAMH0MgLDth7KbcR_J5w7wad4sJgg-bfbL1SFkDnnT_UzR-2isg@mail.gmail.com">
<pre class="moz-quote-pre" wrap="">Hi Luke,
1. I agree that DHCP is technically unnecessary for the o-hm0
interface if you can manage your address allocation on the network you
are using for the lb-mgmt-net.
I don't have detailed information about the Ubuntu install
instructions, but I suspect it was done to simplify the IPAM by
letting it be managed by whatever is providing DHCP on the
lb-mgmt-net (be it neutron or some other resource on a provider
network).
The lb-mgmt-net is simply a neutron network that the amphora
management address is on. It is routable and does not require external
access. The only tricky part to it is the worker, health manager, and
housekeeping processes need to be reachable from the amphora, and the
controllers need to reach the amphora over the network(s). There are
many ways to accomplish this.
2. See my above answer. Fundamentally the lb-mgmt-net is just a
neutron network that nova can use to attach an interface to the
amphora instances for command and control traffic. As long as the
controllers can reach TCP 9443 on the amphora, and the amphora can
send UDP 5555 back to the health manager endpoints, it will work fine.
3. Octavia, with the amphora driver, does not require any special
configuration in Neutron (beyond the advanced services RBAC policy
being available for the neutron service account used in your octavia
configuration file). The neutron_lbaas.conf and services_lbaas.conf
are legacy configuration files/settings that were used for
neutron-lbaas which is now end of life. See the wiki page for
information on the deprecation of neutron-lbaas:
<a class="moz-txt-link-freetext" href="https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation">https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation</a>.
Michael
On Thu, May 6, 2021 at 12:30 PM Luke Camilleri
<a class="moz-txt-link-rfc2396E" href="mailto:luke.camilleri@zylacomputing.com"><luke.camilleri@zylacomputing.com></a> wrote:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">
Hi Michael and thanks a lot for your help on this, after following your
steps the agent got deployed successfully in the amphora-image.
I have some other queries that I would like to ask mainly related to the
health-manager/load-balancer network setup and IP assignment. First of
all let me point out that I am using a manual installation process, and
it might help others to understand the underlying infrastructure
required to make this component work as expected.
1- The installation procedure contains this step:
$ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia
which is later on called to assign the IP to the o-hm0 interface which
is connected to the lb-management network as shown below:
$ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia
Apart from the fact that a DHCP config for a single IP seems like
overkill, these steps also inject additional routes into the default
namespace, as shown below in my case:
# route -n
Kernel IP routing table
Destination      Gateway        Genmask          Flags Metric Ref  Use Iface
0.0.0.0          172.16.0.1     0.0.0.0          UG    0      0      0 o-hm0
0.0.0.0          10.X.X.1       0.0.0.0          UG    100    0      0 ensX
10.X.X.0         0.0.0.0        255.255.255.0    U     100    0      0 ensX
169.254.169.254  172.16.0.100   255.255.255.255  UGH   0      0      0 o-hm0
172.16.0.0       0.0.0.0        255.240.0.0      U     0      0      0 o-hm0
Since the load-balancer management network does not need any external
connectivity (but only communication between health-manager service and
amphora-agent), why is a gateway required and why isn't the IP address
allocated as part of the interface creation script which is called when
the service is started or stopped (example below)?
---
#!/bin/bash
set -ex
MAC=$MGMT_PORT_MAC
BRNAME=$BRNAME
if [ "$1" == "start" ]; then
ip link add o-hm0 type veth peer name o-bhm0
brctl addif $BRNAME o-bhm0
ip link set o-bhm0 up
ip link set dev o-hm0 address $MAC
ip addr add 172.16.0.2/12 dev o-hm0   # proposed addition
ip link set o-hm0 mtu 1500            # proposed addition
ip link set o-hm0 up
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT
elif [ "$1" == "stop" ]; then
ip link del o-hm0
else
brctl show $BRNAME
ip a s dev o-hm0
fi
---
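For what it's worth, the extra default route described above can also be avoided at the dhclient level: if the config requests only addressing options (no routers option), no gateway is installed for o-hm0. A minimal sketch, assuming the ISC dhclient shipped with the distro and the file path from the install step above:

```
# Sketch of /etc/dhcp/octavia: request only addressing options so
# dhclient does not install a default route for o-hm0
interface "o-hm0" {
    request subnet-mask, broadcast-address, interface-mtu;
}
```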
2- Is there a possibility to specify a fixed vlan outside of tenant
range for the load balancer management network?
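A sketch of how a fixed VLAN could be carved out for this with the standard Neutron provider-network options; the network names, physnet name, and VLAN ID below are assumptions for illustration, not values from this thread:

```shell
# Create the lb-mgmt-net on a fixed provider VLAN, outside the tenant range
openstack network create lb-mgmt-net \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 123
# Attach a subnet for the amphora management addresses
openstack subnet create --network lb-mgmt-net \
  --subnet-range 172.16.0.0/12 lb-mgmt-subnet
```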
3- Are the configuration changes required only in neutron.conf or also
in additional config files like neutron_lbaas.conf and
services_lbaas.conf, similar to the vpnaas configuration?
Thanks in advance for any assistance, but it's like putting together a
puzzle of information :-)
On 05/05/2021 20:25, Michael Johnson wrote:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">Hi Luke.
Yes, the amphora-agent will listen on 9443 in the amphorae instances.
It uses TLS mutual authentication, so you can get a TLS response, but
it will not let you into the API without a valid certificate. A simple
"openssl s_client" is usually enough to prove that it is listening and
requesting the client certificate.
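For example (the amphora address below is the one from the logs further down this thread; even a certificate verification failure proves the port is listening):

```shell
# Probe the amphora agent: any TLS handshake output (even a certificate
# verify failure) shows something is listening on 9443, while
# "connection refused" means the agent is not running.
openssl s_client -connect 172.16.4.46:9443 </dev/null
```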
I can't speak to the "openstack-octavia-diskimage-create" package you
found in CentOS, but I can discuss how to build an amphora image using
the OpenStack tools.
If you get Octavia from git or via a release tarball, we provide a
script to build the amphora image. This is how we build our images for
the testing gates, etc. and is the recommended way (at least from the
OpenStack Octavia community) to create amphora images.
<a class="moz-txt-link-freetext" href="https://opendev.org/openstack/octavia/src/branch/master/diskimage-create">https://opendev.org/openstack/octavia/src/branch/master/diskimage-create</a>
For CentOS 8, the command would be:
diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3
is the minimum disk size for centos images, you may want more if you
are not offloading logs)
I just did a run on a fresh centos 8 instance:
git clone <a class="moz-txt-link-freetext" href="https://opendev.org/openstack/octavia">https://opendev.org/openstack/octavia</a>
python3 -m venv dib
source dib/bin/activate
pip3 install diskimage-builder PyYAML six
sudo dnf install yum-utils
./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3
This built an image.
Off and on we have had issues building CentOS images due to issues in
the tools we rely on. If you run into issues with this image, drop us
a note back.
Michael
On Wed, May 5, 2021 at 9:37 AM Luke Camilleri
<a class="moz-txt-link-rfc2396E" href="mailto:luke.camilleri@zylacomputing.com"><luke.camilleri@zylacomputing.com></a> wrote:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">Hi there, I am trying to get Octavia running on a Victoria deployment on
CentOS 8. It was a bit rough getting to the point of launching an instance,
mainly due to the load-balancer management network and the lack of
documentation
(<a class="moz-txt-link-freetext" href="https://docs.openstack.org/octavia/victoria/install/install.html">https://docs.openstack.org/octavia/victoria/install/install.html</a>) to
deploy this on CentOS. I will try to fix this once I have my deployment
up and running, to help others on the way installing and configuring this :-)
At this point a LB can be launched by the tenant and the instance is
spawned in the Octavia project and I can ping and SSH into the amphora
instance from the Octavia node where the octavia-health-manager service
is running using the IP within the same subnet of the amphoras
(172.16.0.0/12).
Unfortunately I keep getting these errors in the worker log file
(/var/log/octavia/worker.log):
2021-05-05 01:54:49.368 14521 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.4.46', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Connection refused',))
2021-05-05 01:54:54.374 14521 ERROR octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries (currently set to 120) exhausted. The amphora is unavailable. Reason: HTTPSConnectionPool(host='172.16.4.46', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Connection refused',))
2021-05-05 01:54:54.374 14521 ERROR octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora compute instance failed to become reachable. This either means the compute driver failed to fully boot the instance inside the timeout interval or the instance is not reachable via the lb-mgmt-net.: octavia.amphorae.driver_exceptions.exceptions.TimeOutException: contacting the amphora timed out
Obviously the instance is then deleted and the task fails from the
tenant's perspective.
The main issue here is that there is no service running on port 9443 on
the amphora instance. I am assuming that this is in fact the
amphora-agent service that is running on the instance which should be
listening on this port 9443 but the service does not seem to be up or
not installed at all.
To create the image I installed the CentOS package
"openstack-octavia-diskimage-create", which provides the utility
disk-image-create, but from what I can conclude the amphora-agent is not
being installed (I thought this was done automatically by default :-( )
Can anyone let me know if the amphora-agent is what gets queried on port
9443?
Is the agent not installed/injected by default when building the
amphora image?
What is the command to inject the amphora-agent into the amphora image
when using the disk-image-create command?
Thanks in advance for any assistance.
</pre>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</body>
</html>