Also, an FYI: they will not stay in PENDING_* forever. They are only in that state while a controller is working on the LB, for example retrying to connect to it. The worker process logs will show the retry warning messages that point to the issue. After your configured retry timeouts expire, the controller gives up retrying and marks the LB's provisioning status as ERROR. At that point you can delete it or fail over the load balancer to try again. I don't know what the timeouts are configured to when deployed with kolla, but you might adjust them to your organization's preference (i.e., retry for a really long time or fail quickly).

Michael
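For reference, the retry behavior is controlled by options in octavia.conf. This is just a sketch using the upstream option names and their documented defaults; kolla may ship different values, so check your deployment:

    [haproxy_amphora]
    # How many times the worker retries connecting to a booting amphora,
    # and how long it waits between attempts (seconds).
    connection_max_retries = 120
    connection_retry_interval = 5

    [controller_worker]
    # How long the worker waits for the amphora instance to go ACTIVE in Nova.
    amp_active_retries = 30
    amp_active_wait_sec = 10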
On Sun, May 22, 2022 at 11:41 AM Russell Stather <Russell.Stather@ignitiongroup.co.za> wrote:

Aha, your script helped a lot 🙂
I see that the lb management LAN needs to be external, and routable from the OpenStack control nodes.
I need my network guys to allocate an extra external network so I can use it as the lb management network. I'll try tomorrow and let you know if I succeed.
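Roughly, what I have in mind once they give me a VLAN is something like this (the VLAN ID, physical network name and subnet range below are just placeholders until they confirm the details):

    # create the lb-mgmt provider network and its subnet
    openstack network create \
      --provider-network-type vlan \
      --provider-physical-network physnet1 \
      --provider-segment 123 \
      lb-mgmt-net

    openstack subnet create \
      --network lb-mgmt-net \
      --subnet-range 172.16.0.0/22 \
      --allocation-pool start=172.16.0.10,end=172.16.3.250 \
      lb-mgmt-subnet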
________________________________
From: Oliver Weinmann <oliver.weinmann@me.com>
Sent: 22 May 2022 20:01
To: Thomas Goirand <zigo@debian.org>
Cc: openstack-discuss@lists.openstack.org <openstack-discuss@lists.openstack.org>
Subject: Re: Octavia Amphora staying in PENDING_CREATE forever.

Sorry, I meant control nodes. This term is used in kolla-ansible. I have never deployed OpenStack from scratch; I have always used a tool like Packstack, TripleO or, now, kolla-ansible. To me it seems a lot easier, as kolla-ansible does all the magic: you simply enable Octavia, set some parameters and you are set.
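Roughly speaking (the option and command names below are from recent kolla-ansible releases, so double-check them against the version you deployed):

    # globals.yml
    enable_octavia: "yes"

    # generate the amphora certificates, then deploy:
    kolla-ansible octavia-certificates
    kolla-ansible -i <inventory> deploy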
Sent from my iPhone
On 22.05.2022, at 18:47, Thomas Goirand <zigo@debian.org> wrote:
On 5/22/22 16:54, Russell Stather wrote:
Hi,

igadmin@ig-umh-maas:~$ openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+
| id                                   | loadbalancer_id                      | status  | role | lb_network_ip                           | ha_ip |
+--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+
| 5364e993-0f3b-477f-ac03-07fb6767480f | 962204f0-516a-4bc5-886e-6ed27f98efad | BOOTING | None | fc00:e3bf:6573:9272:f816:3eff:fe57:6acf | None  |
+--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+

This gives me an IPv6 address. What do you mean by the controller nodes? The node running Octavia itself?
OpenStack has no formal "controller" role, but the term is usually used for the servers running the API services and workers.
In this case, you want the nodes running octavia-worker. In the worker logs, you should be able to see that it cannot SSH into the amphora VMs.
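For example, something along these lines (the log paths differ between distro packages and kolla containers, so adjust them to your deployment):

    # on a node running octavia-worker:
    grep -iE 'retry|warning|error' /var/log/octavia/octavia-worker.log
    # or, with kolla-ansible:
    grep -iE 'retry|warning|error' /var/log/kolla/octavia/octavia-worker.log

    # then check that the controller can actually reach the amphora's
    # management address shown by "openstack loadbalancer amphora list":
    ping6 -c 3 fc00:e3bf:6573:9272:f816:3eff:fe57:6acf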
The IPv6 that you saw is the ha_ip, i.e., the VRRP port. This is *not* the IP of the amphora VMs that are booting; those are supposed to be in "loadbalancer_ip". However, you have nothing in there, so you probably haven't configured Octavia correctly.
Did you create a network specifically for Octavia, and did you write its ID in /etc/octavia/octavia.conf?
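In octavia.conf that is the amp_boot_network_list option (the values below are placeholders; use your own IDs):

    [controller_worker]
    # Neutron network ID(s) the amphorae boot on, i.e. the lb-mgmt network:
    amp_boot_network_list = <lb-mgmt-net-id>
    # security group(s) applied to the amphora management port:
    amp_secgroup_list = <lb-mgmt-sec-grp-id>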
Also, did you:
- create an ssh key for Octavia?
- create a PKI for Octavia?
I created this script for the Octavia PKI, which you can simply run on one controller and then copy the certs to the other nodes running the Octavia services: https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-...
This script can be used (though you may want to customize it, especially the IP addresses, VLANs, etc.) to create the ssh key, networking, and so on:
https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-...
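If you prefer to do the SSH key part by hand, it boils down to something like this (the key name and public key path are just examples; the keypair must be created as the user Octavia boots the amphorae with, and the name has to match octavia.conf):

    # create the keypair that will be injected into the amphorae:
    openstack keypair create --public-key ~/.ssh/octavia_amphora.pub octavia_ssh_key

    # octavia.conf
    [controller_worker]
    amp_ssh_key_name = octavia_ssh_key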
I hope this helps.

Cheers,
Thomas Goirand (zigo)