Octavia Amphora staying in PENDING_CREATE forever.

Oliver Weinmann oliver.weinmann at me.com
Sun May 22 18:01:15 UTC 2022


Sorry, I meant control nodes; that term is used in kolla-ansible. I have never deployed OpenStack from scratch. I have always used a tool like Packstack, TripleO, or now kolla-ansible. To me that seems a lot easier, as kolla-ansible can do all the magic: you simply enable Octavia, set some parameters, and you are set.
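For illustration, the core of it in /etc/kolla/globals.yml is only a couple of lines (a minimal sketch; octavia_auto_configure is only available in recent kolla-ansible releases, so check the documentation for your version):

  # /etc/kolla/globals.yml
  enable_octavia: "yes"
  # have kolla-ansible create the amphora flavor, management network
  # and security groups itself (recent releases only)
  octavia_auto_configure: yes

After that you generate the amphora certificates once with "kolla-ansible octavia-certificates" and deploy as usual.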

Sent from my iPhone

> On 5/22/22 18:47, Thomas Goirand <zigo at debian.org> wrote:
> 
> On 5/22/22 16:54, Russell Stather wrote:
>> Hi
>> igadmin@ig-umh-maas:~$ openstack loadbalancer amphora list
>> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+
>> | id                                   | loadbalancer_id                      | status  | role | lb_network_ip                           | ha_ip |
>> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+
>> | 5364e993-0f3b-477f-ac03-07fb6767480f | 962204f0-516a-4bc5-886e-6ed27f98efad | BOOTING | None | fc00:e3bf:6573:9272:f816:3eff:fe57:6acf | None  |
>> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+
>> This gives me an IPv6 address. What do you mean by the controller nodes? The node running Octavia itself?
> 
> OpenStack has no "controller" as such, but the term is usually used for the servers running the API services and workers.
> 
> In this case, you want the nodes running octavia-worker. In the workers' logs, you should be able to see that it cannot SSH into the amphora VMs.
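> For example, something like this (the log paths are only guesses for common setups, adjust to your deployment):
> 
>   # package-based install
>   grep -i ssh /var/log/octavia/octavia-worker.log
>   # kolla-ansible install
>   grep -i ssh /var/log/kolla/octavia/octavia-worker.log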
> 
> The IPv6 address that you saw is the ha_ip, i.e. the VRRP port. It is *not* the IP of the amphora VMs that are booting; those are supposed to show up in "loadbalancer_ip". However, you have nothing in there, so you probably haven't configured Octavia correctly.
> 
> Did you create a network especially for Octavia, and did you write its ID in /etc/octavia/octavia.conf?
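> That part of octavia.conf looks roughly like this (the option names are Octavia's standard ones; the values are placeholders for your own resource IDs):
> 
>   [controller_worker]
>   amp_boot_network_list = <lb-mgmt-net-uuid>
>   amp_secgroup_list = <lb-mgmt-sec-grp-uuid>
>   amp_ssh_key_name = octavia_ssh_key
>   amp_flavor_id = <amphora-flavor-uuid>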
> 
> Also, did you:
> - create an ssh key for Octavia? (see the example below)
> - create a PKI for Octavia?
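> For the ssh key, it boils down to something like this, run as the octavia service user so the keypair ends up in the service project (the name is an example and must match amp_ssh_key_name in octavia.conf):
> 
>   openstack keypair create --public-key /etc/octavia/.ssh/id_rsa.pub octavia_ssh_key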
> 
> I created this script for the Octavia PKI; you can simply run it on one controller and then copy the certs to the other nodes running the Octavia services:
> https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/yoga/utils/usr/bin/oci-octavia-certs
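> The certs it produces are then wired into octavia.conf, roughly like this (the paths are examples; point them at wherever you copied the files):
> 
>   [certificates]
>   ca_certificate = /etc/octavia/certs/server_ca.cert.pem
>   ca_private_key = /etc/octavia/certs/server_ca.key.pem
>   ca_private_key_passphrase = <your-passphrase>
> 
>   [haproxy_amphora]
>   client_cert = /etc/octavia/certs/client.cert-and-key.pem
>   server_ca = /etc/octavia/certs/server_ca.cert.pem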
> 
> This script can be used to create the ssh key, networking, etc. (though you may want to customize it, especially the IP addresses, VLANs, and so on):
> 
> https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/yoga/utils/usr/bin/oci-octavia-amphora-secgroups-sshkey-lbrole-and-network
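> If you would rather do that part by hand, the core of it is something like this (names and CIDR are placeholders; 9443 is the amphora agent's API port):
> 
>   openstack network create lb-mgmt-net
>   openstack subnet create --network lb-mgmt-net \
>     --subnet-range 172.16.0.0/22 lb-mgmt-subnet
>   openstack security group create lb-mgmt-sec-grp
>   openstack security group rule create --protocol tcp \
>     --dst-port 9443 lb-mgmt-sec-grp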
> 
> I hope this helps,
> Cheers,
> 
> Thomas Goirand (zigo)
> 


