Re: [Octavia] octavia not setting up load balancers
Hi,

I have set debug to true and I can see the following, but I'm not sure whether they're related:

Disabling service 'block-storage': Encountered an exception attempting to process config for project 'cinder' (service type 'block-storage'): no such option valid_interfaces in group [cinder]: oslo_config.cfg.NoSuchOptError: no such option valid_interfaces in group [cinder]

Disabling service 'compute': Encountered an exception attempting to process config for project 'nova' (service type 'compute'): no such option valid_interfaces in group [nova]: oslo_config.cfg.NoSuchOptError: no such option valid_interfaces in group [nova]

2024-05-27 19:09:32.136 225246 WARNING openstack [None req-588327db-96d5-4377-ba72-b603731f9568 - c2270ba97baa40cbbb17f89d6980a2d6 - - default default] Disabling service 'image': Encountered an exception attempting to process config for project 'glance' (service type 'image'): no such option valid_interfaces in group [glance]: oslo_config.cfg.NoSuchOptError: no such option valid_interfaces in group [glance]

Jaime

On 27/05/2024 21:20, Winicius Allan wrote:
Understood. Is there any error-level log message or anything like that?
On Mon, May 27, 2024 at 15:19, Jaime Ibar <jim2k7@gmail.com> wrote:
Hi Winicius,
Yes, I have the Octavia:health-mgr interface and it shows as Active and up; in fact, I can ssh into the amphora from the management network.
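For what it's worth, this is roughly how I verified it (o-hm0 and UDP port 5555 are just the usual defaults for the management interface and the health-manager heartbeat in my deployment; the amphora address is the one from the worker log below):

    # on the controller
    ip addr show o-hm0        # lb-mgmt-net port is up and has an address
    ss -lnup | grep 5555      # health manager listening for heartbeats
    ping -c 3 172.16.4.56     # amphora reachable over the management network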
Thanks Jaime
On Mon, 27 May 2024 at 18:54, Winicius Allan <winiciusab12@gmail.com> wrote:
Hi Jaime,
In general, a load balancer stuck in the PENDING_CREATE state points to a network problem. Do you have a network interface on your control node that belongs to the same network as the amphora?
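For example, something along these lines on the control node (o-hm0 is just the usual name for the Octavia management port, and /etc/octavia/octavia.conf its usual location; adjust names and paths for your deployment):

    ip addr show o-hm0                                          # port on the lb-mgmt-net
    grep -A5 '^\[health_manager\]' /etc/octavia/octavia.conf    # controller_ip_port_list should use that address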
Regards.
On Mon, May 27, 2024 at 12:51, Jaime Ibar <jim2k7@gmail.com> wrote:
Nope, I can't see anything.
This is what I can see:
[...] request url / request /usr/lib/python3/dist-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:652
request url https://172.16.4.56:9443// request /usr/lib/python3/dist-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:655
Connected to amphora. Response: <Response [200]> request /usr/lib/python3/dist-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:677
Amphora f3809732-80d8-4795-a5cb-cb36e63bc895 has API version 1.0 _populate_amphora_api_version /usr/lib/python3/dist-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:104
Update DB for loadbalancer id
Marking all listeners of loadbalancer ce31155c-3484-4ff0-8740-fcd8c0915a1c ACTIVE execute
Mark ACTIVE in DB for load balancer id: [...]
So everything seems to be OK on the worker side, but there is nothing in the amphora.
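These are, more or less, the checks I'm running (the UUIDs are the ones from the log above; the commands assume the python-octaviaclient plugin is installed):

    openstack loadbalancer show ce31155c-3484-4ff0-8740-fcd8c0915a1c
    openstack loadbalancer status show ce31155c-3484-4ff0-8740-fcd8c0915a1c
    openstack loadbalancer amphora list --loadbalancer ce31155c-3484-4ff0-8740-fcd8c0915a1c

    # and inside the amphora, where the haproxy-<uuid> directories would normally appear
    ls /var/lib/octavia/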
Thanks Jaime
On Mon, 27 May 2024 at 16:25, Oliver Weinmann <oliver.weinmann@me.com> wrote:
Ok, anything suspicious in the Octavia worker logs?
Sent from my iPhone
On 27.05.2024 at 17:17, Jaime Ibar <jim2k7@gmail.com> wrote:
Hi Oliver,
thanks for the tip but after increasing the quotas, the problem persists :(
Jaime
On Mon, 27 May 2024 at 12:20, Oliver Weinmann <oliver.weinmann@me.com> wrote:
Hi,
Just a quick guess: check the security group quotas for the service tenant (the project the amphorae are booted in).
In our deployment we had a similar issue and saw errors indicating this in the logs.
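Something along these lines should show it (the project name 'service' and the numbers are only examples; use whatever project your amphorae are created in):

    openstack quota show service        # look at the secgroups / secgroup-rules values
    openstack quota set --secgroups 100 --secgroup-rules 1000 service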
Cheers, Oliver
Sent from my iPhone
> On 27.05.2024 at 13:16, Jaime Ibar <jim2k7@gmail.com> wrote:
>
> Hi all,
>
> I've been running Octavia with no issues so far, and now it has stopped working
> for no reason (no recent upgrades other than the OS).
> When I create a load balancer, sometimes the amphora is launched successfully,
> but sometimes it is not.
> When it is launched, no listeners, backends, or pools are created, and the load
> balancer gets stuck in either the pending create or pending update state; deleting
> all the load balancer details from the database is the only way to get rid of it.
>
> I can ssh into the amphora but I can't see any error. gunicorn is listening
> on port 9443, but no haproxy instances (haproxy-<uuid>) are created under
> /var/lib/octavia.
>
> I'm running the 2023.2 Bobcat version.
>
> Any ideas?
>
> TIA
> Jaime
>
> --
> salu2
>
> Jaime