Ussuri: how to delete lbaas loadbalancer left over?

Eugen Block eblock at nde.ag
Fri Jul 7 22:05:12 UTC 2023


Hi,

I mean the latter. Once you have Octavia installed you can create new
LBs, but as I understand it you won't be able to delete the legacy
LBs. Did the neutron config change when you upgraded to Ussuri? I
wonder if there's just some config missing that would allow deleting
the old LBs; I don't have a clue, to be honest. Maybe someone else has
more experience and will chime in.
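
One thing worth checking is whether neutron.conf still references the
old lbaasv2 service plugin (a sketch, assuming a stock setup; the
actual plugin list differs per deployment):

   # check which service plugins neutron loads
   grep ^service_plugins /etc/neutron/neutron.conf
   # pre-Ussuri, with neutron-lbaas installed, this typically included:
   # service_plugins = router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

Since neutron-lbaas itself was retired, that plugin can't be loaded on
Ussuri even if it's still listed, which would be consistent with the
404s you see on the /v2.0/lbaas/* endpoints.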

Quoting Michel Jouvin <michel.jouvin at ijclab.in2p3.fr>:

> Hi Eugen,
>
> Thanks for your answer. Do you mean that after installing Octavia
> (it is planned) we'll again be able to delete the remaining
> LBAAS instances? Or just that Octavia is the LBAAS replacement in
> terms of functionality?
>
> Best regards,
>
> Michel
> Sent from my mobile
> On 7 July 2023 at 18:52:30, Eugen Block <eblock at nde.ag> wrote:
>
>> Hi,
>>
>> neutron-lbaas was deprecated in Queens, so you may have to migrate the
>> existing LBs to Octavia. I have never done that myself, but I remember
>> reading through the SUSE docs [1] when one of our customers had to
>> decide whether they wanted to upgrade or reinstall with a newer
>> OpenStack release. They decided to do the latter, so we set up Octavia
>> from scratch and didn't have to migrate anything. There's also a video
>> I've never watched [2]; maybe that helps. I can't really tell whether a
>> migration would work around your issue, but I thought I'd share anyway.
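>>
>> Once Octavia is up, you can at least manage the LBs it knows about
>> with the regular client (a sketch; whether it sees your legacy
>> neutron-lbaas LBs depends on whether they were ever migrated):
>>
>>    # list LBs known to Octavia (needs python-octaviaclient)
>>    openstack loadbalancer list
>>    # delete an LB including its listeners, pools and members
>>    openstack loadbalancer delete --cascade <lb-id>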
>>
>> Regards,
>> Eugen
>>
>> [1]
>> https://documentation.suse.com/soc/9/single-html/suse-openstack-cloud-crowbar-deployment/#sec-depl-ostack-octavia-migrate-users
>> [2] https://www.youtube.com/watch?v=jj4KMJPA0Pk
>>
>> Quoting Michel Jouvin <michel.jouvin at ijclab.in2p3.fr>:
>>
>>> Hi,
>>>
>>> We had a few Magnum (K8s) clusters created a couple of years ago
>>> (with the Rocky and Stein versions) and then forgotten. We started to
>>> delete them this spring when we were running the Train Neutron
>>> service. Basically we managed to do this with the following sequence
>>> (a consolidated command sketch follows the list):
>>>
>>> - Run "openstack coe cluster delete xxx" and wait for DELETE_FAILED
>>> - Use "openstack coe cluster show" / "openstack stack resource list -n2"
>>> to identify the Neutron entry causing the error and pick the
>>> corresponding resource ID
>>> - Find the ports associated with the router with
>>> "openstack port list --router previously_found_id"
>>> - Use the port's subnet to find the corresponding lbaas load balancer
>>> ID, then use the neutron CLI to delete the load balancer (deleting
>>> one by one all the dependencies preventing its removal)
>>> - Rerun "openstack coe cluster delete"
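>>>
>>> In command form, the whole sequence looked roughly like this (a
>>> sketch with placeholder names/IDs; on Train the legacy LBs were
>>> still visible to the neutron CLI):
>>>
>>>    openstack coe cluster delete my-cluster       # ends in DELETE_FAILED
>>>    openstack coe cluster show my-cluster         # locate the failed stack
>>>    openstack stack resource list -n2 <stack-id>  # find the failing resource
>>>    openstack port list --router <router-id>     # ports still on the router
>>>    neutron lbaas-loadbalancer-list               # match the LB via the port's subnet
>>>    neutron lbaas-loadbalancer-delete <lb-id>     # after deleting its dependencies
>>>    openstack coe cluster delete my-cluster       # now succeeds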
>>>
>>> For some reason, we didn't clean up all the abandoned clusters before
>>> upgrading Neutron to Ussuri. Unfortunately, since then our previous
>>> process no longer works: the Neutron server no longer knows anything
>>> about the LBAAS load balancers ("neutron lbaas-loadbalancer-list"
>>> returns nothing). On the Neutron server, any attempt to delete the
>>> subnet attached to a load balancer (or to list the load balancers
>>> with the neutron CLI) results in the following errors in the Neutron
>>> server.log:
>>>
>>> ------
>>>
>>> 2023-07-07 16:27:31.139 14962 WARNING
>>> neutron.pecan_wsgi.controllers.root
>>> [req-71e712fc-d8a7-4815-90b3-b406c10e0caa
>>> a2b4a88cfee0c18702fe89ccb07ae875de3f34f3f1bb43e505fd83aebcfc094c
>>> 245bc968c1b7465dac1b93a30bf67ba9 - 1367c9a4d5da4b229c35789c271dc7aa
>>> 1367c9a4d5da4b229c35789c271dc7aa] No controller found for: lbaas -
>>> returning response code 404: pecan.routing.PecanNotFound
>>> 2023-07-07 16:27:31.140 14962 INFO
>>> neutron.pecan_wsgi.hooks.translation
>>> [req-71e712fc-d8a7-4815-90b3-b406c10e0caa
>>> a2b4a88cfee0c18702fe89ccb07ae875de3f34f3f1bb43e505fd83aebcfc094c
>>> 245bc968c1b7465dac1b93a30bf67ba9 - 1367c9a4d5da4b229c35789c271dc7aa
>>> 1367c9a4d5da4b229c35789c271dc7aa] GET failed (client error): The
>>> resource could not be found.
>>> 2023-07-07 16:27:31.141 14962 INFO neutron.wsgi
>>> [req-71e712fc-d8a7-4815-90b3-b406c10e0caa
>>> a2b4a88cfee0c18702fe89ccb07ae875de3f34f3f1bb43e505fd83aebcfc094c
>>> 245bc968c1b7465dac1b93a30bf67ba9 - 1367c9a4d5da4b229c35789c271dc7aa
>>> 1367c9a4d5da4b229c35789c271dc7aa] 157.136.249.153 "GET
>>> /v2.0/lbaas/loadbalancers?name=kube_service_964f7e76-d2d5-4126-ab11-cd689f6dd9f9_runnerdeploy-wm9sm-5h52l_hello-node-x-default-x-runnerdeploy-wm9sm-5h52l HTTP/1.1" status: 404  len: 304  
>>> time:
>>> 0.0052643
>>> ------
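>>>
>>> (The LB records themselves are presumably still in the Neutron
>>> database, with only the API controller gone; a sketch for checking,
>>> assuming a MySQL/MariaDB backend and that the upgrade kept the
>>> legacy lbaasv2 tables, so the table name is worth double-checking:
>>>
>>>    mysql neutron -e "SELECT id, name, provisioning_status FROM lbaas_loadbalancers;"
>>> )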
>>>
>>> Any suggestion to work around this problem and successfully
>>> delete our old Magnum clusters?
>>>
>>> Thanks in advance for any help. Best regards,
>>>
>>> Michel
