Ussuri: how to delete lbaas loadbalancer left over?
Eugen Block
eblock at nde.ag
Wed Jul 19 07:39:46 UTC 2023
Hi, that sounds promising, I hope you can get rid of the LBs in a
clean manner. I just thought the cleanest way might be to migrate
them first and then delete them properly, without leaving any orphans
in the database. If you succeed, it would be of great value to have
your solution posted here, thanks!
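
If you do end up deleting the leftovers straight from the database,
the deletes have to happen in dependency order: members and health
monitors first, then pools, then listeners, then the load balancer
itself. A rough sketch against a throwaway SQLite stand-in; the table
names are the usual neutron-lbaas ones but the columns are simplified
here, so check your real schema and take a backup before touching the
production MySQL:

```python
# Sketch only: dependency-ordered removal of an orphaned neutron-lbaas
# load balancer. The schema below is a simplified stand-in; verify the
# real table/column names in your Neutron database first.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real neutron DB
cur = conn.cursor()

# Minimal stand-in schema: children reference their parents.
cur.executescript("""
CREATE TABLE lbaas_loadbalancers (id TEXT PRIMARY KEY);
CREATE TABLE lbaas_listeners (id TEXT PRIMARY KEY, loadbalancer_id TEXT);
CREATE TABLE lbaas_pools (id TEXT PRIMARY KEY, loadbalancer_id TEXT);
CREATE TABLE lbaas_members (id TEXT PRIMARY KEY, pool_id TEXT);
CREATE TABLE lbaas_healthmonitors (id TEXT PRIMARY KEY, pool_id TEXT);
INSERT INTO lbaas_loadbalancers VALUES ('lb1');
INSERT INTO lbaas_listeners VALUES ('li1', 'lb1');
INSERT INTO lbaas_pools VALUES ('p1', 'lb1');
INSERT INTO lbaas_members VALUES ('m1', 'p1');
INSERT INTO lbaas_healthmonitors VALUES ('hm1', 'p1');
""")

lb_id = "lb1"
# Delete leaf rows first, then work up to the load balancer itself.
cur.execute("DELETE FROM lbaas_members WHERE pool_id IN "
            "(SELECT id FROM lbaas_pools WHERE loadbalancer_id = ?)",
            (lb_id,))
cur.execute("DELETE FROM lbaas_healthmonitors WHERE pool_id IN "
            "(SELECT id FROM lbaas_pools WHERE loadbalancer_id = ?)",
            (lb_id,))
cur.execute("DELETE FROM lbaas_pools WHERE loadbalancer_id = ?", (lb_id,))
cur.execute("DELETE FROM lbaas_listeners WHERE loadbalancer_id = ?", (lb_id,))
cur.execute("DELETE FROM lbaas_loadbalancers WHERE id = ?", (lb_id,))
conn.commit()

# Nothing belonging to this LB should remain in any of the tables.
leftover = sum(cur.execute("SELECT COUNT(*) FROM %s" % t).fetchone()[0]
               for t in ("lbaas_members", "lbaas_healthmonitors",
                         "lbaas_pools", "lbaas_listeners",
                         "lbaas_loadbalancers"))
print(leftover)  # -> 0
```

The point is only the ordering: deleting the parent rows first would
either violate foreign keys or strand the child rows as new orphans.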
Quoting Michel Jouvin <michel.jouvin at ijclab.in2p3.fr>:
> Hi Eugen,
>
> I found some time to look at my issue in the light of your advice!
> While researching, I came across the neutron-lbaas repository,
> https://opendev.org/openstack/neutron-lbaas, whose README is both
> concise and clear, with a list of the migration options offered (at
> the time it was written). This includes the DB migration tool,
> nlbaas2octavia.py. Following your broken link, I managed to find it:
> it is still there, but hidden. You need to check out the
> https://opendev.org/openstack/neutron-lbaas repo and go back a couple
> of revisions (this is explained in the README). It is Python 2, but
> at least it is a starting point: either install a Python 2 venv or
> port the script, which is not very long and seems to have very few
> dependencies apart from oslo.
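>
> For reference, digging the script out of the repo history might look
> roughly like this (the in-repo path, the exact revision, and the oslo
> dependencies are from memory, so treat them as assumptions):
>
> ```
> git clone https://opendev.org/openstack/neutron-lbaas
> cd neutron-lbaas
> # find the last revision that still carried the tool
> git log --oneline --all -- tools/nlbaas2octavia/nlbaas2octavia.py
> git checkout <that-commit> -- tools/nlbaas2octavia/nlbaas2octavia.py
> # Python 2 environment to run it in
> virtualenv -p python2.7 venv && . venv/bin/activate
> pip install oslo.config oslo.db PyMySQL
> ```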
>
> I'm afraid the neutron-lbaas-proxy is no longer an option: as far as
> I understood, it relies on some LBaaS code still being present in the
> Neutron server, and that part of the Neutron code was completely
> removed, I think in the Ussuri release (which is in fact the cause of
> my problems; too bad, it was a matter of a few days, I just missed
> the very old announcement that this would happen).
>
> If I succeed in cleaning things up (again, I don't really want to
> migrate the existing LBaaS load balancers, I just want to delete
> them), I'll report back in this thread in case it is useful to
> somebody else...
>
> Best regards,
>
> Michel
>
> On 08/07/2023 at 00:15, Eugen Block wrote:
>> Unfortunately, the link to the migration tool doesn’t work:
>>
>> https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation
>>
>> But maybe it can point you in the right direction;
>> neutron-lbaas-proxy seems to be the keyword. But as I already
>> mentioned, I don't have experience with this path.
>>
>> Quoting Eugen Block <eblock at nde.ag>:
>>
>>> Hi,
>>>
>>> I mean the latter. Once you have Octavia installed you can create
>>> new LBs, but as I understand it you won’t be able to delete the
>>> legacy LBs. Did the neutron config change when you upgraded to
>>> Ussuri? I wonder if there’s just some config missing to be able to
>>> delete the old LBs, I don’t have a clue tbh. Maybe someone else
>>> has some more experience and will chime in.
>>>
>>> Quoting Michel Jouvin <michel.jouvin at ijclab.in2p3.fr>:
>>>
>>>> Hi Eugen,
>>>>
>>>> Thanks for your answer. Do you mean that after installing Octavia
>>>> (it is planned) we'll regain the ability to delete the remaining
>>>> LBaaS instances? Or just that Octavia is the LBaaS replacement in
>>>> terms of functionality?
>>>>
>>>> Best regards,
>>>>
>>>> Michel
>>>> Sent from my mobile
>>>> On 7 July 2023 at 18:52:30, Eugen Block <eblock at nde.ag> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> neutron-lbaas was deprecated in Queens, so you may have to migrate the
>>>>> existing LBs to Octavia. I have never done that, but I remember reading
>>>>> through the SUSE docs [1] when one of our customers had to decide whether
>>>>> they wanted to upgrade or reinstall with a newer OpenStack release.
>>>>> They decided to do the latter, so we set up Octavia from scratch and
>>>>> didn't have to migrate anything. There's also a video I've never
>>>>> watched [2]; maybe that helps. I can't really tell whether a migration
>>>>> would work around your issue, but I thought I'd share anyway.
>>>>>
>>>>> Regards,
>>>>> Eugen
>>>>>
>>>>> [1] https://documentation.suse.com/soc/9/single-html/suse-openstack-cloud-crowbar-deployment/#sec-depl-ostack-octavia-migrate-users
>>>>> [2] https://www.youtube.com/watch?v=jj4KMJPA0Pk
>>>>>
>>>>> Quoting Michel Jouvin <michel.jouvin at ijclab.in2p3.fr>:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> We had a few Magnum (K8s) clusters created a couple of years ago
>>>>>> (with the Rocky and Stein versions) and then forgotten. We started
>>>>>> to delete them this spring, when we were running the Train Neutron
>>>>>> service. Basically we managed to do this with the following sequence:
>>>>>>
>>>>>> - Run "openstack coe cluster delete xxx" and wait for DELETE_FAILED
>>>>>> - Use "openstack coe cluster show" / "openstack stack resource list
>>>>>> -n 2" to identify the Neutron entry causing the error and pick the
>>>>>> corresponding resource ID
>>>>>> - Find the ports associated with the router with "openstack port
>>>>>> list --router previously_found_id"
>>>>>> - Use the port subnets to find the ID of the corresponding LBaaS
>>>>>> load balancer, and use the neutron CLI to delete the load balancer
>>>>>> (deleting, one by one, all the dependencies preventing the load
>>>>>> balancer's removal)
>>>>>> - Rerun "openstack coe cluster delete"
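>>>>>>
>>>>>> The sequence above, as concrete commands (all IDs are
>>>>>> placeholders, and the exact dependency deletions vary per load
>>>>>> balancer):
>>>>>>
>>>>>> ```
>>>>>> openstack coe cluster delete <cluster>         # ends up DELETE_FAILED
>>>>>> openstack stack resource list -n 2 <stack-id>  # spot the failing resource
>>>>>> openstack port list --router <router-id>
>>>>>> neutron lbaas-loadbalancer-list                # match by VIP subnet
>>>>>> neutron lbaas-member-delete <member> <pool>
>>>>>> neutron lbaas-pool-delete <pool>
>>>>>> neutron lbaas-listener-delete <listener>
>>>>>> neutron lbaas-loadbalancer-delete <lb>
>>>>>> openstack coe cluster delete <cluster>         # now completes
>>>>>> ```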
>>>>>>
>>>>>> For some reason, we didn't clean up all the abandoned clusters
>>>>>> before upgrading Neutron to Ussuri. Unfortunately, since then our
>>>>>> previous process no longer works, as the Neutron server seems to no
>>>>>> longer know anything about the LBaaS load balancers ("neutron
>>>>>> lbaas-loadbalancer-list" returns nothing). Any attempt to delete
>>>>>> the subnet attached to a load balancer (or to list the load
>>>>>> balancers with the neutron CLI) results in the following errors in
>>>>>> the Neutron server.log:
>>>>>>
>>>>>> ------
>>>>>>
>>>>>> 2023-07-07 16:27:31.139 14962 WARNING
>>>>>> neutron.pecan_wsgi.controllers.root
>>>>>> [req-71e712fc-d8a7-4815-90b3-b406c10e0caa
>>>>>> a2b4a88cfee0c18702fe89ccb07ae875de3f34f3f1bb43e505fd83aebcfc094c
>>>>>> 245bc968c1b7465dac1b93a30bf67ba9 - 1367c9a4d5da4b229c35789c271dc7aa
>>>>>> 1367c9a4d5da4b229c35789c271dc7aa] No controller found for: lbaas -
>>>>>> returning response code 404: pecan.routing.PecanNotFound
>>>>>> 2023-07-07 16:27:31.140 14962 INFO
>>>>>> neutron.pecan_wsgi.hooks.translation
>>>>>> [req-71e712fc-d8a7-4815-90b3-b406c10e0caa
>>>>>> a2b4a88cfee0c18702fe89ccb07ae875de3f34f3f1bb43e505fd83aebcfc094c
>>>>>> 245bc968c1b7465dac1b93a30bf67ba9 - 1367c9a4d5da4b229c35789c271dc7aa
>>>>>> 1367c9a4d5da4b229c35789c271dc7aa] GET failed (client error): The
>>>>>> resource could not be found.
>>>>>> 2023-07-07 16:27:31.141 14962 INFO neutron.wsgi
>>>>>> [req-71e712fc-d8a7-4815-90b3-b406c10e0caa
>>>>>> a2b4a88cfee0c18702fe89ccb07ae875de3f34f3f1bb43e505fd83aebcfc094c
>>>>>> 245bc968c1b7465dac1b93a30bf67ba9 - 1367c9a4d5da4b229c35789c271dc7aa
>>>>>> 1367c9a4d5da4b229c35789c271dc7aa] 157.136.249.153 "GET
>>>>>> /v2.0/lbaas/loadbalancers?name=kube_service_964f7e76-d2d5-4126-ab11-cd689f6dd9f9_runnerdeploy-wm9sm-5h52l_hello-node-x-default-x-runnerdeploy-wm9sm-5h52l HTTP/1.1" status: 404 len: 304 time: 0.0052643
>>>>>> ------
>>>>>>
>>>>>> Any suggestion to workaround this problem and be able to
>>>>>> successfully delete our old Magnum clusters?
>>>>>>
>>>>>> Thanks in advance for any help. Best regards,
>>>>>>
>>>>>> Michel
More information about the openstack-discuss mailing list