From yasufum.o at gmail.com Mon May 2 06:53:50 2022 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Mon, 2 May 2022 15:53:50 +0900 Subject: [tacker] Cancelling meeting this week Message-ID: Hi, Since most of us joining from Japan will be off Tuesday this week for a holiday, we will cancel the meeting. Thanks, Yasufumi
From sbauza at redhat.com Mon May 2 13:54:31 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 2 May 2022 15:54:31 +0200 Subject: [nova][placement] Zed 1st spec review day on May 10th 2022 Message-ID: What is said above ^ Contributors, sharpen your pens and please make sure your spec is uploaded to Gerrit for this day. Please also try to look at Gerrit during this day, and if you can, please be around on the #openstack-nova IRC channel [1]. Reviewers, please try to look at the specs on this day. If you would prefer to discuss with the contributor, try to ping him or her on IRC. We will try to have a second spec review day around the end of June, but it would be better if we could at least accept some specs for the Zed release before then :-) Thanks, -Sylvain [1] https://docs.openstack.org/contributors/common/irc.html
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From skaplons at redhat.com Mon May 2 14:45:32 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 02 May 2022 16:45:32 +0200 Subject: [neutron] CI meeting 3.05 cancelled Message-ID: <2186985.Icojqenx9y@p1> Hi, Tomorrow I will not be able to chair our CI meeting. There is nothing urgent to discuss really, so let's cancel the meeting this week. See you at the meeting on May 10th. -- Slawek Kaplonski Principal Software Engineer Red Hat
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL:
From katonalala at gmail.com Mon May 2 15:01:55 2022 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 2 May 2022 17:01:55 +0200 Subject: [neutron] Team meeting 3. 5. cancelled Message-ID: Hi, Tomorrow (3. May.) it seems that a lot of people will be offline during the Neutron Team meeting, so let's cancel it for this week. See you at the meeting next week. Lajos Katona (lajoskatona)
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From zigo at debian.org Mon May 2 15:49:09 2022 From: zigo at debian.org (Thomas Goirand) Date: Mon, 2 May 2022 17:49:09 +0200 Subject: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com> <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de> <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> Message-ID: <8a9d16a0-d4bb-8e45-9ff4-074f075fad81@debian.org> On 4/29/22 19:35, Ghanshyam Mann wrote: > [...] from a technical perspective, especially > while upgrading, it is hard to know which year these releases were released. Excuse my words (I'm being direct, hopefully not offensive), but I'm calling "bullshit" on this one! :) Just read this page and you know: https://releases.openstack.org/ > And when we have the tick-tock release > model[1], numbers are more useful for operators to know which one is a tick release and which one is tock.
With a name > alone it is not easy to tell. Except that the above page could be modified to tell tick or tock... Also, hopefully, at some point we will forget about the "tock" release and decide it's ok to upgrade from 4 releases ago (how can you tell we're not going to do that one day?). Cheers, Thomas Goirand (zigo)
From gmann at ghanshyammann.com Mon May 2 16:14:51 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 02 May 2022 11:14:51 -0500 Subject: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <8a9d16a0-d4bb-8e45-9ff4-074f075fad81@debian.org> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com> <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de> <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> <8a9d16a0-d4bb-8e45-9ff4-074f075fad81@debian.org> Message-ID: <180858d9949.d615945b489978.504505388478225272@ghanshyammann.com> ---- On Mon, 02 May 2022 10:49:09 -0500 Thomas Goirand wrote ---- > On 4/29/22 19:35, Ghanshyam Mann wrote: > > [...] from a technical perspective, especially > > while upgrading, it is hard to know which year these releases were released. > > > > Excuse my words (I'm being direct, hopefully not offensive), but I'm > > calling "bullshit" on this one! :) Just read this page and you know: Sorry, Thomas, but being direct or justifying a disagreement is a separate thing from using such words; I do not find them appropriate. My humble request is to avoid them, please. > > https://releases.openstack.org/ Yes, this page has all the information, but that was exactly my point: you need to look at the release page to get those details rather than getting that information directly from the release name/number. > > > And when we have the tick-tock release > > model[1], numbers are more useful for operators to know which one is a tick release and which one is tock. With a name > > alone it is not easy to tell. Except that the above page could be modified to tell tick or tock... Yes, that is the plan, but we are also checking the legal side in case there is any issue with using the words 'tick' and 'tock'. > > Also, hopefully, at some point we will forget about the "tock" release > and decide it's ok to upgrade from 4 releases ago (how can you tell > we're not going to do that one day?). Anything is possible in the future :) and nobody knows that. But on a serious note, yes, there is a chance that tock could go away or the tick timeline could be increased. That is something we will see based on feedback and users' needs. But as per the current plan, there is no such change planned and we will continue with the tick, tock way.
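To make the lookup concrete, here is a minimal Python sketch (an editorial illustration only, not part of any proposal; the helper names are made up for this example and the mapping is just a small hand-copied subset of the data published on https://releases.openstack.org/). With a name, the year has to be looked up; a year-based identifier carries it directly.

# Illustration only: a small subset of the series data from
# https://releases.openstack.org/, hard-coded by hand for this example.
SERIES_FIRST_RELEASED = {"wallaby": 2021, "xena": 2021, "yoga": 2022}

def year_of_name(series: str) -> int:
    # A name-based release needs an external lookup to find its year.
    return SERIES_FIRST_RELEASED[series.lower()]

def year_of_number(identifier: str) -> int:
    # A year-based identifier such as "2023.1" carries the year itself.
    return int(identifier.split(".")[0])

print(year_of_name("Yoga"))      # 2022, but only because the table says so
print(year_of_number("2023.1"))  # 2023, read directly from the identifier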
-gmann > > Cheers, > > Thomas Goirand (zigo) > >
From zigo at debian.org Mon May 2 17:52:01 2022 From: zigo at debian.org (Thomas Goirand) Date: Mon, 2 May 2022 19:52:01 +0200 Subject: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <180858d9949.d615945b489978.504505388478225272@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com> <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de> <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> <8a9d16a0-d4bb-8e45-9ff4-074f075fad81@debian.org> <180858d9949.d615945b489978.504505388478225272@ghanshyammann.com> Message-ID: <2235d047-3b71-74d2-b7de-3862e5b41431@debian.org> On 5/2/22 18:14, Ghanshyam Mann wrote: > ---- On Mon, 02 May 2022 10:49:09 -0500 Thomas Goirand wrote ---- > > On 4/29/22 19:35, Ghanshyam Mann wrote: > > > [...] from a technical perspective, especially > > > while upgrading, it is hard to know which year these releases were released. > > > > > > Excuse my words (I'm being direct, hopefully not offensive), but I'm > > > calling "bullshit" on this one! :) Just read this page and you know: > > > > Sorry, Thomas, but being direct or justifying a disagreement is a separate thing from using such > > words; I do not find them appropriate. My humble request is to avoid them, please. Ok, sorry... :/ > Yes, this page has all the information, but that was exactly my point: you need to look at the release > page to get those details rather than getting that information directly from the release name/number. The same way, you need to remember that 202x.1 is tick, and 202x.2 is tock. Oh, or is it the other way around?!? I actually think it is, with the current plan... Cheers, Thomas Goirand (zigo)
From nurmatov.mamatisa at huawei.com Mon May 2 18:22:44 2022 From: nurmatov.mamatisa at huawei.com (Nurmatov Mamatisa) Date: Mon, 2 May 2022 18:22:44 +0000 Subject: [neutron] Bug deputy April 25 to May 2 Message-ID: <8648832dedd94ebdabfac2d8d784adda@huawei.com> Hi, I was bug deputy last week. Below is the week's summary. Undecided bugs need further triage. One expired bug was restored.
Details:

Critical
--------
- https://bugs.launchpad.net/neutron/+bug/1970679 - neutron-tempest-plugin-designate-scenario cross project job is failing on OVN
  - Confirmed
  - Related fix: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/839763
  - Assigned to yatin

High
----
- https://bugs.launchpad.net/neutron/+bug/1970216 - not sending Nova notification when using neutron API on mod_wsgi
  - In progress: https://review.opendev.org/c/openstack/neutron/+/839218
  - Assigned to Krzysztof Tomaszewski
- https://bugs.launchpad.net/neutron/+bug/1970606 - Live migration packet loss increasing as the number of security group rules increases
  - Confirmed
  - Unassigned
- https://bugs.launchpad.net/neutron/+bug/1970759 - Race during removal of the network from DHCP agent
  - In progress: https://review.opendev.org/c/openstack/neutron/+/839779
  - Assigned to Slawek Kaplonski

Low
---
- https://bugs.launchpad.net/neutron/+bug/1970944 - Refactor neutron-keepalived-state-change
  - In progress: https://review.opendev.org/c/openstack/neutron/+/836140
  - Assigned to Rodolfo Alonso

Undecided
---------
- https://bugs.launchpad.net/neutron/+bug/1971050 - Nested KVM Networking Issue - New
- https://bugs.launchpad.net/neutron/+bug/1970948 - [VPNAAS] No start possible without rootwrap - New

Expired
-------
- https://bugs.launchpad.net/neutron/+bug/1719806 - IPv4 subnets added when VM is already up on an IPv6 subnet on the same network, does not enable VM ports to get IPv4 address - New

Best regards, Mamatisa Nurmatov Advanced Software Technology Lab / Cloud Technologies Research
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From mahdimnorouzii at gmail.com Mon May 2 18:07:14 2022 From: mahdimnorouzii at gmail.com (Mahdi Norouzi) Date: Mon, 2 May 2022 22:37:14 +0430 Subject: Fwd: question -dashboard horizon In-Reply-To: References: Message-ID: ---------- Forwarded message --------- From: mahdi n Date: Sun, May 1, 2022, 01:58 Subject: question -dashboard horizon To: I logged in to Horizon, but the project page http://ip:8000/project shows the error "You are not authorized to access this page please login" (screenshot attached). How can I solve this?
-------------- next part -------------- An HTML attachment was scrubbed... URL:
-------------- next part -------------- A non-text attachment was scrubbed... Name: openstack.jpg Type: image/jpeg Size: 37524 bytes Desc: not available URL:
From laurentfdumont at gmail.com Mon May 2 21:50:43 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 2 May 2022 17:50:43 -0400 Subject: question -dashboard horizon In-Reply-To: References: Message-ID: We will need a lot more info to troubleshoot.
- How did you deploy OpenStack?
- What do the Horizon logs show?
- Does the CLI work?
On Mon, May 2, 2022 at 3:45 PM Mahdi Norouzi wrote: > ---------- Forwarded message --------- > From: mahdi n > Date: Sun, May 1, 2022, 01:58 > Subject: question -dashboard horizon > To: > > I logged in to Horizon, but the project page http://ip:8000/project > shows the error "You are not authorized to access this page please login" (screenshot attached). > How can I solve this? > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jimmy at openinfra.dev Mon May 2 22:50:29 2022 From: jimmy at openinfra.dev (Jimmy McArthur) Date: Mon, 2 May 2022 17:50:29 -0500 Subject: question -dashboard horizon In-Reply-To: References: Message-ID: <86F2F6FF-9178-46DB-B32C-E92D271A4FAA@getmailspring.com> Hi Mahdi, Something else to expose here is the distro you're using. The logo in the dashboard screencap is extremely outdated and customized, which would tell us you're running a very old distro. Can I suggest working with a more recent distro [1] or working with a consulting company[2] to give you a hand? Thank you, Jimmy McArthur [1] https://www.openstack.org/marketplace/distros/ [2] https://www.openstack.org/marketplace/consulting On Apr 30 2022, at 3:00 pm, mahdi n wrote: > I login-ed in horizon but in page project http://ip:8000/project > > show error that : You are not authorized to access this page please login > > > screenshot > > > how to solve this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From lokendrarathour at gmail.com Tue May 3 05:40:08 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 3 May 2022 11:10:08 +0530 Subject: Openstack Wallaby Overcloud Deployment- ERROR Message-ID: Hi, Team getting this error during Openstack Wallaby Deployment: Please check once and suggest your valuable input. r/log/heat-1651554808.1084332.log 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured while running the command: ValueError: Failed to deploy: ERROR: HEAT-E99001 Service neutron is not available for resource type OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network endpoint is not in service catalog. Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/heat/common/context.py", line 416, in wrapped return func(self, ctx, *args, **kwargs) File "/usr/lib/python3.6/site-packages/heat/engine/service.py", line 1300, in validate_template validate_res_tmpl_only=True) File "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in wrapper result = f(*args, **kwargs) File "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 971, in validate result = res.validate_template() File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in validate_template self.t.resource_type File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 200, in _validate_service_availability raise ex heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron is not available for resource type OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network endpoint is not in service catalog. 
2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent call last): 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 394, in _try_overcloud_deploy_with_compat_yaml 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud deployment_options=deployment_options) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 164, in _heat_deploy 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud self.working_dir) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/workflows/parameters.py", line 134, in check_deprecated_parameters 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud valid=True) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/workflows/roles.py", line 49, in get_roles 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud files, env_files) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 1829, in build_stack_data 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud result = orchestration_client.stacks.validate(**fields) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heatclient/v1/stacks.py", line 350, in validate 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud resp = self.client.post(url, **args) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heatclient/common/http.py", line 292, in post 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return self.client_request("POST", url, **kwargs) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heatclient/common/http.py", line 282, in client_request 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud resp, body = self.json_request(method, url, **kwargs) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heatclient/common/http.py", line 271, in json_request 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud resp = self._http_request(url, method, **kwargs) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heatclient/common/http.py", line 234, in _http_request 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud raise exc.from_response(resp) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud heatclient.exc.HTTPBadRequest: ERROR: HEAT-E99001 Service neutron is not available for resource type OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network endpoint is not in service catalog. 
2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent call last): 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heat/common/context.py", line 416, in wrapped 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return func(self, ctx, *args, **kwargs) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heat/engine/service.py", line 1300, in validate_template 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud validate_res_tmpl_only=True) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in wrapper 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud result = f(*args, **kwargs) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 971, in validate 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud result = res.validate_template() 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in validate_template 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud self.t.resource_type 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 200, in _validate_service_availability 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud raise ex 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron is not available for resource type OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network endpoint is not in service catalog. 
2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud During handling of the above exception, another exception occurred: 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent call last): 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, self).run(parsed_args) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in run 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, self).run(parsed_args) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = self.take_action(parsed_args) or 0 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 1226, in take_action 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud user_tht_root, created_env_files) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 357, in deploy_tripleo_heat_templates 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud deployment_options=deployment_options) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 397, in _try_overcloud_deploy_with_compat_yaml 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud raise ValueError(messages) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud ValueError: Failed to deploy: ERROR: HEAT-E99001 Service neutron is not available for resource type OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network endpoint is not in service catalog. 
2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent call last): 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heat/common/context.py", line 416, in wrapped 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return func(self, ctx, *args, **kwargs) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heat/engine/service.py", line 1300, in validate_template 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud validate_res_tmpl_only=True) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in wrapper 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud result = f(*args, **kwargs) 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 971, in validate 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud result = res.validate_template() 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in validate_template 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud self.t.resource_type 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 200, in _validate_service_availability 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud raise ex 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron is not available for resource type OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network endpoint is not in service catalog. 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.119 548199 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-03 10:44:41.157 548199 ERROR openstack [-] Failed to deploy: ERROR: HEAT-E99001 Service neutron is not available for resource type OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network endpoint is not in service catalog. 
Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/heat/common/context.py", line 416, in wrapped return func(self, ctx, *args, **kwargs) File "/usr/lib/python3.6/site-packages/heat/engine/service.py", line 1300, in validate_template validate_res_tmpl_only=True) File "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in wrapper result = f(*args, **kwargs) File "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 971, in validate result = res.validate_template() File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in validate_template self.t.resource_type File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 200, in _validate_service_availability raise ex heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron is not available for resource type OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network endpoint is not in service catalog. 2022-05-03 10:44:41.209 548199 INFO osc_lib.shell [-] END return value: 1
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From moreira.belmiro.email.lists at gmail.com Tue May 3 12:00:29 2022 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Tue, 3 May 2022 14:00:29 +0200 Subject: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <2235d047-3b71-74d2-b7de-3862e5b41431@debian.org> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com> <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de> <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> <8a9d16a0-d4bb-8e45-9ff4-074f075fad81@debian.org> <180858d9949.d615945b489978.504505388478225272@ghanshyammann.com> <2235d047-3b71-74d2-b7de-3862e5b41431@debian.org> Message-ID: Hi, I replied in the patch, but I also would like to express my opinion in this thread. Release names can be problematic, as described in this proposal: https://review.opendev.org/c/openstack/governance/+/839897 However, I still think that the release name is important for the community. It's part of the OpenStack culture! It is something that we have had since release 'A' for 'Austin'. I never saw anyone mentioning that they are still running Nova with the release '2011.3' and need to upgrade. Instead, the community mentions the release name, 'Diablo' (for example...) Having said that, I also believe it's important to have a clear release identification schema without the ambiguity of the alphabet iteration. In my opinion both should coexist. Belmiro On Mon, May 2, 2022 at 8:01 PM Thomas Goirand wrote: > On 5/2/22 18:14, Ghanshyam Mann wrote: > > ---- On Mon, 02 May 2022 10:49:09 -0500 Thomas Goirand wrote ---- > > > On 4/29/22 19:35, Ghanshyam Mann wrote: > > > > [...] from a technical perspective, especially > > > > while upgrading, it is hard to know which year these releases were > released. > > > > > > Excuse my words (I'm being direct, hopefully not offensive), but I'm > > > calling "bullshit" on this one! :) Just read this page and you know: > > > > Sorry, Thomas, but being direct or justifying a disagreement is a > separate thing from using such > > words; I do not find them appropriate. My humble request is to avoid > them, please. > > Ok, sorry...
:/ > > > Yes, this page has all the information, but that was exactly my point: you > > need to look at the release > > page to get those details rather than getting that information directly from > > the release name/number. > > The same way, you need to remember that 202x.1 is tick, and 202x.2 is > tock. Oh, or is it the other way around?!? I actually think it is, with > the current plan... > > Cheers, > > Thomas Goirand (zigo) > >
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From smooney at redhat.com Tue May 3 12:49:23 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 03 May 2022 13:49:23 +0100 Subject: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <2235d047-3b71-74d2-b7de-3862e5b41431@debian.org> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com> <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de> <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> <8a9d16a0-d4bb-8e45-9ff4-074f075fad81@debian.org> <180858d9949.d615945b489978.504505388478225272@ghanshyammann.com> <2235d047-3b71-74d2-b7de-3862e5b41431@debian.org> Message-ID: <1de4eff231ba6380e17b927c6c8429268690c233.camel@redhat.com> On Mon, 2022-05-02 at 19:52 +0200, Thomas Goirand wrote: > On 5/2/22 18:14, Ghanshyam Mann wrote: > > ---- On Mon, 02 May 2022 10:49:09 -0500 Thomas Goirand wrote ---- > > > On 4/29/22 19:35, Ghanshyam Mann wrote: > > > > [...] from a technical perspective, especially > > > > while upgrading, it is hard to know which year these releases were released. > > > > > > Excuse my words (I'm being direct, hopefully not offensive), but I'm > > > calling "bullshit" on this one! :) Just read this page and you know: > > > > Sorry, Thomas, but being direct or justifying a disagreement is a separate thing from using such > > words; I do not find them appropriate. My humble request is to avoid them, please. > > Ok, sorry... :/ I actually don't find that to be offensive, but I'm aware cultural norms differ on this. If this was an in-person conversation I would not consider it inappropriate, aggressive or rude, so I don't think it's unsuitable for the list or bad netiquette. > > > Yes, this page has all the information, but that was exactly my point: you need to look at the release > > page to get those details rather than getting that information directly from the release name/number. > > The same way, you need to remember that 202x.1 is tick, and 202x.2 is > tock. Oh, or is it the other way around?!? I actually think it is, with > the current plan... Well, for most people I don't think it will matter. The only real difference between the two releases is the upgrade testing and considerations. Both will be as stable and feature-rich as each other; actually, in my personal experience the second release of the year tends to have slightly fewer features, as more people take time off in the northern hemisphere summer than in the winter, and as a result we deliver fewer features in the second release. I strongly agree with what dan and you previously said regarding not implying primacy of one release over the other. I would hope that deploying every release continues to be the norm, and I don't think we should be doing anything with regard to naming that would discourage that. I had fully expected that we would tag the tick/tock releases on the release page, as zigo mentioned.
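As a tiny sketch of the kind of check such tagging would save people from doing by memory, assuming (and this is only an assumption, not something settled in this thread) that the first release of each year would be the tick one; the function name below is made up for the illustration:

def is_tick(identifier: str) -> bool:
    # Assumption: under the proposed numbering, "<year>.1" would be the tick
    # release and "<year>.2" the tock one; the thread above notes this is
    # exactly the detail that is easy to get backwards.
    return identifier.split(".")[1] == "1"

print(is_tick("2023.1"))  # True under that assumption
print(is_tick("2023.2"))  # False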
Although I'll admit I find the half-date, half-release-number format a bit confusing too. If we are going date-based I would have preferred 2022.03 and 2022.09, basically year.month in YYYY.MM or YY.MM format, which avoids any perceptions etc. related to .0, .1 vs .2. In general I'm also OK with keeping naming and delegating the selection to the foundation. I agree that the overhead of the current TC process seems to outweigh the benefit, and I think the community vote was preferable to the current TC process, but delegating release naming to the foundation makes it simply a marketing choice and removes much of the overhead. I'm not really strongly opinionated on named vs numbered releases. For Ubuntu releases I almost always address them by the numbers, not the names, as the names are kind of confusing to me since they wrapped around. For OpenStack I always went by the names. Our downstream product is numbered, so from a downstream perspective there is always an indirection anyway, so name vs number will have little impact from that point of view. Although if we do use names, I think aardvark is the prime candidate for AA :) > > Cheers, > > Thomas Goirand (zigo) >
From jdratlif at globalnoc.iu.edu Tue May 3 13:14:41 2022 From: jdratlif at globalnoc.iu.edu (John Ratliff) Date: Tue, 03 May 2022 09:14:41 -0400 Subject: [sdk] creating a blank volume with image properties Message-ID: <5c2b6b7ff0e168ac6afc765408fbc6c1@globalnoc.iu.edu> We're standing up a second openstack cluster to do a leapfrog migration upgrade. We are using the same ceph storage backend. I need to create volumes in the new openstack cluster that use the underlying ceph rbd. Currently, we are creating a volume in the new openstack cluster, deleting the rbd created by openstack, and renaming the original rbd to match the new openstack mapping. This process loses the image properties from the original volumes. We are using virtio-scsi for the disk block device, and want to continue doing that in the new cluster. When we create the server using the new volume in the new openstack cluster, it reverts to using virtio-blk (/dev/vda). What can I do to keep using our image properties in the newly created volume? I thought I could just pass volume_image_metadata as an argument to create_volume(), but it doesn't work. I end up with an image that has no image metadata properties.

import openstack

openstack.enable_logging(debug=False)
conn = openstack.connect(cloud="test")
v = conn.create_volume(
    name="testvol",
    size=5,
    wait=True,
    bootable=True,
    volume_image_metadata={
        "hw_scsi_model": "virtio-scsi",
        "hw_disk_bus": "scsi",
        "hw_qemu_guest_agent": "yes",
        "os_require_quiesce": "yes",
        "hw_rng_model": "virtio",
        "image_id": "00000000-0000-0000-0000-000000000000",
    },
)

What can I do to create this openstack volume while preserving my original volume properties? Since we don't care about the image, I thought about creating a blank image that the volume could be based on with those properties, but I'm looking for suggestions. Thanks. --John
From martin.chlumsky at gmail.com Tue May 3 14:06:58 2022 From: martin.chlumsky at gmail.com (Martin Chlumsky) Date: Tue, 3 May 2022 14:06:58 -0400 Subject: [skyline] SSO support Message-ID: Hello, We are evaluating Skyline and one major blocker is Single sign-on (SSO) support (we currently federate Keystone/Horizon with AzureAD). I searched through the git repositories and couldn't find any relevant mention of SSO or federation (openidc or saml).
Are there plans to support this feature (or is it supported and I just missed it somehow)? Thank you, Martin
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From derekokeeffe85 at yahoo.ie Tue May 3 14:28:48 2022 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Tue, 3 May 2022 14:28:48 +0000 (UTC) Subject: OSA log location/enabling References: <1614624012.6087414.1651588128263.ref@mail.yahoo.com> Message-ID: <1614624012.6087414.1651588128263@mail.yahoo.com> Hi all, So after a few days of getting our config files correct we have a working Horizon dashboard. We have created a network and flavour so we could spin up an instance to test what we've done so far. The issue is that the image upload fails with "Unable to create the image." and when we attached to the glance container there are no log files. In the openstack_user_config file we didn't specify a syslog server, so we were under the impression it would store them on the container or infra node. Maybe they need to be explicitly enabled somewhere that we've missed, as the neutron & openvswitch logs are in /var/log/ on the infra node. If anyone has any info on viewing or enabling the logs, that would be greatly appreciated. Regards, Derek
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From mahdimnorouzii at gmail.com Tue May 3 15:06:53 2022 From: mahdimnorouzii at gmail.com (Mahdi Norouzi) Date: Tue, 3 May 2022 19:36:53 +0430 Subject: Fwd: question -dashboard horizon In-Reply-To: References: Message-ID: I logged in to Horizon, but the project page http://ip:8000/project shows the error "You are not authorized to access this page please login" (screenshot and logs in attachment). How can I solve this?
-------------- next part -------------- An HTML attachment was scrubbed... URL:
-------------- next part -------------- /home/norouzi/Desktop/horizen/horizon/.tox/venv/lib/python3.6/site-packages/memcache.py:1381: ResourceWarning: unclosed if self._get_socket(): DEBUG keystoneauth.session REQ: curl -g -i -X GET http://key:5000/v3/users/d5728a4d25c1451897cbbed1a45e2136/projects?
-H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA256}6960397b1b7313e96806d1bc0bdcdefb094dc428aee959ac04d9677cb21f9797" DEBUG keystoneauth.session RESP: [200] Content-Length: 965 Content-Type: application/json Date: Mon, 02 May 2022 23:03:40 GMT Server: WSGIServer/0.1 Python/2.7.5 Vary: X-Auth-Token x-openstack-request-id: req-a95a4e52-8292-4398-955f-2f79c9cb1de4 DEBUG keystoneauth.session RESP BODY: {"links": {"self": "http://key:5000/v3/users/d5728a4d25c1451897cbbed1a45e2136/projects", "previous": null, "next": null}, "projects": [{"is_domain": false, "description": "", "links": {"self": "http://key:5000/v3/projects/1b33749815924a89b7778fd43af042ae"}, "tags": [], "enabled": true, "id": "1b33749815924a89b7778fd43af042ae", "parent_id": "default", "domain_id": "default", "name": "test"}, {"is_domain": false, "description": "Bootstrap project for initializing the cloud.", "links": {"self": "http://key:5000/v3/projects/4f60ef82336b4edf859a04a00b96e52c"}, "tags": [], "enabled": true, "id": "4f60ef82336b4edf859a04a00b96e52c", "parent_id": "default", "domain_id": "default", "name": "admin"}, {"is_domain": false, "description": "", "links": {"self": "http://key:5000/v3/projects/9c5a40cd282141b99994af3ca24b594f"}, "tags": [], "enabled": true, "id": "9c5a40cd282141b99994af3ca24b594f", "parent_id": "default", "domain_id": "default", "name": "MainProject"}]} DEBUG keystoneauth.session GET call to identity for http://key:5000/v3/users/d5728a4d25c1451897cbbed1a45e2136/projects used request id req-a95a4e52-8292-4398-955f-2f79c9cb1de4 /home/norouzi/Desktop/horizen/horizon/.tox/venv/lib/python3.6/site-packages/memcache.py:1381: ResourceWarning: unclosed if self._get_socket(): DEBUG oslo_policy.policy enforce: rule="get_images" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="os_compute_api:os-keypairs:index" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="os_compute_api:os-server-groups:index" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": 
"9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="volume:get_all" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="volume:get_all_snapshots" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="group:get_all" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="group:get_all_group_snapshots" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="identity:list_projects" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": 
"9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="os_compute_api:os-hypervisors" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy Rule [os_compute_api:os-hypervisors] does not exist DEBUG oslo_policy.policy enforce: rule="default" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy Rule [default] does not exist DEBUG oslo_policy.policy enforce: rule="compute_extension:aggregates" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy Rule [compute_extension:aggregates] does not exist DEBUG oslo_policy.policy enforce: rule="default" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy Rule [default] does not exist DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": 
"MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="os_compute_api:servers:detail" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="get_images" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": 
"9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="volume_extension:types_manage" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy Rule [volume_extension:types_manage] does not exist DEBUG oslo_policy.policy enforce: rule="default" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy Rule [default] does not exist DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy 
enforce: rule="group:group_types_manage" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy Rule [group:group_types_manage] does not exist DEBUG oslo_policy.policy enforce: rule="default" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy Rule [default] does not exist DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} ERROR openstack_dashboard.dashboards.admin.rbac_policies.panel Call to list enabled services failed. This is likely due to a problem communicating with the Neutron endpoint. RBAC Policies panel will not be displayed. 
DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="get_metadef_namespaces" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="admin_required" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="identity:list_projects" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", 
"user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="context_is_admin" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="identity:get_domain" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="identity:list_projects" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="identity:get_user" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="identity:list_groups" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": 
"d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="identity:list_roles" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG openstack_dashboard.api.keystone Creating a new keystoneclient connection to http://key:5000/v3. DEBUG keystoneauth.session REQ: curl -g -i -X GET http://key:5000/v3 -H "Accept: application/json" -H "Forwarded: for=127.0.0.1;by=manage.py keystoneauth1/4.5.0 python-requests/2.27.1 CPython/3.6.9" -H "User-Agent: manage.py keystoneauth1/4.5.0 python-requests/2.27.1 CPython/3.6.9" DEBUG keystoneauth.session RESP: [200] Content-Length: 243 Content-Type: application/json Date: Mon, 02 May 2022 23:03:50 GMT Server: WSGIServer/0.1 Python/2.7.5 Vary: X-Auth-Token x-openstack-request-id: req-9a7e4a76-3409-4c15-81e7-702fb43e95e1 DEBUG keystoneauth.session RESP BODY: {"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": [{"href": "http://key:5000/v3/", "rel": "self"}]}} DEBUG keystoneauth.session GET call to http://key:5000/v3 used request id req-9a7e4a76-3409-4c15-81e7-702fb43e95e1 DEBUG oslo_policy.policy enforce: rule="identity:get_domain" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} DEBUG oslo_policy.policy enforce: rule="identity:list_projects" creds={"domain_id": "default", "is_admin": true, "project_id": "9c5a40cd282141b99994af3ca24b594f", "project_name": "MainProject", "roles": ["member", "test_member", "swiftoperator", "admin", "reader"], "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user_id": "d5728a4d25c1451897cbbed1a45e2136", "username": "admin"} target={"domain_id": "default", "group.domain_id": "default", "project.domain_id": "default", "project_id": "9c5a40cd282141b99994af3ca24b594f", "tenant_id": "9c5a40cd282141b99994af3ca24b594f", "user.domain_id": "default", "user_id": "d5728a4d25c1451897cbbed1a45e2136"} WARNING django.request Forbidden: /project/ WARNING django.server "GET /project/ HTTP/1.1" 403 26779 INFO django.server "GET /static/dashboard/js/output.d99d3c24731c.js HTTP/1.1" 304 0 /home/norouzi/Desktop/horizen/horizon/.tox/venv/lib/python3.6/site-packages/memcache.py:1381: ResourceWarning: unclosed if self._get_socket(): INFO django.server "GET /static/dashboard/js/angular_template_cache_preloads.2ce67c764ffb.js HTTP/1.1" 304 0 INFO django.server "GET /static/themes/material/js/material.hamburger.js HTTP/1.1" 304 0 INFO 
django.server "GET /static/dashboard/js/output.263e723015ab.js HTTP/1.1" 304 0 INFO django.server "GET /i18n/js/horizon+openstack_dashboard/ HTTP/1.1" 200 3195 WARNING django.server "GET /static/horizon/lib/roboto_fontface/fonts/Roboto/Roboto-Regular.woff2 HTTP/1.1" 404 2056 WARNING django.server "GET /static/horizon/lib/roboto_fontface/fonts/Roboto/Roboto-Bold.woff2 HTTP/1.1" 404 2047 /home/norouzi/Desktop/horizen/horizon/.tox/venv/lib/python3.6/site-packages/memcache.py:1381: ResourceWarning: unclosed if self._get_socket(): DEBUG keystoneauth.session REQ: curl -g -i -X GET http://key:5000/v3/users/d5728a4d25c1451897cbbed1a45e2136/projects? -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA256}6960397b1b7313e96806d1bc0bdcdefb094dc428aee959ac04d9677cb21f9797" WARNING django.server "GET /static/horizon/lib/roboto_fontface/fonts/Roboto/Roboto-Regular.woff HTTP/1.1" 404 2053 WARNING django.server "GET /static/horizon/lib/roboto_fontface/fonts/Roboto/Roboto-Bold.woff HTTP/1.1" 404 2044 WARNING django.server "GET /static/horizon/lib/roboto_fontface/fonts/Roboto/Roboto-Regular.ttf HTTP/1.1" 404 2050 WARNING django.server "GET /static/horizon/lib/roboto_fontface/fonts/Roboto/Roboto-Bold.ttf HTTP/1.1" 404 2041 DEBUG keystoneauth.session RESP: [200] Content-Length: 965 Content-Type: application/json Date: Mon, 02 May 2022 23:03:51 GMT Server: WSGIServer/0.1 Python/2.7.5 Vary: X-Auth-Token x-openstack-request-id: req-d646a100-a5e6-4cc6-b95e-103acdb48453 DEBUG keystoneauth.session RESP BODY: {"links": {"self": "http://key:5000/v3/users/d5728a4d25c1451897cbbed1a45e2136/projects", "previous": null, "next": null}, "projects": [{"is_domain": false, "description": "", "links": {"self": "http://key:5000/v3/projects/1b33749815924a89b7778fd43af042ae"}, "tags": [], "enabled": true, "id": "1b33749815924a89b7778fd43af042ae", "parent_id": "default", "domain_id": "default", "name": "test"}, {"is_domain": false, "description": "Bootstrap project for initializing the cloud.", "links": {"self": "http://key:5000/v3/projects/4f60ef82336b4edf859a04a00b96e52c"}, "tags": [], "enabled": true, "id": "4f60ef82336b4edf859a04a00b96e52c", "parent_id": "default", "domain_id": "default", "name": "admin"}, {"is_domain": false, "description": "", "links": {"self": "http://key:5000/v3/projects/9c5a40cd282141b99994af3ca24b594f"}, "tags": [], "enabled": true, "id": "9c5a40cd282141b99994af3ca24b594f", "parent_id": "default", "domain_id": "default", "name": "MainProject"}]} DEBUG keystoneauth.session GET call to identity for http://key:5000/v3/users/d5728a4d25c1451897cbbed1a45e2136/projects used request id req-d646a100-a5e6-4cc6-b95e-103acdb48453 INFO django.server "GET /header/ HTTP/1.1" 200 114 -------------- next part -------------- A non-text attachment was scrubbed... Name: openstack.jpg Type: image/jpeg Size: 37524 bytes Desc: not available URL: From tkajinam at redhat.com Tue May 3 15:16:23 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 4 May 2022 00:16:23 +0900 Subject: [puppet] Gate blocker: Installation of the systemtap package fails Message-ID: Hello, We are currently facing consistent failure in centos stream 9 integration jobs. which is caused by the new dyninst package. I've already reported the issue in bz[1] and we are currently waiting for the updated systemtap package. 
Please avoid rechecking until the package is released.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2079892

If the fix is not released within the next few days, we can merge the temporary workaround to unblock our jobs.
https://review.opendev.org/c/openstack/puppet-openstack-integration/+/840188

Last week we also saw centos stream 8 jobs consistently failing, but it seems these jobs were already fixed by the new python3-qt5 package.

[2] https://bugzilla.redhat.com/show_bug.cgi?id=2079895

Thank you,
Takashi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jonathan.rosser at rd.bbc.co.uk Tue May 3 17:12:47 2022
From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser)
Date: Tue, 3 May 2022 18:12:47 +0100
Subject: [openstack-ansible] Re: OSA log location/enabling
In-Reply-To: <1614624012.6087414.1651588128263@mail.yahoo.com>
References: <1614624012.6087414.1651588128263.ref@mail.yahoo.com> <1614624012.6087414.1651588128263@mail.yahoo.com>
Message-ID:

Hi Derek,

The default in openstack-ansible is to run as many services as possible using systemd, and you will find the logs in the system journal. Use systemctl / journalctl to see the status of the services and read the logs as necessary.

Regards,
Jonathan.

On 03/05/2022 15:28, Derek O keeffe wrote:
> Hi all,
>
> So after a few days of getting our config files correct we have a
> working horizon dashboard. We have created a network and flavour so we
> could spin up an instance to test what we've done so far.
>
> The issue is the image upload fails with "Unable to create the
> image." and when we attached to the glance container there are no log
> files. In the openstack_user_config file we didn't specify a syslog
> server so were under the impression it would store them on the
> container or infra node.
>
> Maybe they need to be explicitly enabled somewhere that we've missed,
> as the neutron & openvswitch logs are in /var/log/ on the infra node.
>
> If anyone has any info as to viewing or enabling the logs that would
> be greatly appreciated.
>
> Regards,
> Derek
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From oliver.weinmann at me.com Tue May 3 18:22:35 2022
From: oliver.weinmann at me.com (Oliver Weinmann)
Date: Tue, 3 May 2022 20:22:35 +0200
Subject: Magnum metrics-server not working
Message-ID:

Hi,

Is the metrics-server broken by default in magnum? I tried for several hours to get it running, but running kubectl top nodes fails.

Message: failing or missing response from https://10.100.2.4:8443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.2.4:8443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

I also googled a lot but only found similar issues for k8s clusters not deployed via magnum. I remembered from my other cluster that I had to enable enable-aggregator-routing=true in kube-apiserver and --kubelet-insecure-tls for the metrics-server.

But this also doesn't help. Any help would be highly appreciated.

Best regards,
Oliver
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lokendrarathour at gmail.com Wed May 4 02:30:01 2022
From: lokendrarathour at gmail.com (Lokendra Rathour)
Date: Wed, 4 May 2022 08:00:01 +0530
Subject: Openstack Wallaby Overcloud Deployment- ERROR
In-Reply-To:
References:
Message-ID:

Hi Herald,
Hope you are doing well. Please check the error once. Any information in this regard would be helpful.
-Lokendra On Tue, 3 May 2022, 11:10 Lokendra Rathour, wrote: > > Hi, Team getting this error during Openstack Wallaby Deployment: > > Please check once and suggest your valuable input. > r/log/heat-1651554808.1084332.log > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured > while running the command: ValueError: Failed to deploy: ERROR: HEAT-E99001 > Service neutron is not available for resource type > OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network > endpoint is not in service catalog. > Traceback (most recent call last): > > File "/usr/lib/python3.6/site-packages/heat/common/context.py", line > 416, in wrapped > return func(self, ctx, *args, **kwargs) > > File "/usr/lib/python3.6/site-packages/heat/engine/service.py", line > 1300, in validate_template > validate_res_tmpl_only=True) > > File "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line > 160, in wrapper > result = f(*args, **kwargs) > > File "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 971, > in validate > result = res.validate_template() > > File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line > 1882, in validate_template > self.t.resource_type > > File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line > 200, in _validate_service_availability > raise ex > > heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron > is not available for resource type > OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network > endpoint is not in service catalog. > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent > call last): > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 394, in _try_overcloud_deploy_with_compat_yaml > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > deployment_options=deployment_options) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 164, in _heat_deploy > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud self.working_dir) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/workflows/parameters.py", > line 134, in check_deprecated_parameters > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud valid=True) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/workflows/roles.py", line > 49, in get_roles > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud files, env_files) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 1829, in > build_stack_data > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud result = > orchestration_client.stacks.validate(**fields) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heatclient/v1/stacks.py", line 350, in > validate > 2022-05-03 
10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud resp = > self.client.post(url, **args) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heatclient/common/http.py", line 292, in > post > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return > self.client_request("POST", url, **kwargs) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heatclient/common/http.py", line 282, in > client_request > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud resp, body = > self.json_request(method, url, **kwargs) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heatclient/common/http.py", line 271, in > json_request > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud resp = > self._http_request(url, method, **kwargs) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heatclient/common/http.py", line 234, in > _http_request > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud raise > exc.from_response(resp) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > heatclient.exc.HTTPBadRequest: ERROR: HEAT-E99001 Service neutron is not > available for resource type > OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network > endpoint is not in service catalog. > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent > call last): > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heat/common/context.py", line 416, in > wrapped > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return func(self, > ctx, *args, **kwargs) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heat/engine/service.py", line 1300, in > validate_template > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > validate_res_tmpl_only=True) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in > wrapper > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud result = f(*args, > **kwargs) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 971, in > validate > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud result = > res.validate_template() > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 
2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in > validate_template > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud self.t.resource_type > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 200, in > _validate_service_availability > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud raise ex > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron > is not available for resource type > OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network > endpoint is not in service catalog. > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud During handling of the > above exception, another exception occurred: > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent > call last): > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, > self).run(parsed_args) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in > run > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, > self).run(parsed_args) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = > self.take_action(parsed_args) or 0 > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 1226, in take_action > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud user_tht_root, > created_env_files) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 357, in deploy_tripleo_heat_templates > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > deployment_options=deployment_options) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 397, in _try_overcloud_deploy_with_compat_yaml > 2022-05-03 10:44:41.119 548199 ERROR > 
tripleoclient.v1.overcloud_deploy.DeployOvercloud raise > ValueError(messages) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud ValueError: Failed to > deploy: ERROR: HEAT-E99001 Service neutron is not available for resource > type OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron > network endpoint is not in service catalog. > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent > call last): > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heat/common/context.py", line 416, in > wrapped > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return func(self, > ctx, *args, **kwargs) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heat/engine/service.py", line 1300, in > validate_template > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > validate_res_tmpl_only=True) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in > wrapper > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud result = f(*args, > **kwargs) > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 971, in > validate > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud result = > res.validate_template() > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in > validate_template > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud self.t.resource_type > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 200, in > _validate_service_availability > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud raise ex > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron > is not available for resource type > OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network > endpoint is not in service catalog. 
> 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.119 548199 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-03 10:44:41.157 548199 ERROR openstack [-] Failed to deploy: > ERROR: HEAT-E99001 Service neutron is not available for resource type > OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network > endpoint is not in service catalog. > Traceback (most recent call last): > > File "/usr/lib/python3.6/site-packages/heat/common/context.py", line > 416, in wrapped > return func(self, ctx, *args, **kwargs) > > File "/usr/lib/python3.6/site-packages/heat/engine/service.py", line > 1300, in validate_template > validate_res_tmpl_only=True) > > File "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line > 160, in wrapper > result = f(*args, **kwargs) > > File "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 971, > in validate > result = res.validate_template() > > File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line > 1882, in validate_template > self.t.resource_type > > File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line > 200, in _validate_service_availability > raise ex > > heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron > is not available for resource type > OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network > endpoint is not in service catalog. > > 2022-05-03 10:44:41.209 548199 INFO osc_lib.shell [-] END return value: 1 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Wed May 4 06:57:44 2022 From: syedammad83 at gmail.com (Ammad Syed) Date: Wed, 4 May 2022 11:57:44 +0500 Subject: Magnum metrics-server not working In-Reply-To: References: Message-ID: Hi, You can use yoga release of magnum. Metrics server has fixed in it. On the older release use metrics server enable label to false. Ammad On Tue, May 3, 2022 at 11:36 PM Oliver Weinmann wrote: > Hi, > > Is the metrics-server broken by default in magnum? I tried for several > hours to get it running, but running kubectl top nodes fails. > > Message: failing or missing response from > https://10.100.2.4:8443/apis/metrics.k8s.io/v1beta1: Get " > https://10.100.2.4:8443/apis/metrics.k8s.io/v1beta1": net/http: request > canceled while waiting for connection (Client.Timeout exceeded while > awaiting headers) > > I also googled a lot but only found similar issues for k8s clusters not > deployed via magnum. I remembered from my other cluster that I had to > enable enable-aggregator-routing=true in kupe-apiserver and --kubelet-insecure-tls > for the metrics-server. > > But this also doesn?t help. Any help would be highly appreciated. > > Best regards, > Oliver > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Wed May 4 07:04:22 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Wed, 4 May 2022 09:04:22 +0200 Subject: Magnum metrics-server not working In-Reply-To: References: Message-ID: <96370E58-E99F-426F-9990-DC12233AAA5D@me.com> Hi ammad, Thanks for your reply. I will try this. Best regards Von meinem iPhone gesendet > Am 04.05.2022 um 09:00 schrieb Ammad Syed : > > ? > Hi, > > You can use yoga release of magnum. Metrics server has fixed in it. > > On the older release use metrics server enable label to false. 
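
For reference, a minimal sketch of what the advice quoted above ("use the metrics server enable label") can look like on the CLI for a pre-Yoga Magnum; the label name metrics_server_enabled and every other value below are assumptions to verify against your own Magnum release and images:

$ openstack coe cluster template create k8s-template-no-metrics \
      --coe kubernetes \
      --image fedora-coreos-35 \
      --external-network public \
      --labels metrics_server_enabled=false
$ openstack coe cluster create test-cluster \
      --cluster-template k8s-template-no-metrics \
      --master-count 1 --node-count 2
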
> > Ammad
>> On Tue, May 3, 2022 at 11:36 PM Oliver Weinmann wrote:
>> Hi,
>>
>> Is the metrics-server broken by default in magnum? I tried for several hours to get it running, but running kubectl top nodes fails.
>>
>> Message: failing or missing response from https://10.100.2.4:8443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.2.4:8443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
>>
>> I also googled a lot but only found similar issues for k8s clusters not deployed via magnum. I remembered from my other cluster that I had to enable enable-aggregator-routing=true in kube-apiserver and --kubelet-insecure-tls for the metrics-server.
>>
>> But this also doesn't help. Any help would be highly appreciated.
>>
>> Best regards,
>> Oliver
> --
> Regards,
>
> Syed Ammad Ali
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From senrique at redhat.com Wed May 4 11:59:22 2022
From: senrique at redhat.com (Sofia Enriquez)
Date: Wed, 4 May 2022 08:59:22 -0300
Subject: [cinder] Bug deputy report for week of 05-04-2022
Message-ID:

This is a bug report from 04-27-2022 to 05-04-2022.
Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting
-----------------------------------------------------------------------------------------
Medium
- https://bugs.launchpad.net/os-brick/+bug/1961613 "'Could not find any paths for the volume.' after instance restart." Unassigned.
Low
- https://bugs.launchpad.net/cinder/+bug/1970768 "Temporary volume accepts deletion while it is used." Fix merged on master.
- https://bugs.launchpad.net/cinder/+bug/1971483 "Temporary volume could be deleted with force." Fix proposed to master.
- https://bugs.launchpad.net/cinder/+bug/1970624 "Volume reset-state API validation state checking is incorrect." Fix proposed to master.
Wishlist
- https://bugs.launchpad.net/cinder/+bug/1971154 "rbd_store_chunk_size in megabytes is an unwanted limitation." Unassigned.

Cheers
--
Sofía Enriquez
she/her
Software Engineer, Red Hat PnT
IRC: @enriquetaso
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From clemens.hardewig at crandale.de Wed May 4 12:29:28 2022
From: clemens.hardewig at crandale.de (Crandale)
Date: Wed, 4 May 2022 14:29:28 +0200
Subject: [horizon] Yoga: Syntax of new Parameter SYSTEM_SCOPE_SERVICES in local_settings.py
Message-ID: <7D1AAE3E-7A59-40A0-A142-2A897A9FD9CA@crandale.de>

Hi there,

We have started to analyze Openstack Yoga a bit, and there one of the major new features is the activation of scope-based tokens for regular use in nova. After some long-lasting back and forth in configuring our role assignments and policies we could make it work on one of our test environments (Ubuntu) via the Openstack SDK; however, we are still struggling with some system-scoped API calls to nova from horizon.

We have an admin user for the domain 'Default' who has the role 'admin' assigned for 'system all':

+-------+---------------+-------+---------+--------+--------+-----------+
| Role | User | Group | Project | Domain | System | Inherited |
+-------+---------------+-------+---------+--------+--------+-----------+
| admin | admin at Default | | | | all | False |
+-------+---------------+-------+---------+--------+--------+-----------+

We have configured in local_settings.py:

SYSTEM_SCOPE_SERVICES = ['compute', 'image', 'volume', 'network']
(Note: this config line has been reverse engineered from horizon source code, as the syntax is nowhere to be found in the docs yet - so: not sure if it is correct)

Policy files are identical for horizon as for the services.

For the user admin, we then get an additional field in the domain/project top line menu adding a 'system scope' switch (this is how we understand it should look) and - when switching to system scope - also a system menu in the sidebar (also as expected).

If we then go to System->System Information to see the nova service list, we get an error 'Unable to get nova services list'; the given reason is an error: 'Policy doesn't allow os_compute_api:os-services:list to be performed (HTTP 403)'. Information for network and volume services is shown normally (here scoped tokens are not activated yet).

Further analysis indicated that horizon is still using a project-scoped token and not a system-scoped one for these requests although 'system scope' is active.

Putting the same request from the Openstack SDK with the same user admin results in

$ openstack compute service list
/usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/usr/lib/python3/dist-packages/secretstorage/util.py:19: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
+----+------------------+-------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-------------+----------+---------+-------+----------------------------+
| 4 | nova-consoleauth | controller | nova | enabled | down | 2019-10-31T14:59:33.000000 |
| 5 | nova-scheduler | controller | nova | enabled | up | 2022-05-04T08:52:48.000000 |
| 6 | nova-conductor | controller | nova | enabled | up | 2022-05-04T08:52:42.000000 |
| 12 | nova-compute | compute3 | Crandale | enabled | up | 2022-05-04T08:52:40.000000 |
| 13 | nova-conductor | controller3 | nova | enabled | down | 2020-06-28T14:45:31.000000 |
| 14 | nova-scheduler | controller3 | nova | enabled | down | 2020-06-28T14:45:24.000000 |
+----+------------------+-------------+----------+---------+-------+----------------------------+

Which indicates that role assignments to user admin are correct. The same command with --debug also proves that a system-scoped token is generated.

Before I consider opening a bug towards Horizon: Could someone indicate to me whether the syntax of the config needs some adaptations to make it work, or confirm that it is correct?

Is there any other aspect we overlooked?

I am looking forward to your reply

Best regards

Clemens
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 3911 bytes Desc: not available URL: From fungi at yuggoth.org Wed May 4 12:39:24 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 4 May 2022 12:39:24 +0000 Subject: [api-sig][i18n][infra][tc] Cleaning up more defunct mailing lists In-Reply-To: <20220224194126.3nh7karpemefjfvu@yuggoth.org> References: <20220224194126.3nh7karpemefjfvu@yuggoth.org> Message-ID: <20220504123923.enjfykr7wqfe4obt@yuggoth.org> On 2022-02-24 19:41:27 +0000 (+0000), Jeremy Stanley wrote: > A quick audit of the mailing lists we host at lists.openstack.org > turned up the following 11 which have had no new posts for at least > three years now: > > * openstack-api-consumers > * openstack-de > * openstack-el > * openstack-i18n-fr > * openstack-ir > * openstack-personas > * openstack-ru > * openstack-sos > * openstack-tw > * openstack-vi > * third-party-announce > > I'd like to clean them up by removing their list configurations, but > leaving any public archives intact. Are there any objections? [...] It's been a couple of months with no objections, so I've retired the above lists now. If there are other mailing lists at https://lists.openstack.org/ which you think may be due for retirement, please don't hesitate to let me know. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From wu.wenxiang at 99cloud.net Wed May 4 14:41:26 2022 From: wu.wenxiang at 99cloud.net (=?UTF-8?B?5ZC05paH55u4?=) Date: Wed, 04 May 2022 22:41:26 +0800 Subject: [skyline] SSO support In-Reply-To: References: Message-ID: <2A0F8DA5-4981-4862-91F1-295045533D1A@99cloud.net> Hello, Martin No plans for SSO support yet, you can open a ticket. We dev team would have a discussion & plan it according to the tickets. https://bugs.launchpad.net/skyline-apiserver/+bugs Thanks Best Regards Wenxiang Wu From: on behalf of Martin Chlumsky Date: Tuesday, May 3, 2022 at 22:07 To: Subject: [skyline] SSO support Hello, We are evaluating Skyline and one major blocker is Single sign-on (SSO) support (we currently federate Keystone/Horizon with AzureAD). I searched through the git repositories and couldn't find any relevant mention of SSO or federation (openidc or saml). Are there plans to support this feature (or is it supported and I just missed it somehow)? Thank you, Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed May 4 17:15:49 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 04 May 2022 10:15:49 -0700 Subject: Retiring and shutting down the ethercalc.openstack.org service Message-ID: Hello, The https://ethercalc.openstack.org service has become problematic for several reasons: * Our configuration management for the service is still based on Puppet and needs to be modernized to continue running the service. * The service itself isn't well maintained. Its build system is apparently broken [0]. * The service does not capture a history of changes like etherpad, confusing users. * The service crashes semi frequently. The major known user of this service was the PTG planning process. We've talked to the folks that help organize and plan the PTG, and they think other tools (like PTGBot) can be used instead without major trouble. Considering all of this, the OpenDev team would like to retire and shutdown this service. 
The current plan is to keep the service up and running until the end of the month. Then on May 31, 2022 shut down the server hosting the service, snapshot this server, and finally delete it. Please back up any data you would like to preserve before May 31, 2022. [0] https://github.com/audreyt/ethercalc/pull/781 If you have any questions or concerns feel free to respond to this thread or reach out to the OpenDev team. Clark From manchandavishal143 at gmail.com Wed May 4 17:29:17 2022 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Wed, 4 May 2022 22:59:17 +0530 Subject: [horizon] Yoga: Syntax of new Parameter SYSTEM_SCOPE_SERVICES in local_settings.py In-Reply-To: <7D1AAE3E-7A59-40A0-A142-2A897A9FD9CA@crandale.de> References: <7D1AAE3E-7A59-40A0-A142-2A897A9FD9CA@crandale.de> Message-ID: Hi Crandale, On Wed, May 4, 2022 at 6:00 PM Crandale wrote: > Hi there, > > We have started to analyze Openstack Yoga a bit and there, one of the > major new feature is the activation of scope based token for regular use in > nova. While after some long lasting back and forth in configuring our role > assignments and policies we could make it work on one of our test > environments (Ubuntu) via Openstack SDK, however, we are still struggling > with some system scoped API calls to nova from horizon. > > We have an admin user for the domain 'Default' who has set the role > ?admin' for 'system all': > +-------+---------------+-------+---------+--------+--------+-----------+ > | Role | User | Group | Project | Domain | System | Inherited | > +-------+---------------+-------+---------+--------+--------+-----------+ > | admin | admin at Default | | | | all | False | > +-------+---------------+-------+---------+--------+--------+-----------+ > > We have configured in local_settings.py: > > SYSTEM_SCOPE_SERVICES = ['compute', 'image', 'volume', 'network?] > > (Note: this config line has been reverse engineered from horizon source > code as the syntax is nowhere possible to be found in the docs yet ? - so: > not sure if it is correct) > > Yes, the above syntax is correct to enable System Scope for different OpenStack services in the horizon. You can refer to horizon documentation for more info. [1]. I will push a patch to add more information about this in horizon docs. [1] https://docs.openstack.org/horizon/latest/configuration/settings.html#system-scope-services Policy files are identical for horizon as for the services. > > For the user admin, we then get an additional field in the domain/project > top line menu adding a ?system scope? switch (this is what we understand > how it should look like) and - when switching to system scope - also a > system menu in the sidebar (also as expected). > > If we then go to System->Systeminformation to see the nova service list, > we get an error ?Unable to get nova services list?, given reason is an > error: 'Policy doesn?t allow os_compute_api:os-services:list to be > performed (HTTP 403)?. Informations for network and volume services are > shown normally (here scoped tokens are not activated yet). > > Further analysis indicated that horizon is using still a project-scoped > token and not a system-scoped one for these requests although ?system > scope? is active. > > It is a known issue in the horizon, that's why we disable System Scope Support by default. As of now, only keystone and a few neutron panels work if you enable System Scope Support in the horizon. Horizon team plans to fix this issue once it is fixed on the nova side. 
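
For anyone who wants to reproduce that comparison outside Horizon, a minimal sketch of issuing the same request with a system-scoped token from the CLI; the OS_SYSTEM_SCOPE variable and the exact set of project variables to unset are assumptions to verify against your client versions and environment:

$ unset OS_PROJECT_NAME OS_PROJECT_ID OS_PROJECT_DOMAIN_NAME
$ export OS_SYSTEM_SCOPE=all
$ openstack token issue              # the resulting token should carry system scope, not a project_id
$ openstack compute service list     # exercises os_compute_api:os-services:list with that token
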
Putting the same request from Openstack SDK with the same user admin > results in > > $ openstack compute service list > /usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: > CryptographyDeprecationWarning: int_from_bytes is deprecated, use > int.from_bytes instead > from cryptography.utils import int_from_bytes > /usr/lib/python3/dist-packages/secretstorage/util.py:19: > CryptographyDeprecationWarning: int_from_bytes is deprecated, use > int.from_bytes instead > from cryptography.utils import int_from_bytes > > +----+------------------+-------------+----------+---------+-------+----------------------------+ > | ID | Binary | Host | Zone | Status | State | > Updated At | > > +----+------------------+-------------+----------+---------+-------+----------------------------+ > | 4 | nova-consoleauth | controller | nova | enabled | down | > 2019-10-31T14:59:33.000000 | > | 5 | nova-scheduler | controller | nova | enabled | up | > 2022-05-04T08:52:48.000000 | > | 6 | nova-conductor | controller | nova | enabled | up | > 2022-05-04T08:52:42.000000 | > | 12 | nova-compute | compute3 | Crandale | enabled | up | > 2022-05-04T08:52:40.000000 | > | 13 | nova-conductor | controller3 | nova | enabled | down | > 2020-06-28T14:45:31.000000 | > | 14 | nova-scheduler | controller3 | nova | enabled | down | > 2020-06-28T14:45:24.000000 | > > +----+------------------+-------------+----------+---------+-------+?????????????--------------?+ > > Which indicates that role assignments to user admin are correct. The same > command with -?debug also proves that a system scoped token is generated. > > Before I consider to open a bug towards Horizon: Could someone indicate to > me whether the syntax of the config needs some adaptions to make it work or > confirm that it is correct? > > Is there any other aspect we overlooked? > > I am looking forward to your reply > > Best regards > > Clemens > > > > > Thanks & regards, Vishal Manchanda -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Wed May 4 19:36:01 2022 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 5 May 2022 01:06:01 +0530 Subject: Retiring and shutting down the ethercalc.openstack.org service In-Reply-To: References: Message-ID: On Wed, May 4, 2022 at 10:55 PM Clark Boylan wrote: > > Hello, > > The https://ethercalc.openstack.org service has become problematic for several reasons: > > * Our configuration management for the service is still based on Puppet and needs to be modernized to continue running the service. > * The service itself isn't well maintained. Its build system is apparently broken [0]. > * The service does not capture a history of changes like etherpad, confusing users. > * The service crashes semi frequently. > > The major known user of this service was the PTG planning process. We've talked to the folks that help organize and plan the PTG, and they think other tools (like PTGBot) can be used instead without major trouble. Considering all of this, the OpenDev team would like to retire and shutdown this service. > > The current plan is to keep the service up and running until the end of the month. Then on May 31, 2022 shut down the server hosting the service, snapshot this server, and finally delete it. Please back up any data you would like to preserve before May 31, 2022. > > [0] https://github.com/audreyt/ethercalc/pull/781 > > If you have any questions or concerns feel free to respond to this thread or reach out to the OpenDev team. 
Thanks a lot for keeping this up for all these years. I've no complaints, but manila contributors were ardent users of this ethercalc instance for tracking bugs and release activities through bugsquash events and hackathons (including the ongoing one [1]). Does anyone know of a decent free/opensource hosted spreadsheet alternative? [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028377.html > > Clark > From fungi at yuggoth.org Wed May 4 20:18:47 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 4 May 2022 20:18:47 +0000 Subject: Retiring and shutting down the ethercalc.openstack.org service In-Reply-To: References: Message-ID: <20220504201847.oygurwepzfeluvzj@yuggoth.org> On 2022-05-05 01:06:01 +0530 (+0530), Goutham Pacha Ravi wrote: [...] > Does anyone know of a decent free/opensource hosted spreadsheet > alternative? [...] There are public instances hosted at ethercalc.net and framacalc.org, but they likely suffer from most of the same deficiencies which have made us hesitant to do the work to continue running one ourselves. Hunting around briefly I find references to something similar called jQuery.sheet/WickedGrid, but it seems to have been abandoned for even longer (and I didn't spot any running public instance to try out). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Thu May 5 01:32:42 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 04 May 2022 20:32:42 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 5, 2022 at 1500 UTC Message-ID: <18091d90c9e.e172cdbd658539.2316267046333706209@ghanshyammann.com> Hello Everyone, Below is the agenda for Tomorrow's TC IRC meeting schedule at 1500 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * CFN overview from Jie Niu, China Mobile Research Institute (30 min) ** http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028345.html * 'tick', 'tock' release cadence (~30 min) ** Legal checks on using 'tick', 'tock' *** https://review.opendev.org/c/openstack/governance/+/840354 ** release notes discussion * Join leadership meeting with Board of Directors * Gate health check ** Fixing Zuul config error in OpenStack *** https://etherpad.opendev.org/p/zuul-config-error-openstack * New ELK service dashboard: e-r service ** https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global ** https://review.opendev.org/c/openstack/governance-sigs/+/835838 * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann From wodel.youchi at gmail.com Thu May 5 08:19:01 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 5 May 2022 09:19:01 +0100 Subject: [Kolla-ansible][xena] how to deploy multiple cinder backends? Message-ID: Hi, I want to deploy multiple cinder backends, ceph SSD pool for high performance VMS, ceph SAS pool for normal VMS and NFS share for small VMS and for cinder backup. Could that be done while deploying or it has to be done after the initial deployment? Could you assist me with some examples, sometimes the documentation is not that obvious to understand. Regards. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From geguileo at redhat.com  Thu May 5 08:35:43 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Thu, 5 May 2022 10:35:43 +0200
Subject: [sdk] creating a blank volume with image properties
In-Reply-To: <5c2b6b7ff0e168ac6afc765408fbc6c1@globalnoc.iu.edu>
References: <5c2b6b7ff0e168ac6afc765408fbc6c1@globalnoc.iu.edu>
Message-ID: <20220505083543.4s4us3huiocmv4lb@localhost>

On 03/05, John Ratliff wrote:
> We're standing up a second openstack cluster to do a leapfrog migration
> upgrade. We are using the same ceph storage backend. I need to create
> volumes in the new openstack cluster that use the underlying ceph rbd.
>
> Currently, we are creating a volume in the new openstack cluster, deleting
> the rbd created by openstack, and renaming the original rbd to match the new
> openstack mapping. This process loses the image properties from the original
> volumes. We are using virtio-scsi for the disk block device, and want to
> continue doing that in the new cluster. When we create the server using the
> new volume in the new openstack cluster, it reverts to using virtio-blk
> (/dev/vda).
>
> What can I do to keep using our image properties in the newly created
> volume? I thought I could just pass volume_image_metadata as an argument to
> create_volume(), but it doesn't work. I end up with an image that has no
> image metadata properties.
>
>     import openstack
>
>     openstack.enable_logging(debug=False)
>
>     conn = openstack.connect(cloud="test")
>
>     v = conn.create_volume(
>         name="testvol",
>         size=5,
>         wait=True,
>         bootable=True,
>         volume_image_metadata={
>             "hw_scsi_model": "virtio-scsi",
>             "hw_disk_bus": "scsi",
>             "hw_qemu_guest_agent": "yes",
>             "os_require_quiesce": "yes",
>             "hw_rng_model": "virtio",
>             "image_id": "00000000-0000-0000-0000-000000000000",
>         },
>     )
>
> What can I do to create this openstack volume while preserving my original
> volume properties?
>
> Since we don't care about the image, I thought about creating a blank image
> that the volume could be based on with those properties, but I'm looking for
> suggestions.
>
> Thanks.
>
> --John
>

Hi John,

I don't know the OSC counterparts of the cinder client commands, so I'll
only mention the cinder commands here.

I would recommend trying to use the "cinder unmanage" and "cinder manage"
commands (maybe in conjunction with "cinder manageable-list") instead of
modifying image names manually in the Ceph cluster.

There are also snapshot commands "snapshot-unmanage", "snapshot-manage",
and "snapshot-manageable-list".

For the glance image metadata you can use the "cinder image-metadata"
command to set the image metadata values you want.

So the migration script would:

- Get the current glance metadata and volume metadata in the source volume.
- Unmanage the source volume from the old Cinder deployment
- Manage the volume in the new Cinder deployment
- Set the glance image metadata

Cheers,
Gorka.
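A minimal sketch of that workflow with the cinder CLI. The volume name, the
backend host string "cinder-new@rbd#rbd" and the RBD image name below are
placeholders rather than values from this thread, and "cinder
manageable-list" may need a recent --os-volume-api-version:

  # Old deployment: release the volume from Cinder's control
  # (the RBD image itself stays in the Ceph pool).
  cinder unmanage testvol

  # New deployment: list the RBD images the backend can adopt, then
  # manage the one released above. --id-type defaults to source-name,
  # so the identifier is the image name inside the pool.
  cinder manageable-list cinder-new@rbd#rbd
  cinder manage --name testvol --bootable cinder-new@rbd#rbd \
      volume-00000000-0000-0000-0000-000000000000

  # Re-apply the glance image metadata on the newly managed volume.
  cinder image-metadata testvol set hw_scsi_model=virtio-scsi \
      hw_disk_bus=scsi hw_qemu_guest_agent=yes os_require_quiesce=yes \
      hw_rng_model=virtio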
From Danny.Webb at thehutgroup.com  Thu May 5 08:58:20 2022
From: Danny.Webb at thehutgroup.com (Danny Webb)
Date: Thu, 5 May 2022 08:58:20 +0000
Subject: [Kolla-ansible][xena] how to deploy multiple cinder backends?
In-Reply-To: 
References: 
Message-ID: 

Hi Wodel,

You simply need to create a cinder-volume.conf in your config override
directory with the backend blocks you want, e.g.:

$ cat cinder-volume.conf
[DEFAULT]
enabled_backends=nimble-az1,nimble-az2,ceph-nvme-az3
cross_az_attach = False

[nimble-az1]
san_ip = {{ nimble_az1_controller_vip }}
san_login = {{ nimble_admin_user }}
san_password = {{ nimble_admin_password }}
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver
volume_backend_name = nimble
backend_availability_zone = {{ availability_zone_1 }}
backend_host = {{ nimble_hostgroup }}
report_discard_supported = True
enable_unsupported_driver = True
suppress_requests_ssl_warnings = True
use_multipath_for_image_xfer = False

[nimble-az2]
san_ip = {{ nimble_az1_controller_vip }}
san_login = {{ nimble_admin_user }}
san_password = {{ nimble_admin_password }}
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver
volume_backend_name = nimble
backend_availability_zone = {{ availability_zone_2 }}
backend_host = {{ nimble_hostgroup }}
report_discard_supported = True
enable_unsupported_driver = True
suppress_requests_ssl_warnings = True
use_multipath_for_image_xfer = False

[ceph-nvme-az3]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = {{ ceph_cinder_pool_name }}
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = {{ ceph_cinder_user }}
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
report_discard_supported = True
image_upload_use_cinder_backend = True
volume_backend_name = high-performance
backend_availability_zone = {{ availability_zone_3 }}
backend_host = ceph-nvme

Please note that the ceph setup in kolla-ansible is fairly rigid and
assumes a single cinder pool and a single cinder ceph user. You are
probably better off defining your own ceph backend blocks if you need to
split across separate pools.

If you want to better understand the workflow kolla-ansible uses to deploy
cinder with ceph, here are the relevant sections:

https://github.com/openstack/kolla-ansible/blob/8371d979b6c07ec64eb56b5948c999672a67c6d5/ansible/roles/cinder/templates/cinder.conf.j2#L134
https://github.com/openstack/kolla-ansible/blob/stable/yoga/ansible/roles/cinder/tasks/external_ceph.yml

Cheers,
Danny

________________________________
From: wodel youchi
Sent: 05 May 2022 09:19
To: OpenStack Discuss
Subject: [Kolla-ansible][xena] how to deploy multiple cinder backends?

Hi,

I want to deploy multiple cinder backends: a Ceph SSD pool for
high-performance VMs, a Ceph SAS pool for normal VMs, and an NFS share for
small VMs and for cinder backup.

Can that be done at deployment time, or does it have to be done after the
initial deployment?

Could you assist me with some examples? Sometimes the documentation is not
that easy to follow.

Regards.

Danny Webb
Principal OpenStack Engineer
The Hut Group
Email: Danny.Webb at thehutgroup.com
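Following up on the note above about defining your own Ceph backend blocks,
a rough sketch of what the SSD/SAS split asked about earlier could look
like. The pool names, the cinder user and the variables are made-up
placeholders, and the matching ceph.conf plus client keyring still have to
be made available to the cinder-volume container (for example under your
node_custom_config directory):

  [DEFAULT]
  enabled_backends = ceph-ssd,ceph-sas

  [ceph-ssd]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph-ssd
  rbd_pool = volumes-ssd
  rbd_user = cinder
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}

  [ceph-sas]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph-sas
  rbd_pool = volumes-sas
  rbd_user = cinder
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}

Each backend can then be exposed through its own volume type, for example:

  openstack volume type create ssd
  openstack volume type set --property volume_backend_name=ceph-ssd ssd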
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kkchn.in at gmail.com  Thu May 5 09:16:11 2022
From: kkchn.in at gmail.com (KK CHN)
Date: Thu, 5 May 2022 14:46:11 +0530
Subject: Data Center Survival in case of Disaster / HW Failure in DC
Message-ID: 

List,

We are having an old cloud setup with OpenStack Ussuri using Debian OS
(Qemu/KVM). I know it is very old and we can't upgrade to new versions
right now.

The deployment is as follows.

A. 3 controller nodes (cum compute nodes; VMs are running on the
controllers too) in HA mode.

B. 6 separate compute nodes

C. 3 separate storage nodes with Ceph RBD

Questions:

1. In case of a sudden hardware failure of one or more controller, compute
or storage nodes, what immediate redundancy/recovery setup needs to be
employed?

2. In case of a H/W failure, our recovery needs to happen as soon as
possible, for example in less than 30 minutes after the first failure
occurs.

3. Are there setup options like a hot standby or similar, or what do we
need to employ?

4. How do we meet the RTO (< 30 minutes of downtime) and RPO (from the
exact point of the crash, all applications and data must be consistent)?

5. Please share your thoughts on reliable crash/fault-resistant
configuration options in the DC.

We have a remote DR setup right now in a remote location. I would also
like to know if there is a recommended way to bring the remote DR site up
and running automatically, or how to automate the services on the DR site
to meet the exact RTO and RPO.

Any thoughts are most welcome.

Regards,
Krish
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
Run the "placement-manage db || online_data_migrations" command. |+------------------------------------------------------------------+ Seems that negative value is a result that I get such values from tables consumer and allocations: mysql> select count(id), consumer_id from allocations group by consumer_id;...1035 rows in set (0.00 sec) mysql> select count(*) from consumers;+----------+| count(*) |+----------+| 21171 |+----------+1 row in set (0.04 sec) Unfortunately such warning cannot be solved by execution of suggested command( placement-manage db online_data_migrations) as it seems it adds records to consumers table - not to allocations, which looks like to be a problem here. I was following recommendations from this discussion: http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018536.html but unfortunately it doesn't solve the issue(even not changing a negative value). I'm just wondering if I skipped something important and you can suggest some (obvious?) solution. Thank you in advance for your time and help. Best regards, Jan -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu May 5 09:55:32 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 05 May 2022 11:55:32 +0200 Subject: [all][tc][Release Management] Improvements in project governance In-Reply-To: References: <1858624.taCxCBeP46@p1> Message-ID: <3198538.44csPzL39Z@p1> Hi, On ?roda, 20 kwietnia 2022 13:19:10 CEST El?d Ill?s wrote: > Hi, > > At the very same time at the PTG we discussed this on the Release > Management session [1] as well. To release deliverables without > significant content is not ideal and this came up in previous > discussions as well. On the other hand unfortunately this is the most > feasible solution from release management team perspective especially > because the team is quite small (new members are welcome! feel free to > join the release management team! :)). > > To change to independent release model is an option for some cases, but > not for every project. (It is less clear for consumers what version > is/should be used for which series; Fixing problems that comes up in > specific stable branches, is not possible; testing the deliverable > against a specific stable branch constraints is not possiblel; etc.) > > See some other comments inline. > > [1] https://etherpad.opendev.org/p/april2022-ptg-rel-mgt#L44 > > El?d > > On 2022. 04. 19. 18:01, Michael Johnson wrote: > > Comments inline. > > > > Michael > > > > On Tue, Apr 19, 2022 at 6:34 AM Slawek Kaplonski wrote: > >> Hi, > >> > >> > >> During the Zed PTG sessions in the TC room we were discussing some ideas how we can improve project governance. > >> > >> One of the topics was related to the projects which don't really have any changes in the cycle. Currently we are forcing to do new release of basically the same code when it comes to the end of the cycle. > >> > >> Can/Should we maybe change that and e.g. instead of forcing new release use last released version of the of the repo for new release too? > > In the past this has created confusion in the community about if a > > project has been dropped/removed from OpenStack. That said, I think > > this is the point of the "independent" release classification. > Yes, exactly as Michael says. > >> If yes, should we then automatically propose change of the release model to the "independent" maybe? > > Personally, I would prefer to send an email to the discuss list > > proposing the switch to independent. 
Patches can sometimes get merged > > before everyone gets to give input. Especially since the patch would > > be proposed in the "releases" project and may not be on the team's > > dashboards. > The release process catches libraries only (that had no merged change), > so the number is not that huge, sending a mail seems to be a fair option. > > (The process says: "Evaluate any libraries that did not have any change > merged over the cycle to see if it is time to transition them to the > independent release model > . > Note: client libraries (and other libraries strongly tied to another > deliverable) should generally follow their parent deliverable release > model, even if they did not have a lot of activity themselves).") > >> What would be the best way how Release Management team can maybe notify TC about such less active projects which don't needs any new release in the cycle? That could be one of the potential conditions to check project's health by the TC team. > > It seems like this would be a straight forward script to write given > > we already have tools to capture the list of changes included in a > > given release. > > There are a couple of good signals already for TC to catch inactive > projects, like the generated patches that are not merged, for example: > > https://review.opendev.org/q/topic:reno-yoga+is:open > https://review.opendev.org/q/topic:create-yoga+is:open > https://review.opendev.org/q/topic:add-xena-python-jobtemplates+is:open > > (Note that in the past not merged patches caused issues and discussing > with the TC resulted a suggestion to force-merge them to avoid future > issues) > > >> Another question is related to the projects which aren't really active and are broken during the final release time. We had such problem in the last cycle, see [1] for details. Should we still force pushing fixes for them to be able to release or maybe should we consider deprecation of such projects and not to release it at all? > > In the past we have simply not released projects that are broken and > > don't have people actively working on fixing them. It has been a > > signal to the community that if they value the project they need to > > contribute to it. > > Yes, that's a fair point, too, maybe those broken deliverables should > not be released at all. I'm not sure, but that might cause another > issues for release management tooling, though... > > Besides, during our PTG session we came to the conclusion that we need > another step in our process: > * "propose DNM changes on every repository by RequirementsFreeze (5 > weeks before final release) to check that tests are still passing with > the current set of dependencies" > Hopefully this will catch broken things well in advance. > > >> [1]http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027864.html > >> > >> > >> -- > >> > >> Slawek Kaplonski > >> > >> Principal Software Engineer > >> > >> Red Hat > Thx for all inputs in that topic so far. 
Here is my summary and conclusion of what was said in that thread: * we shouldn't try automatically switch such "inactive" projects to the independent model, and we should continue bumping versions of such projects every cycle as it makes many things easier, * Release Management team will test projects about 5 weeks before final release - that may help us find broken projects which then can be discussed and eventually marked as deprecated to not release broken code finally, * To check potentially inactive projects TC can: * finish script https://review.opendev.org/c/openstack/governance/+/810037[1] and use stats generate by that script to periodically check projects' health, * check projects with no merged generated patches, like: https://review.opendev.org/q/topic:reno-yoga+is:open[2] https://review.opendev.org/q/topic:create-yoga+is:open[3] https://review.opendev.org/q/topic:add-xena-python-jobtemplates+is:open[4] Feel free to add/change anything in that summary if I missed or misunderstood anything there or if You have any idea about other improvements we can do in that area. -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://review.opendev.org/c/openstack/governance/+/810037 [2] https://review.opendev.org/q/topic:reno-yoga+is:open [3] https://review.opendev.org/q/topic:create-yoga+is:open [4] https://review.opendev.org/q/topic:add-xena-python-jobtemplates+is:open -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From rafaelweingartner at gmail.com Thu May 5 12:14:13 2022 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Thu, 5 May 2022 09:14:13 -0300 Subject: [Kolla][CloudKitty] Wrong version in ubuntu-binary-cloudkitty-processor:yoga Message-ID: Hello guys, I was checking the upstream containers for CloudKittty for Yoga release; and, I noticed that the CloudKitty containers are using version 11.0.1, which is not the Yoga version of CloudKitty. I also checked Xena, and the same thing happens there. Do we need to define these component versions somewhere else for the Kolla containers that are built upstream? -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Thu May 5 12:37:51 2022 From: eblock at nde.ag (Eugen Block) Date: Thu, 05 May 2022 12:37:51 +0000 Subject: Data Center Survival in case of Disaster / HW Failure in DC In-Reply-To: Message-ID: <20220505123751.Horde.BvzneKkaM8LoS3CA89DFaJx@webmail.nde.ag> Hi, first, I wouldn't run VMs on control nodes, that way you mix roles (control and compute) and in case that one control node fails the VMs are not available. That would not be the case if the control node is only a control node and is also part of a highly available control plane (like yours appears to be). Depending on how your control plane is defined, the failure of one control node should be tolerable. There has been some work on making compute nodes highly available but I don't know the current status. But in case a compute node fails but nova is still responsive a live or cold migration could still be possible to evacuate that host. If a compute node fails and is unresponsive you'll probably need some DB tweaking to revive VMs on a different node. 
So you should have some spare resources to be able to recover from a
compute node failure.
As for ceph it should be configured properly to sustain the loss of a
defined number of nodes or disks, I don't know your requirements. If your
current cluster has "only" 3 nodes you probably run replicated pools with
size 3 (I hope) with min_size 2 and failure-domain host. You could sustain
one node failure without clients noticing it, a second node would cause
the cluster to pause. Also you don't have the possibility to recover from
a node failure until it is up again, meaning the degraded PGs can't be
recovered on a different node. So this also depends on your actual
resiliency requirements. If you have a second site you could use rbd
mirroring [1] to sync all rbd images between sites. In case the primary
site goes down entirely you could switch over to the second site by
promoting the rbd images there.
So you see there is plenty of information to cover and careful planning
is required.

Regards,
Eugen

[1] https://docs.ceph.com/en/latest/rbd/rbd-mirroring/

Zitat von KK CHN :

> List,
>
> We are having an old cloud setup with OpenStack Ussuri usng Debian OS,
> (Qemu KVM ). I know its very old and we can't upgrade to to new versions
> right now.
>
> The Deployment is as follows.
>
> A. 3 Controller in (cum compute nodes . VMs are running on controllers
> too..) in HA mode.
>
> B. 6 separate Compute nodes
>
> C. 3 separate Storage node with Ceph RBD
>
> Question is
>
> 1. In case of any Sudden Hardware failure of one or more controller node
> OR Compute node OR Storage Node what will be the immediate redundant
> recovery setup need to be employed ?
>
> 2. In case H/W failure our recovery need to as soon as possible. For
> example less than30 Minutes after the first failure occurs.
>
> 3. Is there setup options like a hot standby or similar setups or what we
> need to employ ?
>
> 4. To meet all RTO (< 30 Minutes down time ) and RPO(from the exact point
> of crash all applications and data must be consistent) .
>
> 5. Please share your thoughts for reliable crash/fault resistance
> configuration options in DC.
>
>
> We have a remote DR setup right now in a remote location. Also I would
> like to know if there is a recommended way to make the remote DR site
> Automatically up and run ? OR How to automate the service from DR site
> to meet exact RTO and RPO
>
> Any thoughts most welcom.
>
> Regards,
> Krish
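To make the rbd mirroring suggestion above a bit more concrete, a rough
sketch of the snapshot-based variant between two clusters. The pool name
"volumes", the site names and the image name are placeholders; an
rbd-mirror daemon has to be running on the receiving cluster, and the
linked documentation covers the details:

  # On both clusters: enable mirroring on the pool in image mode.
  rbd mirror pool enable volumes image

  # On site-a: create a bootstrap token, then import it on site-b.
  rbd mirror pool peer bootstrap create --site-name site-a volumes > token
  rbd mirror pool peer bootstrap import --site-name site-b volumes token

  # Enable snapshot-based mirroring per image (e.g. a cinder volume).
  rbd mirror image enable volumes/volume-<uuid> snapshot

  # Failover: demote on the old primary (if it is still reachable),
  # then promote on the DR site (--force for a non-orderly failover).
  rbd mirror image demote volumes/volume-<uuid>
  rbd mirror image promote volumes/volume-<uuid>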
From marios at redhat.com  Thu May 5 14:03:23 2022
From: marios at redhat.com (Marios Andreou)
Date: Thu, 5 May 2022 17:03:23 +0300
Subject: [TripleO] gate blocker please hold rechecks tripleo-heat-templates (openstack-tox-tht)
Message-ID: 

Hello

we have a gate blocker for tripleo-heat-templates affecting master and
stable/wallaby at https://bugs.launchpad.net/tripleo/+bug/1971703
If you have failing openstack-tox-tht please hold rechecks until we
find some resolution

thank you!

marios

From stephane.chalansonnet at acoss.fr  Thu May 5 14:31:09 2022
From: stephane.chalansonnet at acoss.fr (CHALANSONNET Stéphane (Acoss))
Date: Thu, 5 May 2022 14:31:09 +0000
Subject: Data Center Survival in case of Disaster / HW Failure in DC
Message-ID: 

Hello,

First, OpenStack isn't VMware. Critical applications need to be designed
to be multi-region/AZ aware. The best approach is two regions with two
control planes (Keystone and a Galera cluster on each, not shared), two
storage systems, and the application split across both.

You can also do it with two AZs, but then the control plane (Galera
cluster and RabbitMQ) is shared between the two AZs. If you are trying to
do the same thing as a stretched VMware cluster on a dual site, it's
complicated; if you have 3 sites (low latencies, <1ms) you can!

1. In case of any sudden hardware failure of one or more controller,
compute or storage nodes, what will be the immediate redundant recovery
setup that needs to be employed?

=> You need to split the control plane across multiple AZs (3 is best for
the Galera cluster and RabbitMQ, 2 if you have VMware-style HA).
=> Network nodes work with keepalived, so they also need to be split
across the two AZs/regions; DHCP and metadata services should also be
redundant across multiple network nodes.
=> Compute: HA OpenStack (like VMware HA) seems to work since Wallaby with
Masakari. Before that, you need to do the job yourself if you need to
fail over instances.

The storage is also the biggest problem. You need stretched storage that
works with Cinder: Ceph can do the job, but you also need a third site
(the monitor quorum needs 2 nodes alive). The NetApp solution seems to
work well (Trident NFS).

2. In case of H/W failure our recovery needs to be as soon as possible,
for example less than 30 minutes after the first failure occurs.

The Galera cluster and RabbitMQ need to be checked first. With two sites,
30 minutes is really a big challenge; with 3 sites it should be fine!

3. Are there setup options like a hot standby or similar setups, or what
do we need to employ?

2 AZs, 2 storage systems and the application split across both is the
best practice.

4. To meet the RTO (< 30 minutes of downtime) and RPO (from the exact
point of crash, all applications and data must be consistent).

Regards,
Stéphane Chalansonnet
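As a small follow-up to the point above about checking Galera and RabbitMQ
first, a couple of quick health checks, assuming direct access to the
database and RabbitMQ nodes:

  # Galera: is every node still in the cluster, and is this node part
  # of the primary component?
  mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
  mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';"

  # RabbitMQ: are all members of the cluster running?
  rabbitmqctl cluster_status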
From tim.bell at cern.ch  Thu May 5 14:46:25 2022
From: tim.bell at cern.ch (Tim Bell)
Date: Thu, 5 May 2022 16:46:25 +0200
Subject: Data Center Survival in case of Disaster / HW Failure in DC
In-Reply-To: <20220505123751.Horde.BvzneKkaM8LoS3CA89DFaJx@webmail.nde.ag>
References: <20220505123751.Horde.BvzneKkaM8LoS3CA89DFaJx@webmail.nde.ag>
Message-ID: <04FAD905-7A53-4DA7-AEDE-FD8E8B40EA61@cern.ch>

Interesting - we're starting work on exactly the same analysis at the moment.

We're looking at a separate region for the recovery site; this guarantees
no dependencies in the control plane.

Ideally, we'd be running active/active for the most critical applications
(following the AWS recommendations
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html)
but there are some issues we're working through (such as how to replicate
block/object stores between regions).

Keeping images/projects in sync between regions also does not seem simple,
especially where you want different quotas (e.g. you can have 100 cores in
the production site but only 10 by default in the recovery site).

As in any DR plan, testing is key - we've started to have a look at
security groups to do a simulated disconnect test and see what's not yet
in the recovery site.

Does anyone have some best practise recommendations or tools for OpenStack
disaster recovery?

Cheers
Tim

> On 5 May 2022, at 14:37, Eugen Block wrote:
>
> Hi,
>
> first, I wouldn't run VMs on control nodes, that way you mix roles
> (control and compute) and in case that one control node fails the VMs
> are not available. That would not be the case if the control node is
> only a control node and is also part of a highly available control
> plane (like yours appears to be). Depending on how your control plane
> is defined, the failure of one control node should be tolerable.
> So you see there is plenty of information to cover and careful planning is required. > > Regards, > Eugen > > [1] https://docs.ceph.com/en/latest/rbd/rbd-mirroring/ > > Zitat von KK CHN : > >> List, >> >> We are having an old cloud setup with OpenStack Ussuri usng Debian OS, >> (Qemu KVM ). I know its very old and we can't upgrade to to new versions >> right now. >> >> The Deployment is as follows. >> >> A. 3 Controller in (cum compute nodes . VMs are running on controllers >> too..) in HA mode. >> >> B. 6 separate Compute nodes >> >> C. 3 separate Storage node with Ceph RBD >> >> Question is >> >> 1. In case of any Sudden Hardware failure of one or more controller node >> OR Compute node OR Storage Node what will be the immediate redundant >> recovery setup need to be employed ? >> >> 2. In case H/W failure our recovery need to as soon as possible. For >> example less than30 Minutes after the first failure occurs. >> >> 3. Is there setup options like a hot standby or similar setups or what we >> need to employ ? >> >> 4. To meet all RTO (< 30 Minutes down time ) and RPO(from the exact point >> of crash all applications and data must be consistent) . >> >> 5. Please share your thoughts for reliable crash/fault resistance >> configuration options in DC. >> >> >> We have a remote DR setup right now in a remote location. Also I would >> like to know if there is a recommended way to make the remote DR site >> Automatically up and run ? OR How to automate the service from DR site >> to meet exact RTO and RPO >> >> Any thoughts most welcom. >> >> Regards, >> Krish > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Thu May 5 15:24:28 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Thu, 5 May 2022 17:24:28 +0200 Subject: Magnum metrics-server not working In-Reply-To: <96370E58-E99F-426F-9990-DC12233AAA5D@me.com> References: <96370E58-E99F-426F-9990-DC12233AAA5D@me.com> Message-ID: <2a607c51-f173-b5d2-5327-ab5f9dd53f5b@me.com> Hi, thanks a lot for the hint. It works fine under Yoga as you said: $ kubectl get apiservice | grep metrics v1beta1.metrics.k8s.io kube-system/magnum-metrics-server?? True??????? 9m11s $ kubectl top nodes NAME???????????????????????????? CPU(cores)?? CPU% MEMORY(bytes)?? MEMORY% k8s-test-dhgkjeop7o3v-master-0?? 139m???????? 3% 1227Mi????????? 15% k8s-test-dhgkjeop7o3v-node-0???? 29m????????? 1% 495Mi?????????? 12% k8s-test-dhgkjeop7o3v-node-1???? 38m????????? 1% 567Mi?????????? 14% just out of curiosity, will this be fixed in e.g. Wallaby or Xena as well? Cheers, Oliver Am 04.05.2022 um 09:04 schrieb Oliver Weinmann: > Hi ammad, > > Thanks for your reply. I will try this. > > Best regards > > Von meinem iPhone gesendet > >> Am 04.05.2022 um 09:00 schrieb Ammad Syed : >> >> ? >> Hi, >> >> You can use yoga release of magnum. Metrics server has fixed in it. >> >> On the older release use metrics server enable label to false. >> >> Ammad >> On Tue, May 3, 2022 at 11:36 PM Oliver Weinmann >> wrote: >> >> Hi, >> >> Is the metrics-server broken by default in magnum? I tried for >> several hours to get it running, but running kubectl top nodes fails. >> >> Message:?????????????? 
failing or missing response from >> https://10.100.2.4:8443/apis/metrics.k8s.io/v1beta1:?Get >> "https://10.100.2.4:8443/apis/metrics.k8s.io/v1beta1": net/http: >> request canceled while waiting for connection (Client.Timeout >> exceeded while awaiting headers) >> >> >> I also googled a lot but only found similar issues for k8s >> clusters not deployed via magnum. I remembered from my other >> cluster that I had to enable enable-aggregator-routing=true in >> kupe-apiserver and --kubelet-insecure-tls for the metrics-server. >> But this also doesn?t help. Any help would be highly appreciated. >> >> Best regards, >> Oliver >> >> -- >> Regards, >> >> >> Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu May 5 15:24:04 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 5 May 2022 17:24:04 +0200 Subject: [Kolla][CloudKitty] Wrong version in ubuntu-binary-cloudkitty-processor:yoga In-Reply-To: References: Message-ID: Hi Rafael, I believe this is caused by Ubuntu Cloud Archive (UCA) not providing packages for CloudKitty: https://openstack-ci-reports.ubuntu.com/reports/cloud-archive/yoga_versions.html As a result, Kolla installs cloudkitty packages from the focal universe repository, which provide version 11.0.1: https://packages.ubuntu.com/focal/cloudkitty-processor Installing a newer version of cloudkitty would require using a more recent version of Ubuntu? So for Kolla binary images, this would need to be fixed by UCA. Anyway, I would recommend switching to source images. Not only will you get the latest stable code, but also Kolla binary images are deprecated and will be removed in Zed. Cheers, Pierre Riteau (priteau) On Thu, 5 May 2022 at 14:18, Rafael Weing?rtner wrote: > Hello guys, > > I was checking the upstream containers for CloudKittty for Yoga release; > and, I noticed that the CloudKitty containers are using version 11.0.1, > which is not the Yoga version of CloudKitty. I also checked Xena, and the > same thing happens there. Do we need to define these component versions > somewhere else for the Kolla containers that are built upstream? > > -- > Rafael Weing?rtner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu May 5 16:58:10 2022 From: melwittt at gmail.com (melanie witt) Date: Thu, 5 May 2022 09:58:10 -0700 Subject: [nova][placement] Incomplete Consumers return negative value after upgrade In-Reply-To: References: Message-ID: On Thu May 05 2022 02:31:16 GMT-0700 (Pacific Daylight Time), Jan Wasilewski wrote: > Hi, > > after an upgrade from Stein to Train, I hit an issue with negative value > during upgrade check for placement: > > # placement-status upgrade check > +------------------------------------------------------------------+ > | Upgrade Check Results | > +------------------------------------------------------------------+ > | Check: Missing Root Provider IDs | > | Result: Success | > | Details: None | > +------------------------------------------------------------------+ > | Check: Incomplete Consumers | > | Result: Warning | > | Details: There are -20136 incomplete consumers table records for | > | existing allocations. Run the "placement-manage db | > | online_data_migrations" command. 
| > +------------------------------------------------------------------+ > > > Seems that negative value is a result that I get such values from tables > consumer and allocations: > > mysql> select count(id), consumer_id from allocations group by consumer_id; > ... > 1035 rows in set (0.00 sec) > > mysql> select count(*) from consumers; > +----------+ > | count(*) | > +----------+ > | 21171 | > +----------+ > 1 row in set (0.04 sec) > > Unfortunately such warning cannot be solved by execution of suggested > command( |placement-manage db online_data_migrations|) as it seems it > adds records to consumers table - not to allocations, which looks like > to be a problem here. I was following recommendations from this > discussion: > http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018536.html but unfortunately it doesn't solve the issue(even not changing a negative value). I'm just wondering if I skipped something important and you can suggest some (obvious?) solution. I think you need the following patch which changes the logic of the upgrade check to ignore orphaned consumers which are not relevant to the upgrade: https://review.opendev.org/c/openstack/placement/+/840704 The above originally landed in Xena but was overlooked for backport to older versions. I have proposed backports just now. If you can apply this patch for 'placement-status' and the check passes afterward, you are good. You may still have orphaned consumers table records in the placement database but they don't hurt anything. If you want/need to clean them up, it has to be done manually, something like (disclaimer I did not test this): delete from placement.consumers where placement.consumers.uuid not in (select nova_api.instance_mappings.instance_uuid from nova_api.instance_mappings where nova_api.instance_mappings.queued_for_delete = true); HTH, -melanie From melwittt at gmail.com Thu May 5 17:05:31 2022 From: melwittt at gmail.com (melanie witt) Date: Thu, 5 May 2022 10:05:31 -0700 Subject: [nova][placement] Incomplete Consumers return negative value after upgrade In-Reply-To: References: Message-ID: <230f0595-5715-f7be-0506-2f3673c83997@gmail.com> On Thu May 05 2022 09:58:10 GMT-0700 (Pacific Daylight Time), melanie witt wrote: > You may still have orphaned consumers table records in the placement > database but they don't hurt anything. If you want/need to clean them > up, it has to be done manually, something like (disclaimer I did not > test this): > > delete from placement.consumers where placement.consumers.uuid not in > (select nova_api.instance_mappings.instance_uuid from > nova_api.instance_mappings where > nova_api.instance_mappings.queued_for_delete = true); Sorry I got the where condition backwards, it should be: delete from placement.consumers where placement.consumers.uuid not in (select nova_api.instance_mappings.instance_uuid from nova_api.instance_mappings where nova_api.instance_mappings.queued_for_delete != true); And you might have s/placement/nova_api/ if you did not break out the separate placement database. 
-melanie From emccormick at cirrusseven.com Thu May 5 17:48:46 2022 From: emccormick at cirrusseven.com (Erik McCormick) Date: Thu, 5 May 2022 13:48:46 -0400 Subject: Data Center Survival in case of Disaster / HW Failure in DC In-Reply-To: <04FAD905-7A53-4DA7-AEDE-FD8E8B40EA61@cern.ch> References: <20220505123751.Horde.BvzneKkaM8LoS3CA89DFaJx@webmail.nde.ag> <04FAD905-7A53-4DA7-AEDE-FD8E8B40EA61@cern.ch> Message-ID: This sounds like a great topic we could discuss at the Ops Meetup in Berlin Friday after the Summit. I'm going to plop it in the planning etherpad [1] and we can brainstorm face to face. I encourage any operator of Openstack who's around for the Summit to stick around for an extra day and join us. *nudge Tim* Registration [2] is open. Visit the planning etherpad [1] to propose topics and +1 the ones you are interested it. [1] https://etherpad.opendev.org/p/ops-meetup-berlin-2022-planning [2] https://www.eventbrite.com/e/openstack-ops-meetup-tickets-322813472787 -Erik On Thu, May 5, 2022, 10:48 AM Tim Bell wrote: > Interesting - we?re starting work on exactly the same analysis at the > moment. > > We?re looking at a separate region for the recovery site, this guarantees > no dependencies in the control plane. > > Ideally, we?d be running active/active for the most critical applications > (following AWS recommendations > https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html) > but there are some issues we?re working through (such as how to replicate > block/object stores between regions). > > Keeping images/projects in sync between regions also does not seem simple, > especially where you want different quotas (e.g. you can have 100 cores in > the production site but only 10 by default in the recovery site) > > As in any DR plan, testing is key - we?ve started to have a look at > security groups to do a simulated disconnect test and see what?s not yet in > the recovery site. > > Does anyone have some best practise recommendations or tools for OpenStack > disaster recovery ? > > Cheers > Tim > > On 5 May 2022, at 14:37, Eugen Block wrote: > > Hi, > > first, I wouldn't run VMs on control nodes, that way you mix roles > (control and compute) and in case that one control node fails the VMs are > not available. That would not be the case if the control node is only a > control node and is also part of a highly available control plane (like > yours appears to be). Depending on how your control plane is defined, the > failure of one control node should be tolerable. > There has been some work on making compute nodes highly available but I > don't know the current status. But in case a compute node fails but nova is > still responsive a live or cold migration could still be possible to > evacuate that host. If a compute node fails and is unresponsive you'll > probably need some DB tweaking to revive VMs on a different node. > So you should have some spare resources to be able to recover from a > compute node failure. > As for ceph it should be configured properly to sustain the loss of a > defined number of nodes or disks, I don't know your requirements. If your > current cluster has "only" 3 nodes you probably run replicated pools with > size 3 (I hope) with min_size 2 and failure-domain host. You could sustain > one node failure without clients noticing it, a second node would cause the > cluster to pause. 
Also you don't have the possibility to recover from a > node failure until it is up again, meaning the degraded PGs can't be > recovered on a different node. So this also depends on your actual > resiliency requirements. If you have a second site you could use rbd > mirroring [1] to sync all rbd images between sites. In case the primary > site goes down entirely you could switch to the primary site by promoting > the rbd images. > So you see there is plenty of information to cover and careful planning is > required. > > Regards, > Eugen > > [1] https://docs.ceph.com/en/latest/rbd/rbd-mirroring/ > > Zitat von KK CHN : > > List, > > We are having an old cloud setup with OpenStack Ussuri usng Debian OS, > (Qemu KVM ). I know its very old and we can't upgrade to to new versions > right now. > > The Deployment is as follows. > > A. 3 Controller in (cum compute nodes . VMs are running on controllers > too..) in HA mode. > > B. 6 separate Compute nodes > > C. 3 separate Storage node with Ceph RBD > > Question is > > 1. In case of any Sudden Hardware failure of one or more controller node > OR Compute node OR Storage Node what will be the immediate redundant > recovery setup need to be employed ? > > 2. In case H/W failure our recovery need to as soon as possible. For > example less than30 Minutes after the first failure occurs. > > 3. Is there setup options like a hot standby or similar setups or what we > need to employ ? > > 4. To meet all RTO (< 30 Minutes down time ) and RPO(from the exact point > of crash all applications and data must be consistent) . > > 5. Please share your thoughts for reliable crash/fault resistance > configuration options in DC. > > > We have a remote DR setup right now in a remote location. Also I would > like to know if there is a recommended way to make the remote DR site > Automatically up and run ? OR How to automate the service from DR site > to meet exact RTO and RPO > > Any thoughts most welcom. > > Regards, > Krish > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu May 5 18:14:06 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 5 May 2022 20:14:06 +0200 Subject: [neutron] Drivers meeting - Friday 6.5.2022 - cancelled Message-ID: Hi Neutron Drivers! Due to the lack of agenda, let's cancel tomorrow's drivers meeting. See You on the meeting next week. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.matulis at canonical.com Thu May 5 18:35:28 2022 From: peter.matulis at canonical.com (Peter Matulis) Date: Thu, 5 May 2022 14:35:28 -0400 Subject: [charms] OpenStack Charms Yoga release is now available Message-ID: ?Version naming for OpenStack Charms releases has changed from being time-based (YY.MM) to one that is based on the newly-supported OpenStack release. Individual charms will also no longer be developed to work with every supported OpenStack release. Instead, each charm will leverage Charmhub tracks, each of which will determine a supported scenario. See the following resource for details: https://docs.openstack.org/charm-guide/latest/project/charm-delivery.html --- The Yoga release of the OpenStack Charms is now available. This release brings several new features to the existing OpenStack Charms deployments for Queens, Stein, Ussuri, Victoria, Wallaby, Xena, Yoga and many stable combinations of Ubuntu + OpenStack. 
Please see the release notes for full details: https://docs.openstack.org/charm-guide/latest/release-notes/yoga.html == Highlights == * OpenStack Yoga OpenStack Yoga is now supported on Ubuntu 20.04 LTS (via UCA) and Ubuntu 22.04 LTS natively. * vTPM support Virtual Trusted Platform Module (vTPM) devices are now supported on Compute nodes. TPM can be used to enhance computer security and privacy. * Glance storage backend: Cinder iSCSI/FC Cinder iSCSI/FC is now supported as a storage backend for Glance. * Cloud operational improvements Improvements have been implemented at the operational level through the addition of actions and configuration options to the current set of stable charms. * Tech-preview charms This release comes with three new charms, all currently in tech-preview: - cinder-nimblestorage Provides a NimbleStorage storage backend for Cinder. - cinder-solidfire Provides a SolidFire storage backend for Cinder. - nova-compute-nvidia-vgpu Provides Nvidia vGPU support for Nova. * Documentation updates Documentation highlights include: - More cloud operations - Improvements to the upgrade pages - Tutorial for deploying OpenStack - Pages for vTPM and vGPU - Charmhub information - Guidelines and resources for software and documentation contributions == OpenStack Charms team == The OpenStack Charms team can be contacted on the #openstack-charms IRC channel on OFTC or in the Juju user forum: https://discourse.charmhub.io/c/juju/ == Thank you == Lots of thanks to the 55 contributors below who squashed 120 bugs, enabled new features, and improved the documentation! Alex Kavanagh Andy Wu Anna Savchenko Aqsa Malik Arif Ali Aurelien Lourot Bartlomiej Poniecki-Klotz Bayani Carbone Billy Olsen Chris MacNaughton Corey Bryant Cornellius Metto David Ames Diko Parvanov Dmitrii Shcherbakov Edin Sarajlic Edward Hope-Morley Erlon R. Felipe Reyes Frode Nordahl Gustavo Sanchez Hemanth Nakkina Herv? Beraud James Lin James Page James Troup James Vaughn Jeff Hillman jiangzhilin Joe Guo Jorge Merlino Jos? Pekkarinen Juan Pablo Liam Young Linda Guo Luciano Lo Giudice Marcin Wilk Martin Kalcok Nicholas Malacarne Nobuto Murata Olivier Dufour-Cuvillier Paul Goins Pedro Castillo Peter Matulis Robert Gildein Rodrigo Barbieri Samuel Walladge Shayan Patel Simon Deziel Tianqi Xiao Yoshi Kadokawa Zeeshan Ali Zhang Hua Dongdong Tao Tiago Pasqualini -- OpenStack Charms Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Thu May 5 20:02:26 2022 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Thu, 5 May 2022 17:02:26 -0300 Subject: [Kolla][CloudKitty] Wrong version in ubuntu-binary-cloudkitty-processor:yoga In-Reply-To: References: Message-ID: Thanks for the heads up. I checked the source images, and they are fine. On Thu, May 5, 2022 at 12:24 PM Pierre Riteau wrote: > Hi Rafael, > > I believe this is caused by Ubuntu Cloud Archive (UCA) not providing > packages for CloudKitty: > https://openstack-ci-reports.ubuntu.com/reports/cloud-archive/yoga_versions.html > As a result, Kolla installs cloudkitty packages from the focal universe > repository, which provide version 11.0.1: > https://packages.ubuntu.com/focal/cloudkitty-processor > Installing a newer version of cloudkitty would require using a more recent > version of Ubuntu? So for Kolla binary images, this would need to be fixed > by UCA. > > Anyway, I would recommend switching to source images. 
Not only will you > get the latest stable code, but also Kolla binary images are deprecated and > will be removed in Zed. > > Cheers, > Pierre Riteau (priteau) > > On Thu, 5 May 2022 at 14:18, Rafael Weing?rtner < > rafaelweingartner at gmail.com> wrote: > >> Hello guys, >> >> I was checking the upstream containers for CloudKittty for Yoga release; >> and, I noticed that the CloudKitty containers are using version 11.0.1, >> which is not the Yoga version of CloudKitty. I also checked Xena, and the >> same thing happens there. Do we need to define these component versions >> somewhere else for the Kolla containers that are built upstream? >> >> -- >> Rafael Weing?rtner >> > -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsmigiel at redhat.com Thu May 5 22:58:28 2022 From: dsmigiel at redhat.com (Dariusz Smigiel) Date: Thu, 5 May 2022 15:58:28 -0700 Subject: [TripleO] gate blocker please hold rechecks tripleo-heat-templates (openstack-tox-tht) In-Reply-To: References: Message-ID: We merged a workaround on pinning ansible-runner [1] which should resolve the issue with gates. Thanks! [1]: https://review.opendev.org/c/openstack/tripleo-heat-templates/+/840683 On Thu, May 5, 2022 at 7:14 AM Marios Andreou wrote: > > Hello > > we have a gate blocker for tripleo-heat-templates affecting master and > stable/wallaby at https://bugs.launchpad.net/tripleo/+bug/1971703 > If you have failing openstack-tox-tht please hold recheck until we > find some resolution > > thank you! > > marios > > From tkajinam at redhat.com Fri May 6 02:41:37 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 6 May 2022 11:41:37 +0900 Subject: [puppet] Gate blocker: Installation of the systemtap package fails In-Reply-To: References: Message-ID: Because the new systemtap packages have not yet been released and our master gate job has been broken for more than a week, I merged the temporal workaround[1]. I'd keep that patch master only atm, but it takes additional days then I'll backport that to stable/yoga to unblock stable/yoga CI. [1] https://review.opendev.org/c/openstack/puppet-openstack-integration/+/840188 On Wed, May 4, 2022 at 12:16 AM Takashi Kajinami wrote: > Hello, > > We are currently facing consistent failure in centos stream 9 integration > jobs. > which is caused by the new dyninst package. > > I've already reported the issue in bz[1] and we are currently waiting for > the updated > systemtap package. Please avoid rechecking until the package is released > [1] https://bugzilla.redhat.com/show_bug.cgi?id=2079892 > > If the fix is not released for additional days then we can merge the > temporal workaround > to unblock our jobs. > > https://review.opendev.org/c/openstack/puppet-openstack-integration/+/840188 > > Last week we also saw centos stream 8 jobs consistently failing but it > seems these jobs were > already fixed by the new python3-qt5 package. > [2] https://bugzilla.redhat.com/show_bug.cgi?id=2079895 > > Thank you, > Takashi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Fri May 6 03:04:24 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 6 May 2022 12:04:24 +0900 Subject: [puppet] Removal of novajoin support Message-ID: Hello, Because the novajoin project[1] has been unmaintained for a while, we deprecated support for the service during the previous Yoga cycle[2]. 
[1] https://opendev.org/x/novajoin [2] https://review.opendev.org/c/openstack/puppet-nova/+/833507 I've submitted the removal patch[3] so that we can drop the whole implementation from Zed release. [3] https://review.opendev.org/c/openstack/puppet-nova/+/840802 In case anyone has any concern about the removal, please share your thoughts in the above review. I'll keep WIP on the patch for one week to be open for any feedback. Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Fri May 6 05:49:35 2022 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Fri, 6 May 2022 07:49:35 +0200 Subject: [Kolla][CloudKitty] Wrong version in ubuntu-binary-cloudkitty-processor:yoga In-Reply-To: References: Message-ID: <061ff5d7-d3f7-ddcc-9a90-9d0fe60f2d7a@linaro.org> W dniu 05.05.2022 o 17:24, Pierre Riteau pisze: > I believe this is caused by Ubuntu Cloud Archive (UCA) not providing > packages for CloudKitty: > As a result, Kolla installs cloudkitty packages from the focal > universe repository, which provide version 11.0.1: > Installing a newer version of cloudkitty would require using a more > recent version of Ubuntu? So for Kolla binary images, this would need > to be fixed by UCA. https://review.opendev.org/c/openstack/kolla/+/840814 disables building of cloudkitty-processor for Ubuntu/binary. > Anyway, I would recommend switching to source images. Not only will > you get the latest stable code, but also Kolla binary images are > deprecated and will be removed in Zed. In Zed we already removed any trace of binary images. From kkchn.in at gmail.com Fri May 6 09:07:44 2022 From: kkchn.in at gmail.com (KK CHN) Date: Fri, 6 May 2022 14:37:44 +0530 Subject: Data Center Survival in case of Disaster / HW Failure in DC In-Reply-To: <20220505123751.Horde.BvzneKkaM8LoS3CA89DFaJx@webmail.nde.ag> References: <20220505123751.Horde.BvzneKkaM8LoS3CA89DFaJx@webmail.nde.ag> Message-ID: Thanks Eugen . I fully agree with not running VMs on Control nodes. When we rolled out the Controller resources we couldn't spare out only as a controller, Becoz of the utilization of the controller host machines resources, so we decided to use them as compute nodes also. On Thu, May 5, 2022 at 6:09 PM Eugen Block wrote: > Hi, > > first, I wouldn't run VMs on control nodes, that way you mix roles > (control and compute) and in case that one control node fails the VMs > are not available. That would not be the case if the control node is > only a control node and is also part of a highly available control > plane (like yours appears to be). Depending on how your control plane > is defined, the failure of one control node should be tolerable. > There has been some work on making compute nodes highly available but > I don't know the current status. could you point out the links/docs where I can refer for a proper setup. > But in case a compute node fails but > nova is still responsive a live or cold migration could still be > possible to evacuate that host. > If a compute node fails and is > unresponsive you'll probably need some DB tweaking to revive VMs on a > different node. > Don't know much about this, any reference is welcome. > So you should have some spare resources to be able to recover from a > compute node failure. > As for ceph it should be configured properly to sustain the loss of a > defined number of nodes or disks, I don't know your requirements. 
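Coming back to the compute-node failure point a few lines up, a hedged sketch of the evacuate workflow looks roughly like this (host and server names are placeholders, commands from python-openstackclient/novaclient, and it assumes instance disks live on shared Ceph storage):

  $ openstack compute service set --disable compute-03 nova-compute   # keep the scheduler away from the failed host
  $ nova evacuate <server-uuid> compute-04                            # rebuild one instance on another host
  $ nova host-evacuate compute-03                                     # or evacuate everything that ran on the failed host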
If > your current cluster has "only" 3 nodes you probably run replicated > pools with size 3 (I hope) with min_size 2 and failure-domain host. you mean 3 OSD s in single compute node ? I can follow this way is it the best way to do so ? > > Any reference to this best ceph deployment model which do the best fault tolerance, kindly share. > You could sustain one node failure without clients noticing it, a > second node would cause the cluster to pause. Also you don't have the > possibility to recover from a node failure until it is up again, > meaning the degraded PGs can't be recovered on a different node. So > this also depends on your actual resiliency requirements. If you have > a second site you could use rbd mirroring [1] to sync all rbd images > between sites. We have a connectivity link of only 1Gbps between DC and DR and DR is 300 miles away from DC. And the syncing enhancement, how can we achieve this ? Because our HDD writing speeds are too limited, maybe 80 Mbps to 100 Mbps .. SSDs are not available for all compute host machines. Each VM has 800 GB to 1 TB Disk size. Is there any best practice for syncing enhancement mechanisms for HDDs in DC hosts (with a connectivity of 1 Gbps between DR sites with HDD hosts . ?) > In case the primary site goes down entirely you could > switch to the primary site by promoting the rbd images. > So you see there is plenty of information to cover and careful > planning is required. > > Regards, > Eugen > Thanks again for sharing your thoughts. > > [1] https://docs.ceph.com/en/latest/rbd/rbd-mirroring/ > > Zitat von KK CHN : > > > List, > > > > We are having an old cloud setup with OpenStack Ussuri usng Debian OS, > > (Qemu KVM ). I know its very old and we can't upgrade to to new versions > > right now. > > > > The Deployment is as follows. > > > > A. 3 Controller in (cum compute nodes . VMs are running on controllers > > too..) in HA mode. > > > > B. 6 separate Compute nodes > > > > C. 3 separate Storage node with Ceph RBD > > > > Question is > > > > 1. In case of any Sudden Hardware failure of one or more controller > node > > OR Compute node OR Storage Node what will be the immediate redundant > > recovery setup need to be employed ? > > > > 2. In case H/W failure our recovery need to as soon as possible. For > > example less than30 Minutes after the first failure occurs. > > > > 3. Is there setup options like a hot standby or similar setups or what > we > > need to employ ? > > > > 4. To meet all RTO (< 30 Minutes down time ) and RPO(from the exact > point > > of crash all applications and data must be consistent) . > > > > 5. Please share your thoughts for reliable crash/fault resistance > > configuration options in DC. > > > > > > We have a remote DR setup right now in a remote location. Also I would > > like to know if there is a recommended way to make the remote DR site > > Automatically up and run ? OR How to automate the service from DR site > > to meet exact RTO and RPO > > > > Any thoughts most welcom. > > > > Regards, > > Krish > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finarffin at gmail.com Fri May 6 10:47:44 2022 From: finarffin at gmail.com (Jan Wasilewski) Date: Fri, 6 May 2022 12:47:44 +0200 Subject: [nova][placement] Incomplete Consumers return negative value after upgrade In-Reply-To: <230f0595-5715-f7be-0506-2f3673c83997@gmail.com> References: <230f0595-5715-f7be-0506-2f3673c83997@gmail.com> Message-ID: Hello Melanie, Thank you for your reply. 
It seems that everything is OK right now, after applying a patch I can see:

# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+

I've seen that Zuul added a -1 to the review for this check; however, I will apply my internal patch for it, especially since you mentioned that such orphaned records shouldn't hurt me. Once again - thank you for your help. Best regards, Jan On Thu, 5 May 2022 at 19:05, melanie witt wrote: > On Thu May 05 2022 09:58:10 GMT-0700 (Pacific Daylight Time), melanie > witt wrote: > > You may still have orphaned consumers table records in the placement > > database but they don't hurt anything. If you want/need to clean them > > up, it has to be done manually, something like (disclaimer I did not > > test this): > > > > delete from placement.consumers where placement.consumers.uuid not in > > (select nova_api.instance_mappings.instance_uuid from > > nova_api.instance_mappings where > > nova_api.instance_mappings.queued_for_delete = true); > > Sorry I got the where condition backwards, it should be: > > delete from placement.consumers where placement.consumers.uuid not in > (select nova_api.instance_mappings.instance_uuid from > nova_api.instance_mappings where > nova_api.instance_mappings.queued_for_delete != true); > > And you might have s/placement/nova_api/ if you did not break out the > separate placement database. > > -melanie > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Fri May 6 10:47:27 2022 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Fri, 6 May 2022 07:47:27 -0300 Subject: [Kolla][CloudKitty] Wrong version in ubuntu-binary-cloudkitty-processor:yoga In-Reply-To: <061ff5d7-d3f7-ddcc-9a90-9d0fe60f2d7a@linaro.org> References: <061ff5d7-d3f7-ddcc-9a90-9d0fe60f2d7a@linaro.org> Message-ID: Ok, thanks. I was not aware of that. On Fri, May 6, 2022 at 2:54 AM Marcin Juszkiewicz < marcin.juszkiewicz at linaro.org> wrote: > On 05.05.2022 at 17:24, Pierre Riteau wrote: > > > I believe this is caused by Ubuntu Cloud Archive (UCA) not providing > > packages for CloudKitty: > > > As a result, Kolla installs cloudkitty packages from the focal > > universe repository, which provide version 11.0.1: > > > Installing a newer version of cloudkitty would require using a more > > recent version of Ubuntu? So for Kolla binary images, this would need > > to be fixed by UCA. > > https://review.opendev.org/c/openstack/kolla/+/840814 disables building > of cloudkitty-processor for Ubuntu/binary. > > > Anyway, I would recommend switching to source images. Not only will > > you get the latest stable code, but also Kolla binary images are > > deprecated and will be removed in Zed. > In Zed we already removed any trace of binary images. > > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From derekokeeffe85 at yahoo.ie Fri May 6 11:55:00 2022 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Fri, 6 May 2022 11:55:00 +0000 (UTC) Subject: Openstack OSA Latest - Glance References: <1896835692.365118.1651838100242.ref@mail.yahoo.com> Message-ID: <1896835692.365118.1651838100242@mail.yahoo.com> Hi all, Could any one tell me if they have come across this error (in the chrome console) while trying to create a new image in horizon: Refused to connect to 'https://10.37.110.104:9292/v2/images/3e553b60-0a6c-48a0-b855-60088839201e/file' because it violates the following Content Security Policy directive: "default-src 'self'". Note that 'connect-src' was not explicitly set, so 'default-src' is used as a fallback If so, any advice? I can scp the image file to the utility container and upload it from there no problem. Also, journalctl -f on the glance container shows nothing when the process fails. We have set the internal and external lb vip addresses differently but both are on the same subnet (internal) as it's an in house deploy. So two addresses on the mgmt interface to be precise. Maybe that's causing an issue. Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From ces.eduardo98 at gmail.com Fri May 6 12:11:17 2022 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Fri, 6 May 2022 09:11:17 -0300 Subject: [manila] Zed cycle bug squash In-Reply-To: References: Message-ID: Hello! A quick update: due to the issues we ran into yesterday, today we will be using a different bridge to the wrap up call [1]. [1] https://bluejeans.com/9499863324 Thank you! carloss Em sex., 29 de abr. de 2022 ?s 20:34, Carlos Silva escreveu: > Greetings Zorillas and interested stackers! > > As mentioned in the previous weekly meetings, we will soon be meeting for > the first bugsquash of the Zed release! > > The event will be held from May 2nd to May 6th, providing an extended > contribution window. > > May 2nd 15:00 - 16:00 UTC - Kick off > May 5th 15:00 - 16:00 UTC - Mid term checkpoint (we won't have our regular > Manila meeting on this day) > May 6th 15:00 - 15:30 UTC - Wrap Up > > We will use a meetpad for these meetings [1]. > > The main idea of this event is to go over the list of stale bugs and act > on them, either seeing if they are incomplete or invalid at this point or > working on them. The stale bugs list will be available on [2]. > > [1] https://meetpad.opendev.org/ManilaZed1Bugsquash > [2] https://ethercalc.openstack.org/1nesczgjufb9 > > See you next week! > > Thank you, > carloss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Fri May 6 12:12:41 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 6 May 2022 14:12:41 +0200 Subject: Openstack OSA Latest - Glance In-Reply-To: <1896835692.365118.1651838100242@mail.yahoo.com> References: <1896835692.365118.1651838100242.ref@mail.yahoo.com> <1896835692.365118.1651838100242@mail.yahoo.com> Message-ID: Hi Derek. I believe that is the result of this change [1]. I can suggest adding `haproxy_security_headers_csp_report_only: True` to your user_variables.yml and re-running haproxy playbook. That should disable failure when verification of CSP not passing. At the same time I wonder why the issue has happened in the first place. I can imagine that happening when you're trying to upload via domain name while endpoints are set for IP only? 
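In practice that override would be applied roughly like this (the paths assume a standard openstack-ansible deployment layout):

  $ echo "haproxy_security_headers_csp_report_only: True" >> /etc/openstack_deploy/user_variables.yml
  $ cd /opt/openstack-ansible/playbooks && openstack-ansible haproxy-install.yml   # re-render the haproxy frontends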
[1] https://opendev.org/openstack/openstack-ansible/commit/b6fe07ecf810ede06cd5007396fd5286937e6616 ??, 6 ??? 2022 ?. ? 13:52, Derek O keeffe : > > Hi all, > > Could any one tell me if they have come across this error (in the chrome console) while trying to create a new image in horizon: > > Refused to connect to 'https://10.37.110.104:9292/v2/images/3e553b60-0a6c-48a0-b855-60088839201e/file' because it violates the following Content Security Policy directive: "default-src 'self'". Note that 'connect-src' was not explicitly set, so 'default-src' is used as a fallback > > If so, any advice? I can scp the image file to the utility container and upload it from there no problem. Also, journalctl -f on the glance container shows nothing when the process fails. We have set the internal and external lb vip addresses differently but both are on the same subnet (internal) as it's an in house deploy. > > So two addresses on the mgmt interface to be precise. Maybe that's causing an issue. > > Regards, > Derek From J.Horstmann at mittwald.de Fri May 6 13:10:34 2022 From: J.Horstmann at mittwald.de (Jan Horstmann) Date: Fri, 6 May 2022 13:10:34 +0000 Subject: [ops][octavia][neutron] Distributed Virtual Routing: Floating IPs attached to virtual IP addresses are assigned on network nodes Message-ID: <81936a3fc1eba4c9c32e897198c235c483796831.camel@mittwald.de> Hello! When we initially deployed openstack we thought that using distributed virtual routing with ml2/ovs-dvr would give us the ability to automatically scale our network capacity with the number of hypervisors we use. Our main workload are kubernetes clusters which receive ingress traffic via octavia loadbancers (configured to use the amphora driver). So the idea was that we could increase the number of loadbalancers to spread the traffic over more and more compute nodes. This would imply that any volume based (distributed) denial of service attack on a single loadbalancer would just saturate a single compute node and leave the rest of the system functional. We have recently learned that, no matter the loadbalancer topology, a virtual IP is created for it by octavia. This, and probably all virtual IPs in openstack, are reserved by an unbound and disabled port and then set as an allowed address pair on any server's port which might hold it. Up to this point our initial assumption should be true, as the server actually holding the virtual IP would reply to any ARP requests and thus any traffic should be routed to the node with the virtual machine of the octavia amphora. However, we are using our main provider network as a floating IP pool and do not allow direct port creation. When a floating IP is attached to the virtual IP it is assigned to the SNAT router namespace on a network node. Naturally in high traffic or even (distributed) denial of service situations the network node might become a bottleneck. A situation we thought we could avoid by using distributed virtual routing in the first place. This leads me to a rabbit hole of questions I hope someone might be able to help with: Is the assessment above correct or am I missing something? If it is correct, do we have any other options than vertically scaling our network nodes to handle traffic? Do other ml2 drivers (e.g. OVN) handle this scenario differently? If our network nodes need to handle most of the traffic anyway, do we still have any advantage using distributed virtual routing? Especially when considering the increased complexity compared to a non- distributed setup? 
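A quick way to see this wiring from the API is something like the following (all IDs below are placeholders):

  $ openstack floating ip show <fip-id> -c port_id -c fixed_ip_address     # the FIP points at the unbound VIP port
  $ openstack port show <vip-port-id> -c device_owner -c binding_host_id   # the VIP port itself is not bound to any host
  $ openstack port show <amphora-port-id> -c allowed_address_pairs         # the amphora port carries the VIP as an allowed address pair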
Has anyone ever explored non-virtual IP based high availability options like e.g. BGP multipathing in a distributed virtual routing scenario? Any input is highly appreciated. Regards, Jan From anyrude10 at gmail.com Fri May 6 10:20:50 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Fri, 6 May 2022 15:50:50 +0530 Subject: [Tripleo] Support of PTP in Openstack Train Message-ID: Hi Team, I have installed Undercloud with Openstack Train Release successfully. I need to enable PTP service while deploying the overcloud for which I have included the service in my deployment openstack overcloud deploy --templates \ -n /home/stack/templates/network_data.yaml \ -r /home/stack/templates/roles_data.yaml \ -e /home/stack/templates/environment.yaml \ -e /home/stack/templates/environments/network-isolation.yaml \ -e /home/stack/templates/environments/network-environment.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \ * -e /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \* -e /home/stack/templates/ironic-config.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ -e /home/stack/containers-prepare-parameter.yaml But it gives the following error 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host configuration to finish | overcloud-controller-0 | error={"ansible_job_id": "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --detailed-exitcodes --summarize --color=false /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/6.14/deprecated_language.html", "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5", "<13>May 6 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 puppet-user: Warning: Undefined variable '::deploy_config_name'; ", "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not connect to controller: Connection refused", "<13>May 6 11:30:08 puppet-user: *Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} Can someone please help in resolving this issue? Regards Anirudh Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: From gzimin at mirantis.com Fri May 6 11:27:59 2022 From: gzimin at mirantis.com (Gleb Zimin) Date: Fri, 6 May 2022 15:27:59 +0400 Subject: [neutron][opencontrail]Trunk ports doesn't have standard attributes Message-ID: Environment: Openstack Victoria, OpenContrail (TungstenFabric) R2011. Problem: Trunk ports doesn't have standard attributes such as description, timestamp. I have an environment where core plugin is OpenContrail. OpenContrail has tf-neutron-plugin for proper work with neutron. There is TrunkPlugin, that proxies all trunk-related request to the OpenContrail backend. Here is the link for this plugin. https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/services/trunk/plugin.py;h=35fc3310911143fd3b4cf8997c23d0358d652dba;hb=refs/heads/master According to the openstack documentation: Resources that inherit from the HasStandardAttributes DB class can automatically have the extensions written for standard attributes (e.g. timestamps, revision number, etc) extend their resources by defining the ?api_collections? on their model. These are used by extensions for standard attr resources to generate the extended resources map. As I understood, it works only for plugins, which using Openstack database. For example, openstack native trunk plugin has models.py file where we can see this inheritance. https://github.com/openstack/neutron/blob/master/neutron/services/trunk/models.py#L26 In case of OpenContrail+OpenStack Trunk plugin only redirect requests. What can I do to make Contrail Trunk plugin works in the same way? I'll appreciate any help. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri May 6 15:13:02 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 6 May 2022 17:13:02 +0200 Subject: [neutron][opencontrail]Trunk ports doesn't have standard attributes In-Reply-To: References: Message-ID: Hello Gleb: The OC plugin is using the Neutron API to execute operations on the server. The "get_trunk" method calls "_get_resource" [1]. I've tested the Neutron API and this is what you get from the API: [2]. All standard attributes are returned. 
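One quick way to compare against your own deployment (the endpoint host, token variable and trunk ID below are placeholders):

  $ openstack network trunk show <trunk-id>    # check whether description/created_at/updated_at/revision_number show up
  $ curl -s -H "X-Auth-Token: $OS_TOKEN" http://controller:9696/v2.0/trunks/<trunk-id> | python3 -m json.tool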
Maybe this could be related to the "fields" parameter you are passing in the method [3]. Check if you are filtering the API call and only retrieving a certain subset of fields. Regards. [1] https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/contrail_plugin.py;h=80104f1e673eb4583f7a15707c3590c510fd7667;hb=refs/heads/master#l326 [2]https://paste.opendev.org/show/bh1QD8VqLz4OKMKYBXUa/ [3] https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/services/trunk/plugin.py;h=35fc3310911143fd3b4cf8997c23d0358d652dba;hb=refs/heads/master#l50 On Fri, May 6, 2022 at 4:35 PM Gleb Zimin wrote: > Environment: Openstack Victoria, OpenContrail (TungstenFabric) R2011. > Problem: Trunk ports doesn't have standard attributes such as description, > timestamp. > I have an environment where core plugin is OpenContrail. OpenContrail has > tf-neutron-plugin for proper work with neutron. There is TrunkPlugin, that > proxies all trunk-related request to the OpenContrail backend. Here is the > link for this plugin. > https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/services/trunk/plugin.py;h=35fc3310911143fd3b4cf8997c23d0358d652dba;hb=refs/heads/master > According to the openstack documentation: Resources that inherit from the > HasStandardAttributes DB class can automatically have the extensions > written for standard attributes (e.g. timestamps, revision number, etc) > extend their resources by defining the ?api_collections? on their model. > These are used by extensions for standard attr resources to generate the > extended resources map. > As I understood, it works only for plugins, which using Openstack > database. For example, openstack native trunk plugin has models.py file > where we can see this inheritance. > https://github.com/openstack/neutron/blob/master/neutron/services/trunk/models.py#L26 > In case of OpenContrail+OpenStack Trunk plugin only redirect requests. > What can I do to make Contrail Trunk plugin works in the same way? > I'll appreciate any help. Thanks in advance. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp.methot at planethoster.info Fri May 6 18:14:47 2022 From: jp.methot at planethoster.info (J-P Methot) Date: Fri, 6 May 2022 14:14:47 -0400 Subject: [neutron] Static route not added in namespace using DVR on Wallaby Message-ID: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> Hi, We're in this situation where we are going to move some instances from one openstack cluster to another. After this process, we want our instances on the new openstack cluster to keep the same floating IPs but also to be able to communicate with some instances that are in the same public IP range on the first cluster. To accomplish this, we want to add static routes like 'X.X.X.X/32 via Y.Y.Y.Y'. However, we're using DVR and when we add the static routes, they do not show up anywhere in any of the namespaces. Is there a different way to add static routes on DVR instead of using openstack router add route ? -- Jean-Philippe M?thot Senior Openstack system administrator Administrateur syst?me Openstack s?nior PlanetHoster inc. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Fri May 6 18:40:32 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 06 May 2022 20:40:32 +0200 Subject: [neutron] Static route not added in namespace using DVR on Wallaby In-Reply-To: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> References: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> Message-ID: Hi, W dniu pi?, 6 maj 2022 o 14:14:47 -0400 u?ytkownik J-P Methot napisa?: > Hi, > > We're in this situation where we are going to move some instances > from one openstack cluster to another. After this process, we want > our instances on the new openstack cluster to keep the same floating > IPs but also to be able to communicate with some instances that are > in the same public IP range on the first cluster. > > To accomplish this, we want to add static routes like 'X.X.X.X/32 > via Y.Y.Y.Y'. However, we're using DVR and when we add the static > routes, they do not show up anywhere in any of the namespaces. Is > there a different way to add static routes on DVR instead of using > openstack router add route ? > No, there is no other way to add static routes to the dvr router. I don't have any DVR deployment now to check it but IIRC route should be added in the qrouter namespace in the compute nodes where router exists. If it's not there please check logs of the l3-agent on those hosts, maybe there are some errors there. > -- > Jean-Philippe M?thot > Senior Openstack system administrator > Administrateur syst?me Openstack s?nior > PlanetHoster inc. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri May 6 18:44:16 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 May 2022 13:44:16 -0500 Subject: [all][tc] What's happening in Technical Committee: summary May 6th, 22: Reading: 5 min Message-ID: <1809aafd708.dd60dbba796372.6518438551701526457@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on May 5. Most of the meeting discussions are summarized in this email. Meeting summary logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-05-05-15.00.log.html As it was a video call, the full recording is here @https://www.youtube.com/watch?v=kJdoWcBiMuY&t=205s * Next TC weekly meeting will be on May 12 Thursday at 15:00 UTC, feel free to add the topic on the agenda[1] by May 11. 2. What we completed this week: ========================= * Add the cinder-three-par charm to Openstack charms[2] * Update milestones for FIPS goal[3] * Add gmann to oslo tact-sig liaison[4] 3. Activities In progress: ================== TC Tracker for Zed cycle ------------------------------ * Zed tracker etherpad includes the TC working items[5], we have started the many items. Open Reviews ----------------- * Nine open reviews for ongoing activities[6]. CFN overview from Jie Niu, China Mobile Research Institute ----------------------------------------------------------------------- As you might have seen CFN proposal on ML[7], China Mobile team(Yu and Jie) joined the TC meeting and presented the CFN idea, architecture, and roadmap in detail. We had a few question-answer at the end. As the discussion is going on between China Mobile team and the Foundation, TC did not take any decision on being CFN as SIG in OpenStack or not. We are open to ideas coming from the discussion with Foundation. 
'tick' 'tock' release cadence open items ---------------------------------------------- We are discussing two things here: 1. release notes strategy: This is not concluded yet but the discussion point is how we should make tock release notes noticeable in the next tick release for operators upgrading to tick-tick release? Whether we should copy the key deprecated/upgrade release notes or use like or mention about reading the previous tock release notes if you upgrading from tick->tick? We will continue the discussion in next meeting and conclude. 2. tick tock name using as per legal check >From foundation legal checks, it seems it is not so clear how we should use tick or tock or better we should not use these names, discussion is on gerrit[8]. We discussed a few ideas about using SLU|SLURP (Skip level upgrade release process) but it is not agreed upon by all the TC members. We are stuck with the same "horror naming" show in this too as we are in release naming. We are open to ideas and if you have any better ideas, feel free to add them in Gerrit review[8] Improve project governance --------------------------------- As discussed in PTG (yoga and zed) we are going to try a new framework "Technical-preview" to catch the inactive project or new project moving towards the OpenStack way. Slawek has the proposal up and ready for review[9]. Change OpenStack release naming policy proposal ----------------------------------------------------------- No concrete update on this than what we are discussing on ML[10] and Gerrit[11]. Migration from old ELK service to new Dashboard ----------------------------------------------------------- The new dashboard[12] is ready to use, we encourage community members to use it and provide feedback if any. We discussed the e-r service and repo maintenance. Ananya from tripleO team joined the meeting and they are ok to merge the rdo branch with master. Once that is done we will figure out where we can host this service, Dainel has some ideas about it and will continue the discussion in the TC channel or meeting. Consistent and Secure Default RBAC ------------------------------------------- We had our weekly call on Tuesday 3rd May and discussed the need for 'service' role. There are points that we should not make 'service' role as a light admin but at the same time, it is needed for the inter-service communication. Discussion still going on and we will continue it next Tuesday's call along with heat discussion. Notes are in etherpad[13] and the next meeting details are in wiki page [14], FIPs community-wide goal ------------------------------- Ade proposal of the updated milestone has been merged and goal selection proposal is up[15]. 2021 User Survey TC Question Analysis ----------------------------------------------- No update on this. The survey summary is up for review[16]. Feel free to check and provide feedback. Zed cycle Leaderless projects ---------------------------------- No updates on this. Only Adjutant project is leaderless/maintainer-less. We will check Adjutant's the situation again on ML and hope Braden will be ready with their company side permission[17]. Fixing Zuul config error ---------------------------- Requesting projects with zuul config error to look into those and fix them which should not take much time[18]. Project updates ------------------- * None 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. 
Email: you can send the email with tag [tc] on openstack-discuss ML[19]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [20] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/837781 [3] https://review.opendev.org/c/openstack/governance/+/838601 [4] https://review.opendev.org/c/openstack/governance/+/840351 [5] https://etherpad.opendev.org/p/tc-zed-tracker [6] https://review.opendev.org/q/projects:openstack/governance+status:open [7] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028345.html [8] https://review.opendev.org/c/openstack/governance/+/840354 [9] https://review.opendev.org/c/openstack/governance/+/839880 [10] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028354.html [11] https://review.opendev.org/c/openstack/governance/+/839897 [12] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028346.html [13] https://etherpad.opendev.org/p/rbac-zed-ptg#L103 [14] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting [15] https://review.opendev.org/c/openstack/governance/+/840920 [16] https://review.opendev.org/c/openstack/governance/+/836888 [17] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html [18] https://etherpad.opendev.org/p/zuul-config-error-openstack [19] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [20] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From tiagohp at gmail.com Fri May 6 19:50:10 2022 From: tiagohp at gmail.com (Tiago Pires) Date: Fri, 6 May 2022 16:50:10 -0300 Subject: Neutron + OVN raft cluster Message-ID: Hi all, I was checking the mail list history and this thread https://mail.openvswitch.org/pipermail/ovs-discuss/2018-March/046438.html caught my attention about raft ovsdb clustering. In my setup (OVN 20.03 and Openstack Ussuri) on the ovn-controller we have configured the ovn-remote="tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642" with the 3 OVN central member that they are in cluster mode. Also on the neutron ML2 side: [ovn] ovn_native_dhcp = True ovn_nb_connection = tcp:10.2X.4X.4:6641,tcp:10.2X.4X.68:6641,tcp:10.2X.4X.132:6641 ovn_sb_connection = tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642 We are experiencing an issue with Neutron when the OVN leader decide to take a snapshot and by design another member became leader(more less every 8 minutes): 2022-05-05T16:57:42.135Z|17401|raft|INFO|Transferring leadership to write a snapshot. 
ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound 4a03 Name: OVN_Southbound Cluster ID: ca74 (ca744caf-40cd-4751-a2f2-86e35ad6541c) Server ID: 4a03 (4a0328dc-e9a4-495e-a4f1-0a0340fc6d19) Address: tcp:10.2X.4X.132:6644 Status: cluster member Role: leader Term: 1912 Leader: self Vote: self Election timer: 10000 Log: [497643, 498261] Entries not yet committed: 0 Entries not yet applied: 0 Connections: ->3d6c ->4ef0 <-3d6c <-4ef0 Servers: 4a03 (4a03 at tcp:10.2X.4X.132:6644) (self) next_index=497874 match_index=498260 3d6c (3d6c at tcp:10.2X.4X.68:6644) next_index=498261 match_index=498260 4ef0 (4ef0 at tcp:10.2X.4X.4:6644) next_index=498261 match_index=498260 As I understood the tcp connections from the Neutron (NB) and ovn-controllers (SB) to OVN Central are established only with the leader: #OVN central leader $ netstat -nap | grep 6642| more tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - tcp 0 0 10.2X.4X.132:6642 10.24.40.17:47278 ESTABLISHED - tcp 0 0 10.2X.4X.132:6642 10.24.40.76:36240 ESTABLISHED - tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47280 ESTABLISHED - tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43102 ESTABLISHED - tcp 0 0 10.2X.4X.132:6642 10.2X.4X.75:58890 ESTABLISHED - tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43108 ESTABLISHED - tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47142 ESTABLISHED - tcp 0 0 10.2X.4X.132:6642 10.2X.4X.71:48808 ESTABLISHED - tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47096 ESTABLISHED - #OVN follower 2 $ netstat -nap | grep 6642 tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - tcp 0 0 10.2X.4X.4:6642 10.2X.4X.76:57256 ESTABLISHED - tcp 0 0 10.2X.4X.4:6642 10.2X.4X.134:54026 ESTABLISHED - tcp 0 0 10.2X.4X.4:6642 10.2X.4X.10:34962 ESTABLISHED - tcp 0 0 10.2X.4X.4:6642 10.2X.4X.6:49238 ESTABLISHED - tcp 0 0 10.2X.4X.4:6642 10.2X.4X.135:59972 ESTABLISHED - tcp 0 0 10.2X.4X.4:6642 10.2X.4X.75:40162 ESTABLISHED - tcp 0 0 10.2X.4X.4:39566 10.2X.4X.132:6642 ESTABLISHED - #OVN follower 3 netstat -nap | grep 6642 tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - tcp 0 0 10.2X.4X.68:6642 10.2X.4X.70:40750 ESTABLISHED - tcp 0 0 10.2X.4X.68:6642 10.2X.4X.11:49718 ESTABLISHED - tcp 0 0 10.2X.4X.68:45632 10.2X.4X.132:6642 ESTABLISHED - tcp 0 0 10.2X.4X.68:6642 10.2X.4X.16:44816 ESTABLISHED - tcp 0 0 10.2X.4X.68:6642 10.2X.4X.7:45216 ESTABLISHED The issue that we are experiencing is on the neutron-server that disconnects when there is the ovn leader change (due snapshot like each 8 minutes) and reconnects to the next leader. It breaks the Openstack API when someone is trying to create a VM at the same time. First, is my current configuration correct? Should the leader change and break the neutron side? Or is there some missing configuration? I was wondering if it is possible to use a LB with VIP and this VIP balance the connections to the ovn central members and I would reconfigure on the neutron side only with the VIP and also on the ovs-controllers. Does that make sense? Thank you. Regards, Tiago Pires -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Fri May 6 21:21:42 2022 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 6 May 2022 17:21:42 -0400 Subject: Neutron + OVN raft cluster In-Reply-To: References: Message-ID: Hi Tiago, Have you seen this? 
https://bugs.launchpad.net/nova/+bug/1969592 Mohammed On Fri, May 6, 2022 at 3:56 PM Tiago Pires wrote: > > Hi all, > > I was checking the mail list history and this thread https://mail.openvswitch.org/pipermail/ovs-discuss/2018-March/046438.html caught my attention about raft ovsdb clustering. > In my setup (OVN 20.03 and Openstack Ussuri) on the ovn-controller we have configured the ovn-remote="tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642" with the 3 OVN central member that they are in cluster mode. > Also on the neutron ML2 side: > [ovn] > ovn_native_dhcp = True > ovn_nb_connection = tcp:10.2X.4X.4:6641,tcp:10.2X.4X.68:6641,tcp:10.2X.4X.132:6641 > ovn_sb_connection = tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642 > > We are experiencing an issue with Neutron when the OVN leader decide to take a snapshot and by design another member became leader(more less every 8 minutes): > 2022-05-05T16:57:42.135Z|17401|raft|INFO|Transferring leadership to write a snapshot. > > ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound > 4a03 > Name: OVN_Southbound > Cluster ID: ca74 (ca744caf-40cd-4751-a2f2-86e35ad6541c) > Server ID: 4a03 (4a0328dc-e9a4-495e-a4f1-0a0340fc6d19) > Address: tcp:10.2X.4X.132:6644 > Status: cluster member > Role: leader > Term: 1912 > Leader: self > Vote: self > > Election timer: 10000 > Log: [497643, 498261] > Entries not yet committed: 0 > Entries not yet applied: 0 > Connections: ->3d6c ->4ef0 <-3d6c <-4ef0 > Servers: > 4a03 (4a03 at tcp:10.2X.4X.132:6644) (self) next_index=497874 match_index=498260 > 3d6c (3d6c at tcp:10.2X.4X.68:6644) next_index=498261 match_index=498260 > 4ef0 (4ef0 at tcp:10.2X.4X.4:6644) next_index=498261 match_index=498260 > > As I understood the tcp connections from the Neutron (NB) and ovn-controllers (SB) to OVN Central are established only with the leader: > > #OVN central leader > $ netstat -nap | grep 6642| more > > tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - > tcp 0 0 10.2X.4X.132:6642 10.24.40.17:47278 ESTABLISHED - > tcp 0 0 10.2X.4X.132:6642 10.24.40.76:36240 ESTABLISHED - > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47280 ESTABLISHED - > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43102 ESTABLISHED - > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.75:58890 ESTABLISHED - > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43108 ESTABLISHED - > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47142 ESTABLISHED - > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.71:48808 ESTABLISHED - > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47096 ESTABLISHED - > #OVN follower 2 > > $ netstat -nap | grep 6642 > > tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.76:57256 ESTABLISHED - > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.134:54026 ESTABLISHED - > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.10:34962 ESTABLISHED - > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.6:49238 ESTABLISHED - > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.135:59972 ESTABLISHED - > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.75:40162 ESTABLISHED - > tcp 0 0 10.2X.4X.4:39566 10.2X.4X.132:6642 ESTABLISHED - > #OVN follower 3 > > netstat -nap | grep 6642 > > tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.70:40750 ESTABLISHED - > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.11:49718 ESTABLISHED - > tcp 0 0 10.2X.4X.68:45632 10.2X.4X.132:6642 ESTABLISHED - > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.16:44816 ESTABLISHED - > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.7:45216 ESTABLISHED > > The issue that we are experiencing is on the neutron-server that disconnects when there is the ovn leader change (due snapshot like each 8 
minutes) and reconnects to the next leader. It breaks the Openstack API when someone is trying to create a VM at the same time. > First, is my current configuration correct? Should the leader change and break the neutron side? Or is there some missing configuration? > I was wondering if it is possible to use a LB with VIP and this VIP balance the connections to the ovn central members and I would reconfigure on the neutron side only with the VIP and also on the ovs-controllers. Does that make sense? > > Thank you. > > Regards, > > Tiago Pires -- Mohammed Naser VEXXHOST, Inc. From johnsomor at gmail.com Fri May 6 21:41:01 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 6 May 2022 14:41:01 -0700 Subject: [ops][octavia][neutron] Distributed Virtual Routing: Floating IPs attached to virtual IP addresses are assigned on network nodes In-Reply-To: <81936a3fc1eba4c9c32e897198c235c483796831.camel@mittwald.de> References: <81936a3fc1eba4c9c32e897198c235c483796831.camel@mittwald.de> Message-ID: Hi Jan, If I understand correctly, the issue you are facing is with ovs-dvr, the floating IPs are implemented in the SNAT namespace on the network node, causing a congestion point for high traffic. You are looking for a way to implement floating IPs that are distributed across your deployment and not concentrated in the network nodes. Is that correct? If so, I think what you are looking for is distributed floating IPs with OVN[1]. I will let the OVN experts confirm this. Michael [1] https://docs.openstack.org/networking-ovn/latest/admin/refarch/refarch.html#distributed-floating-ips-dvr On Fri, May 6, 2022 at 6:19 AM Jan Horstmann wrote: > > Hello! > > When we initially deployed openstack we thought that using distributed > virtual routing with ml2/ovs-dvr would give us the ability to > automatically scale our network capacity with the number of > hypervisors we use. Our main workload are kubernetes clusters which > receive ingress traffic via octavia loadbancers (configured to use the > amphora driver). So the idea was that we could increase the number of > loadbalancers to spread the traffic over more and more compute nodes. > This would imply that any volume based (distributed) denial of service > attack on a single loadbalancer would just saturate a single compute > node and leave the rest of the system functional. > > We have recently learned that, no matter the loadbalancer topology, a > virtual IP is created for it by octavia. This, and probably all > virtual IPs in openstack, are reserved by an unbound and disabled port > and then set as an allowed address pair on any server's port which > might hold it. > Up to this point our initial assumption should be true, as the server > actually holding the virtual IP would reply to any ARP requests and > thus any traffic should be routed to the node with the virtual machine > of the octavia amphora. > However, we are using our main provider network as a floating IP pool > and do not allow direct port creation. When a floating IP is attached > to the virtual IP it is assigned to the SNAT router namespace on a > network node. Naturally in high traffic or even (distributed) denial > of service situations the network node might become a bottleneck. A > situation we thought we could avoid by using distributed virtual > routing in the first place. > > This leads me to a rabbit hole of questions I hope someone might be > able to help with: > > Is the assessment above correct or am I missing something? 
> > If it is correct, do we have any other options than vertically scaling > our network nodes to handle traffic? Do other ml2 drivers (e.g. OVN) > handle this scenario differently? > > If our network nodes need to handle most of the traffic anyway, do we > still have any advantage using distributed virtual routing? Especially > when considering the increased complexity compared to a non- > distributed setup? > > Has anyone ever explored non-virtual IP based high availability > options like e.g. BGP multipathing in a distributed virtual routing > scenario? > > Any input is highly appreciated. > Regards, > Jan > From tiagohp at gmail.com Fri May 6 22:04:10 2022 From: tiagohp at gmail.com (Tiago Pires) Date: Fri, 6 May 2022 19:04:10 -0300 Subject: Neutron + OVN raft cluster In-Reply-To: References: Message-ID: Hi Mohammed, It seems a little bit like our issue. Thank you. Tiago Pires Em sex., 6 de mai. de 2022 ?s 18:21, Mohammed Naser escreveu: > Hi Tiago, > > Have you seen this? > > https://bugs.launchpad.net/nova/+bug/1969592 > > Mohammed > > On Fri, May 6, 2022 at 3:56 PM Tiago Pires wrote: > > > > Hi all, > > > > I was checking the mail list history and this thread > https://mail.openvswitch.org/pipermail/ovs-discuss/2018-March/046438.html > caught my attention about raft ovsdb clustering. > > In my setup (OVN 20.03 and Openstack Ussuri) on the ovn-controller we > have configured the > ovn-remote="tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642" > with the 3 OVN central member that they are in cluster mode. > > Also on the neutron ML2 side: > > [ovn] > > ovn_native_dhcp = True > > ovn_nb_connection = > tcp:10.2X.4X.4:6641,tcp:10.2X.4X.68:6641,tcp:10.2X.4X.132:6641 > > ovn_sb_connection = > tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642 > > > > We are experiencing an issue with Neutron when the OVN leader decide to > take a snapshot and by design another member became leader(more less every > 8 minutes): > > 2022-05-05T16:57:42.135Z|17401|raft|INFO|Transferring leadership to > write a snapshot. 
> > > > ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound > > 4a03 > > Name: OVN_Southbound > > Cluster ID: ca74 (ca744caf-40cd-4751-a2f2-86e35ad6541c) > > Server ID: 4a03 (4a0328dc-e9a4-495e-a4f1-0a0340fc6d19) > > Address: tcp:10.2X.4X.132:6644 > > Status: cluster member > > Role: leader > > Term: 1912 > > Leader: self > > Vote: self > > > > Election timer: 10000 > > Log: [497643, 498261] > > Entries not yet committed: 0 > > Entries not yet applied: 0 > > Connections: ->3d6c ->4ef0 <-3d6c <-4ef0 > > Servers: > > 4a03 (4a03 at tcp:10.2X.4X.132:6644) (self) next_index=497874 > match_index=498260 > > 3d6c (3d6c at tcp:10.2X.4X.68:6644) next_index=498261 > match_index=498260 > > 4ef0 (4ef0 at tcp:10.2X.4X.4:6644) next_index=498261 > match_index=498260 > > > > As I understood the tcp connections from the Neutron (NB) and > ovn-controllers (SB) to OVN Central are established only with the leader: > > > > #OVN central leader > > $ netstat -nap | grep 6642| more > > > > tcp 0 0 0.0.0.0:6642 0.0.0.0:* > LISTEN - > > tcp 0 0 10.2X.4X.132:6642 10.24.40.17:47278 > ESTABLISHED - > > tcp 0 0 10.2X.4X.132:6642 10.24.40.76:36240 > ESTABLISHED - > > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47280 > ESTABLISHED - > > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43102 > ESTABLISHED - > > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.75:58890 > ESTABLISHED - > > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43108 > ESTABLISHED - > > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47142 > ESTABLISHED - > > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.71:48808 > ESTABLISHED - > > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47096 > ESTABLISHED - > > #OVN follower 2 > > > > $ netstat -nap | grep 6642 > > > > tcp 0 0 0.0.0.0:6642 0.0.0.0:* > LISTEN - > > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.76:57256 > ESTABLISHED - > > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.134:54026 > ESTABLISHED - > > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.10:34962 > ESTABLISHED - > > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.6:49238 > ESTABLISHED - > > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.135:59972 > ESTABLISHED - > > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.75:40162 > ESTABLISHED - > > tcp 0 0 10.2X.4X.4:39566 10.2X.4X.132:6642 > ESTABLISHED - > > #OVN follower 3 > > > > netstat -nap | grep 6642 > > > > tcp 0 0 0.0.0.0:6642 0.0.0.0:* > LISTEN - > > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.70:40750 > ESTABLISHED - > > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.11:49718 > ESTABLISHED - > > tcp 0 0 10.2X.4X.68:45632 10.2X.4X.132:6642 > ESTABLISHED - > > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.16:44816 > ESTABLISHED - > > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.7:45216 > ESTABLISHED > > > > The issue that we are experiencing is on the neutron-server that > disconnects when there is the ovn leader change (due snapshot like each 8 > minutes) and reconnects to the next leader. It breaks the Openstack API > when someone is trying to create a VM at the same time. > > First, is my current configuration correct? Should the leader change and > break the neutron side? Or is there some missing configuration? > > I was wondering if it is possible to use a LB with VIP and this VIP > balance the connections to the ovn central members and I would reconfigure > on the neutron side only with the VIP and also on the ovs-controllers. Does > that make sense? > > > > Thank you. > > > > Regards, > > > > Tiago Pires > > > > -- > Mohammed Naser > VEXXHOST, Inc. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsneddon at redhat.com Sat May 7 01:09:48 2022 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 6 May 2022 18:09:48 -0700 Subject: [ops][octavia][neutron] Distributed Virtual Routing: Floating IPs attached to virtual IP addresses are assigned on network nodes In-Reply-To: References: <81936a3fc1eba4c9c32e897198c235c483796831.camel@mittwald.de> Message-ID: Distributed floating IPs with OVN will bypass the bottleneck imposed by centralized NAT, but by itself that won?t allow you to scale beyond a single Amphora instance for any given floating IP. I have been working with a team to develop BGP exporting of floating IP addresses with OVN using FRR running in a container. Our current implementation exports all floating IPs and provider VLAN IPs into BGP from each compute node in a DVR setup, which allows migration of floating IPs between compute nodes in a routed environment even if they do not share any layer 2 networks. This will allow you to route traffic to multiple VMs (which can be Amphora load-balancers) using the same floating IP with IP Anycast, and the network will use route traffic to the nearest instance or load-balance with ECMP if there are multiple instances with the same number of hops in the path. This should work with allowed-address-pairs. I will be presenting this solution at the OpenInfra Summit in Berlin along with Luis Tom?s, and you can try out the ovn-bgp-agent with the code here: https://github.com/luis5tb/bgp-agent And he documents the design and testing environment in his blog: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ Currently ingress and egress traffic is routed via the kernel, so this setup doesn?t yet work with DPDK, however you can scale using multiple load balancers to compensate for that limitation. I am hopeful we will overcome that limitation before too long. The BGPd can also receive routes from BGP peers, but there is no need to receive all routes, one or more default routes would be sufficient to use ECMP and BFD for northbound traffic. On the kubernetes side, I worked with the OpenShift engineers to add BGP support to MetalLB using FRR, so native load balancers in kubernetes can now export endpoint IPs into BGP routing fabric as well using a similar approach. I know there has been a massive amount of interest in this approach in the last year, so I expect this to become a very popular architecture in the near future. -Dan Sneddon On Fri, May 6, 2022 at 2:45 PM Michael Johnson wrote: > Hi Jan, > > If I understand correctly, the issue you are facing is with ovs-dvr, > the floating IPs are implemented in the SNAT namespace on the network > node, causing a congestion point for high traffic. You are looking for > a way to implement floating IPs that are distributed across your > deployment and not concentrated in the network nodes. > > Is that correct? > > If so, I think what you are looking for is distributed floating IPs > with OVN[1]. I will let the OVN experts confirm this. > > Michael > > [1] > https://docs.openstack.org/networking-ovn/latest/admin/refarch/refarch.html#distributed-floating-ips-dvr > > On Fri, May 6, 2022 at 6:19 AM Jan Horstmann > wrote: > > > > Hello! > > > > When we initially deployed openstack we thought that using distributed > > virtual routing with ml2/ovs-dvr would give us the ability to > > automatically scale our network capacity with the number of > > hypervisors we use. 
Our main workload are kubernetes clusters which > > receive ingress traffic via octavia loadbancers (configured to use the > > amphora driver). So the idea was that we could increase the number of > > loadbalancers to spread the traffic over more and more compute nodes. > > This would imply that any volume based (distributed) denial of service > > attack on a single loadbalancer would just saturate a single compute > > node and leave the rest of the system functional. > > > > We have recently learned that, no matter the loadbalancer topology, a > > virtual IP is created for it by octavia. This, and probably all > > virtual IPs in openstack, are reserved by an unbound and disabled port > > and then set as an allowed address pair on any server's port which > > might hold it. > > Up to this point our initial assumption should be true, as the server > > actually holding the virtual IP would reply to any ARP requests and > > thus any traffic should be routed to the node with the virtual machine > > of the octavia amphora. > > However, we are using our main provider network as a floating IP pool > > and do not allow direct port creation. When a floating IP is attached > > to the virtual IP it is assigned to the SNAT router namespace on a > > network node. Naturally in high traffic or even (distributed) denial > > of service situations the network node might become a bottleneck. A > > situation we thought we could avoid by using distributed virtual > > routing in the first place. > > > > This leads me to a rabbit hole of questions I hope someone might be > > able to help with: > > > > Is the assessment above correct or am I missing something? > > > > If it is correct, do we have any other options than vertically scaling > > our network nodes to handle traffic? Do other ml2 drivers (e.g. OVN) > > handle this scenario differently? > > > > If our network nodes need to handle most of the traffic anyway, do we > > still have any advantage using distributed virtual routing? Especially > > when considering the increased complexity compared to a non- > > distributed setup? > > > > Has anyone ever explored non-virtual IP based high availability > > options like e.g. BGP multipathing in a distributed virtual routing > > scenario? > > > > Any input is highly appreciated. > > Regards, > > Jan > > > > -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Sat May 7 01:20:55 2022 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 6 May 2022 18:20:55 -0700 Subject: [neutron] Static route not added in namespace using DVR on Wallaby In-Reply-To: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> References: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> Message-ID: On Fri, May 6, 2022 at 11:23 AM J-P Methot wrote: > Hi, > > We're in this situation where we are going to move some instances from one > openstack cluster to another. After this process, we want our instances on > the new openstack cluster to keep the same floating IPs but also to be able > to communicate with some instances that are in the same public IP range on > the first cluster. > > To accomplish this, we want to add static routes like 'X.X.X.X/32 via > Y.Y.Y.Y'. However, we're using DVR and when we add the static routes, they > do not show up anywhere in any of the namespaces. 
Is there a different way > to add static routes on DVR instead of using openstack router add route ? > > -- > Jean-Philippe M?thot > Senior Openstack system administrator > Administrateur syst?me Openstack s?nior > PlanetHoster inc. > > I don?t think that there is an automatic way to do what you are trying to do using static routes. One way to approach this would be to use dynamic routing to advertise the availability of the /32 route. I just described an approach to this using BGP in another thread on this list with the subject ?[ops][octavia][neutron] Distributed Virtual Routing: Floating IPs attached to virtual IP addresses are assigned on network nodes?. There are several BGP implementations which support this functionality, I believe Neutron Dynamic Routing for OVS works for Wallaby, Juniper Contrail (and probably the Open Source version Tungsten Fabric), and others. The implementation I describe is for OVN, but the same approach could be adapted to other Neutron drivers. -Dan Sneddon -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Mon May 9 04:01:23 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Mon, 9 May 2022 13:01:23 +0900 Subject: [puppet] Gate blocker: Installation of the systemtap package fails In-Reply-To: References: Message-ID: The new systemtap packages were released and stable/yoga should be unblocked now. I've reverted the temporal workaround and now integration jobs are passing without it. On Fri, May 6, 2022 at 11:41 AM Takashi Kajinami wrote: > Because the new systemtap packages have not yet been released and > our master gate job has been broken for more than a week, I merged > the temporal workaround[1]. > > I'd keep that patch master only atm, but it takes additional days then > I'll backport that to stable/yoga to unblock stable/yoga CI. > > [1] > https://review.opendev.org/c/openstack/puppet-openstack-integration/+/840188 > > > On Wed, May 4, 2022 at 12:16 AM Takashi Kajinami > wrote: > >> Hello, >> >> We are currently facing consistent failure in centos stream 9 integration >> jobs. >> which is caused by the new dyninst package. >> >> I've already reported the issue in bz[1] and we are currently waiting for >> the updated >> systemtap package. Please avoid rechecking until the package is released >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=2079892 >> >> If the fix is not released for additional days then we can merge the >> temporal workaround >> to unblock our jobs. >> >> https://review.opendev.org/c/openstack/puppet-openstack-integration/+/840188 >> >> Last week we also saw centos stream 8 jobs consistently failing but it >> seems these jobs were >> already fixed by the new python3-qt5 package. >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=2079895 >> >> Thank you, >> Takashi >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arne.wiebalck at cern.ch Mon May 9 06:23:35 2022 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 9 May 2022 08:23:35 +0200 Subject: [baremetal-sig][ironic] Tue May 10, 2022, 2pm UTC: "Looking back and forward: The Xena and Zed Cycles for Ironic" Message-ID: Dear all, The Bare Metal SIG will meet tomorrow Tue May 10, 2022, 2pm UTC featuring a topic of the day presentation by Ironic PTL Iury Gregory: "Looking back and forward: The Xena and Zed Cycles for Ironic" Everyone is welcome, all details on how to join can be found on the SIG's etherpad: https://etherpad.opendev.org/p/bare-metal-sig Hope to see you there! Arne -- Arne Wiebalck CERN IT From eblock at nde.ag Mon May 9 07:55:08 2022 From: eblock at nde.ag (Eugen Block) Date: Mon, 09 May 2022 07:55:08 +0000 Subject: Data Center Survival in case of Disaster / HW Failure in DC In-Reply-To: References: <20220505123751.Horde.BvzneKkaM8LoS3CA89DFaJx@webmail.nde.ag> Message-ID: <20220509075508.Horde.ufIgyDdiN2rf3u8086b-Em_@webmail.nde.ag> Hi, >> first, I wouldn't run VMs on control nodes, that way you mix roles >> (control and compute) and in case that one control node fails the VMs >> are not available. That would not be the case if the control node is >> only a control node and is also part of a highly available control >> plane (like yours appears to be). Depending on how your control plane >> is defined, the failure of one control node should be tolerable. >> There has been some work on making compute nodes highly available but >> I don't know the current status. > > > could you point out the links/docs where I can refer for a proper setup. you could check out the basic Openstack HA docs [2] which are outdated but can help to get a better understanding. It also contains a short section about HA compute nodes [3] which basically points to different sources, maybe check out [4], it also covers some basic aspects and refers to some possible solutions. But as I already wrote, that information is old, you'll need to follow up on Masakari or one of the other solutions presented there to see what the current status is. Because the HA docs [2] are not really detailed we had to collect most of the required information ourselves and I wrote some of it down in a blog post [5]. >> As for ceph it should be configured properly to sustain the loss of a >> defined number of nodes or disks, I don't know your requirements. If >> your current cluster has "only" 3 nodes you probably run replicated >> pools with size 3 (I hope) with min_size 2 and failure-domain host. > > you mean 3 OSD s in single compute node ? I can follow this way is it the > best way to do so ? No, I don't talk about compute nodes here but ceph. You only mention that you have three ceph nodes, we don't more so it's difficult to comment. >> If a compute node fails and is >> unresponsive you'll probably need some DB tweaking to revive VMs on a >> different node. >> > Don't know much about this, any reference is welcome. I don't have a reference but only my blog post again [6] which contains an example, but I advise against tweaking the database if you don't know what you're doing just because a blog post says so. ;-) There are so many aspects to consider, maybe you should consider professional consulting? 
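One addition on the "DB tweaking" point: the API-level alternative to editing the database is to mark the dead compute service as forced-down and then evacuate its instances, which rebuilds them on another host. A minimal sketch, with host and instance names as placeholders and assuming the disks live on shared or Ceph-backed storage:

  # tell Nova the compute service on the failed host is really down
  openstack compute service set --down <dead-host> nova-compute
  # rebuild an instance from that host somewhere else
  nova evacuate <instance-uuid> <target-host>
  # once the host is repaired and back
  openstack compute service set --up <dead-host> nova-compute

This is what tools like Masakari automate; done by hand it still avoids touching the database directly.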
Regards, Eugen [2] https://docs.openstack.org/ha-guide/intro-ha.html [3] https://docs.openstack.org/ha-guide/compute-node-ha.html [4] http://aspiers.github.io/openstack-summit-2016-austin-compute-ha/#/agenda [5] https://heiterbiswolkig.blogs.nde.ag/2020/08/21/openstack-ha-upgrade-part1/ [6] https://heiterbiswolkig.blogs.nde.ag/2020/09/25/openstack-ha-upgrade-part-v/ Zitat von KK CHN : > Thanks Eugen . > I fully agree with not running VMs on Control nodes. When we rolled out > the Controller resources we couldn't spare out only as a controller, Becoz > of the utilization of the controller host machines resources, so we decided > to use them as compute nodes also. > > On Thu, May 5, 2022 at 6:09 PM Eugen Block wrote: > >> Hi, >> >> first, I wouldn't run VMs on control nodes, that way you mix roles >> (control and compute) and in case that one control node fails the VMs >> are not available. That would not be the case if the control node is >> only a control node and is also part of a highly available control >> plane (like yours appears to be). Depending on how your control plane >> is defined, the failure of one control node should be tolerable. >> There has been some work on making compute nodes highly available but >> I don't know the current status. > > > could you point out the links/docs where I can refer for a proper setup. > > >> But in case a compute node fails but >> nova is still responsive a live or cold migration could still be >> possible to evacuate that host. > > > >> If a compute node fails and is >> unresponsive you'll probably need some DB tweaking to revive VMs on a >> different node. >> > Don't know much about this, any reference is welcome. > > >> So you should have some spare resources to be able to recover from a >> compute node failure. >> As for ceph it should be configured properly to sustain the loss of a >> defined number of nodes or disks, I don't know your requirements. If >> your current cluster has "only" 3 nodes you probably run replicated >> pools with size 3 (I hope) with min_size 2 and failure-domain host. > > you mean 3 OSD s in single compute node ? I can follow this way is it the > best way to do so ? > >> >> > Any reference to this best ceph deployment model which do the best fault > tolerance, kindly share. > > >> You could sustain one node failure without clients noticing it, a >> second node would cause the cluster to pause. Also you don't have the >> possibility to recover from a node failure until it is up again, >> meaning the degraded PGs can't be recovered on a different node. So >> this also depends on your actual resiliency requirements. If you have >> a second site you could use rbd mirroring [1] to sync all rbd images >> between sites. > > > We have a connectivity link of only 1Gbps between DC and DR and DR is 300 > miles away from DC. And the syncing enhancement, how can we achieve this ? > Because our HDD writing speeds are too limited, maybe 80 Mbps to 100 Mbps > .. SSDs are not available for all compute host machines. > Each VM has 800 GB to 1 TB Disk size. > > Is there any best practice for syncing enhancement mechanisms for HDDs in > DC hosts (with a connectivity of 1 Gbps between DR sites with HDD hosts . > ?) > > >> In case the primary site goes down entirely you could >> switch to the primary site by promoting the rbd images. >> So you see there is plenty of information to cover and careful >> planning is required. >> >> Regards, >> Eugen >> > > Thanks again for sharing your thoughts. 
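To make the rbd mirroring suggestion a little more concrete, the rough shape of it is sketched below. The pool and image names are only examples, the two clusters first have to be connected as peers (rbd mirror pool peer bootstrap), and whether journal- or snapshot-based mirroring is realistic depends on the RPO and on the 1 Gbps link mentioned above:

  # on both sites, per pool
  rbd mirror pool enable vms image
  # per image; snapshot-based mirroring also needs a mirror snapshot schedule
  rbd mirror image enable vms/volume-XYZ snapshot
  # during a failover, on the DR cluster
  rbd mirror image promote --force vms/volume-XYZ

The promote step is the "switching sites" Eugen describes; once the original site returns, its copies have to be demoted and resynced before mirroring can resume in the original direction.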
> >> >> [1] https://docs.ceph.com/en/latest/rbd/rbd-mirroring/ >> >> Zitat von KK CHN : >> >> > List, >> > >> > We are having an old cloud setup with OpenStack Ussuri usng Debian OS, >> > (Qemu KVM ). I know its very old and we can't upgrade to to new versions >> > right now. >> > >> > The Deployment is as follows. >> > >> > A. 3 Controller in (cum compute nodes . VMs are running on controllers >> > too..) in HA mode. >> > >> > B. 6 separate Compute nodes >> > >> > C. 3 separate Storage node with Ceph RBD >> > >> > Question is >> > >> > 1. In case of any Sudden Hardware failure of one or more controller >> node >> > OR Compute node OR Storage Node what will be the immediate redundant >> > recovery setup need to be employed ? >> > >> > 2. In case H/W failure our recovery need to as soon as possible. For >> > example less than30 Minutes after the first failure occurs. >> > >> > 3. Is there setup options like a hot standby or similar setups or what >> we >> > need to employ ? >> > >> > 4. To meet all RTO (< 30 Minutes down time ) and RPO(from the exact >> point >> > of crash all applications and data must be consistent) . >> > >> > 5. Please share your thoughts for reliable crash/fault resistance >> > configuration options in DC. >> > >> > >> > We have a remote DR setup right now in a remote location. Also I would >> > like to know if there is a recommended way to make the remote DR site >> > Automatically up and run ? OR How to automate the service from DR site >> > to meet exact RTO and RPO >> > >> > Any thoughts most welcom. >> > >> > Regards, >> > Krish >> >> >> >> >> From derekokeeffe85 at yahoo.ie Mon May 9 08:16:16 2022 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Mon, 9 May 2022 08:16:16 +0000 (UTC) Subject: Glance browser error References: <1810279729.1482300.1652084176734.ref@mail.yahoo.com> Message-ID: <1810279729.1482300.1652084176734@mail.yahoo.com> Hi all, Second mail about this issue. I'm getting the following error in the chrome console: Refused to connect to 'https://10.xx.xx.xxx:9292/v2/images/f7df36f3-33d8-49b6-95b6-b16e95bc3225/file' because it violates the following Content Security Policy directive: "default-src 'self'". Note that 'connect-src' was not explicitly set, so 'default-src' is used as a fallback I can't upload an image through horizon but I can do it from the cli with no issues. I can't find anything online to help with the issue so any input would be greatly appreciated. Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Mon May 9 08:38:51 2022 From: strigazi at gmail.com (Spyros Trigazis) Date: Mon, 9 May 2022 10:38:51 +0200 Subject: [magnum] Proposing Mohammed Naser for core-reviewer Message-ID: Dear all, I would like to nominate Mohammed Naser for core reviewer in the magnum project. Mohammed has a proven knowledge of the codebase with frequent contributions and reviews. Mohammed will be of great help in fast and reliable reviews. Cheers, Spyros -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noonedeadpunk at gmail.com Mon May 9 09:11:13 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Mon, 9 May 2022 11:11:13 +0200 Subject: Glance browser error In-Reply-To: <1810279729.1482300.1652084176734@mail.yahoo.com> References: <1810279729.1482300.1652084176734.ref@mail.yahoo.com> <1810279729.1482300.1652084176734@mail.yahoo.com> Message-ID: Hi, Please see a reply to your first email:) https://lists.openstack.org/pipermail/openstack-discuss/2022-May/028441.html ??, 9 ??? 2022 ?., 10:18 Derek O keeffe : > Hi all, > > Second mail about this issue. I'm getting the following error in the > chrome console: > > Refused to connect to ' > https://10.xx.xx.xxx:9292/v2/images/f7df36f3-33d8-49b6-95b6-b16e95bc3225/file' > because it violates the following Content Security Policy directive: > "default-src 'self'". Note that 'connect-src' was not explicitly set, so > 'default-src' is used as a fallback > > I can't upload an image through horizon but I can do it from the cli with > no issues. I can't find anything online to help with the issue so any input > would be greatly appreciated. > > Regards, > Derek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon May 9 09:30:15 2022 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 May 2022 11:30:15 +0200 Subject: [largescale-sig] Next meeting: May 11th, 15utc Message-ID: Hi everyone, The Large Scale SIG will be meeting this Wednesday in #openstack-operators on OFTC IRC, at 15UTC. You can check how that time translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20220511T15 Feel free to add topics to the agenda: https://etherpad.openstack.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From jonathan.rosser at rd.bbc.co.uk Mon May 9 09:37:40 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 9 May 2022 10:37:40 +0100 Subject: Glance browser error In-Reply-To: <1810279729.1482300.1652084176734@mail.yahoo.com> References: <1810279729.1482300.1652084176734.ref@mail.yahoo.com> <1810279729.1482300.1652084176734@mail.yahoo.com> Message-ID: Hi Derek, You can also join the IRC channel #openstack-ansible https://docs.openstack.org/contributors/common/irc.html if you'd like to chat real-time about any of this. Regards, Jonathan. On 09/05/2022 09:16, Derek O keeffe wrote: > Hi all, > > Second mail about this issue. I'm getting the following error in the > chrome console: > > Refused to connect to > 'https://10.xx.xx.xxx:9292/v2/images/f7df36f3-33d8-49b6-95b6-b16e95bc3225/file' > because it violates the following Content Security Policy directive: > "default-src 'self'". Note that 'connect-src' was not explicitly set, > so 'default-src' is used as a fallback > > I can't upload an image through horizon but I can do it from the cli > with no issues. I can't find anything online to help with the issue so > any input would be greatly appreciated. > > Regards, > Derek > -------------- next part -------------- An HTML attachment was scrubbed... 
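On the error itself: a Content-Security-Policy violation in the browser console usually points at Horizon being set up for direct image uploads, where the browser itself talks to the Glance endpoint on port 9292. Two things then have to allow that call: whatever sets the CSP header in front of Horizon needs a connect-src entry for the Glance URL, and Glance needs a CORS allowance for the dashboard origin. A sketch with placeholder hostnames, assuming Apache with mod_headers is where the header is added:

  # glance-api.conf
  [cors]
  allowed_origin = https://dashboard.example.com

  # Apache vhost serving Horizon, only if the CSP header is set there
  Header set Content-Security-Policy "default-src 'self'; connect-src 'self' https://10.xx.xx.xxx:9292"

Setting HORIZON_IMAGES_UPLOAD_MODE back to 'legacy' in Horizon's local_settings, so uploads are proxied through Horizon instead of going straight to Glance, is the other way around it.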
URL: From J.Horstmann at mittwald.de Mon May 9 09:46:58 2022 From: J.Horstmann at mittwald.de (Jan Horstmann) Date: Mon, 9 May 2022 09:46:58 +0000 Subject: [ops][octavia][neutron] Distributed Virtual Routing: Floating IPs attached to virtual IP addresses are assigned on network nodes In-Reply-To: References: <81936a3fc1eba4c9c32e897198c235c483796831.camel@mittwald.de> Message-ID: <6539de497c4da86f2976a07ce41a78d7e74a1441.camel@mittwald.de> Thanks Michael and Dan, for confirming that this would not be an issue with ovn. On Fri, 2022-05-06 at 18:09 -0700, Dan Sneddon wrote: > Distributed floating IPs with OVN will bypass the bottleneck imposed > by centralized NAT, but by itself that won?t allow you to scale > beyond a single Amphora instance for any given floating IP. > For our use case we can probably shard our traffic over multiple loadbalancers. So we can either move to a centralized setup and invest in some capable network nodes or move to ovn and keep a distributed setup. With ovn we could probably also switch to ovn-octavia-provider. I still have to investigate this option to understand the traffic flow in that case. The work you outlined below sounds really interesting and will resolve a lot of scalability problems. I am still working though the information you provided and will definitely keep a close eye on this. > I have been working with a team to develop BGP exporting of floating > IP addresses with OVN using FRR running in a container. Our current > implementation exports all floating IPs and provider VLAN IPs into > BGP from each compute node in a DVR setup, which allows migration of > floating IPs between compute nodes in a routed environment even if > they do not share any layer 2 networks. > > This will allow you to route traffic to multiple VMs (which can be > Amphora load-balancers) using the same floating IP with IP Anycast, > and the network will use route traffic to the nearest instance or > load-balance with ECMP if there are multiple instances with the same > number of hops in the path. This should work with allowed-address- > pairs. > > I will be presenting this solution at the OpenInfra Summit in Berlin > along with Luis Tom?s, and you can try out the ovn-bgp-agent with > the code here: > > https://github.com/luis5tb/bgp-agent > > And he documents the design and testing environment in his blog: > > https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ > > Currently ingress and egress traffic is routed via the kernel, so > this setup doesn?t yet work with DPDK, however you can scale using > multiple load balancers to compensate for that limitation. I am > hopeful we will overcome that limitation before too long. The BGPd > can also receive routes from BGP peers, but there is no need to > receive all routes, one or more default routes would be sufficient > to use ECMP and BFD for northbound traffic. > > On the kubernetes side, I worked with the OpenShift engineers to add > BGP support to MetalLB using FRR, so native load balancers in > kubernetes can now export endpoint IPs into BGP routing fabric as > well using a similar approach. > > I know there has been a massive amount of interest in this approach > in the last year, so I expect this to become a very popular > architecture in the near future. 
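For a feel of what the FRR side of the approach quoted below looks like, a minimal sketch follows. The ASN, peer address and exact redistribution are illustrative only; the real ovn-bgp-agent drives this dynamically by exposing the floating IPs locally (e.g. on a dummy device) and letting FRR redistribute them:

  ! frr.conf, illustrative values only
  router bgp 64999
   neighbor 192.0.2.1 remote-as 65000
   address-family ipv4 unicast
    redistribute connected
    redistribute kernel
   exit-address-family

With several compute nodes announcing the same /32 (one per instance sharing the address), the upstream routers hold equal-cost paths and ECMP spreads the traffic, which is the anycast behaviour described in the quoted text.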
> > -Dan Sneddon > > On Fri, May 6, 2022 at 2:45 PM Michael Johnson > wrote: > > Hi Jan, > > > > If I understand correctly, the issue you are facing is with ovs- > > dvr, > > the floating IPs are implemented in the SNAT namespace on the > > network > > node, causing a congestion point for high traffic. You are looking > > for > > a way to implement floating IPs that are distributed across your > > deployment and not concentrated in the network nodes. > > > > Is that correct? > > > > If so, I think what you are looking for is distributed floating > > IPs > > with OVN[1]. I will let the OVN experts confirm this. > > > > Michael > > > > [1] > > https://docs.openstack.org/networking-ovn/latest/admin/refarch/refarch.html#distributed-floating-ips-dvr > > > > On Fri, May 6, 2022 at 6:19 AM Jan Horstmann > > wrote: > > > > > > Hello! > > > > > > When we initially deployed openstack we thought that using > > > distributed > > > virtual routing with ml2/ovs-dvr would give us the ability to > > > automatically scale our network capacity with the number of > > > hypervisors we use. Our main workload are kubernetes clusters > > > which > > > receive ingress traffic via octavia loadbancers (configured to > > > use the > > > amphora driver). So the idea was that we could increase the > > > number of > > > loadbalancers to spread the traffic over more and more compute > > > nodes. > > > This would imply that any volume based (distributed) denial of > > > service > > > attack on a single loadbalancer would just saturate a single > > > compute > > > node and leave the rest of the system functional. > > > > > > We have recently learned that, no matter the loadbalancer > > > topology, a > > > virtual IP is created for it by octavia. This, and probably all > > > virtual IPs in openstack, are reserved by an unbound and > > > disabled port > > > and then set as an allowed address pair on any server's port > > > which > > > might hold it. > > > Up to this point our initial assumption should be true, as the > > > server > > > actually holding the virtual IP would reply to any ARP requests > > > and > > > thus any traffic should be routed to the node with the virtual > > > machine > > > of the octavia amphora. > > > However, we are using our main provider network as a floating IP > > > pool > > > and do not allow direct port creation. When a floating IP is > > > attached > > > to the virtual IP it is assigned to the SNAT router namespace on > > > a > > > network node. Naturally in high traffic or even (distributed) > > > denial > > > of service situations the network node might become a > > > bottleneck. A > > > situation we thought we could avoid by using distributed virtual > > > routing in the first place. > > > > > > This leads me to a rabbit hole of questions I hope someone might > > > be > > > able to help with: > > > > > > Is the assessment above correct or am I missing something? > > > > > > If it is correct, do we have any other options than vertically > > > scaling > > > our network nodes to handle traffic? Do other ml2 drivers (e.g. > > > OVN) > > > handle this scenario differently? > > > > > > If our network nodes need to handle most of the traffic anyway, > > > do we > > > still have any advantage using distributed virtual routing? > > > Especially > > > when considering the increased complexity compared to a non- > > > distributed setup? > > > > > > Has anyone ever explored non-virtual IP based high availability > > > options like e.g. 
BGP multipathing in a distributed virtual > > > routing > > > scenario? > > > > > > Any input is highly appreciated. > > > Regards, > > > Jan > > > > > -- Jan Horstmann Systementwickler?|?Infrastruktur _____ ? ? Mittwald CM Service GmbH & Co.?KG K?nigsberger Stra?e 4-6 32339 Espelkamp ? Tel.: 05772 / 293-900 Fax: 05772 / 293-333 ? j.horstmann at mittwald.de https://www.mittwald.de ? Gesch?ftsf?hrer: Robert Meyer, Florian J?rgens ? USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplement?rin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen Informationen zur Datenverarbeitung im Rahmen unserer Gesch?ftst?tigkeit? gem?? Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. From egarciar at redhat.com Mon May 9 09:47:43 2022 From: egarciar at redhat.com (Elvira Garcia Ruiz) Date: Mon, 9 May 2022 11:47:43 +0200 Subject: [neutron] Bug Deputy Report May 03 - 09 Message-ID: Hi, I was the bug deputy last week. Find the summary below: High: ------- - [OVN] Add support for Baremetal provisioning with ML2/OVN https://bugs.launchpad.net/neutron/+bug/1971431 Proposed fix: https://review.opendev.org/c/openstack/neutron/+/840287 - [neutron][api] test_log_deleted_with_corresponding_security_group failing randomly https://bugs.launchpad.net/neutron/+bug/1971569 Unassigned - [FT] Error in "test_virtual_port_host_update" https://bugs.launchpad.net/neutron/+bug/1971672 Unassigned Medium: ----------- - [api] add port_forwarding_id when list floatingip https://bugs.launchpad.net/neutron/+bug/1971646 Fix proposed: https://review.opendev.org/c/openstack/neutron/+/840565 RFE: ------- - [RFE][fwaas][OVN]support l3 firewall for ovn driver https://bugs.launchpad.net/neutron/+bug/1971958 Invalid: --------- - [Neutron][OpenContrail]Trunk ports don't have standard attributes. https://bugs.launchpad.net/neutron/+bug/1971697 Looks like a request for help with an external plugin, I redirected them to the openstack discuss mailing list since I have no idea of how the OpenContrail driver works. Undecided: --------------- - evacuation failure causes POST_FAILURE in nova-ovs-hybrid-plug job https://bugs.launchpad.net/nova/+bug/1971563 I think we might not need to do anything else regarding this bug from the Neutron side, I asked Sean to be sure just in case. Kind regards! Elvira -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Mon May 9 12:54:46 2022 From: rlandy at redhat.com (Ronelle Landy) Date: Mon, 9 May 2022 08:54:46 -0400 Subject: [TripleO] Tear down of Train centOS 7 check jobs and integration line In-Reply-To: References: Message-ID: On Wed, Apr 27, 2022 at 9:00 AM Marios Andreou wrote: > On Wed, Apr 27, 2022 at 3:56 PM Ronelle Landy wrote: > > > > Hello All, > > > > We have been running check/gate testing and integration lines for the > Train release on both CentOS7 and CentOS 8. > > > > Following the work on CentOS 9 and the longevity of the Train release, > we proposed removing the CentOS 7 Train jobs and building/testing changes > to this release on CentOS 8 only. We floated this proposal with some > interested parties and received no objections so we are beginning work on > this tear down. 
> > > > we also discussed this at PTG recently during one of the CI sessions > [1] and there were no objections raised there > > the topic branch for the c7 jobs removal is at [2] > > thanks, marios > > [1] https://etherpad.opendev.org/p/tripleo-zed-ci-load > [2] https://review.opendev.org/q/topic:ooo_c7_teardown > > > We anticipate that removing the CentOS 7 jobs will allow patches to > merge quicker and will free up resources for future work. Please respond if > you have any questions or concerns. > Since we received no concerns or negative feedback, Train CentOS 7 tear down work is now going on. The last current-tripleo promoted hash is: https://trunk.rdoproject.org/centos7-train/current-tripleo/commit.yaml > > > > Thanks, > > Ronelle (for the TripleO CI team) > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From twilson at redhat.com Mon May 9 13:45:18 2022 From: twilson at redhat.com (Terry Wilson) Date: Mon, 9 May 2022 08:45:18 -0500 Subject: Neutron + OVN raft cluster In-Reply-To: References: Message-ID: Sorry, I was on PTO. Jakub is right, w/o using python-ovs 2.17, when ovs breaks the connection for the leadership clients will re-download the entire content of their registered tables. With 2.17, monitor-cond-since/update3 support is added to python-ovs and it should just download the changes since they reconnected. As long as the client code handles reconnections, this reconnecting should not be an issue. It is still possible that there is code that doesn't properly handle reconnections in general, but I'd start with trying ovs 2.17. The disonnections will always happen, but they shouldn't break things. On Fri, May 6, 2022 at 5:13 PM Tiago Pires wrote: > > Hi Mohammed, > > It seems a little bit like our issue. > > Thank you. > > Tiago Pires > > Em sex., 6 de mai. de 2022 ?s 18:21, Mohammed Naser escreveu: >> >> Hi Tiago, >> >> Have you seen this? >> >> https://bugs.launchpad.net/nova/+bug/1969592 >> >> Mohammed >> >> On Fri, May 6, 2022 at 3:56 PM Tiago Pires wrote: >> > >> > Hi all, >> > >> > I was checking the mail list history and this thread https://mail.openvswitch.org/pipermail/ovs-discuss/2018-March/046438.html caught my attention about raft ovsdb clustering. >> > In my setup (OVN 20.03 and Openstack Ussuri) on the ovn-controller we have configured the ovn-remote="tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642" with the 3 OVN central member that they are in cluster mode. >> > Also on the neutron ML2 side: >> > [ovn] >> > ovn_native_dhcp = True >> > ovn_nb_connection = tcp:10.2X.4X.4:6641,tcp:10.2X.4X.68:6641,tcp:10.2X.4X.132:6641 >> > ovn_sb_connection = tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642 >> > >> > We are experiencing an issue with Neutron when the OVN leader decide to take a snapshot and by design another member became leader(more less every 8 minutes): >> > 2022-05-05T16:57:42.135Z|17401|raft|INFO|Transferring leadership to write a snapshot. 
>> > >> > ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound >> > 4a03 >> > Name: OVN_Southbound >> > Cluster ID: ca74 (ca744caf-40cd-4751-a2f2-86e35ad6541c) >> > Server ID: 4a03 (4a0328dc-e9a4-495e-a4f1-0a0340fc6d19) >> > Address: tcp:10.2X.4X.132:6644 >> > Status: cluster member >> > Role: leader >> > Term: 1912 >> > Leader: self >> > Vote: self >> > >> > Election timer: 10000 >> > Log: [497643, 498261] >> > Entries not yet committed: 0 >> > Entries not yet applied: 0 >> > Connections: ->3d6c ->4ef0 <-3d6c <-4ef0 >> > Servers: >> > 4a03 (4a03 at tcp:10.2X.4X.132:6644) (self) next_index=497874 match_index=498260 >> > 3d6c (3d6c at tcp:10.2X.4X.68:6644) next_index=498261 match_index=498260 >> > 4ef0 (4ef0 at tcp:10.2X.4X.4:6644) next_index=498261 match_index=498260 >> > >> > As I understood the tcp connections from the Neutron (NB) and ovn-controllers (SB) to OVN Central are established only with the leader: >> > >> > #OVN central leader >> > $ netstat -nap | grep 6642| more >> > >> > tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - >> > tcp 0 0 10.2X.4X.132:6642 10.24.40.17:47278 ESTABLISHED - >> > tcp 0 0 10.2X.4X.132:6642 10.24.40.76:36240 ESTABLISHED - >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47280 ESTABLISHED - >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43102 ESTABLISHED - >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.75:58890 ESTABLISHED - >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43108 ESTABLISHED - >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47142 ESTABLISHED - >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.71:48808 ESTABLISHED - >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47096 ESTABLISHED - >> > #OVN follower 2 >> > >> > $ netstat -nap | grep 6642 >> > >> > tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.76:57256 ESTABLISHED - >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.134:54026 ESTABLISHED - >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.10:34962 ESTABLISHED - >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.6:49238 ESTABLISHED - >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.135:59972 ESTABLISHED - >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.75:40162 ESTABLISHED - >> > tcp 0 0 10.2X.4X.4:39566 10.2X.4X.132:6642 ESTABLISHED - >> > #OVN follower 3 >> > >> > netstat -nap | grep 6642 >> > >> > tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.70:40750 ESTABLISHED - >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.11:49718 ESTABLISHED - >> > tcp 0 0 10.2X.4X.68:45632 10.2X.4X.132:6642 ESTABLISHED - >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.16:44816 ESTABLISHED - >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.7:45216 ESTABLISHED >> > >> > The issue that we are experiencing is on the neutron-server that disconnects when there is the ovn leader change (due snapshot like each 8 minutes) and reconnects to the next leader. It breaks the Openstack API when someone is trying to create a VM at the same time. >> > First, is my current configuration correct? Should the leader change and break the neutron side? Or is there some missing configuration? >> > I was wondering if it is possible to use a LB with VIP and this VIP balance the connections to the ovn central members and I would reconfigure on the neutron side only with the VIP and also on the ovs-controllers. Does that make sense? >> > >> > Thank you. >> > >> > Regards, >> > >> > Tiago Pires >> >> >> >> -- >> Mohammed Naser >> VEXXHOST, Inc. 
From anyrude10 at gmail.com Mon May 9 11:51:33 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Mon, 9 May 2022 17:21:33 +0530 Subject: [TripleO] Support of PTP in Openstack Train Message-ID: Hi Team, Is there any Support for PTP in Openstack TripleO ? When I was executing the Overcloud deployment script, passing the PTP yaml, it gave the following option at the starting *service OS::TripleO::Services::Ptp is enabled in /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to continue with deployment [y/N]* even if passing Y, it starts executing for sometime and the gives the following error *Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} Can someone suggest some pointers in order to resolve this issue and move forward? Regards Anirudh Gupta On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta wrote: > Hi Team, > > I have installed Undercloud with Openstack Train Release successfully. > I need to enable PTP service while deploying the overcloud for which I > have included the service in my deployment > > openstack overcloud deploy --templates \ > -n /home/stack/templates/network_data.yaml \ > -r /home/stack/templates/roles_data.yaml \ > -e /home/stack/templates/environment.yaml \ > -e /home/stack/templates/environments/network-isolation.yaml \ > -e /home/stack/templates/environments/network-environment.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml > \ > * -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml > \* > -e /home/stack/templates/ironic-config.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ > -e /home/stack/containers-prepare-parameter.yaml > > But it gives the following error > > 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde | FATAL > | Wait for puppet host configuration to finish | overcloud-controller-0 | > error={"ansible_job_id": "5188783868.37685", "attempts": 3, "changed": > true, "cmd": "set -o pipefail; puppet apply > --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules > --detailed-exitcodes --summarize --color=false > /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t > puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 > 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": > "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", > "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is > deprecated in favor of using 'lookup'. See > https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May 6 > 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 > puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 > is deprecated. 
It should be converted to version 5\n<13>May 6 11:30:08 > puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 > puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May > 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 > puppet-user: Warning: Unknown variable: '::deployment_type'. (file: > /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, > line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not > connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: > Error: Evaluation Error: Error while evaluating a Function Call, Could not > find class ::tripleo::profile::base::time::ptp for > overcloud-controller-0.localdomain (file: > /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node > overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 > puppet-user: Warning: The function 'hiera' is deprecated in favor of using > 'lookup'. See https://puppet.com/docs/puppet/6.14/deprecated_language.html", > "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 > 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' > version 3 is deprecated. It should be converted to version 5", "<13>May 6 > 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 > puppet-user: Warning: Undefined variable '::deploy_config_name'; ", > "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 > 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. > (file: > /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, > line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not > connect to controller: Connection refused", "<13>May 6 11:30:08 > puppet-user: *Error: Evaluation Error: Error while evaluating a Function > Call, Could not find class ::tripleo::profile::base::time::ptp for > overcloud-controller-0.localdomain (file: > /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* > overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} > > > Can someone please help in resolving this issue? > > Regards > Anirudh Gupta > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon May 9 16:50:33 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 9 May 2022 18:50:33 +0200 Subject: [venus] Yoga tarball is still missing Message-ID: Dear Venus team, The Yoga tarball is still missing. [1] As we are releasing Kolla now, we are forced to temporarily exclude Venus from the Yoga release. [1] https://tarballs.opendev.org/openstack/venus/ Kind regards, -yoctozepto From marcin.juszkiewicz at linaro.org Mon May 9 16:55:44 2022 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Mon, 9 May 2022 18:55:44 +0200 Subject: [venus] Yoga tarball is still missing In-Reply-To: References: Message-ID: W dniu 09.05.2022 o?18:50, Rados?aw Piliszek pisze: > The Yoga tarball is still missing. [1] > As we are releasing Kolla now, we are forced to temporarily exclude > Venus from the Yoga release. [1] patch disables Venus in Kolla. Once [2] gets merged and Venus gets Yoga release we can revert and allow to build it in Kolla. 1. https://review.opendev.org/c/openstack/kolla/+/841147 2. 
https://review.opendev.org/c/openstack/releases/+/824394 From sunny at openstack.org Mon May 9 17:27:16 2022 From: sunny at openstack.org (Sunny Cai) Date: Mon, 9 May 2022 10:27:16 -0700 Subject: Call for OpenStack projects virtual updates References: <261c32d7-bf30-4573-943d-811964c0e9e8@Spark> Message-ID: <5d6721a4-7e29-458a-a275-09223b3d5ecb@Spark> Hi everyone, As the Berlin Summit is approaching in June, we are collecting project update recordings from each OpenInfa community to showcase what each project has accomplished in the past year/release. Please let me know if you?re interested in doing a prerecorded video of your project?s most recent updates. We will post all recordings on the OpenInfra Foundation YouTube channel and promote them at the Summit. Recordings will also be posted in the project navigator. We recommend the recordings to be less than 10 minutes long. If you would like to present with slides, here is a slide deck template if you need [1]. If you can submit your project recording to me by?Friday,?May 27, we?d love to promote them at the upcoming Berlin Summit. If you prefer to submit it after the Summit, I?ll send out another reminder on June 15 to collect any reminding recordings. Please let me know if you have any questions. [1]?https://docs.google.com/presentation/d/1SlWayfGc9CYAsKnS43UxVjO8NYr1pR-0Gi4R6bLCjUs/edit?usp=sharing Thanks, Sunny Cai OpenInfra Foundation Marketing & Community -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon May 9 17:49:09 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 09 May 2022 12:49:09 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 12, 2022 at 1500 UTC Message-ID: <180a9f072e7.ea1ec8a190505.8898309317901204614@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for May 12, 2022 at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, May 11, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From jp.methot at planethoster.info Mon May 9 17:49:22 2022 From: jp.methot at planethoster.info (J-P Methot) Date: Mon, 9 May 2022 13:49:22 -0400 Subject: [neutron] Static route not added in namespace using DVR on Wallaby In-Reply-To: References: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> Message-ID: <8f613f51-4df1-7927-347d-c17e01a68055@planethoster.info> I tested this on my own DVR test environment with a random static route and I'm getting the same results as on production. Here's what I get in the logs : 2022-05-09 17:28:50.018 691 INFO neutron.agent.l3.agent [-] Starting processing update 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait time elapsed: 0.001 2022-05-09 17:28:50.019 691 INFO neutron.agent.l3.agent [-] Starting router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait time elapsed: 0.002 2022-05-09 17:28:51.640 691 INFO neutron.agent.l3.agent [-] Finished a router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, update_id 9e112de1-f538-4a41-9526-152aa3937129. Time elapsed: 1.622 As you can see, there was an attempt at updating the router and it did return as successful. However, there was no new route added in the router or floating ip namespace. No error either. 
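A few checks that may help narrow this down, using the router ID from the log above; with DVR the qrouter- namespace should exist both on the compute nodes hosting ports of that router and on the network node, while the snat- namespace lives on the network node only:

  # confirm the route actually landed in the Neutron DB
  openstack router show 41fcd10b-7db5-45d9-b23c-e22f34c45eec -c routes
  # on each compute node and on the network node
  ip netns | grep 41fcd10b-7db5-45d9-b23c-e22f34c45eec
  ip netns exec qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec ip route
  # network node only
  ip netns exec snat-41fcd10b-7db5-45d9-b23c-e22f34c45eec ip route

If the API shows the route but none of the namespaces pick it up, the l3-agent debug logs on those hosts are the next place to look.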
On 5/6/22 14:40, Slawek Kaplonski wrote: > Hi, > > W?dniu pi?, 6 maj 2022 o?14:14:47 -0400 u?ytkownik J-P Methot > napisa?: >> >> Hi, >> >> We're in this situation where we are going to move some instances >> from one openstack cluster to another. After this process, we want >> our instances on the new openstack cluster to keep the same floating >> IPs but also to be able to communicate with some instances that are >> in the same public IP range on the first cluster. >> >> To accomplish this, we want to add static routes like 'X.X.X.X/32 via >> Y.Y.Y.Y'. However, we're using DVR and when we add the static routes, >> they do not show up anywhere in any of the namespaces. Is there a >> different way to add static routes on DVR instead of using openstack >> router add route ? >> > No, there is no other way to add static routes to the dvr router. I > don't have any DVR deployment now to check it but IIRC route should be > added in the qrouter namespace in the compute nodes where router > exists. If it's not there please check logs of the l3-agent on those > hosts, maybe there are some errors there. >> -- >> Jean-Philippe M?thot >> Senior Openstack system administrator >> Administrateur syst?me Openstack s?nior >> PlanetHoster inc. > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Jean-Philippe M?thot Senior Openstack system administrator Administrateur syst?me Openstack s?nior PlanetHoster inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon May 9 17:54:01 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 09 May 2022 12:54:01 -0500 Subject: [all][tc][policy][heat][cinder] Continuing the RBAC PTG discussion + policy pop-up meeting new time In-Reply-To: <1804765e168.c7627ca0318612.1596099052066040022@ghanshyammann.com> References: <1804765e168.c7627ca0318612.1596099052066040022@ghanshyammann.com> Message-ID: <180a9f4e924.1037aeffa90689.3078879274219839648@ghanshyammann.com> Hello Everyone, RBAC next call is tomorrow Tuesday 10th, we will continue the discussion on the heat and service role. Meeting details: https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting -gmann ---- On Wed, 20 Apr 2022 09:35:00 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > As we said in PTG about continuing the RBAC discussion on open questions (currently from Heat and Cinder), we > are scheduling the call on April 26 Tuesday from 14:30-15:00 UTC. And we will use the same time for the policy-popup > team meeting every alternate Tuesday to answer RBAC queries from any projects. > > Meeting Details: > - https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting > > Agenda (you can add for this or coming meetings) > - https://etherpad.opendev.org/p/rbac-zed-ptg#L97 > > -gmann > > From tiagohp at gmail.com Mon May 9 19:39:55 2022 From: tiagohp at gmail.com (Tiago Pires) Date: Mon, 9 May 2022 16:39:55 -0300 Subject: Neutron + OVN raft cluster In-Reply-To: References: Message-ID: Hi all, Thanks Terry. As I'm using an old OVS (2.13)/OVN(20.3) version due openstack release (Ussuri), would it be possible to upgrade only the OVN/OVS without upgrading the whole openstack? I'm only checking options in this case, how are you guys dealing with this in production? Regards, Tiago Pires Em seg., 9 de mai. de 2022 ?s 10:45, Terry Wilson escreveu: > Sorry, I was on PTO. 
Jakub is right, w/o using python-ovs 2.17, when > ovs breaks the connection for the leadership clients will re-download > the entire content of their registered tables. With 2.17, > monitor-cond-since/update3 support is added to python-ovs and it > should just download the changes since they reconnected. As long as > the client code handles reconnections, this reconnecting should not be > an issue. It is still possible that there is code that doesn't > properly handle reconnections in general, but I'd start with trying > ovs 2.17. The disonnections will always happen, but they shouldn't > break things. > > On Fri, May 6, 2022 at 5:13 PM Tiago Pires wrote: > > > > Hi Mohammed, > > > > It seems a little bit like our issue. > > > > Thank you. > > > > Tiago Pires > > > > Em sex., 6 de mai. de 2022 ?s 18:21, Mohammed Naser > escreveu: > >> > >> Hi Tiago, > >> > >> Have you seen this? > >> > >> https://bugs.launchpad.net/nova/+bug/1969592 > >> > >> Mohammed > >> > >> On Fri, May 6, 2022 at 3:56 PM Tiago Pires wrote: > >> > > >> > Hi all, > >> > > >> > I was checking the mail list history and this thread > https://mail.openvswitch.org/pipermail/ovs-discuss/2018-March/046438.html > caught my attention about raft ovsdb clustering. > >> > In my setup (OVN 20.03 and Openstack Ussuri) on the ovn-controller we > have configured the > ovn-remote="tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642" > with the 3 OVN central member that they are in cluster mode. > >> > Also on the neutron ML2 side: > >> > [ovn] > >> > ovn_native_dhcp = True > >> > ovn_nb_connection = > tcp:10.2X.4X.4:6641,tcp:10.2X.4X.68:6641,tcp:10.2X.4X.132:6641 > >> > ovn_sb_connection = > tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642 > >> > > >> > We are experiencing an issue with Neutron when the OVN leader decide > to take a snapshot and by design another member became leader(more less > every 8 minutes): > >> > 2022-05-05T16:57:42.135Z|17401|raft|INFO|Transferring leadership to > write a snapshot. 
> >> > > >> > ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound > >> > 4a03 > >> > Name: OVN_Southbound > >> > Cluster ID: ca74 (ca744caf-40cd-4751-a2f2-86e35ad6541c) > >> > Server ID: 4a03 (4a0328dc-e9a4-495e-a4f1-0a0340fc6d19) > >> > Address: tcp:10.2X.4X.132:6644 > >> > Status: cluster member > >> > Role: leader > >> > Term: 1912 > >> > Leader: self > >> > Vote: self > >> > > >> > Election timer: 10000 > >> > Log: [497643, 498261] > >> > Entries not yet committed: 0 > >> > Entries not yet applied: 0 > >> > Connections: ->3d6c ->4ef0 <-3d6c <-4ef0 > >> > Servers: > >> > 4a03 (4a03 at tcp:10.2X.4X.132:6644) (self) next_index=497874 > match_index=498260 > >> > 3d6c (3d6c at tcp:10.2X.4X.68:6644) next_index=498261 > match_index=498260 > >> > 4ef0 (4ef0 at tcp:10.2X.4X.4:6644) next_index=498261 > match_index=498260 > >> > > >> > As I understood the tcp connections from the Neutron (NB) and > ovn-controllers (SB) to OVN Central are established only with the leader: > >> > > >> > #OVN central leader > >> > $ netstat -nap | grep 6642| more > >> > > >> > tcp 0 0 0.0.0.0:6642 0.0.0.0:* > LISTEN - > >> > tcp 0 0 10.2X.4X.132:6642 10.24.40.17:47278 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.132:6642 10.24.40.76:36240 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47280 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43102 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.75:58890 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43108 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47142 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.71:48808 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47096 > ESTABLISHED - > >> > #OVN follower 2 > >> > > >> > $ netstat -nap | grep 6642 > >> > > >> > tcp 0 0 0.0.0.0:6642 0.0.0.0:* > LISTEN - > >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.76:57256 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.134:54026 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.10:34962 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.6:49238 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.135:59972 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.75:40162 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.4:39566 10.2X.4X.132:6642 > ESTABLISHED - > >> > #OVN follower 3 > >> > > >> > netstat -nap | grep 6642 > >> > > >> > tcp 0 0 0.0.0.0:6642 0.0.0.0:* > LISTEN - > >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.70:40750 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.11:49718 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.68:45632 10.2X.4X.132:6642 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.16:44816 > ESTABLISHED - > >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.7:45216 > ESTABLISHED > >> > > >> > The issue that we are experiencing is on the neutron-server that > disconnects when there is the ovn leader change (due snapshot like each 8 > minutes) and reconnects to the next leader. It breaks the Openstack API > when someone is trying to create a VM at the same time. > >> > First, is my current configuration correct? Should the leader change > and break the neutron side? Or is there some missing configuration? > >> > I was wondering if it is possible to use a LB with VIP and this VIP > balance the connections to the ovn central members and I would reconfigure > on the neutron side only with the VIP and also on the ovs-controllers. Does > that make sense? > >> > > >> > Thank you. 
> >> > > >> > Regards, > >> > > >> > Tiago Pires > >> > >> > >> > >> -- > >> Mohammed Naser > >> VEXXHOST, Inc. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Mon May 9 18:02:16 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Mon, 9 May 2022 23:32:16 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: Hi All Any update on this? Regards Anirudh Gupta On Mon, 9 May, 2022, 17:21 Anirudh Gupta, wrote: > Hi Team, > > Is there any Support for PTP in Openstack TripleO ? > > When I was executing the Overcloud deployment script, passing the PTP > yaml, it gave the following option at the starting > > > *service OS::TripleO::Services::Ptp is enabled in > /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. > Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to > continue with deployment [y/N]* > > even if passing Y, it starts executing for sometime and the gives the > following error > > *Error: Evaluation Error: Error while evaluating a Function Call, Could > not find class ::tripleo::profile::base::time::ptp for > overcloud-controller-0.localdomain (file: > /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], > "stdout": "", "stdout_lines": []} > > > Can someone suggest some pointers in order to resolve this issue and move > forward? > > Regards > Anirudh Gupta > > > > On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta wrote: > >> Hi Team, >> >> I have installed Undercloud with Openstack Train Release successfully. >> I need to enable PTP service while deploying the overcloud for which I >> have included the service in my deployment >> >> openstack overcloud deploy --templates \ >> -n /home/stack/templates/network_data.yaml \ >> -r /home/stack/templates/roles_data.yaml \ >> -e /home/stack/templates/environment.yaml \ >> -e /home/stack/templates/environments/network-isolation.yaml \ >> -e /home/stack/templates/environments/network-environment.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >> \ >> * -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >> \* >> -e /home/stack/templates/ironic-config.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >> -e /home/stack/containers-prepare-parameter.yaml >> >> But it gives the following error >> >> 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde | FATAL >> | Wait for puppet host configuration to finish | overcloud-controller-0 | >> error={"ansible_job_id": "5188783868.37685", "attempts": 3, "changed": >> true, "cmd": "set -o pipefail; puppet apply >> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >> --detailed-exitcodes --summarize --color=false >> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: 
The function 'hiera' is >> deprecated in favor of using 'lookup'. See >> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May 6 >> 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >> Error: Evaluation Error: Error while evaluating a Function Call, Could not >> find class ::tripleo::profile::base::time::ptp for >> overcloud-controller-0.localdomain (file: >> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >> 'lookup'. See >> https://puppet.com/docs/puppet/6.14/deprecated_language.html", "<13>May >> 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 11:30:08 >> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >> is deprecated. It should be converted to version 5", "<13>May 6 11:30:08 >> puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >> (file: >> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >> connect to controller: Connection refused", "<13>May 6 11:30:08 >> puppet-user: *Error: Evaluation Error: Error while evaluating a Function >> Call, Could not find class ::tripleo::profile::base::time::ptp for >> overcloud-controller-0.localdomain (file: >> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >> >> >> Can someone please help in resolving this issue? >> >> Regards >> Anirudh Gupta >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.chlumsky at gmail.com Mon May 9 21:58:24 2022 From: martin.chlumsky at gmail.com (Martin Chlumsky) Date: Mon, 9 May 2022 17:58:24 -0400 Subject: [skyline] SSO support In-Reply-To: <2A0F8DA5-4981-4862-91F1-295045533D1A@99cloud.net> References: <2A0F8DA5-4981-4862-91F1-295045533D1A@99cloud.net> Message-ID: https://bugs.launchpad.net/skyline-apiserver/+bug/1972736 Thank you! Martin On Wed, May 4, 2022 at 10:41 AM ??? wrote: > Hello, Martin > > > > No plans for SSO support yet, you can open a ticket. > > We dev team would have a discussion & plan it according to the tickets. 
> > > > https://bugs.launchpad.net/skyline-apiserver/+bugs > > > > Thanks > > > > Best Regards > > Wenxiang Wu > > > > *From: * 99cloud.net at lists.openstack.org> on behalf of Martin Chlumsky < > martin.chlumsky at gmail.com> > *Date: *Tuesday, May 3, 2022 at 22:07 > *To: * > *Subject: *[skyline] SSO support > > > > Hello, > > We are evaluating Skyline and one major blocker is Single sign-on (SSO) > support (we currently federate Keystone/Horizon with AzureAD). I searched > through the git repositories and couldn't find any relevant mention of SSO > or federation (openidc or saml). > > Are there plans to support this feature (or is it supported and I just > missed it somehow)? > > Thank you, > > Martin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pangliye at inspur.com Tue May 10 05:59:54 2022 From: pangliye at inspur.com (=?utf-8?B?TGl5ZSBQYW5nKOmAhOeri+S4mik=?=) Date: Tue, 10 May 2022 05:59:54 +0000 Subject: =?utf-8?B?562U5aSNOiBbdmVudXNdIFlvZ2EgdGFyYmFsbCBpcyBzdGlsbCBtaXNzaW5n?= In-Reply-To: References: Message-ID: <778dac77fe4149ce90e8618da78ec751@inspur.com> Hello: Because it is not yet complete, so venus will not be released in Yoga release, please help to handle it. Terribly sorry. -----????----- ???: Rados?aw Piliszek ????: 2022?5?10? 0:51 ???: openstack-discuss ??: [venus] Yoga tarball is still missing Dear Venus team, The Yoga tarball is still missing. [1] As we are releasing Kolla now, we are forced to temporarily exclude Venus from the Yoga release. [1] https://tarballs.opendev.org/openstack/venus/ Kind regards, -yoctozepto -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3786 bytes Desc: not available URL: From skaplons at redhat.com Tue May 10 06:38:54 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 10 May 2022 08:38:54 +0200 Subject: [neutron] Static route not added in namespace using DVR on Wallaby In-Reply-To: <8f613f51-4df1-7927-347d-c17e01a68055@planethoster.info> References: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> <8f613f51-4df1-7927-347d-c17e01a68055@planethoster.info> Message-ID: Hi, W dniu pon, 9 maj 2022 o 13:49:22 -0400 u?ytkownik J-P Methot napisa?: > I tested this on my own DVR test environment with a random static > route and I'm getting the same results as on production. Here's what > I get in the logs : > > 2022-05-09 17:28:50.018 691 INFO neutron.agent.l3.agent [-] Starting > processing update 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, > priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait time > elapsed: 0.001 > 2022-05-09 17:28:50.019 691 INFO neutron.agent.l3.agent [-] Starting > router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, > priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait time > elapsed: 0.002 > 2022-05-09 17:28:51.640 691 INFO neutron.agent.l3.agent [-] Finished > a router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, update_id > 9e112de1-f538-4a41-9526-152aa3937129. Time elapsed: 1.622 > > As you can see, there was an attempt at updating the router and it > did return as successful. However, there was no new route added in > the router or floating ip namespace. No error either. > Can You do the same with debug logs enabled? 
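For reference, a minimal sketch of what such a debug run could look like — the config/log paths and service names below are assumptions and differ between deployments:

  # 1. Enable debug logging for the L3 agent (neutron.conf or l3_agent.ini):
  #      [DEFAULT]
  #      debug = True
  systemctl restart neutron-l3-agent

  # 2. Re-add the static route and follow the agent while it processes the update
  openstack router set --route destination=X.X.X.X/32,gateway=Y.Y.Y.Y <router>
  tail -f /var/log/neutron/l3-agent.log

  # 3. Check whether the route shows up in the namespaces: qrouter-<router-id>
  #    on the compute nodes, snat-<router-id> on the node running the
  #    centralized SNAT part of the DVR router (if any)
  ip netns exec qrouter-<router-id> ip route
  ip netns exec snat-<router-id> ip route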
> On 5/6/22 14:40, Slawek Kaplonski wrote: >> Hi, >> >> W dniu pi?, 6 maj 2022 o 14:14:47 -0400 u?ytkownik J-P Methot >> >> napisa?: >>> Hi, >>> >>> We're in this situation where we are going to move some instances >>> from one openstack cluster to another. After this process, we want >>> our instances on the new openstack cluster to keep the same >>> floating IPs but also to be able to communicate with some instances >>> that are in the same public IP range on the first cluster. >>> >>> To accomplish this, we want to add static routes like 'X.X.X.X/32 >>> via Y.Y.Y.Y'. However, we're using DVR and when we add the static >>> routes, they do not show up anywhere in any of the namespaces. Is >>> there a different way to add static routes on DVR instead of using >>> openstack router add route ? >>> >> No, there is no other way to add static routes to the dvr router. I >> don't have any DVR deployment now to check it but IIRC route should >> be added in the qrouter namespace in the compute nodes where router >> exists. If it's not there please check logs of the l3-agent on those >> hosts, maybe there are some errors there. >> >>> -- >>> Jean-Philippe M?thot >>> Senior Openstack system administrator >>> Administrateur syst?me Openstack s?nior >>> PlanetHoster inc. >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat > -- > Jean-Philippe M?thot > Senior Openstack system administrator > Administrateur syst?me Openstack s?nior > PlanetHoster inc. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Tue May 10 07:22:40 2022 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Tue, 10 May 2022 09:22:40 +0200 Subject: Neutron + OVN raft cluster In-Reply-To: References: Message-ID: On Mon, May 9, 2022 at 9:53 PM Tiago Pires wrote: > > Hi all, > > Thanks Terry. > As I'm using an old OVS (2.13)/OVN(20.3) version due openstack release (Ussuri), would it be possible to upgrade only the OVN/OVS without upgrading the whole openstack? > I'm only checking options in this case, how are you guys dealing with this in production? We are currently looking into the possibility of providing an OVN enablement PPA of some sort, which would allow you to upgrade just the OVS/OVN components to 2.17/22.03. There is outstanding work before that can be done successfully. There are many patches to OpenStack components to deal with the change of behavior that OVS 2.17 brings, both for the Python IDL API changes and the more frequent OVSDB Server leader changes. Many patches have already made it into the stable/ussuri branch, and there are some more to go (this one is ready in-flight [0], but there may be more required) before this will work. Validation of this combination is ongoing, and if it is successful we will most likely request a new Neutron Ussuri point release as well as providing the above mentioned PPA. We'll know more over the course of the next couple of weeks. As always, the goodness of OpenStack Yoga and all the latest OVS/OVN bits is already available in the most recent release. 0: https://review.opendev.org/c/openstack/neutron/+/840744 -- Frode Nordahl > Regards, > > Tiago Pires > > Em seg., 9 de mai. de 2022 ?s 10:45, Terry Wilson escreveu: >> >> Sorry, I was on PTO. Jakub is right, w/o using python-ovs 2.17, when >> ovs breaks the connection for the leadership clients will re-download >> the entire content of their registered tables. 
With 2.17, >> monitor-cond-since/update3 support is added to python-ovs and it >> should just download the changes since they reconnected. As long as >> the client code handles reconnections, this reconnecting should not be >> an issue. It is still possible that there is code that doesn't >> properly handle reconnections in general, but I'd start with trying >> ovs 2.17. The disonnections will always happen, but they shouldn't >> break things. >> >> On Fri, May 6, 2022 at 5:13 PM Tiago Pires wrote: >> > >> > Hi Mohammed, >> > >> > It seems a little bit like our issue. >> > >> > Thank you. >> > >> > Tiago Pires >> > >> > Em sex., 6 de mai. de 2022 ?s 18:21, Mohammed Naser escreveu: >> >> >> >> Hi Tiago, >> >> >> >> Have you seen this? >> >> >> >> https://bugs.launchpad.net/nova/+bug/1969592 >> >> >> >> Mohammed >> >> >> >> On Fri, May 6, 2022 at 3:56 PM Tiago Pires wrote: >> >> > >> >> > Hi all, >> >> > >> >> > I was checking the mail list history and this thread https://mail.openvswitch.org/pipermail/ovs-discuss/2018-March/046438.html caught my attention about raft ovsdb clustering. >> >> > In my setup (OVN 20.03 and Openstack Ussuri) on the ovn-controller we have configured the ovn-remote="tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642" with the 3 OVN central member that they are in cluster mode. >> >> > Also on the neutron ML2 side: >> >> > [ovn] >> >> > ovn_native_dhcp = True >> >> > ovn_nb_connection = tcp:10.2X.4X.4:6641,tcp:10.2X.4X.68:6641,tcp:10.2X.4X.132:6641 >> >> > ovn_sb_connection = tcp:10.2X.4X.4:6642,tcp:10.2X.4X.68:6642,tcp:10.2X.4X.132:6642 >> >> > >> >> > We are experiencing an issue with Neutron when the OVN leader decide to take a snapshot and by design another member became leader(more less every 8 minutes): >> >> > 2022-05-05T16:57:42.135Z|17401|raft|INFO|Transferring leadership to write a snapshot. 
>> >> > >> >> > ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound >> >> > 4a03 >> >> > Name: OVN_Southbound >> >> > Cluster ID: ca74 (ca744caf-40cd-4751-a2f2-86e35ad6541c) >> >> > Server ID: 4a03 (4a0328dc-e9a4-495e-a4f1-0a0340fc6d19) >> >> > Address: tcp:10.2X.4X.132:6644 >> >> > Status: cluster member >> >> > Role: leader >> >> > Term: 1912 >> >> > Leader: self >> >> > Vote: self >> >> > >> >> > Election timer: 10000 >> >> > Log: [497643, 498261] >> >> > Entries not yet committed: 0 >> >> > Entries not yet applied: 0 >> >> > Connections: ->3d6c ->4ef0 <-3d6c <-4ef0 >> >> > Servers: >> >> > 4a03 (4a03 at tcp:10.2X.4X.132:6644) (self) next_index=497874 match_index=498260 >> >> > 3d6c (3d6c at tcp:10.2X.4X.68:6644) next_index=498261 match_index=498260 >> >> > 4ef0 (4ef0 at tcp:10.2X.4X.4:6644) next_index=498261 match_index=498260 >> >> > >> >> > As I understood the tcp connections from the Neutron (NB) and ovn-controllers (SB) to OVN Central are established only with the leader: >> >> > >> >> > #OVN central leader >> >> > $ netstat -nap | grep 6642| more >> >> > >> >> > tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - >> >> > tcp 0 0 10.2X.4X.132:6642 10.24.40.17:47278 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.132:6642 10.24.40.76:36240 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47280 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43102 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.75:58890 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.6:43108 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47142 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.71:48808 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.132:6642 10.2X.4X.17:47096 ESTABLISHED - >> >> > #OVN follower 2 >> >> > >> >> > $ netstat -nap | grep 6642 >> >> > >> >> > tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - >> >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.76:57256 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.134:54026 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.10:34962 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.6:49238 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.135:59972 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.4:6642 10.2X.4X.75:40162 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.4:39566 10.2X.4X.132:6642 ESTABLISHED - >> >> > #OVN follower 3 >> >> > >> >> > netstat -nap | grep 6642 >> >> > >> >> > tcp 0 0 0.0.0.0:6642 0.0.0.0:* LISTEN - >> >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.70:40750 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.11:49718 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.68:45632 10.2X.4X.132:6642 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.16:44816 ESTABLISHED - >> >> > tcp 0 0 10.2X.4X.68:6642 10.2X.4X.7:45216 ESTABLISHED >> >> > >> >> > The issue that we are experiencing is on the neutron-server that disconnects when there is the ovn leader change (due snapshot like each 8 minutes) and reconnects to the next leader. It breaks the Openstack API when someone is trying to create a VM at the same time. >> >> > First, is my current configuration correct? Should the leader change and break the neutron side? Or is there some missing configuration? >> >> > I was wondering if it is possible to use a LB with VIP and this VIP balance the connections to the ovn central members and I would reconfigure on the neutron side only with the VIP and also on the ovs-controllers. Does that make sense? >> >> > >> >> > Thank you. 
>> >> > >> >> > Regards, >> >> > >> >> > Tiago Pires >> >> >> >> >> >> >> >> -- >> >> Mohammed Naser >> >> VEXXHOST, Inc. >> From tom_toworld at 163.com Tue May 10 08:10:25 2022 From: tom_toworld at 163.com (=?GBK?B?1uyzrA==?=) Date: Tue, 10 May 2022 16:10:25 +0800 (CST) Subject: Hi, Nice to meet you, I am very happly to find faimly. Message-ID: <48e53a77.504e.180ad04f5a1.Coremail.tom_toworld@163.com> Hi, Nice to meet you, I am very happly to find faimly. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chaotomzhu at gmail.com Tue May 10 08:20:21 2022 From: chaotomzhu at gmail.com (Tom NewChao) Date: Tue, 10 May 2022 16:20:21 +0800 Subject: Hi, I am very happy to meet all of you. I hope to exchange more knowledge about openstack with you. Message-ID: I am very happy to meet all of you. I hope to exchange more knowledge about openstack with you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom_toworld at 163.com Tue May 10 08:30:50 2022 From: tom_toworld at 163.com (=?GBK?B?1uyzrA==?=) Date: Tue, 10 May 2022 16:30:50 +0800 (CST) Subject: Hi, Nice to meet you, I am very happly to find faimly. Message-ID: Hi, Nice to meet you, I am very happly to find faimly. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue May 10 08:47:57 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 10 May 2022 17:47:57 +0900 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: I'm not familiar with PTP, but the error you pasted indicates that the required puppet manifest does not exist in your overcloud node/image. https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp This should not happen and the class should exist as long as you have puppet-tripleo from stable/train installed. I'd recommend you check installed tripleo/puppet packages and ensure everything is in the consistent release. On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta wrote: > Hi All > > Any update on this? > > Regards > Anirudh Gupta > > On Mon, 9 May, 2022, 17:21 Anirudh Gupta, wrote: > >> Hi Team, >> >> Is there any Support for PTP in Openstack TripleO ? >> >> When I was executing the Overcloud deployment script, passing the PTP >> yaml, it gave the following option at the starting >> >> >> *service OS::TripleO::Services::Ptp is enabled in >> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >> continue with deployment [y/N]* >> >> even if passing Y, it starts executing for sometime and the gives the >> following error >> >> *Error: Evaluation Error: Error while evaluating a Function Call, Could >> not find class ::tripleo::profile::base::time::ptp for >> overcloud-controller-0.localdomain (file: >> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >> "stdout": "", "stdout_lines": []} >> >> >> Can someone suggest some pointers in order to resolve this issue and move >> forward? >> >> Regards >> Anirudh Gupta >> >> >> >> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta wrote: >> >>> Hi Team, >>> >>> I have installed Undercloud with Openstack Train Release successfully. 
>>> I need to enable PTP service while deploying the overcloud for which I >>> have included the service in my deployment >>> >>> openstack overcloud deploy --templates \ >>> -n /home/stack/templates/network_data.yaml \ >>> -r /home/stack/templates/roles_data.yaml \ >>> -e /home/stack/templates/environment.yaml \ >>> -e /home/stack/templates/environments/network-isolation.yaml \ >>> -e /home/stack/templates/environments/network-environment.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>> \ >>> * -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>> \* >>> -e /home/stack/templates/ironic-config.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>> -e /home/stack/containers-prepare-parameter.yaml >>> >>> But it gives the following error >>> >>> 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde | >>> FATAL | Wait for puppet host configuration to finish | >>> overcloud-controller-0 | error={"ansible_job_id": "5188783868.37685", >>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>> --detailed-exitcodes --summarize --color=false >>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>> deprecated in favor of using 'lookup'. See >>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May 6 >>> 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>> find class ::tripleo::profile::base::time::ptp for >>> overcloud-controller-0.localdomain (file: >>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>> 'lookup'. See >>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", "<13>May >>> 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 11:30:08 >>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>> is deprecated. 
It should be converted to version 5", "<13>May 6 11:30:08 >>> puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>> (file: >>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>> puppet-user: *Error: Evaluation Error: Error while evaluating a >>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>> overcloud-controller-0.localdomain (file: >>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>> >>> >>> Can someone please help in resolving this issue? >>> >>> Regards >>> Anirudh Gupta >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Tue May 10 08:55:53 2022 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Tue, 10 May 2022 10:55:53 +0200 Subject: [designate] How to avoid NXDOMAIN or stale data during cold start of a (new) machine Message-ID: <69ab8e54-f419-4cd1-f289-a0b5efb7f723@inovex.de> Hello openstack-discuss, I have a designate setup using bind9 as the user-serving DNS server. When starting a machine with either very old or no zones at all, NXDOMAIN or other actually stale data is sent out to clients as designate is not done doing an initial full sync / reconciliation. * What is the "proper" way to tackle this cold-start issue and to keep the bind from serving wrong data? ** Did I miss on any options to handle this startup case? * What is the usual runtime for an initial sync that you observe in case the backend DNS server has no zones at all anymore? Regards Christian From skaplons at redhat.com Tue May 10 09:10:57 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 10 May 2022 11:10:57 +0200 Subject: [neutron] CI meeting agenda - 10th May Message-ID: <2198420.iZASKD2KPV@p1> Hi, It's just gentle reminder that we will have Neutron CI meeting today at 1500 UTC. Agenda for the meeting is at [1]. This will be video meeting [2]. [1] https://etherpad.opendev.org/p/neutron-ci-meetings[1] [2] https://meetpad.opendev.org/neutron-ci-meetings -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://etherpad.opendev.org/p/neutron-ci-meetings -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From tkajinam at redhat.com Tue May 10 11:33:15 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 10 May 2022 20:33:15 +0900 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta wrote: > Hi Takashi, > > Thanks for your reply. > > I have checked on my machine and the file "ptp.pp" do exist at path " > *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* > " > Did you check this in your undercloud or overcloud ? 
During the deployment all configuration files are generated using puppet modules installed in overcloud nodes, so you should check this in overcloud nodes. Also, the deprecation warning is not implemented > I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" > for controller and compute *before rendering the templates, but still I > am getting the same issue on all the 3 Controllers and 1 Compute > IIUC you don't need this because OS::TripleO::Services::Timesync becomes an alias to the Ptp service resource when you use the ptp environment file. https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 > > *Error: Evaluation Error: Error while evaluating a Function Call, Could > not find class ::tripleo::profile::base::time::ptp for > overcloud-controller-0.localdomain (file: > /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], > "stdout": "", "stdout_lines": []} > > Can you suggest any workarounds or any pointers to look further in order > to resolve this issue? > > Regards > Anirudh Gupta > > > On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami > wrote: > >> I'm not familiar with PTP, but the error you pasted indicates that the >> required puppet manifest does not exist in your overcloud node/image. >> >> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >> >> This should not happen and the class should exist as long as you have >> puppet-tripleo from stable/train installed. >> >> I'd recommend you check installed tripleo/puppet packages and ensure >> everything is in the consistent release. >> >> >> >> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta >> wrote: >> >>> Hi All >>> >>> Any update on this? >>> >>> Regards >>> Anirudh Gupta >>> >>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, wrote: >>> >>>> Hi Team, >>>> >>>> Is there any Support for PTP in Openstack TripleO ? >>>> >>>> When I was executing the Overcloud deployment script, passing the PTP >>>> yaml, it gave the following option at the starting >>>> >>>> >>>> *service OS::TripleO::Services::Ptp is enabled in >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>> continue with deployment [y/N]* >>>> >>>> even if passing Y, it starts executing for sometime and the gives the >>>> following error >>>> >>>> *Error: Evaluation Error: Error while evaluating a Function Call, Could >>>> not find class ::tripleo::profile::base::time::ptp for >>>> overcloud-controller-0.localdomain (file: >>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>> "stdout": "", "stdout_lines": []} >>>> >>>> >>>> Can someone suggest some pointers in order to resolve this issue and >>>> move forward? >>>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> >>>> >>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta >>>> wrote: >>>> >>>>> Hi Team, >>>>> >>>>> I have installed Undercloud with Openstack Train Release successfully. 
>>>>> I need to enable PTP service while deploying the overcloud for which I >>>>> have included the service in my deployment >>>>> >>>>> openstack overcloud deploy --templates \ >>>>> -n /home/stack/templates/network_data.yaml \ >>>>> -r /home/stack/templates/roles_data.yaml \ >>>>> -e /home/stack/templates/environment.yaml \ >>>>> -e /home/stack/templates/environments/network-isolation.yaml \ >>>>> -e /home/stack/templates/environments/network-environment.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>> \ >>>>> * -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>> \* >>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>> >>>>> But it gives the following error >>>>> >>>>> 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde | >>>>> FATAL | Wait for puppet host configuration to finish | >>>>> overcloud-controller-0 | error={"ansible_job_id": "5188783868.37685", >>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>> --detailed-exitcodes --summarize --color=false >>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>> deprecated in favor of using 'lookup'. See >>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>> find class ::tripleo::profile::base::time::ptp for >>>>> overcloud-controller-0.localdomain (file: >>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>> 'lookup'. 
See >>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>> (file: >>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>> puppet-user: *Error: Evaluation Error: Error while evaluating a >>>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>> overcloud-controller-0.localdomain (file: >>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>>>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>>>> >>>>> >>>>> Can someone please help in resolving this issue? >>>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue May 10 11:38:29 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 10 May 2022 20:38:29 +0900 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami wrote: > > > On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta wrote: > >> Hi Takashi, >> >> Thanks for your reply. >> >> I have checked on my machine and the file "ptp.pp" do exist at path " >> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >> " >> > Did you check this in your undercloud or overcloud ? > During the deployment all configuration files are generated using puppet > modules > installed in overcloud nodes, so you should check this in overcloud nodes. > > Also, the deprecation warning is not implemented > Ignore this incomplete line. I was looking for the implementation which shows the warning but I found it in tripleoclient and it looks reasonable according to what we have in environments/services/ptp.yaml . > > >> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >> for controller and compute *before rendering the templates, but still I >> am getting the same issue on all the 3 Controllers and 1 Compute >> > > IIUC you don't need this because OS::TripleO::Services::Timesync becomes > an alias > to the Ptp service resource when you use the ptp environment file. > > https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 > > >> >> *Error: Evaluation Error: Error while evaluating a Function Call, Could >> not find class ::tripleo::profile::base::time::ptp for >> overcloud-controller-0.localdomain (file: >> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >> "stdout": "", "stdout_lines": []} >> >> Can you suggest any workarounds or any pointers to look further in order >> to resolve this issue? 
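As a rough illustration of the earlier suggestion to verify the packages on the overcloud side (commands assume an RPM-based overcloud image; the point is that the Train puppet-tripleo still ships the ptp profile, which was removed during the Wallaby cycle):

  # Run on an overcloud node (e.g. overcloud-controller-0), not on the undercloud
  rpm -q puppet-tripleo
  ls -l /usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp
  # If the file is missing here while it exists on the undercloud, the overcloud
  # image was most likely built from a newer release than Train.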
>> > >> Regards >> Anirudh Gupta >> >> >> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami >> wrote: >> >>> I'm not familiar with PTP, but the error you pasted indicates that the >>> required puppet manifest does not exist in your overcloud node/image. >>> >>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>> >>> This should not happen and the class should exist as long as you have >>> puppet-tripleo from stable/train installed. >>> >>> I'd recommend you check installed tripleo/puppet packages and ensure >>> everything is in the consistent release. >>> >>> >>> >>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta >>> wrote: >>> >>>> Hi All >>>> >>>> Any update on this? >>>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, wrote: >>>> >>>>> Hi Team, >>>>> >>>>> Is there any Support for PTP in Openstack TripleO ? >>>>> >>>>> When I was executing the Overcloud deployment script, passing the PTP >>>>> yaml, it gave the following option at the starting >>>>> >>>>> >>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>> continue with deployment [y/N]* >>>>> >>>>> even if passing Y, it starts executing for sometime and the gives the >>>>> following error >>>>> >>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>> overcloud-controller-0.localdomain (file: >>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>> "stdout": "", "stdout_lines": []} >>>>> >>>>> >>>>> Can someone suggest some pointers in order to resolve this issue and >>>>> move forward? >>>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> >>>>> >>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta >>>>> wrote: >>>>> >>>>>> Hi Team, >>>>>> >>>>>> I have installed Undercloud with Openstack Train Release successfully. 
>>>>>> I need to enable PTP service while deploying the overcloud for which >>>>>> I have included the service in my deployment >>>>>> >>>>>> openstack overcloud deploy --templates \ >>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>> -e /home/stack/templates/environment.yaml \ >>>>>> -e /home/stack/templates/environments/network-isolation.yaml \ >>>>>> -e /home/stack/templates/environments/network-environment.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>> \ >>>>>> * -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>> \* >>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>> >>>>>> But it gives the following error >>>>>> >>>>>> 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde | >>>>>> FATAL | Wait for puppet host configuration to finish | >>>>>> overcloud-controller-0 | error={"ansible_job_id": "5188783868.37685", >>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>> --detailed-exitcodes --summarize --color=false >>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>> deprecated in favor of using 'lookup'. See >>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>> overcloud-controller-0.localdomain (file: >>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>> 'lookup'. 
See >>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>> (file: >>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>> puppet-user: *Error: Evaluation Error: Error while evaluating a >>>>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>> overcloud-controller-0.localdomain (file: >>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>>>>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>>>>> >>>>>> >>>>>> Can someone please help in resolving this issue? >>>>>> >>>>>> Regards >>>>>> Anirudh Gupta >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue May 10 12:14:08 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 10 May 2022 21:14:08 +0900 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta wrote: > Hi Takashi > > I have checked this in undercloud only. > I don't find any such file in overcloud. Could this be a concern? > The manifest should exist in overcloud nodes and the missing file is the exact cause of that puppet failure during deployment. Please check your overcloud images used to install overcloud nodes and ensure that you're using the right one. You might be using the image for a different release. We removed the manifest file during the Wallaby cycle. > > Regards > Anirudh Gupta > > > > On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami > wrote: > >> >> >> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami >> wrote: >> >>> >>> >>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta >>> wrote: >>> >>>> Hi Takashi, >>>> >>>> Thanks for your reply. >>>> >>>> I have checked on my machine and the file "ptp.pp" do exist at path " >>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>> " >>>> >>> Did you check this in your undercloud or overcloud ? >>> During the deployment all configuration files are generated using puppet >>> modules >>> installed in overcloud nodes, so you should check this in overcloud >>> nodes. >>> >>> Also, the deprecation warning is not implemented >>> >> Ignore this incomplete line. I was looking for the implementation which >> shows the warning >> but I found it in tripleoclient and it looks reasonable according to what >> we have in >> environments/services/ptp.yaml . 
>> >> >>> >>> >>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>> for controller and compute *before rendering the templates, but still >>>> I am getting the same issue on all the 3 Controllers and 1 Compute >>>> >>> >>> IIUC you don't need this because OS::TripleO::Services::Timesync becomes >>> an alias >>> to the Ptp service resource when you use the ptp environment file. >>> >>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>> >>> >>>> >>>> *Error: Evaluation Error: Error while evaluating a Function Call, Could >>>> not find class ::tripleo::profile::base::time::ptp for >>>> overcloud-controller-0.localdomain (file: >>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>> "stdout": "", "stdout_lines": []} >>>> >>>> Can you suggest any workarounds or any pointers to look further in >>>> order to resolve this issue? >>>> >>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> >>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami >>>> wrote: >>>> >>>>> I'm not familiar with PTP, but the error you pasted indicates that the >>>>> required puppet manifest does not exist in your overcloud node/image. >>>>> >>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>> >>>>> This should not happen and the class should exist as long as you have >>>>> puppet-tripleo from stable/train installed. >>>>> >>>>> I'd recommend you check installed tripleo/puppet packages and ensure >>>>> everything is in the consistent release. >>>>> >>>>> >>>>> >>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta >>>>> wrote: >>>>> >>>>>> Hi All >>>>>> >>>>>> Any update on this? >>>>>> >>>>>> Regards >>>>>> Anirudh Gupta >>>>>> >>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, >>>>>> wrote: >>>>>> >>>>>>> Hi Team, >>>>>>> >>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>> >>>>>>> When I was executing the Overcloud deployment script, passing the >>>>>>> PTP yaml, it gave the following option at the starting >>>>>>> >>>>>>> >>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>> continue with deployment [y/N]* >>>>>>> >>>>>>> even if passing Y, it starts executing for sometime and the gives >>>>>>> the following error >>>>>>> >>>>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>>>> overcloud-controller-0.localdomain (file: >>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>> "stdout": "", "stdout_lines": []} >>>>>>> >>>>>>> >>>>>>> Can someone suggest some pointers in order to resolve this issue and >>>>>>> move forward? >>>>>>> >>>>>>> Regards >>>>>>> Anirudh Gupta >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta >>>>>>> wrote: >>>>>>> >>>>>>>> Hi Team, >>>>>>>> >>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>> successfully. 
>>>>>>>> I need to enable PTP service while deploying the overcloud for >>>>>>>> which I have included the service in my deployment >>>>>>>> >>>>>>>> openstack overcloud deploy --templates \ >>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>> -e /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>> -e /home/stack/templates/environments/network-environment.yaml \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>> \ >>>>>>>> * -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>> \* >>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>> >>>>>>>> But it gives the following error >>>>>>>> >>>>>>>> 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde | >>>>>>>> FATAL | Wait for puppet host configuration to finish | >>>>>>>> overcloud-controller-0 | error={"ansible_job_id": "5188783868.37685", >>>>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. 
(file: >>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>> 'lookup'. See >>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>>> (file: >>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>> puppet-user: *Error: Evaluation Error: Error while evaluating a >>>>>>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>>>>>>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>>>>>>> >>>>>>>> >>>>>>>> Can someone please help in resolving this issue? >>>>>>>> >>>>>>>> Regards >>>>>>>> Anirudh Gupta >>>>>>>> >>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Tue May 10 14:56:30 2022 From: mthode at mthode.org (Matthew Thode) Date: Tue, 10 May 2022 09:56:30 -0500 Subject: [requirements][oslo.log] failure to update to oslo.log===4.8.0 Message-ID: <20220510145630.f5qtjzlozeoiuwej@mthode.org> Hi all, It looks like the latest oslo.log update is failing to pass tempest. If anyone is around to look I'd appreciate it. https://review.opendev.org/840630 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tkajinam at redhat.com Tue May 10 15:13:57 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 11 May 2022 00:13:57 +0900 Subject: [ops][heat][rbac] Call for feedback: Splitting stack to "system" stack and "project" stack Message-ID: Hello, tl;dr We are looking for some feedback from anyone developing their tool/software creating/managing heat stacks, about the new requirement we are considering. Recently we've been discussing the issue with Heat and Secure RBAC work[1]. The current target of SRBAC work requires the appropriate scope according to the resources. 
- Project resources like instance, volume or network can be created with a project-scoped token
- System resources like flavor, user, project or role can be created with a system-scoped token

This is causing a problem with heat stacks which have both project resources and system resources, because heat currently uses the single token provided by the user in a single stack API call.

As part of the discussions we have discussed the "split stack" concept, which requires creating separate stacks per scope. This means that if you want to create project resources and system resources with Heat, you should create two separate heat stacks and call the heat stack API separately using different credentials.

While we still need to investigate the feasibility of this option (e.g. how smooth we can make the migration), we'd like to hear any feedback about the impact of the "split stack" concept on any external tooling depending on Heat, because this would require some workflow/architecture change in that tooling. If we hear a lot of negative feedback/concerns, we will examine different approaches.

Thank you,
Takashi

[1] https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Tue May 10 16:47:46 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 10 May 2022 11:47:46 -0500
Subject: [all][tc][policy][heat][cinder] Continuing the RBAC PTG discussion + policy pop-up meeting new time
In-Reply-To: <180a9f4e924.1037aeffa90689.3078879274219839648@ghanshyammann.com>
References: <1804765e168.c7627ca0318612.1596099052066040022@ghanshyammann.com> <180a9f4e924.1037aeffa90689.3078879274219839648@ghanshyammann.com>
Message-ID: <180aede9d93.bbe3faf1165835.8277369949418537412@ghanshyammann.com>

Hello Everyone,

Just to update here, we had a call today and discussed heat, 'service role' and the best possible plan for the zed cycle. Please find the summary in etherpad: https://etherpad.opendev.org/p/rbac-zed-ptg#L97

We will meet next on the 24th, 14 UTC, details are in https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting

One thing we are looking forward to before our next meeting is to get feedback on the heat 'split stack' approach, please reply to Takashi's email - http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028490.html

-gmann

---- On Mon, 09 May 2022 12:54:01 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > RBAC next call is tomorrow Tuesday 10th, we will continue the discussion on the heat and service role. > > Meeting details: https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting > > -gmann > > ---- On Wed, 20 Apr 2022 09:35:00 -0500 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > As we said in PTG about continuing the RBAC discussion on open questions (currently from Heat and Cinder), we > > are scheduling the call on April 26 Tuesday from 14:30-15:00 UTC. And we will use the same time for the policy-popup > > team meeting every alternate Tuesday to answer RBAC queries from any projects.
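For illustration, a minimal sketch of what the "split stack" concept described above could look like from the command line. The cloud names and template file names are hypothetical; the sketch assumes one clouds.yaml entry configured with project-scoped credentials and another with system-scoped credentials:

# Stack 1: project resources (servers, networks, volumes, ...),
# created with the project-scoped credentials
openstack --os-cloud my-project-scoped stack create \
    --template project_resources.yaml app-project-stack

# Stack 2: system resources (flavors, roles, ...),
# created with the system-scoped credentials
openstack --os-cloud my-system-scoped stack create \
    --template system_resources.yaml app-system-stack

In other words, the single stack created with a single token today would become two stack API calls made with differently scoped credentials, which is exactly the workflow change that external tooling would need to absorb.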
> > > > Meeting Details: > > - https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting > > > > Agenda (you can add for this or coming meetings) > > - https://etherpad.opendev.org/p/rbac-zed-ptg#L97 > > > > -gmann > > > > > > From anyrude10 at gmail.com Tue May 10 09:58:16 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Tue, 10 May 2022 15:28:16 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: Hi Takashi, Thanks for your reply. I have checked on my machine and the file "ptp.pp" do exist at path " *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* " I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" for controller and compute *before rendering the templates, but still I am getting the same issue on all the 3 Controllers and 1 Compute *Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} Can you suggest any workarounds or any pointers to look further in order to resolve this issue? Regards Anirudh Gupta On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami wrote: > I'm not familiar with PTP, but the error you pasted indicates that the > required puppet manifest does not exist in your overcloud node/image. > > https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp > > This should not happen and the class should exist as long as you have > puppet-tripleo from stable/train installed. > > I'd recommend you check installed tripleo/puppet packages and ensure > everything is in the consistent release. > > > > On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta wrote: > >> Hi All >> >> Any update on this? >> >> Regards >> Anirudh Gupta >> >> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, wrote: >> >>> Hi Team, >>> >>> Is there any Support for PTP in Openstack TripleO ? >>> >>> When I was executing the Overcloud deployment script, passing the PTP >>> yaml, it gave the following option at the starting >>> >>> >>> *service OS::TripleO::Services::Ptp is enabled in >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>> continue with deployment [y/N]* >>> >>> even if passing Y, it starts executing for sometime and the gives the >>> following error >>> >>> *Error: Evaluation Error: Error while evaluating a Function Call, Could >>> not find class ::tripleo::profile::base::time::ptp for >>> overcloud-controller-0.localdomain (file: >>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>> "stdout": "", "stdout_lines": []} >>> >>> >>> Can someone suggest some pointers in order to resolve this issue and >>> move forward? >>> >>> Regards >>> Anirudh Gupta >>> >>> >>> >>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta >>> wrote: >>> >>>> Hi Team, >>>> >>>> I have installed Undercloud with Openstack Train Release successfully. 
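Following up on the suggestion above to confirm that the installed tripleo/puppet packages all come from the same release: a quick way to check this on an overcloud node could be the commands below (the package and path names are the ones that appear elsewhere in this thread; adjust them to your environment):

# Which puppet-tripleo build is actually installed on the node?
rpm -q puppet-tripleo

# Does the profile the deployment is complaining about exist there?
find /usr/share/openstack-puppet/modules/tripleo -name 'ptp.pp'

If the manifest is present on the undercloud but missing on the overcloud node, that points at the overcloud image rather than at the templates.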
>>>> I need to enable PTP service while deploying the overcloud for which I >>>> have included the service in my deployment >>>> >>>> openstack overcloud deploy --templates \ >>>> -n /home/stack/templates/network_data.yaml \ >>>> -r /home/stack/templates/roles_data.yaml \ >>>> -e /home/stack/templates/environment.yaml \ >>>> -e /home/stack/templates/environments/network-isolation.yaml \ >>>> -e /home/stack/templates/environments/network-environment.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>> \ >>>> * -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>> \* >>>> -e /home/stack/templates/ironic-config.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>> -e /home/stack/containers-prepare-parameter.yaml >>>> >>>> But it gives the following error >>>> >>>> 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde | >>>> FATAL | Wait for puppet host configuration to finish | >>>> overcloud-controller-0 | error={"ansible_job_id": "5188783868.37685", >>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>> --detailed-exitcodes --summarize --color=false >>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>> deprecated in favor of using 'lookup'. See >>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>> find class ::tripleo::profile::base::time::ptp for >>>> overcloud-controller-0.localdomain (file: >>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>> 'lookup'. 
See >>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>> (file: >>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>> puppet-user: *Error: Evaluation Error: Error while evaluating a >>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>> overcloud-controller-0.localdomain (file: >>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>>> >>>> >>>> Can someone please help in resolving this issue? >>>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Tue May 10 11:57:03 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Tue, 10 May 2022 17:27:03 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: Hi Takashi I have checked this in undercloud only. I don't find any such file in overcloud. Could this be a concern? Regards Anirudh Gupta On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami wrote: > > > On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami > wrote: > >> >> >> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta >> wrote: >> >>> Hi Takashi, >>> >>> Thanks for your reply. >>> >>> I have checked on my machine and the file "ptp.pp" do exist at path " >>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>> " >>> >> Did you check this in your undercloud or overcloud ? >> During the deployment all configuration files are generated using puppet >> modules >> installed in overcloud nodes, so you should check this in overcloud nodes. >> >> Also, the deprecation warning is not implemented >> > Ignore this incomplete line. I was looking for the implementation which > shows the warning > but I found it in tripleoclient and it looks reasonable according to what > we have in > environments/services/ptp.yaml . > > >> >> >>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>> for controller and compute *before rendering the templates, but still I >>> am getting the same issue on all the 3 Controllers and 1 Compute >>> >> >> IIUC you don't need this because OS::TripleO::Services::Timesync becomes >> an alias >> to the Ptp service resource when you use the ptp environment file. 
>> >> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >> >> >>> >>> *Error: Evaluation Error: Error while evaluating a Function Call, Could >>> not find class ::tripleo::profile::base::time::ptp for >>> overcloud-controller-0.localdomain (file: >>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>> "stdout": "", "stdout_lines": []} >>> >>> Can you suggest any workarounds or any pointers to look further in order >>> to resolve this issue? >>> >> >>> Regards >>> Anirudh Gupta >>> >>> >>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami >>> wrote: >>> >>>> I'm not familiar with PTP, but the error you pasted indicates that the >>>> required puppet manifest does not exist in your overcloud node/image. >>>> >>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>> >>>> This should not happen and the class should exist as long as you have >>>> puppet-tripleo from stable/train installed. >>>> >>>> I'd recommend you check installed tripleo/puppet packages and ensure >>>> everything is in the consistent release. >>>> >>>> >>>> >>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta >>>> wrote: >>>> >>>>> Hi All >>>>> >>>>> Any update on this? >>>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, wrote: >>>>> >>>>>> Hi Team, >>>>>> >>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>> >>>>>> When I was executing the Overcloud deployment script, passing the PTP >>>>>> yaml, it gave the following option at the starting >>>>>> >>>>>> >>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>> continue with deployment [y/N]* >>>>>> >>>>>> even if passing Y, it starts executing for sometime and the gives the >>>>>> following error >>>>>> >>>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>>> overcloud-controller-0.localdomain (file: >>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>> "stdout": "", "stdout_lines": []} >>>>>> >>>>>> >>>>>> Can someone suggest some pointers in order to resolve this issue and >>>>>> move forward? >>>>>> >>>>>> Regards >>>>>> Anirudh Gupta >>>>>> >>>>>> >>>>>> >>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta >>>>>> wrote: >>>>>> >>>>>>> Hi Team, >>>>>>> >>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>> successfully. 
>>>>>>> I need to enable PTP service while deploying the overcloud for which >>>>>>> I have included the service in my deployment >>>>>>> >>>>>>> openstack overcloud deploy --templates \ >>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>> -e /home/stack/templates/environments/network-isolation.yaml \ >>>>>>> -e /home/stack/templates/environments/network-environment.yaml \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>> \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>> \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>> \ >>>>>>> * -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>> \* >>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>> >>>>>>> But it gives the following error >>>>>>> >>>>>>> 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde | >>>>>>> FATAL | Wait for puppet host configuration to finish | >>>>>>> overcloud-controller-0 | error={"ansible_job_id": "5188783868.37685", >>>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>> deprecated in favor of using 'lookup'. See >>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>> overcloud-controller-0.localdomain (file: >>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>> 'lookup'. 
See >>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>> (file: >>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>> puppet-user: *Error: Evaluation Error: Error while evaluating a >>>>>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>> overcloud-controller-0.localdomain (file: >>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>>>>>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>>>>>> >>>>>>> >>>>>>> Can someone please help in resolving this issue? >>>>>>> >>>>>>> Regards >>>>>>> Anirudh Gupta >>>>>>> >>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Tue May 10 12:37:16 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Tue, 10 May 2022 18:07:16 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: I'll check that well. By the way, I downloaded the images from the below link https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ They seem to be updated yesterday, I'll download and try the deployment with the latest images. Also are you pointing that the support for PTP would not be there in Wallaby Release? Regards Anirudh Gupta On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami wrote: > > On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta wrote: > >> Hi Takashi >> >> I have checked this in undercloud only. >> I don't find any such file in overcloud. Could this be a concern? >> > > The manifest should exist in overcloud nodes and the missing file is the > exact cause > of that puppet failure during deployment. > > Please check your overcloud images used to install overcloud nodes and > ensure that > you're using the right one. You might be using the image for a different > release. > We removed the manifest file during the Wallaby cycle. > > >> >> Regards >> Anirudh Gupta >> >> >> >> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami >> wrote: >> >>> >>> >>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami >>> wrote: >>> >>>> >>>> >>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta >>>> wrote: >>>> >>>>> Hi Takashi, >>>>> >>>>> Thanks for your reply. >>>>> >>>>> I have checked on my machine and the file "ptp.pp" do exist at path " >>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>> " >>>>> >>>> Did you check this in your undercloud or overcloud ? >>>> During the deployment all configuration files are generated using >>>> puppet modules >>>> installed in overcloud nodes, so you should check this in overcloud >>>> nodes. >>>> >>>> Also, the deprecation warning is not implemented >>>> >>> Ignore this incomplete line. 
I was looking for the implementation which >>> shows the warning >>> but I found it in tripleoclient and it looks reasonable according to >>> what we have in >>> environments/services/ptp.yaml . >>> >>> >>>> >>>> >>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>> for controller and compute *before rendering the templates, but still >>>>> I am getting the same issue on all the 3 Controllers and 1 Compute >>>>> >>>> >>>> IIUC you don't need this because OS::TripleO::Services::Timesync >>>> becomes an alias >>>> to the Ptp service resource when you use the ptp environment file. >>>> >>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>> >>>> >>>>> >>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>> overcloud-controller-0.localdomain (file: >>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>> "stdout": "", "stdout_lines": []} >>>>> >>>>> Can you suggest any workarounds or any pointers to look further in >>>>> order to resolve this issue? >>>>> >>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> >>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami >>>>> wrote: >>>>> >>>>>> I'm not familiar with PTP, but the error you pasted indicates that >>>>>> the required puppet manifest does not exist in your overcloud node/image. >>>>>> >>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>> >>>>>> This should not happen and the class should exist as long as you have >>>>>> puppet-tripleo from stable/train installed. >>>>>> >>>>>> I'd recommend you check installed tripleo/puppet packages and ensure >>>>>> everything is in the consistent release. >>>>>> >>>>>> >>>>>> >>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta >>>>>> wrote: >>>>>> >>>>>>> Hi All >>>>>>> >>>>>>> Any update on this? >>>>>>> >>>>>>> Regards >>>>>>> Anirudh Gupta >>>>>>> >>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, >>>>>>> wrote: >>>>>>> >>>>>>>> Hi Team, >>>>>>>> >>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>> >>>>>>>> When I was executing the Overcloud deployment script, passing the >>>>>>>> PTP yaml, it gave the following option at the starting >>>>>>>> >>>>>>>> >>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>> continue with deployment [y/N]* >>>>>>>> >>>>>>>> even if passing Y, it starts executing for sometime and the gives >>>>>>>> the following error >>>>>>>> >>>>>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>> >>>>>>>> >>>>>>>> Can someone suggest some pointers in order to resolve this issue >>>>>>>> and move forward? 
>>>>>>>> >>>>>>>> Regards >>>>>>>> Anirudh Gupta >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hi Team, >>>>>>>>> >>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>> successfully. >>>>>>>>> I need to enable PTP service while deploying the overcloud for >>>>>>>>> which I have included the service in my deployment >>>>>>>>> >>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>> -e /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>> -e /home/stack/templates/environments/network-environment.yaml >>>>>>>>> \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>> \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>> \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>> \ >>>>>>>>> * -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>> \* >>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>> >>>>>>>>> But it gives the following error >>>>>>>>> >>>>>>>>> 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde >>>>>>>>> | FATAL | Wait for puppet host configuration to finish | >>>>>>>>> overcloud-controller-0 | error={"ansible_job_id": "5188783868.37685", >>>>>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. 
(file: >>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>> 'lookup'. See >>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>>>> (file: >>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>> puppet-user: *Error: Evaluation Error: Error while evaluating a >>>>>>>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>>>>>>>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>>>>>>>> >>>>>>>>> >>>>>>>>> Can someone please help in resolving this issue? >>>>>>>>> >>>>>>>>> Regards >>>>>>>>> Anirudh Gupta >>>>>>>>> >>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Tue May 10 15:19:18 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Tue, 10 May 2022 20:49:18 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: Hi Takashi, Thanks for your suggestion. I downloaded the updated Train Images and they had the ptp.pp file available on the overcloud and undercloud machines [root at overcloud-controller-1 /]# find . 
-name "ptp.pp" *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* With this, I re-executed the deployment and got the below error on the machines 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce | FATAL | Wait for puppet host configuration to finish | overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --detailed-exitcodes --summarize --color=false /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5\n<13>May 10 20:05:37 puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: Undefined variable '::deploy_config_name'; \\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module 'tripleo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: Error: Evaluation Error: A substring operation does not accept a String as a character index. Expected an Integer (file: /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & line not available)", "<13>May 10 20:05:37 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5", "<13>May 10 20:05:37 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: Warning: Undefined variable '::deploy_config_name'; \\n (file & line not available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module 'tripleo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules\\n (file & line not available)", "<13>May 10 20:05:37 puppet-user: *Error: Evaluation Error: A substring operation does not accept a String as a character index. 
Expected an Integer (file: /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, column: 46) *on node overcloud-controller-1.localdomain"], "stdout": "", "stdout_lines": []} The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, column: 46 *had the following code: 34 class tripleo::profile::base::time::ptp ( 35 $ptp4l_interface = 'eth0', 36 $ptp4l_conf_slaveonly = 1, 37 $ptp4l_conf_network_transport = 'UDPv4', 38 ) { 39 40 $interface_mapping = generate('/bin/os-net-config', '-i', $ptp4l_interface) 41 *$ptp4l_interface_name = $interface_mapping[$ptp4l_interface]* *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file is as below: resource_registry: # FIXME(bogdando): switch it, once it is containerized OS::TripleO::Services::Ptp: ../../deployment/time/ptp-baremetal-puppet.yaml OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp parameter_defaults: # PTP hardware interface name *PtpInterface: 'nic1'* # Configure PTP clock in slave mode PtpSlaveMode: 1 # Configure PTP message transport protocol PtpMessageTransport: 'UDPv4' I have also tried modifying the entry as below: *PtpInterface: 'nic1' #*(i.e. without quotes), but the error remains the same. Queries: 1. Any pointers to resolve this? 2. You were mentioning something about the support of PTP not there in the wallaby release. Can you please confirm? It would be a great help if you could extend a little more support to resolve the issues. Regards Anirudh Gupta On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta wrote: > I'll check that well. > By the way, I downloaded the images from the below link > > https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ > > They seem to be updated yesterday, I'll download and try the deployment > with the latest images. > > Also are you pointing that the support for PTP would not be there in > Wallaby Release? > > Regards > Anirudh Gupta > > On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami > wrote: > >> >> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta >> wrote: >> >>> Hi Takashi >>> >>> I have checked this in undercloud only. >>> I don't find any such file in overcloud. Could this be a concern? >>> >> >> The manifest should exist in overcloud nodes and the missing file is the >> exact cause >> of that puppet failure during deployment. >> >> Please check your overcloud images used to install overcloud nodes and >> ensure that >> you're using the right one. You might be using the image for a different >> release. >> We removed the manifest file during the Wallaby cycle. >> >> >>> >>> Regards >>> Anirudh Gupta >>> >>> >>> >>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami >>> wrote: >>> >>>> >>>> >>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami >>>> wrote: >>>> >>>>> >>>>> >>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta >>>>> wrote: >>>>> >>>>>> Hi Takashi, >>>>>> >>>>>> Thanks for your reply. >>>>>> >>>>>> I have checked on my machine and the file "ptp.pp" do exist at path " >>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>> " >>>>>> >>>>> Did you check this in your undercloud or overcloud ? >>>>> During the deployment all configuration files are generated using >>>>> puppet modules >>>>> installed in overcloud nodes, so you should check this in overcloud >>>>> nodes. >>>>> >>>>> Also, the deprecation warning is not implemented >>>>> >>>> Ignore this incomplete line. 
I was looking for the implementation which >>>> shows the warning >>>> but I found it in tripleoclient and it looks reasonable according to >>>> what we have in >>>> environments/services/ptp.yaml . >>>> >>>> >>>>> >>>>> >>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>>> for controller and compute *before rendering the templates, but >>>>>> still I am getting the same issue on all the 3 Controllers and 1 Compute >>>>>> >>>>> >>>>> IIUC you don't need this because OS::TripleO::Services::Timesync >>>>> becomes an alias >>>>> to the Ptp service resource when you use the ptp environment file. >>>>> >>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>> >>>>> >>>>>> >>>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>>> overcloud-controller-0.localdomain (file: >>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>> "stdout": "", "stdout_lines": []} >>>>>> >>>>>> Can you suggest any workarounds or any pointers to look further in >>>>>> order to resolve this issue? >>>>>> >>>>> >>>>>> Regards >>>>>> Anirudh Gupta >>>>>> >>>>>> >>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami >>>>>> wrote: >>>>>> >>>>>>> I'm not familiar with PTP, but the error you pasted indicates that >>>>>>> the required puppet manifest does not exist in your overcloud node/image. >>>>>>> >>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>> >>>>>>> This should not happen and the class should exist as long as you >>>>>>> have puppet-tripleo from stable/train installed. >>>>>>> >>>>>>> I'd recommend you check installed tripleo/puppet packages and ensure >>>>>>> everything is in the consistent release. >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta >>>>>>> wrote: >>>>>>> >>>>>>>> Hi All >>>>>>>> >>>>>>>> Any update on this? >>>>>>>> >>>>>>>> Regards >>>>>>>> Anirudh Gupta >>>>>>>> >>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hi Team, >>>>>>>>> >>>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>>> >>>>>>>>> When I was executing the Overcloud deployment script, passing the >>>>>>>>> PTP yaml, it gave the following option at the starting >>>>>>>>> >>>>>>>>> >>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>> continue with deployment [y/N]* >>>>>>>>> >>>>>>>>> even if passing Y, it starts executing for sometime and the gives >>>>>>>>> the following error >>>>>>>>> >>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>>>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>> >>>>>>>>> >>>>>>>>> Can someone suggest some pointers in order to resolve this issue >>>>>>>>> and move forward? 
>>>>>>>>> >>>>>>>>> Regards >>>>>>>>> Anirudh Gupta >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Hi Team, >>>>>>>>>> >>>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>>> successfully. >>>>>>>>>> I need to enable PTP service while deploying the overcloud for >>>>>>>>>> which I have included the service in my deployment >>>>>>>>>> >>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>> -e /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>>> -e >>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>> \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>> \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>> \ >>>>>>>>>> * -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>> \* >>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>> >>>>>>>>>> But it gives the following error >>>>>>>>>> >>>>>>>>>> 2022-05-06 11:30:10.707655 | 5254001f-9952-7fed-4a6d-000000002fde >>>>>>>>>> | FATAL | Wait for puppet host configuration to finish | >>>>>>>>>> overcloud-controller-0 | error={"ansible_job_id": "5188783868.37685", >>>>>>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. 
(file: >>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>> 'lookup'. See >>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>>>>> (file: >>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while evaluating a >>>>>>>>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>>>>>>>>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Can someone please help in resolving this issue? >>>>>>>>>> >>>>>>>>>> Regards >>>>>>>>>> Anirudh Gupta >>>>>>>>>> >>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue May 10 15:44:45 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 11 May 2022 00:44:45 +0900 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: On Wed, May 11, 2022 at 12:19 AM Anirudh Gupta wrote: > Hi Takashi, > > Thanks for your suggestion. > > I downloaded the updated Train Images and they had the ptp.pp file > available on the overcloud and undercloud machines > > [root at overcloud-controller-1 /]# find . 
-name "ptp.pp" > > *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* > > With this, I re-executed the deployment and got the below error on the > machines > > 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce | > FATAL | Wait for puppet host configuration to finish | > overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", > "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply > --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules > --detailed-exitcodes --summarize --color=false > /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t > puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 > 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": > "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", > "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' > is deprecated in favor of using 'lookup'. See > https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & > line not available)\n<13>May 10 20:05:37 puppet-user: Warning: > /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It > should be converted to version 5\n<13>May 10 20:05:37 puppet-user: > (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: > Undefined variable '::deploy_config_name'; \\n (file & line not > available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module > 'tripleo' has unresolved dependencies - it will only see those that are > resolved. Use 'puppet module list --tree' to see information about > modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: > Error: Evaluation Error: A substring operation does not accept a String as > a character index. Expected an Integer (file: > /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, > column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": > ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is > deprecated in favor of using 'lookup'. See > https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & > line not available)", "<13>May 10 20:05:37 puppet-user: Warning: > /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It > should be converted to version 5", "<13>May 10 20:05:37 puppet-user: > (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: > Warning: Undefined variable '::deploy_config_name'; \\n (file & line not > available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: > module 'tripleo' has unresolved dependencies - it will only see those that > are resolved. Use 'puppet module list --tree' to see information about > modules\\n (file & line not available)", "<13>May 10 20:05:37 > puppet-user: *Error: Evaluation Error: A substring operation does not > accept a String as a character index. 
Expected an Integer (file: > /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, > column: 46) *on node overcloud-controller-1.localdomain"], "stdout": "", > "stdout_lines": []} > > The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, > line: 41, column: 46 *had the following code: > 34 class tripleo::profile::base::time::ptp ( > 35 $ptp4l_interface = 'eth0', > 36 $ptp4l_conf_slaveonly = 1, > 37 $ptp4l_conf_network_transport = 'UDPv4', > 38 ) { > 39 > 40 $interface_mapping = generate('/bin/os-net-config', '-i', > $ptp4l_interface) > 41 *$ptp4l_interface_name = $interface_mapping[$ptp4l_interface]* > > > *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file > is as below: > > resource_registry: > # FIXME(bogdando): switch it, once it is containerized > OS::TripleO::Services::Ptp: > ../../deployment/time/ptp-baremetal-puppet.yaml > OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp > > parameter_defaults: > # PTP hardware interface name > *PtpInterface: 'nic1'* > > # Configure PTP clock in slave mode > PtpSlaveMode: 1 > > # Configure PTP message transport protocol > PtpMessageTransport: 'UDPv4' > > I have also tried modifying the entry as below: > *PtpInterface: 'nic1' #*(i.e. without quotes), but the error remains the > same. > > Queries: > > 1. Any pointers to resolve this? > > I'm not familiar with ptp but you'd need to use the actual interface name if you are not using the alias name. > > 1. You were mentioning something about the support of PTP not there in > the wallaby release. Can you please confirm? > > IIUC PTP is still supported even in master. What we removed is the implementation using Puppet which was replaced by ansible. The warning regarding OS::TripleO::Services::Ptp was added when we decided to merge all time sync services to the single service resource which is OS::TripleO::Services::Timesync[1]. It's related to how resources are defined in Heat and doesn't affect configuration support itself. [1] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/586679 > It would be a great help if you could extend a little more support to > resolve the issues. > > Regards > Anirudh Gupta > > > On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta wrote: > >> I'll check that well. >> By the way, I downloaded the images from the below link >> >> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >> >> They seem to be updated yesterday, I'll download and try the deployment >> with the latest images. >> >> Also are you pointing that the support for PTP would not be there in >> Wallaby Release? >> >> Regards >> Anirudh Gupta >> >> On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami >> wrote: >> >>> >>> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta >>> wrote: >>> >>>> Hi Takashi >>>> >>>> I have checked this in undercloud only. >>>> I don't find any such file in overcloud. Could this be a concern? >>>> >>> >>> The manifest should exist in overcloud nodes and the missing file is the >>> exact cause >>> of that puppet failure during deployment. >>> >>> Please check your overcloud images used to install overcloud nodes and >>> ensure that >>> you're using the right one. You might be using the image for a different >>> release. >>> We removed the manifest file during the Wallaby cycle. 
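To make the "use the actual interface name" advice above concrete, a rough sketch follows. The device name 'ens3' is only a placeholder for whatever physical NIC actually exists on the overcloud nodes, and the os-net-config invocation simply mirrors the generate() call quoted from ptp.pp earlier in the thread:

# On an overcloud node: check what the manifest's generate() call
# returns for the alias currently set in PtpInterface
sudo /bin/os-net-config -i nic1

# On the undercloud: override PtpInterface with the real device name
# in a small custom environment file ...
cat > /home/stack/templates/ptp-interface.yaml <<'EOF'
parameter_defaults:
  PtpInterface: 'ens3'
EOF

# ... and pass it after the stock ptp.yaml on the deploy command line
# so that it takes precedence:
#   -e /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \
#   -e /home/stack/templates/ptp-interface.yaml \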
>>> >>> >>>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> >>>> >>>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami >>>> wrote: >>>> >>>>> >>>>> >>>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami >>>>> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta >>>>>> wrote: >>>>>> >>>>>>> Hi Takashi, >>>>>>> >>>>>>> Thanks for your reply. >>>>>>> >>>>>>> I have checked on my machine and the file "ptp.pp" do exist at path " >>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>> " >>>>>>> >>>>>> Did you check this in your undercloud or overcloud ? >>>>>> During the deployment all configuration files are generated using >>>>>> puppet modules >>>>>> installed in overcloud nodes, so you should check this in overcloud >>>>>> nodes. >>>>>> >>>>>> Also, the deprecation warning is not implemented >>>>>> >>>>> Ignore this incomplete line. I was looking for the implementation >>>>> which shows the warning >>>>> but I found it in tripleoclient and it looks reasonable according to >>>>> what we have in >>>>> environments/services/ptp.yaml . >>>>> >>>>> >>>>>> >>>>>> >>>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>>>> for controller and compute *before rendering the templates, but >>>>>>> still I am getting the same issue on all the 3 Controllers and 1 Compute >>>>>>> >>>>>> >>>>>> IIUC you don't need this because OS::TripleO::Services::Timesync >>>>>> becomes an alias >>>>>> to the Ptp service resource when you use the ptp environment file. >>>>>> >>>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>>> >>>>>> >>>>>>> >>>>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>>>> overcloud-controller-0.localdomain (file: >>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>> "stdout": "", "stdout_lines": []} >>>>>>> >>>>>>> Can you suggest any workarounds or any pointers to look further in >>>>>>> order to resolve this issue? >>>>>>> >>>>>> >>>>>>> Regards >>>>>>> Anirudh Gupta >>>>>>> >>>>>>> >>>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami < >>>>>>> tkajinam at redhat.com> wrote: >>>>>>> >>>>>>>> I'm not familiar with PTP, but the error you pasted indicates that >>>>>>>> the required puppet manifest does not exist in your overcloud node/image. >>>>>>>> >>>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>>> >>>>>>>> This should not happen and the class should exist as long as you >>>>>>>> have puppet-tripleo from stable/train installed. >>>>>>>> >>>>>>>> I'd recommend you check installed tripleo/puppet packages and >>>>>>>> ensure everything is in the consistent release. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hi All >>>>>>>>> >>>>>>>>> Any update on this? >>>>>>>>> >>>>>>>>> Regards >>>>>>>>> Anirudh Gupta >>>>>>>>> >>>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Hi Team, >>>>>>>>>> >>>>>>>>>> Is there any Support for PTP in Openstack TripleO ? 
>>>>>>>>>> >>>>>>>>>> When I was executing the Overcloud deployment script, passing the >>>>>>>>>> PTP yaml, it gave the following option at the starting >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>>> continue with deployment [y/N]* >>>>>>>>>> >>>>>>>>>> even if passing Y, it starts executing for sometime and the gives >>>>>>>>>> the following error >>>>>>>>>> >>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>>>>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Can someone suggest some pointers in order to resolve this issue >>>>>>>>>> and move forward? >>>>>>>>>> >>>>>>>>>> Regards >>>>>>>>>> Anirudh Gupta >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> Hi Team, >>>>>>>>>>> >>>>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>>>> successfully. >>>>>>>>>>> I need to enable PTP service while deploying the overcloud for >>>>>>>>>>> which I have included the service in my deployment >>>>>>>>>>> >>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>> -e /home/stack/templates/environments/network-isolation.yaml >>>>>>>>>>> \ >>>>>>>>>>> -e >>>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>> \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>> \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>> \ >>>>>>>>>>> * -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>>> \* >>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>> >>>>>>>>>>> But it gives the following error >>>>>>>>>>> >>>>>>>>>>> 2022-05-06 11:30:10.707655 | >>>>>>>>>>> 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host >>>>>>>>>>> configuration to finish | overcloud-controller-0 | error={"ansible_job_id": >>>>>>>>>>> "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o >>>>>>>>>>> pipefail; puppet apply >>>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": 
>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>>> 'lookup'. See >>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>>>>>> (file: >>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while evaluating a >>>>>>>>>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>>>>>>>>>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Can someone please help in resolving this issue? >>>>>>>>>>> >>>>>>>>>>> Regards >>>>>>>>>>> Anirudh Gupta >>>>>>>>>>> >>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stephenfin at redhat.com Tue May 10 17:03:21 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 10 May 2022 18:03:21 +0100 Subject: [requirements][oslo.log] failure to update to oslo.log===4.8.0 In-Reply-To: <20220510145630.f5qtjzlozeoiuwej@mthode.org> References: <20220510145630.f5qtjzlozeoiuwej@mthode.org> Message-ID: <65128159cdb917c072cf474ac569dc0c0a081392.camel@redhat.com> On Tue, 2022-05-10 at 09:56 -0500, Matthew Thode wrote: > Hi all, > > It looks like the latest oslo.log update is failing to pass tempest. > If anyone is around to look I'd appreciate it. > https://review.opendev.org/840630 Repeating from the review, this seems to be caused by [1]. I'm not entirely sure why Tempest specifically is unhappy with this. I'll try to find time to look at this. Hopefully Takashi-san will beat me to it though, as the author of that change ;) Stephen [1] https://review.opendev.org/c/openstack/oslo.log/+/838190 From johnsomor at gmail.com Tue May 10 17:04:04 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 10 May 2022 10:04:04 -0700 Subject: [designate] How to avoid NXDOMAIN or stale data during cold start of a (new) machine In-Reply-To: <69ab8e54-f419-4cd1-f289-a0b5efb7f723@inovex.de> References: <69ab8e54-f419-4cd1-f289-a0b5efb7f723@inovex.de> Message-ID: Hi Christian, On startup, BIND9 will start sending SOA serial number queries for all of the zones it knows about. In the case of Designate, that means BIND9 will send out requests to the miniDNS instances to check if the serial number in Designate is newer than the one in BIND9. If the serial number in Designate is newer, BIND9 will initiate a zone transfer from the miniDNS in Designate. BIND9, by default, will do 20 SOA serial number queries at a time (less on older versions of BIND). See the serial-query-rate setting in the rate limiter knowledge base article[1]. The tuning knowledge base article[2] also discusses settings that can be adjusted for secondary servers that may also help speed up a cold startup. Off my head, I don't know of a way to tell BIND9 to not answer queries via rdnc or such. I usually block network access to a new BIND9 instance until the "rdnc status" shows the "soa queries in progress" and "xfers running" drop to 0 or a low number. Maybe others will have different approaches? As for runtime of a full resync in BIND9, that really depends on the number and size of the zones as well as the configuration settings I mentioned above. The performance of the host running the miniDNS instances and database will also have an impact. Michael [1] https://kb.isc.org/v1/docs/rate-limiters-for-authoritative-zone-propagation [2] https://kb.isc.org/docs/aa-00726#options-for-tuning-secondary-servers On Tue, May 10, 2022 at 2:02 AM Christian Rohmann wrote: > > Hello openstack-discuss, > > I have a designate setup using bind9 as the user-serving DNS server. > > When starting a machine with either very old or no zones at all, > NXDOMAIN or other actually stale data is sent out to clients as designate > is not done doing an initial full sync / reconciliation. > > > * What is the "proper" way to tackle this cold-start issue and to keep > the bind from serving wrong data? > ** Did I miss on any options to handle this startup case? > > * What is the usual runtime for an initial sync that you observe in case > the backend DNS server has no zones at all anymore? 
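
Building on the approach Michael describes (keeping client traffic away from a freshly started BIND9 until `rndc status` -- the BIND control utility -- shows it has caught up), a rough sketch of that wait loop could look like the following. The counter names are the ones `rndc status` prints; how you actually gate traffic (firewall rule, resolver-pool membership, ...) is deliberately left out:

import re
import subprocess
import time

COUNTERS = ('soa queries in progress', 'xfers running')

def bind_caught_up(threshold=0):
    out = subprocess.run(['rndc', 'status'], check=True,
                         capture_output=True, text=True).stdout
    for name in COUNTERS:
        match = re.search(name + r':\s*(\d+)', out)
        # Treat a missing counter as "not ready" to stay on the safe side.
        if match is None or int(match.group(1)) > threshold:
            return False
    return True

def wait_until_caught_up(poll_seconds=10):
    # Keep the server closed to clients until this returns, then open the
    # firewall / re-add it to the pool.
    while not bind_caught_up():
        time.sleep(poll_seconds)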
> > > > Regards > > > Christian > > > From stephenfin at redhat.com Tue May 10 17:24:08 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 10 May 2022 18:24:08 +0100 Subject: [requirements][oslo.log] failure to update to oslo.log===4.8.0 In-Reply-To: <65128159cdb917c072cf474ac569dc0c0a081392.camel@redhat.com> References: <20220510145630.f5qtjzlozeoiuwej@mthode.org> <65128159cdb917c072cf474ac569dc0c0a081392.camel@redhat.com> Message-ID: On Tue, 2022-05-10 at 18:03 +0100, Stephen Finucane wrote: > On Tue, 2022-05-10 at 09:56 -0500, Matthew Thode wrote: > > Hi all, > > > > It looks like the latest oslo.log update is failing to pass tempest. > > If anyone is around to look I'd appreciate it. > > https://review.opendev.org/840630 > > Repeating from the review, this seems to be caused by [1]. I'm not entirely sure > why Tempest specifically is unhappy with this. I'll try to find time to look at > this. Hopefully Takashi-san will beat me to it though, as the author of that > change ;) Actually, maybe not that hard to solve. I think [1] will fix the issue. Tempest doesn't use keystoneauth1 like everyone else so they didn't get the global request ID stuff for free. We need to pass some additional context when logging to keep this new version of oslo.log happy. If you've thoughts on this, please let me know on the review. The other option is to revert the change on oslo.log and cut a new release, blacklisting this one. Stephen [1] https://review.opendev.org/c/openstack/tempest/+/841310 > > Stephen > > [1] https://review.opendev.org/c/openstack/oslo.log/+/838190 From gmann at ghanshyammann.com Tue May 10 17:44:32 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 10 May 2022 12:44:32 -0500 Subject: [requirements][oslo.log] failure to update to oslo.log===4.8.0 In-Reply-To: References: <20220510145630.f5qtjzlozeoiuwej@mthode.org> <65128159cdb917c072cf474ac569dc0c0a081392.camel@redhat.com> Message-ID: <180af1294c0.e4d8f007168320.1115622902360506076@ghanshyammann.com> ---- On Tue, 10 May 2022 12:24:08 -0500 Stephen Finucane wrote ---- > On Tue, 2022-05-10 at 18:03 +0100, Stephen Finucane wrote: > > On Tue, 2022-05-10 at 09:56 -0500, Matthew Thode wrote: > > > Hi all, > > > > > > It looks like the latest oslo.log update is failing to pass tempest. > > > If anyone is around to look I'd appreciate it. > > > https://review.opendev.org/840630 > > > > Repeating from the review, this seems to be caused by [1]. I'm not entirely sure > > why Tempest specifically is unhappy with this. I'll try to find time to look at > > this. Hopefully Takashi-san will beat me to it though, as the author of that > > change ;) > > Actually, maybe not that hard to solve. I think [1] will fix the issue. Tempest > doesn't use keystoneauth1 like everyone else so they didn't get the global > request ID stuff for free. We need to pass some additional context when logging > to keep this new version of oslo.log happy. > > If you've thoughts on this, please let me know on the review. The other option > is to revert the change on oslo.log and cut a new release, blacklisting this > one. I am ok with tempest change but IMO oslo.log changes are backward-incompatible with not much value? may be it should check if there is global request-id then of otherwise skip? 
-gmann > > Stephen > > [1] https://review.opendev.org/c/openstack/tempest/+/841310 > > > > > Stephen > > > > [1] https://review.opendev.org/c/openstack/oslo.log/+/838190 > > > From peter.matulis at canonical.com Tue May 10 21:40:34 2022 From: peter.matulis at canonical.com (Peter Matulis) Date: Tue, 10 May 2022 17:40:34 -0400 Subject: [docs] feedback mechanism Message-ID: Hi, I was wondering what the various projects use to gather user feedback for their documentation. Is something like what is visible here [0,1] acceptable to the OpenStack community?There is a widget at the right side of the page (disable your ad-blocker!). [0]: https://juju.is/docs/olm [1]: https://charmed-kubeflow.io/docs Peter Matulis OpenStack Charms team -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue May 10 22:18:45 2022 From: melwittt at gmail.com (melanie witt) Date: Tue, 10 May 2022 15:18:45 -0700 Subject: [requirements][oslo.log] failure to update to oslo.log===4.8.0 In-Reply-To: <180af1294c0.e4d8f007168320.1115622902360506076@ghanshyammann.com> References: <20220510145630.f5qtjzlozeoiuwej@mthode.org> <65128159cdb917c072cf474ac569dc0c0a081392.camel@redhat.com> <180af1294c0.e4d8f007168320.1115622902360506076@ghanshyammann.com> Message-ID: On Tue May 10 2022 10:44:32 GMT-0700 (Pacific Daylight Time), Ghanshyam Mann wrote: > ---- On Tue, 10 May 2022 12:24:08 -0500 Stephen Finucane wrote ---- > > On Tue, 2022-05-10 at 18:03 +0100, Stephen Finucane wrote: > > > On Tue, 2022-05-10 at 09:56 -0500, Matthew Thode wrote: > > > > Hi all, > > > > > > > > It looks like the latest oslo.log update is failing to pass tempest. > > > > If anyone is around to look I'd appreciate it. > > > > https://review.opendev.org/840630 > > > > > > Repeating from the review, this seems to be caused by [1]. I'm not entirely sure > > > why Tempest specifically is unhappy with this. I'll try to find time to look at > > > this. Hopefully Takashi-san will beat me to it though, as the author of that > > > change ;) > > > > Actually, maybe not that hard to solve. I think [1] will fix the issue. Tempest > > doesn't use keystoneauth1 like everyone else so they didn't get the global > > request ID stuff for free. We need to pass some additional context when logging > > to keep this new version of oslo.log happy. > > > > If you've thoughts on this, please let me know on the review. The other option > > is to revert the change on oslo.log and cut a new release, blacklisting this > > one. > > I am ok with tempest change but IMO oslo.log changes are backward-incompatible with not > much value? may be it should check if there is global request-id then of otherwise skip? +1, it feels like this is a bug in oslo.log. Prior to the global_request_id change, it did the following test before using the logging_context_format_string [2]: if record.__dict__.get('request_id'): fmt = self.conf.logging_context_format_string else: fmt = self.conf.logging_default_format_string which protected against a KeyError in the case of request_id. It seems like something similar needs to be added for global_request_id. 
-melanie [2] https://github.com/openstack/oslo.log/blob/ebdee7f39920ad5b4268ee296952432b0d41a375/oslo_log/formatters.py#L468 > > [1] https://review.opendev.org/c/openstack/tempest/+/841310 > > > > > > > > Stephen > > > > > > [1] https://review.opendev.org/c/openstack/oslo.log/+/838190 > > > > > > > From tkajinam at redhat.com Wed May 11 03:10:57 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 11 May 2022 12:10:57 +0900 Subject: [puppet][tripleo][neutron] Deprecating support for networking-ansible Message-ID: Hello Currently puppet-neutron supports deploying networking-ansible[1]. However I noticed the project looks unmaintained. - No release was made since stable/victoria - Its job template has not been updated still points openstack-python3-wallaby-jobs (likely because of no release activity) - setup.cfg still indicates support for Python 2 while it does not support 3.6/7/9 Although the repo got a few patches merged 6 months ago, I've not seen any interest in maintaining the repository. Based on the above observations, I'd propose deprecating the support in Zed and removing it in the post-Zed release. TripleO also supports networking-ansible but that will be also deprecated and removed because the dependent implementation will be removed. (That's why I added the tripleo tag) If you have any concerns then please let me know. If we don't hear any concerns in a few weeks then I'll propose the deprecation. I'm adding the [neutron] tag in case anybody in the team is interested in fixing the maintenance status. IIUC networking-ansible is not part of the stadium but I'm not aware of any good alternatives... [1] https://opendev.org/x/networking-ansible Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Wed May 11 07:19:29 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 11 May 2022 09:19:29 +0200 Subject: [neutron][opencontrail]Trunk ports doesn't have standard attributes In-Reply-To: References: Message-ID: Hi Gleb: Please check the OSC version you are using. The OSC extension for Trunks is in the neutron-pythonclient project. And in particular, the support for the "description" parameter was added in [1]. This is the output in a system using master branches: [2]. In any case, what you need in "contrail-neutron-plugin" is what the API is returning. Don't mix the OSC output and the API. If you execute [3] (as commented in my previous mail), you'll see the "description" parameter (and others). This is what the plugin is retrieving and the information you'll handle. Again, please check the "fields" parameter in order not to select a limited set of fields, discarding what you are looking for. Place a debug line after the API call to check what you are receiving. Regards. [1]https://review.opendev.org/c/openstack/python-neutronclient/+/362614 [2]https://paste.opendev.org/show/bZbnOaKSvGDR6vbR4ITM/ [3]https://paste.opendev.org/show/bh1QD8VqLz4OKMKYBXUa/ On Wed, May 11, 2022 at 6:55 AM Gleb Zimin wrote: > Thanks for reply, Rodolfo. > > When I want to show exact trunk, there is none of standard params. 
> $ openstack network trunk show trunk1 > +----------------+--------------------------------------+ > | Field | Value | > +----------------+--------------------------------------+ > | admin_state_up | UP | > | id | 96babd03-8d5c-4b80-9118-b5e0a0e8d98d | > | name | trunk1 | > | port_id | b61ac310-35a6-41b6-9af1-8c25f73dec06 | > | project_id | 2923ed63b7934e4b89ce60bfe78dfd87 | > | status | DOWN | > | sub_ports | | > | tenant_id | 2923ed63b7934e4b89ce60bfe78dfd87 | > +----------------+--------------------------------------+ > > Also, if I want to set description to the trunk, I get this: > $ openstack network trunk set trunk1 --description test > Failed to set trunk 'trunk1': Unrecognized attribute(s) 'description' > Neutron server returns request_ids: > ['req-1aca1b69-fa75-466d-aef9-eb0786463555'] > > This request fails on neutron side, it appears before request proceeded on > the contrail-neutron-plugin side. > > https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/attributes.py#L284 > > On Fri, May 6, 2022 at 7:13 PM Rodolfo Alonso Hernandez < > ralonsoh at redhat.com> wrote: > >> Hello Gleb: >> >> The OC plugin is using the Neutron API to execute operations on the >> server. The "get_trunk" method calls "_get_resource" [1]. I've tested the >> Neutron API and this is what you get from the API: [2]. All standard >> attributes are returned. >> >> Maybe this could be related to the "fields" parameter you are passing in >> the method [3]. Check if you are filtering the API call and only retrieving >> a certain subset of fields. >> >> Regards. >> >> [1] >> https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/contrail_plugin.py;h=80104f1e673eb4583f7a15707c3590c510fd7667;hb=refs/heads/master#l326 >> [2]https://paste.opendev.org/show/bh1QD8VqLz4OKMKYBXUa/ >> [3] >> https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/services/trunk/plugin.py;h=35fc3310911143fd3b4cf8997c23d0358d652dba;hb=refs/heads/master#l50 >> >> On Fri, May 6, 2022 at 4:35 PM Gleb Zimin wrote: >> >>> Environment: Openstack Victoria, OpenContrail (TungstenFabric) R2011. >>> Problem: Trunk ports doesn't have standard attributes such as >>> description, timestamp. >>> I have an environment where core plugin is OpenContrail. OpenContrail >>> has tf-neutron-plugin for proper work with neutron. There is TrunkPlugin, >>> that proxies all trunk-related request to the OpenContrail backend. Here is >>> the link for this plugin. >>> https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/services/trunk/plugin.py;h=35fc3310911143fd3b4cf8997c23d0358d652dba;hb=refs/heads/master >>> According to the openstack documentation: Resources that inherit from >>> the HasStandardAttributes DB class can automatically have the extensions >>> written for standard attributes (e.g. timestamps, revision number, etc) >>> extend their resources by defining the ?api_collections? on their model. >>> These are used by extensions for standard attr resources to generate the >>> extended resources map. >>> As I understood, it works only for plugins, which using Openstack >>> database. For example, openstack native trunk plugin has models.py file >>> where we can see this inheritance. 
>>> https://github.com/openstack/neutron/blob/master/neutron/services/trunk/models.py#L26 >>> In case of OpenContrail+OpenStack Trunk plugin only redirect requests. >>> What can I do to make Contrail Trunk plugin works in the same way? >>> I'll appreciate any help. Thanks in advance. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed May 11 08:36:22 2022 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 11 May 2022 10:36:22 +0200 Subject: [requirements][oslo.log] failure to update to oslo.log===4.8.0 In-Reply-To: References: <20220510145630.f5qtjzlozeoiuwej@mthode.org> <65128159cdb917c072cf474ac569dc0c0a081392.camel@redhat.com> <180af1294c0.e4d8f007168320.1115622902360506076@ghanshyammann.com> Message-ID: Hello, Le mer. 11 mai 2022 ? 00:20, melanie witt a ?crit : > On Tue May 10 2022 10:44:32 GMT-0700 (Pacific Daylight Time), Ghanshyam > Mann wrote: > > ---- On Tue, 10 May 2022 12:24:08 -0500 Stephen Finucane < > stephenfin at redhat.com> wrote ---- > > > On Tue, 2022-05-10 at 18:03 +0100, Stephen Finucane wrote: > > > > On Tue, 2022-05-10 at 09:56 -0500, Matthew Thode wrote: > > > > > Hi all, > > > > > > > > > > It looks like the latest oslo.log update is failing to pass > tempest. > > > > > If anyone is around to look I'd appreciate it. > > > > > https://review.opendev.org/840630 > > > > > > > > Repeating from the review, this seems to be caused by [1]. I'm not > entirely sure > > > > why Tempest specifically is unhappy with this. I'll try to find > time to look at > > > > this. Hopefully Takashi-san will beat me to it though, as the > author of that > > > > change ;) > > > > > > Actually, maybe not that hard to solve. I think [1] will fix the > issue. Tempest > > > doesn't use keystoneauth1 like everyone else so they didn't get the > global > > > request ID stuff for free. We need to pass some additional context > when logging > > > to keep this new version of oslo.log happy. > > > > > > If you've thoughts on this, please let me know on the review. The > other option > > > is to revert the change on oslo.log and cut a new release, > blacklisting this > > > one. > > > > I am ok with tempest change but IMO oslo.log changes are > backward-incompatible with not > > much value? may be it should check if there is global request-id then of > otherwise skip? > I agree with that, we need to better handle this point in oslo.log. > +1, it feels like this is a bug in oslo.log. Prior to the > global_request_id change, it did the following test before using the > logging_context_format_string [2]: > > if record.__dict__.get('request_id'): > fmt = self.conf.logging_context_format_string > else: > fmt = self.conf.logging_default_format_string > > which protected against a KeyError in the case of request_id. It seems > like something similar needs to be added for global_request_id. > If we apply the same logic for `global_request_id` then we will open the door of an implicit remplacement of the used format. By example if `request_id` is given and not `global_request_id` then we will override the context format with the default format implicitly. I'd rather suggest to init `global_request` with a default value (`None`) as we did previously with other context var format => https://opendev.org/openstack/oslo.log/commit/c50d3d633ba Thoughts? 
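
To make the trade-off concrete, here is a small sketch using plain Python logging rather than oslo.log's real ContextFormatter (the class and format strings below are invented for the example, not oslo.log's defaults): falling back to the default format whenever a key is missing silently drops the context from the output, whereas pre-seeding optional keys such as global_request_id keeps the context format and only leaves a placeholder:

import logging

CONTEXT_FMT = ('%(asctime)s %(levelname)s %(name)s '
               '[%(request_id)s %(global_request_id)s] %(message)s')
DEFAULT_FMT = '%(asctime)s %(levelname)s %(name)s [-] %(message)s'

class SketchContextFormatter(logging.Formatter):
    # Context keys a caller may legitimately not set (e.g. a client that does
    # not go through keystoneauth1, as with Tempest in this thread).
    _OPTIONAL_KEYS = ('global_request_id',)

    def format(self, record):
        if getattr(record, 'request_id', None):
            # Pre-seed missing optional keys instead of switching to the
            # default format, so the output format does not change implicitly.
            for key in self._OPTIONAL_KEYS:
                if not hasattr(record, key):
                    setattr(record, key, '-')
            self._style._fmt = CONTEXT_FMT
        else:
            self._style._fmt = DEFAULT_FMT
        return super().format(record)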
> > -melanie > > [2] > > https://github.com/openstack/oslo.log/blob/ebdee7f39920ad5b4268ee296952432b0d41a375/oslo_log/formatters.py#L468 > > > > [1] https://review.opendev.org/c/openstack/tempest/+/841310 > > > > > > > > > > > Stephen > > > > > > > > [1] https://review.opendev.org/c/openstack/oslo.log/+/838190 > > > > > > > > > > > > > > -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed May 11 08:57:32 2022 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 11 May 2022 10:57:32 +0200 Subject: [requirements][oslo.log] failure to update to oslo.log===4.8.0 In-Reply-To: References: <20220510145630.f5qtjzlozeoiuwej@mthode.org> <65128159cdb917c072cf474ac569dc0c0a081392.camel@redhat.com> <180af1294c0.e4d8f007168320.1115622902360506076@ghanshyammann.com> Message-ID: I just submitted a patch to fix this issue in the oslo.log side, reviews are welcomes https://review.opendev.org/c/openstack/oslo.log/+/841383 Le mer. 11 mai 2022 ? 10:36, Herve Beraud a ?crit : > Hello, > > > > Le mer. 11 mai 2022 ? 00:20, melanie witt a ?crit : > >> On Tue May 10 2022 10:44:32 GMT-0700 (Pacific Daylight Time), Ghanshyam >> Mann wrote: >> > ---- On Tue, 10 May 2022 12:24:08 -0500 Stephen Finucane < >> stephenfin at redhat.com> wrote ---- >> > > On Tue, 2022-05-10 at 18:03 +0100, Stephen Finucane wrote: >> > > > On Tue, 2022-05-10 at 09:56 -0500, Matthew Thode wrote: >> > > > > Hi all, >> > > > > >> > > > > It looks like the latest oslo.log update is failing to pass >> tempest. >> > > > > If anyone is around to look I'd appreciate it. >> > > > > https://review.opendev.org/840630 >> > > > >> > > > Repeating from the review, this seems to be caused by [1]. I'm >> not entirely sure >> > > > why Tempest specifically is unhappy with this. I'll try to find >> time to look at >> > > > this. Hopefully Takashi-san will beat me to it though, as the >> author of that >> > > > change ;) >> > > >> > > Actually, maybe not that hard to solve. I think [1] will fix the >> issue. Tempest >> > > doesn't use keystoneauth1 like everyone else so they didn't get the >> global >> > > request ID stuff for free. We need to pass some additional context >> when logging >> > > to keep this new version of oslo.log happy. >> > > >> > > If you've thoughts on this, please let me know on the review. The >> other option >> > > is to revert the change on oslo.log and cut a new release, >> blacklisting this >> > > one. >> > >> > I am ok with tempest change but IMO oslo.log changes are >> backward-incompatible with not >> > much value? may be it should check if there is global request-id then >> of otherwise skip? >> > > I agree with that, we need to better handle this point in oslo.log. > > >> +1, it feels like this is a bug in oslo.log. Prior to the >> global_request_id change, it did the following test before using the >> logging_context_format_string [2]: >> >> if record.__dict__.get('request_id'): >> fmt = self.conf.logging_context_format_string >> else: >> fmt = self.conf.logging_default_format_string >> > >> which protected against a KeyError in the case of request_id. It seems >> like something similar needs to be added for global_request_id. >> > > If we apply the same logic for `global_request_id` then we will open the > door of an implicit remplacement of the used format. 
By example if > `request_id` is given and not `global_request_id` then we will override the > context format with the default format implicitly. > > I'd rather suggest to init `global_request` with a default value (`None`) > as we did previously with other context var format => > https://opendev.org/openstack/oslo.log/commit/c50d3d633ba > > Thoughts? > >> >> -melanie >> >> [2] >> >> https://github.com/openstack/oslo.log/blob/ebdee7f39920ad5b4268ee296952432b0d41a375/oslo_log/formatters.py#L468 >> >> > > [1] https://review.opendev.org/c/openstack/tempest/+/841310 >> > > >> > > > >> > > > Stephen >> > > > >> > > > [1] https://review.opendev.org/c/openstack/oslo.log/+/838190 >> > > >> > > >> > > >> > >> >> >> > > -- > Herv? Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > > -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Wed May 11 12:15:47 2022 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 11 May 2022 21:15:47 +0900 Subject: [neutron][opencontrail]Trunk ports doesn't have standard attributes In-Reply-To: References: Message-ID: Hi Gleb, The standard attributes are populated by StandardAttrDescriptionMixin [1]. At a quick glance, the OpenContrail core plugin (both NeutronPluginContrailCoreV3 or NeutronPluginContrailCoreV2) inherits NeutronPluginContrailCoreBase [2] which inherits NeutronPluginBaseV2 rather than NeutronDbPluginV2. Thus, what StandardAttrDescriptionMixin does is not available in your case. For best case, inheriting StandardAttrDescriptionMixin in the core plugin addresses your issue, but I am not sure. >From the initial implementation of Contrail plugin, Contrail plugin inherits NeutronPluginBaseV2 instead of NeutronDbPluginV2. Most logics behind the neutron API are implemented by NeutronDbPluginV2 and perhaps the OpenStack documentation assumes how NeutronDbPluginV2 behaves. However, Contrail plugin did not choose this way and implemented all behaviors behind the neutron API by its plugin, so Contrail plugin needs to implement almost all logics in NeutronDbPluginV2 by itself. I wonder if the issue you hit comes from this situation. I don't know why this way was chosen but this is the real situation. Hope it may help you a bit. [1] https://opendev.org/openstack/neutron/src/branch/master/neutron/db/standardattrdescription_db.py [2] https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/contrail_plugin_base.py;h=7bb7b54ab7bb35ec2193f2637f622fac8f39aed5;hb=refs/heads/master Thanks, Akihiro Motoki (amotoki) On Fri, May 6, 2022 at 11:34 PM Gleb Zimin wrote: > > Environment: Openstack Victoria, OpenContrail (TungstenFabric) R2011. > Problem: Trunk ports doesn't have standard attributes such as description, timestamp. > I have an environment where core plugin is OpenContrail. OpenContrail has tf-neutron-plugin for proper work with neutron. There is TrunkPlugin, that proxies all trunk-related request to the OpenContrail backend. Here is the link for this plugin. 
https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/services/trunk/plugin.py;h=35fc3310911143fd3b4cf8997c23d0358d652dba;hb=refs/heads/master > According to the openstack documentation: Resources that inherit from the HasStandardAttributes DB class can automatically have the extensions written for standard attributes (e.g. timestamps, revision number, etc) extend their resources by defining the ?api_collections? on their model. These are used by extensions for standard attr resources to generate the extended resources map. > As I understood, it works only for plugins, which using Openstack database. For example, openstack native trunk plugin has models.py file where we can see this inheritance. https://github.com/openstack/neutron/blob/master/neutron/services/trunk/models.py#L26 > In case of OpenContrail+OpenStack Trunk plugin only redirect requests. > What can I do to make Contrail Trunk plugin works in the same way? > I'll appreciate any help. Thanks in advance. From anyrude10 at gmail.com Wed May 11 07:25:37 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 11 May 2022 12:55:37 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: Hi Takashi, Thanks for clarifying my issues regarding the support of PTP in Wallaby Release. In Train, I have also tried passing the exact interface name and took 2 runs with and without quotes like below: *PtpInterface: eno1* *PtpInterface: 'eno1'* But in both the cases, the issue observed was similar 2022-05-11 10:33:20.189107 | 5254001f-9952-934d-e901-0000000030be | FATAL | Wait for puppet host configuration to finish | overcloud-controller-2 | error={"ansible_job_id": "526310775819.36650", "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --detailed-exitcodes --summarize --color=false /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t puppet-user", "delta": "0:00:04.289208", "end": "2022-05-11 10:33:08.195052", "failed_when_result": true, "finished": 1, "msg": "non-zero return code", "rc": 1, "start": "2022-05-11 10:33:03.905844", "stderr": "<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5\n<13>May 11 10:33:03 puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 11 10:33:03 puppet-user: Warning: Undefined variable '::deploy_config_name'; \\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: module 'tripleo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules\\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: Error: Evaluation Error: A substring operation does not accept a String as a character index. Expected an Integer (file: /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, column: 46) on node overcloud-controller-2.localdomain", "stderr_lines": ["<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & line not available)", "<13>May 11 10:33:03 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5", "<13>May 11 10:33:03 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 11 10:33:03 puppet-user: Warning: Undefined variable '::deploy_config_name'; \\n (file & line not available)", "<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: module 'tripleo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules\\n (file & line not available)", "<13>May 11 10:33:03 puppet-user: *Error: Evaluation Error: A substring operation does not accept a String as a character index. Expected an Integer (file: /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, column: 46) on node overcloud-controller-2.localdomain"], "stdout": "", "stdout_lines": []}* 2022-05-11 10:33:20.190263 | 5254001f-9952-934d-e901-0000000030be | TIMING | Wait for puppet host configuration to finish | overcloud-controller-2 | 0:12:41.268734 | 7.01s I'll be highly grateful if you could further extend your support to resolve the issue. Regards Anirudh Gupta On Tue, May 10, 2022 at 9:15 PM Takashi Kajinami wrote: > > > On Wed, May 11, 2022 at 12:19 AM Anirudh Gupta > wrote: > >> Hi Takashi, >> >> Thanks for your suggestion. >> >> I downloaded the updated Train Images and they had the ptp.pp file >> available on the overcloud and undercloud machines >> >> [root at overcloud-controller-1 /]# find . -name "ptp.pp" >> >> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >> >> With this, I re-executed the deployment and got the below error on the >> machines >> >> 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce | >> FATAL | Wait for puppet host configuration to finish | >> overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", >> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >> --detailed-exitcodes --summarize --color=false >> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >> puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 >> 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": >> "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", >> "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' >> is deprecated in favor of using 'lookup'. See >> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & >> line not available)\n<13>May 10 20:05:37 puppet-user: Warning: >> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >> should be converted to version 5\n<13>May 10 20:05:37 puppet-user: >> (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: >> Undefined variable '::deploy_config_name'; \\n (file & line not >> available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module >> 'tripleo' has unresolved dependencies - it will only see those that are >> resolved. Use 'puppet module list --tree' to see information about >> modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: >> Error: Evaluation Error: A substring operation does not accept a String as >> a character index. 
Expected an Integer (file: >> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >> column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": >> ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is >> deprecated in favor of using 'lookup'. See >> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & >> line not available)", "<13>May 10 20:05:37 puppet-user: Warning: >> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >> should be converted to version 5", "<13>May 10 20:05:37 puppet-user: >> (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: >> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >> available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: >> module 'tripleo' has unresolved dependencies - it will only see those that >> are resolved. Use 'puppet module list --tree' to see information about >> modules\\n (file & line not available)", "<13>May 10 20:05:37 >> puppet-user: *Error: Evaluation Error: A substring operation does not >> accept a String as a character index. Expected an Integer (file: >> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >> column: 46) *on node overcloud-controller-1.localdomain"], "stdout": "", >> "stdout_lines": []} >> >> The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, >> line: 41, column: 46 *had the following code: >> 34 class tripleo::profile::base::time::ptp ( >> 35 $ptp4l_interface = 'eth0', >> 36 $ptp4l_conf_slaveonly = 1, >> 37 $ptp4l_conf_network_transport = 'UDPv4', >> 38 ) { >> 39 >> 40 $interface_mapping = generate('/bin/os-net-config', '-i', >> $ptp4l_interface) >> 41 *$ptp4l_interface_name = $interface_mapping[$ptp4l_interface]* >> >> >> *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file >> is as below: >> >> resource_registry: >> # FIXME(bogdando): switch it, once it is containerized >> OS::TripleO::Services::Ptp: >> ../../deployment/time/ptp-baremetal-puppet.yaml >> OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp >> >> parameter_defaults: >> # PTP hardware interface name >> *PtpInterface: 'nic1'* >> >> # Configure PTP clock in slave mode >> PtpSlaveMode: 1 >> >> # Configure PTP message transport protocol >> PtpMessageTransport: 'UDPv4' >> >> I have also tried modifying the entry as below: >> *PtpInterface: 'nic1' #*(i.e. without quotes), but the error remains >> the same. >> >> Queries: >> >> 1. Any pointers to resolve this? >> >> I'm not familiar with ptp but you'd need to use the actual interface name > if you are not using the alias name. > > > >> >> 1. You were mentioning something about the support of PTP not there >> in the wallaby release. Can you please confirm? >> >> IIUC PTP is still supported even in master. What we removed is the > implementation using Puppet > which was replaced by ansible. > > The warning regarding OS::TripleO::Services::Ptp was added when we decided > to merge > all time sync services to the single service resource which is > OS::TripleO::Services::Timesync[1]. > It's related to how resources are defined in Heat and doesn't affect > configuration support itself. > > [1] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/586679 > > > >> It would be a great help if you could extend a little more support to >> resolve the issues. >> >> Regards >> Anirudh Gupta >> >> >> On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta >> wrote: >> >>> I'll check that well. 
>>> By the way, I downloaded the images from the below link >>> >>> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >>> >>> They seem to be updated yesterday, I'll download and try the deployment >>> with the latest images. >>> >>> Also are you pointing that the support for PTP would not be there in >>> Wallaby Release? >>> >>> Regards >>> Anirudh Gupta >>> >>> On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami >>> wrote: >>> >>>> >>>> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta >>>> wrote: >>>> >>>>> Hi Takashi >>>>> >>>>> I have checked this in undercloud only. >>>>> I don't find any such file in overcloud. Could this be a concern? >>>>> >>>> >>>> The manifest should exist in overcloud nodes and the missing file is >>>> the exact cause >>>> of that puppet failure during deployment. >>>> >>>> Please check your overcloud images used to install overcloud nodes and >>>> ensure that >>>> you're using the right one. You might be using the image for a >>>> different release. >>>> We removed the manifest file during the Wallaby cycle. >>>> >>>> >>>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> >>>>> >>>>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami >>>>> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta >>>>>>> wrote: >>>>>>> >>>>>>>> Hi Takashi, >>>>>>>> >>>>>>>> Thanks for your reply. >>>>>>>> >>>>>>>> I have checked on my machine and the file "ptp.pp" do exist at path >>>>>>>> " >>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>> " >>>>>>>> >>>>>>> Did you check this in your undercloud or overcloud ? >>>>>>> During the deployment all configuration files are generated using >>>>>>> puppet modules >>>>>>> installed in overcloud nodes, so you should check this in overcloud >>>>>>> nodes. >>>>>>> >>>>>>> Also, the deprecation warning is not implemented >>>>>>> >>>>>> Ignore this incomplete line. I was looking for the implementation >>>>>> which shows the warning >>>>>> but I found it in tripleoclient and it looks reasonable according to >>>>>> what we have in >>>>>> environments/services/ptp.yaml . >>>>>> >>>>>> >>>>>>> >>>>>>> >>>>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>>>>> for controller and compute *before rendering the templates, but >>>>>>>> still I am getting the same issue on all the 3 Controllers and 1 Compute >>>>>>>> >>>>>>> >>>>>>> IIUC you don't need this because OS::TripleO::Services::Timesync >>>>>>> becomes an alias >>>>>>> to the Ptp service resource when you use the ptp environment file. >>>>>>> >>>>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>> >>>>>>>> Can you suggest any workarounds or any pointers to look further in >>>>>>>> order to resolve this issue? 
>>>>>>>> >>>>>>> >>>>>>>> Regards >>>>>>>> Anirudh Gupta >>>>>>>> >>>>>>>> >>>>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami < >>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>> >>>>>>>>> I'm not familiar with PTP, but the error you pasted indicates that >>>>>>>>> the required puppet manifest does not exist in your overcloud node/image. >>>>>>>>> >>>>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>>>> >>>>>>>>> This should not happen and the class should exist as long as you >>>>>>>>> have puppet-tripleo from stable/train installed. >>>>>>>>> >>>>>>>>> I'd recommend you check installed tripleo/puppet packages and >>>>>>>>> ensure everything is in the consistent release. >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Hi All >>>>>>>>>> >>>>>>>>>> Any update on this? >>>>>>>>>> >>>>>>>>>> Regards >>>>>>>>>> Anirudh Gupta >>>>>>>>>> >>>>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> Hi Team, >>>>>>>>>>> >>>>>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>>>>> >>>>>>>>>>> When I was executing the Overcloud deployment script, passing >>>>>>>>>>> the PTP yaml, it gave the following option at the starting >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>>>> continue with deployment [y/N]* >>>>>>>>>>> >>>>>>>>>>> even if passing Y, it starts executing for sometime and the >>>>>>>>>>> gives the following error >>>>>>>>>>> >>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Can someone suggest some pointers in order to resolve this issue >>>>>>>>>>> and move forward? >>>>>>>>>>> >>>>>>>>>>> Regards >>>>>>>>>>> Anirudh Gupta >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta < >>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Team, >>>>>>>>>>>> >>>>>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>>>>> successfully. 
>>>>>>>>>>>> I need to enable PTP service while deploying the overcloud for >>>>>>>>>>>> which I have included the service in my deployment >>>>>>>>>>>> >>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>> -e >>>>>>>>>>>> /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>>>>> -e >>>>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>> \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>> \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>> \ >>>>>>>>>>>> * -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>>>> \* >>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>> >>>>>>>>>>>> But it gives the following error >>>>>>>>>>>> >>>>>>>>>>>> 2022-05-06 11:30:10.707655 | >>>>>>>>>>>> 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host >>>>>>>>>>>> configuration to finish | overcloud-controller-0 | error={"ansible_job_id": >>>>>>>>>>>> "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o >>>>>>>>>>>> pipefail; puppet apply >>>>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. 
(file: >>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>>>> 'lookup'. See >>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>>>>>>> (file: >>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while evaluating >>>>>>>>>>>> a Function Call, Could not find class ::tripleo::profile::base::time::ptp >>>>>>>>>>>> for overcloud-controller-0.localdomain (file: >>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>>>>>>>>>>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Can someone please help in resolving this issue? >>>>>>>>>>>> >>>>>>>>>>>> Regards >>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>> >>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ibrahim.eryalcin at ulakhaberlesme.com.tr Wed May 11 11:56:56 2022 From: ibrahim.eryalcin at ulakhaberlesme.com.tr (Halil Ibrahim ERYALCIN) Date: Wed, 11 May 2022 11:56:56 +0000 Subject: OpenStack - OVS 2.15.2 - Intel E810-C - SR-IOV issue Message-ID: Hello, We 've a issue about running SR-IOV feature on OVS with Intel E810-C NIC. Ovs configuration is shared. When try to attach interface to VM , gives error as below. I wonder , can OVS work with Intel - SR-IOV ? Has anyone ever successed about it? 
Best Regards, OS: Ubuntu 20.04.4 LTS (GNU/Linux 5.13.0-40-lowlatency x86_64) **************** ovs_version: "2.15.2" **************** root at cmp15:/home/ulak# lspci |grep Eth b1:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02) b3:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02) b3:01.0 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:01.1 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:01.2 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:01.3 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:01.4 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:01.5 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:01.6 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:01.7 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:02.0 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:02.1 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:02.2 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:02.3 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:02.4 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:02.5 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:02.6 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) b3:02.7 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02) **************** root at cmp15:/home/ulak# ethtool -i enp179s0 driver: ice version: 1.8.3 firmware-version: 3.00 0x80008271 1.2992.0 expansion-rom-version: bus-info: 0000:b3:00.0 supports-statistics: yes supports-test: yes supports-eeprom-access: yes supports-register-dump: yes supports-priv-flags: yes **************** nova.conf [pci] passthrough_whitelist = { "address": "*:b1:00.*", "physical_network": null } passthrough_whitelist = { "address": "*:b3:00.*", "physical_network": null } passthrough_whitelist = { "address": "*:b3:01.*", "physical_network": null } passthrough_whitelist = { "address": "*:b3:02.*", "physical_network": null } passthrough_whitelist = { "devname": "enp177s0", "physical_network": null } passthrough_whitelist = { "devname": "enp179s0", "physical_network": null } passthrough_whitelist = { "vendor_id":"8086", "product_id":"1592", "physical_network": null } passthrough_whitelist = { "vendor_id":"8086", "product_id":"1889", "physical_network": null } passthrough_whitelist = { "address": "0000:b3:02.7", "physical_network": null } **************** root at cmp15:/home/ulak# dpdk-devbind.py -s Network devices using kernel driver =================================== 0000:4b:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp75s0f0 drv=mlx5_core unused=vfio-pci 0000:4b:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp75s0f1 drv=mlx5_core unused=vfio-pci 0000:4c:00.0 'Ethernet Controller 10G X550T 1563' if=enp76s0f0 drv=ixgbe unused=vfio-pci 0000:4c:00.1 'Ethernet Controller 10G X550T 1563' if=enp76s0f1 drv=ixgbe unused=vfio-pci 0000:98:00.0 'MT2894 Family [ConnectX-6 Lx] 101f' if=enp152s0f0 drv=mlx5_core unused=vfio-pci 0000:98:00.1 'MT2894 Family [ConnectX-6 Lx] 101f' if=enp152s0f1 drv=mlx5_core 
unused=vfio-pci 0000:b1:00.0 'Ethernet Controller E810-C for QSFP 1592' if=enp177s0 drv=ice unused=vfio-pci 0000:b3:00.0 'Ethernet Controller E810-C for QSFP 1592' if=enp179s0 drv=ice unused=vfio-pci *Active* 0000:b3:01.0 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v0 drv=iavf unused=vfio-pci 0000:b3:01.1 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v1 drv=iavf unused=vfio-pci 0000:b3:01.2 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v2 drv=iavf unused=vfio-pci 0000:b3:01.3 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v3 drv=iavf unused=vfio-pci 0000:b3:01.4 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v4 drv=iavf unused=vfio-pci 0000:b3:01.5 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v5 drv=iavf unused=vfio-pci 0000:b3:01.6 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v6 drv=iavf unused=vfio-pci 0000:b3:01.7 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v7 drv=iavf unused=vfio-pci 0000:b3:02.0 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v8 drv=iavf unused=vfio-pci 0000:b3:02.1 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v9 drv=iavf unused=vfio-pci 0000:b3:02.2 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v10 drv=iavf unused=vfio-pci 0000:b3:02.3 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v11 drv=iavf unused=vfio-pci 0000:b3:02.4 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v12 drv=iavf unused=vfio-pci 0000:b3:02.5 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v13 drv=iavf unused=vfio-pci 0000:b3:02.6 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v14 drv=iavf unused=vfio-pci 0000:b3:02.7 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v15 drv=iavf unused=vfio-pci /var/log/nova/nova-compute.log 2022-05-10 17:00:43.684 706104 INFO nova.virt.libvirt.driver [req-222a65f6-2db1-4dc1-b4c6-a08b016bfd8f 2896d201c5ed44d8813273c48bed5ba3 fe8aaa7d14f44459b6c46e230d538765 - default default] [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Deletion of /var/lib/nova/instances/e8adc95a-2d7c-40f9-a5ab-bbfa640418b7_del complete 2022-05-10 17:00:43.759 706104 INFO nova.compute.manager [req-222a65f6-2db1-4dc1-b4c6-a08b016bfd8f 2896d201c5ed44d8813273c48bed5ba3 fe8aaa7d14f44459b6c46e230d538765 - default default] [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Took 3.12 seconds to destroy the instance on the hypervisor. 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [req-222a65f6-2db1-4dc1-b4c6-a08b016bfd8f 2896d201c5ed44d8813273c48bed5ba3 fe8aaa7d14f44459b6c46e230d538765 - default default] [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Failed to build and run instance: nova.exception.InternalError: Failure running os_vif plugin plug method: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). 
Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Traceback (most recent call last): 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/os_vif/__init__.py", line 77, in plug 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] plugin.plug(vif, instance_info) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 305, in plug 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self._plug_vf(vif, instance_info) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 269, in _plug_vf 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] pf_ifname = linux_net.get_ifname_by_pci_address( 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/vif_plug_ovs/linux_net.py", line 357, in get_ifname_by_pci_address 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] raise exception.PciDeviceNotFoundById(id=pci_addr) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] vif_plug_ovs.exception.PciDeviceNotFoundById: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Traceback (most recent call last): 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 696, in _plug_os_vif 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] os_vif.plug(vif, instance_info) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/os_vif/__init__.py", line 82, in plug 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] raise os_vif.exception.PlugException(vif=vif, err=err) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] os_vif.exception.PlugException: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). 
Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Traceback (most recent call last): 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2409, in _build_and_run_instance 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self.driver.spawn(context, instance, image_meta, 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 4172, in spawn 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self._create_guest_with_network( 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7240, in _create_guest_with_network 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self._cleanup( 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__ 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self.force_reraise() 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] raise self.value 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7205, in _create_guest_with_network 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self.plug_vifs(instance, network_info) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1277, in plug_vifs 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self.vif_driver.plug(instance, vif) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 720, in plug 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self._plug_os_vif(instance, vif_obj) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 700, in _plug_os_vif 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: 
e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] raise exception.InternalError(msg) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] nova.exception.InternalError: Failure running os_vif plugin plug method: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] 2022-05-10 17:00:47.310 706104 ERROR os_vif [req-222a65f6-2db1-4dc1-b4c6-a08b016bfd8f 2896d201c5ed44d8813273c48bed5ba3 fe8aaa7d14f44459b6c46e230d538765 - default default] Failed to unplug vif VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True): vif_plug_ovs.exception.PciDeviceNotFoundById: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif Traceback (most recent call last): 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/os_vif/__init__.py", line 77, in plug 2022-05-10 17:00:47.310 706104 ERROR os_vif plugin.plug(vif, instance_info) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 305, in plug 2022-05-10 17:00:47.310 706104 ERROR os_vif self._plug_vf(vif, instance_info) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 269, in _plug_vf 2022-05-10 17:00:47.310 706104 ERROR os_vif pf_ifname = linux_net.get_ifname_by_pci_address( 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/linux_net.py", line 357, in get_ifname_by_pci_address 2022-05-10 17:00:47.310 706104 ERROR os_vif raise exception.PciDeviceNotFoundById(id=pci_addr) 2022-05-10 17:00:47.310 706104 ERROR os_vif vif_plug_ovs.exception.PciDeviceNotFoundById: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif Traceback (most recent call last): 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 696, in _plug_os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif os_vif.plug(vif, instance_info) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/os_vif/__init__.py", line 82, in plug 2022-05-10 17:00:47.310 706104 ERROR os_vif raise os_vif.exception.PlugException(vif=vif, err=err) 2022-05-10 17:00:47.310 706104 ERROR os_vif os_vif.exception.PlugException: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). 
Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif Traceback (most recent call last): 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2409, in _build_and_run_instance 2022-05-10 17:00:47.310 706104 ERROR os_vif self.driver.spawn(context, instance, image_meta, 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 4172, in spawn 2022-05-10 17:00:47.310 706104 ERROR os_vif self._create_guest_with_network( 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7240, in _create_guest_with_network 2022-05-10 17:00:47.310 706104 ERROR os_vif self._cleanup( 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__ 2022-05-10 17:00:47.310 706104 ERROR os_vif self.force_reraise() 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise 2022-05-10 17:00:47.310 706104 ERROR os_vif raise self.value 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7205, in _create_guest_with_network 2022-05-10 17:00:47.310 706104 ERROR os_vif self.plug_vifs(instance, network_info) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1277, in plug_vifs 2022-05-10 17:00:47.310 706104 ERROR os_vif self.vif_driver.plug(instance, vif) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 720, in plug 2022-05-10 17:00:47.310 706104 ERROR os_vif self._plug_os_vif(instance, vif_obj) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 700, in _plug_os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif raise exception.InternalError(msg) 2022-05-10 17:00:47.310 706104 ERROR os_vif nova.exception.InternalError: Failure running os_vif plugin plug method: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). 
Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif Traceback (most recent call last): 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2232, in _do_build_and_run_instance 2022-05-10 17:00:47.310 706104 ERROR os_vif self._build_and_run_instance(context, instance, image, 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2505, in _build_and_run_instance 2022-05-10 17:00:47.310 706104 ERROR os_vif raise exception.RescheduledException( 2022-05-10 17:00:47.310 706104 ERROR os_vif nova.exception.RescheduledException: Build of instance e8adc95a-2d7c-40f9-a5ab-bbfa640418b7 was re-scheduled: Failure running os_vif plugin plug method: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif Traceback (most recent call last): 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/os_vif/__init__.py", line 110, in unplug 2022-05-10 17:00:47.310 706104 ERROR os_vif plugin.unplug(vif, instance_info) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 380, in unplug 2022-05-10 17:00:47.310 706104 ERROR os_vif self._unplug_vf(vif) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 343, in _unplug_vf 2022-05-10 17:00:47.310 706104 ERROR os_vif pf_ifname = linux_net.get_ifname_by_pci_address( 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/linux_net.py", line 357, in get_ifname_by_pci_address 2022-05-10 17:00:47.310 706104 ERROR os_vif raise exception.PciDeviceNotFoundById(id=pci_addr) 2022-05-10 17:00:47.310 706104 ERROR os_vif vif_plug_ovs.exception.PciDeviceNotFoundById: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed May 11 12:29:14 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 11 May 2022 09:29:14 -0300 Subject: [cinder] Bug deputy report for week of 05-11-2022 Message-ID: This is a bug report from 05-04-2022 to 05-11-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High - https://bugs.launchpad.net/cinder/+bug/1971970 "Cinder do not encrypt volumes correctly(25% success ratio)." Possible fix already on master. Medium - https://bugs.launchpad.net/cinder/+bug/1971668 " [IBM Storwize_SVC] Able to create Volume -type, Volume, and snapshot with Invalid clean_rate." Unassigned. 
Cheers, -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gzimin at mirantis.com Wed May 11 12:34:40 2022 From: gzimin at mirantis.com (Gleb Zimin) Date: Wed, 11 May 2022 16:34:40 +0400 Subject: [neutron][opencontrail]Trunk ports doesn't have standard attributes In-Reply-To: References: Message-ID: Hi Akihiro, thanks for reply. The thing is that description and other standard attributes don't work only with trunks. I can add descriptions, tags to neutron "core" objects such as networks, subnets, ports etc. Inheriting class StandardAttrDescriptionMixin didn't help, unfortunately. On Wed, May 11, 2022 at 4:16 PM Akihiro Motoki wrote: > Hi Gleb, > > The standard attributes are populated by StandardAttrDescriptionMixin [1]. > At a quick glance, the OpenContrail core plugin (both > NeutronPluginContrailCoreV3 or NeutronPluginContrailCoreV2) inherits > NeutronPluginContrailCoreBase [2] which inherits NeutronPluginBaseV2 > rather than NeutronDbPluginV2. > Thus, what StandardAttrDescriptionMixin does is not available in your case. > For best case, inheriting StandardAttrDescriptionMixin in the core > plugin addresses your issue, but I am not sure. > > From the initial implementation of Contrail plugin, Contrail plugin > inherits NeutronPluginBaseV2 instead of NeutronDbPluginV2. > Most logics behind the neutron API are implemented by > NeutronDbPluginV2 and perhaps the OpenStack documentation > assumes how NeutronDbPluginV2 behaves. However, Contrail plugin did > not choose this way and implemented all behaviors > behind the neutron API by its plugin, so Contrail plugin needs to > implement almost all logics in NeutronDbPluginV2 by itself. > I wonder if the issue you hit comes from this situation. I don't know > why this way was chosen but this is the real situation. > > Hope it may help you a bit. > > [1] > https://opendev.org/openstack/neutron/src/branch/master/neutron/db/standardattrdescription_db.py > [2] > https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/contrail_plugin_base.py;h=7bb7b54ab7bb35ec2193f2637f622fac8f39aed5;hb=refs/heads/master > > Thanks, > Akihiro Motoki (amotoki) > > On Fri, May 6, 2022 at 11:34 PM Gleb Zimin wrote: > > > > Environment: Openstack Victoria, OpenContrail (TungstenFabric) R2011. > > Problem: Trunk ports doesn't have standard attributes such as > description, timestamp. > > I have an environment where core plugin is OpenContrail. OpenContrail > has tf-neutron-plugin for proper work with neutron. There is TrunkPlugin, > that proxies all trunk-related request to the OpenContrail backend. Here is > the link for this plugin. > https://gerrit.tungsten.io/r/gitweb?p=tungstenfabric/tf-neutron-plugin.git;a=blob;f=neutron_plugin_contrail/plugins/opencontrail/services/trunk/plugin.py;h=35fc3310911143fd3b4cf8997c23d0358d652dba;hb=refs/heads/master > > According to the openstack documentation: Resources that inherit from > the HasStandardAttributes DB class can automatically have the extensions > written for standard attributes (e.g. timestamps, revision number, etc) > extend their resources by defining the ?api_collections? on their model. > These are used by extensions for standard attr resources to generate the > extended resources map. > > As I understood, it works only for plugins, which using Openstack > database. 
For example, openstack native trunk plugin has models.py file > where we can see this inheritance. > https://github.com/openstack/neutron/blob/master/neutron/services/trunk/models.py#L26 > > In case of OpenContrail+OpenStack Trunk plugin only redirect requests. > > What can I do to make Contrail Trunk plugin works in the same way? > > I'll appreciate any help. Thanks in advance. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Wed May 11 13:55:55 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 11 May 2022 14:55:55 +0100 Subject: [docs] feedback mechanism In-Reply-To: References: Message-ID: <15aba870a175416ba0919d3568481761519843e3.camel@redhat.com> On Tue, 2022-05-10 at 17:40 -0400, Peter Matulis wrote: > Hi, I was wondering what the various projects use to gather user feedback for > their documentation. > > Is something like what is visible here [0,1] acceptable to the OpenStack > community?There is a widget at the right side of the page (disable your ad- > blocker!). > > [0]: https://juju.is/docs/olm > [1]:?https://charmed-kubeflow.io/docs > > Peter Matulis > OpenStack Charms team If you look at pages like this one [1], you'll note that there's a bug icon in the top right corner. Clicking that will prepopulate a bug report in either Launchpad or Storyboard, thus allowing users to report bugs and maintainers to triage this bug as part of their usual triage process. That sounds similar to what you're suggesting? Stephen [1] https://docs.openstack.org/nova/latest/admin/availability-zones.html From thierry at openstack.org Wed May 11 15:43:30 2022 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 11 May 2022 17:43:30 +0200 Subject: [largescale-sig] Next meeting: May 11th, 15utc In-Reply-To: References: Message-ID: Today we had a short SIG meeting. We discussed our next "Ops Deep Dive" OpenInfra.Live episode, which with Berlin Summit and summer vacation should probably be targeted for September. Please reach out if you are interested in being featured! You can read the meeting logs at: https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-05-11-15.00.html Our next IRC meeting will be May 25, at 1500utc on #openstack-operators on OFTC. We will discuss the contents of our Forum session in Berlin: "Challenges & Lessons from Operating OpenStack at Scale". Regards, -- Thierry Carrez From rlandy at redhat.com Wed May 11 16:38:47 2022 From: rlandy at redhat.com (Ronelle Landy) Date: Wed, 11 May 2022 12:38:47 -0400 Subject: [TripleO] Gate blocker on master - tripleo-ci-centos-9-undercloud-upgrade Message-ID: Hello All, Currently we have a gate blocker in test: tripleo-ci-centos-9-undercloud-upgrade. The details have been added to https://bugs.launchpad.net/tripleo/+bug/1973043. We are testing a workaround until we can promote a new master build with new containers. Please hold rechecks in the meantime. Thanks, Ronelle (for TripleO CI) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peter.matulis at canonical.com Wed May 11 19:29:16 2022 From: peter.matulis at canonical.com (Peter Matulis) Date: Wed, 11 May 2022 15:29:16 -0400 Subject: [docs] feedback mechanism In-Reply-To: <15aba870a175416ba0919d3568481761519843e3.camel@redhat.com> References: <15aba870a175416ba0919d3568481761519843e3.camel@redhat.com> Message-ID: On Wed, May 11, 2022 at 9:56 AM Stephen Finucane wrote: > > If you look at pages like this one [1], you'll note that there's a bug > icon in > the top right corner. Clicking that will prepopulate a bug report in either > Launchpad or Storyboard, thus allowing users to report bugs and > maintainers to > triage this bug as part of their usual triage process. That sounds similar > to > what you're suggesting? > > Stephen > > [1] https://docs.openstack.org/nova/latest/admin/availability-zones.html > > I'm aware of the bug-reporting link. I'm looking for an actual user feedback mechanism like I linked to. It's something simple that allows a user to provide positive or negative feedback. Thanks anyways. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed May 11 21:49:18 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 11 May 2022 16:49:18 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 12, 2022 at 1500 UTC In-Reply-To: <180a9f072e7.ea1ec8a190505.8898309317901204614@ghanshyammann.com> References: <180a9f072e7.ea1ec8a190505.8898309317901204614@ghanshyammann.com> Message-ID: <180b5190892.10486629111774.9064002481293517991@ghanshyammann.com> Hello Everyone, Below is the agenda for Tomorrow's TC IRC meeting schedule at 1500 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check ** Fixing Zuul config error in OpenStack *** https://etherpad.opendev.org/p/zuul-config-error-openstack * Join leadership meeting with Board of Directors * New ELK service dashboard: e-r service ** https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global ** http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028346.html * 'tick', 'tock' release cadence ** Legal checks on using 'tick', 'tock' *** https://review.opendev.org/c/openstack/governance/+/840354 ** release notes discussion * OpenStack release naming after Zed ** http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028354.html ** https://review.opendev.org/c/openstack/governance/+/839897 * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 09 May 2022 12:49:09 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for May 12, 2022 at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, May 11, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > > > From oliver.weinmann at me.com Thu May 12 05:04:57 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Thu, 12 May 2022 07:04:57 +0200 Subject: Magnum CSI Cinder Plugin broken in Yoga? Message-ID: Hi, I just updated to Yoga in order to fix the problem with the broken metrics-server, but now I had problems deploying GITLAB and the errors lead to problems with mounting PVs. 
So I had a look at the csi-cinder-plugin and saw this: kolla-yoga) [oliweilocal at gedasvl99 images]$ kubectl get pods -n kube-system | grep -i csi csi-cinder-controllerplugin-0??????????????? 4/5 CrashLoopBackOff?? 168 (4m53s ago)?? 14h csi-cinder-nodeplugin-7kh9q????????????????? 2/2 Running??????????? 0???????????????? 14h csi-cinder-nodeplugin-q5bfq????????????????? 2/2 Running??????????? 0???????????????? 14h csi-cinder-nodeplugin-x4vrk????????????????? 2/2 Running??????????? 0???????????????? 14h I re-deployed the cluster and the error stays. Is there anything known about this? I have a small Ceph Pacific cluster. I can do some testing with an NFS backend as well and see if the problem goes away. Best Regards, Oliver From sandeepggn93 at gmail.com Thu May 12 05:22:55 2022 From: sandeepggn93 at gmail.com (Sandeep Yadav) Date: Thu, 12 May 2022 10:52:55 +0530 Subject: [TripleO] Gate blocker on Wallaby/Victoria/Train - Content-provider job failing Message-ID: Hello All, Currently, we have a check/gate blocker on Tripleo Content-provider job for wallaby and earlier branches. The content-provider job cannot pull Ceph related containers because quay.ceph.io is not accessible. The details have been added to the launchpad bug[1]. Please hold rechecks while we are investigating the issue. In the meantime, We are trying to switch the registry to pull ceph related containers and waiting for the triple ceph team squad reviews on the patch[2]. [1] https://bugs.launchpad.net/tripleo/+bug/1973115 [2] https://review.opendev.org/c/openstack/tripleo-common/+/841512 Sandeep (on behalf of TripleO CI team) -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Thu May 12 05:39:25 2022 From: syedammad83 at gmail.com (Ammad Syed) Date: Thu, 12 May 2022 10:39:25 +0500 Subject: Magnum CSI Cinder Plugin broken in Yoga? In-Reply-To: References: Message-ID: Hi, Are you using containerd or default docker as CRI ? Ammad On Thu, May 12, 2022 at 10:13 AM Oliver Weinmann wrote: > Hi, > > I just updated to Yoga in order to fix the problem with the broken > metrics-server, but now I had problems deploying GITLAB and the errors > lead to problems with mounting PVs. > > So I had a look at the csi-cinder-plugin and saw this: > > kolla-yoga) [oliweilocal at gedasvl99 images]$ kubectl get pods -n > kube-system | grep -i csi > csi-cinder-controllerplugin-0 4/5 CrashLoopBackOff 168 > (4m53s ago) 14h > csi-cinder-nodeplugin-7kh9q 2/2 Running > 0 14h > csi-cinder-nodeplugin-q5bfq 2/2 Running > 0 14h > csi-cinder-nodeplugin-x4vrk 2/2 Running > 0 14h > > I re-deployed the cluster and the error stays. Is there anything known > about this? I have a small Ceph Pacific cluster. I can do some testing > with an NFS backend as well and see if the problem goes away. > > Best Regards, > > Oliver > > > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Thu May 12 06:42:16 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Thu, 12 May 2022 08:42:16 +0200 Subject: Magnum CSI Cinder Plugin broken in Yoga? In-Reply-To: References: Message-ID: Hi Ammad, Thanks for your quick reply. I deployed Openstack Yoga using kolla-ansible. I did a standard magnum k8s cluster deploy: (kolla-yoga) [oliweilocal at gedasvl99 ~]$ kubectl get nodes -o wide NAME??????????????????????????????? STATUS?? ROLES??? AGE VERSION?? INTERNAL-IP?? EXTERNAL-IP OS-IMAGE??????????????????????? 
KERNEL-VERSION CONTAINER-RUNTIME k8s-test-35-lr3ysuuiolme-master-0?? Ready??? master?? 15h v1.23.3?? 10.0.0.230??? 172.28.4.128?? Fedora CoreOS 35.20220410.3.1?? 5.16.18-200.fc35.x86_64?? docker://20.10.12 k8s-test-35-lr3ysuuiolme-node-0???? Ready??? ?? 15h v1.23.3?? 10.0.0.183??? 172.28.4.120?? Fedora CoreOS 35.20220410.3.1?? 5.16.18-200.fc35.x86_64?? docker://20.10.12 k8s-test-35-lr3ysuuiolme-node-1???? Ready??? ?? 15h v1.23.3?? 10.0.0.49???? 172.28.4.125?? Fedora CoreOS 35.20220410.3.1?? 5.16.18-200.fc35.x86_64?? docker://20.10.12 Seems to be docker. It seems it is failing to pull an image: (kolla-yoga) [oliweilocal at gedasvl99 ~]$ kubectl get events -n kube-system LAST SEEN?? TYPE????? REASON OBJECT????????????????????????????????????????????? MESSAGE 17m???????? Normal??? LeaderElection configmap/cert-manager-cainjector-leader-election gitlab-certmanager-cainjector-75f8fbb78d-xvm8s_61655618-fbe8-4070-b179-64e60a1ad067 became leader 17m???????? Normal??? LeaderElection lease/cert-manager-cainjector-leader-election gitlab-certmanager-cainjector-75f8fbb78d-xvm8s_61655618-fbe8-4070-b179-64e60a1ad067 became leader 17m???????? Normal??? LeaderElection configmap/cert-manager-controller gitlab-certmanager-774db6b45f-nkmck-external-cert-manager-controller became leader 17m???????? Normal??? LeaderElection lease/cert-manager-controller gitlab-certmanager-774db6b45f-nkmck-external-cert-manager-controller became leader 39s???????? Warning?? BackOff pod/csi-cinder-controllerplugin-0?????????????????? Back-off restarting failed container 30m???????? Normal??? BackOff pod/csi-cinder-controllerplugin-0?????????????????? Back-off pulling image "quay.io/k8scsi/csi-snapshotter:v1.2.2" I'm not 100% sure yet whether the problem with the csi-plugin affects my Gitlab deployment, but I just installed a NFS provisioner and the Gitlab deployment was successful. I will now try the very same thing again using the csi provisioner. Cheers, Oliver Am 12.05.2022 um 07:39 schrieb Ammad Syed: > Hi, > > Are you using containerd or default docker as CRI ? > > Ammad > > On Thu, May 12, 2022 at 10:13 AM Oliver Weinmann > wrote: > > Hi, > > I just updated to Yoga in order to fix the problem with the broken > metrics-server, but now I had problems deploying GITLAB and the > errors > lead to problems with mounting PVs. > > So I had a look at the csi-cinder-plugin and saw this: > > kolla-yoga) [oliweilocal at gedasvl99 images]$ kubectl get pods -n > kube-system | grep -i csi > csi-cinder-controllerplugin-0??????????????? 4/5 > CrashLoopBackOff?? 168 > (4m53s ago)?? 14h > csi-cinder-nodeplugin-7kh9q????????????????? 2/2 Running > 0???????????????? 14h > csi-cinder-nodeplugin-q5bfq????????????????? 2/2 Running > 0???????????????? 14h > csi-cinder-nodeplugin-x4vrk????????????????? 2/2 Running > 0???????????????? 14h > > I re-deployed the cluster and the error stays. Is there anything > known > about this? I have a small Ceph Pacific cluster. I can do some > testing > with an NFS backend as well and see if the problem goes away. > > Best Regards, > > Oliver > > > > > -- > Regards, > > > Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... 
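The events output above shows the controller plugin stuck pulling quay.io/k8scsi/csi-snapshotter:v1.2.2; the old k8scsi organisation on quay.io has been deprecated in favour of the sig-storage images, so a stale default there would explain the 4/5 CrashLoopBackOff state of csi-cinder-controllerplugin-0. A rough way to confirm which sidecar is failing and to override the image from the Magnum side follows; the label names are as documented for recent Magnum releases, so treat them as assumptions and check them against the Yoga packages in use:

    # Which container/image in the pod is actually failing?
    kubectl -n kube-system describe pod csi-cinder-controllerplugin-0
    kubectl -n kube-system get statefulset csi-cinder-controllerplugin \
      -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{" "}{.image}{"\n"}{end}'

    # If only the snapshotter sidecar is the problem, the tag/registry Magnum
    # uses can be overridden per cluster template, e.g.:
    openstack coe cluster template create k8s-yoga-csi-fix \
      ... \
      --labels csi_snapshotter_tag=<tag>,container_infra_prefix=<registry-mirror>/

Whether container_infra_prefix covers the CSI sidecar images depends on the Magnum version, so patching the statefulset image by hand is another short-term option.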
URL: From sandeepggn93 at gmail.com Thu May 12 07:48:13 2022 From: sandeepggn93 at gmail.com (Sandeep Yadav) Date: Thu, 12 May 2022 13:18:13 +0530 Subject: [TripleO] Gate blocker on master - tripleo-ci-centos-9-undercloud-upgrade In-Reply-To: References: Message-ID: Hello All, We reverted back the bump of python-smmap in [1], and affected jobs are back to green as per the testproject results[2] [1] https://review.rdoproject.org/r/c/rdoinfo/+/42757 [2] https://review.rdoproject.org/r/c/testproject/+/36256/72#message-7b668259b73e2db0a3a869eaac7e14fdd89b9365 ~~~ tripleo-ci-centos-9-undercloud-upgrade https://review.rdoproject.org/zuul/build/6248f4c01d86423cafc2f5371f673331 : SUCCESS in 1h 01m 31s periodic-tripleo-centos-9-buildimage-ironic-python-agent-master https://review.rdoproject.org/zuul/build/b0dd6db972514c019c3a6acd70f59f50 : SUCCESS in 30m 12s periodic-tripleo-centos-9-buildimage-overcloud-hardened-uefi-full-master https://review.rdoproject.org/zuul/build/c74c2b4ed8b44303bbd6ecdb57f88188 : SUCCESS in 31m 51s ~~~ On Wed, May 11, 2022 at 10:19 PM Ronelle Landy wrote: > Hello All, > > Currently we have a gate blocker in > test: tripleo-ci-centos-9-undercloud-upgrade. > > The details have been added to > https://bugs.launchpad.net/tripleo/+bug/1973043. > > We are testing a workaround until we can promote a new master build with > new containers. > > Please hold rechecks in the meantime. > > Thanks, > Ronelle (for TripleO CI) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Thu May 12 15:01:24 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 12 May 2022 17:01:24 +0200 Subject: [cinder][kolla][OSA][release] Yoga cycle-trailing release deadline Message-ID: <52b749f6-6c5d-ef83-36bc-af05c7b6c26c@est.tech> Hello teams with deliverables following the cycle-trailing release model! This is just a reminder to wrap up your trailing deliverables for Yoga. A few cycles ago the deadline for cycle-trailing projects was extended to give more time. The deadline for Yoga is *June 23rd, 2022* [1]. If things are ready sooner than that though, all the better for our downstream consumers. For reference, the following cycle-trailing deliverables will need final releases at some point until the above deadline: ansible-collection-kolla cinderlib kayobe kolla-ansible kolla openstack-ansible-roles openstack-ansible Thanks! El?d Ill?s irc: elodilles [1]https://releases.openstack.org/zed/schedule.html#z-cycle-trail -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Thu May 12 05:33:06 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Thu, 12 May 2022 14:33:06 +0900 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: The puppy implementation executes the following command to get the interface information. /bin/os-net-config -i I'd recommend you check the command output in *all overcloud nodes *. If you need to use different interfaces for different roles then you need to define the parameter as role specific one, defined under Parameters. On Wed, May 11, 2022 at 4:26 PM Anirudh Gupta wrote: > Hi Takashi, > > Thanks for clarifying my issues regarding the support of PTP in Wallaby > Release. 
> > In Train, I have also tried passing the exact interface name and took 2 > runs with and without quotes like below: > > > *PtpInterface: eno1* > > *PtpInterface: 'eno1'* > > But in both the cases, the issue observed was similar > > 2022-05-11 10:33:20.189107 | 5254001f-9952-934d-e901-0000000030be | > FATAL | Wait for puppet host configuration to finish | > overcloud-controller-2 | error={"ansible_job_id": "526310775819.36650", > "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply > --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules > --detailed-exitcodes --summarize --color=false > /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t > puppet-user", "delta": "0:00:04.289208", "end": "2022-05-11 > 10:33:08.195052", "failed_when_result": true, "finished": 1, "msg": > "non-zero return code", "rc": 1, "start": "2022-05-11 10:33:03.905844", > "stderr": "<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' > is deprecated in favor of using 'lookup'. See > https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & > line not available)\n<13>May 11 10:33:03 puppet-user: Warning: > /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It > should be converted to version 5\n<13>May 11 10:33:03 puppet-user: > (file: /etc/puppet/hiera.yaml)\n<13>May 11 10:33:03 puppet-user: Warning: > Undefined variable '::deploy_config_name'; \\n (file & line not > available)\n<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: module > 'tripleo' has unresolved dependencies - it will only see those that are > resolved. Use 'puppet module list --tree' to see information about > modules\\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: > Error: Evaluation Error: A substring operation does not accept a String as > a character index. Expected an Integer (file: > /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, > column: 46) on node overcloud-controller-2.localdomain", "stderr_lines": > ["<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' is > deprecated in favor of using 'lookup'. See > https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & > line not available)", "<13>May 11 10:33:03 puppet-user: Warning: > /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It > should be converted to version 5", "<13>May 11 10:33:03 puppet-user: > (file: /etc/puppet/hiera.yaml)", "<13>May 11 10:33:03 puppet-user: > Warning: Undefined variable '::deploy_config_name'; \\n (file & line not > available)", "<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: > module 'tripleo' has unresolved dependencies - it will only see those that > are resolved. Use 'puppet module list --tree' to see information about > modules\\n (file & line not available)", "<13>May 11 10:33:03 > puppet-user: *Error: Evaluation Error: A substring operation does not > accept a String as a character index. Expected an Integer (file: > /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, > column: 46) on node overcloud-controller-2.localdomain"], "stdout": "", > "stdout_lines": []}* > 2022-05-11 10:33:20.190263 | 5254001f-9952-934d-e901-0000000030be | > TIMING | Wait for puppet host configuration to finish | > overcloud-controller-2 | 0:12:41.268734 | 7.01s > > I'll be highly grateful if you could further extend your support to > resolve the issue. 
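For anyone else chasing this error: a quick way to follow the suggestion at the top of the thread is to run, on each overcloud node, the same command that generate() in ptp.pp shells out to (eno1 being the value tried above):

    # On every overcloud node:
    sudo /bin/os-net-config -i eno1
    # This should print a small JSON mapping, roughly {"eno1": "eno1"}
    # (or {"nic1": "eno1"} when an alias name such as nic1 is used).

Note as well that, as pasted above, line 40 assigns the raw generate() output (a string in Puppet) straight to $interface_mapping with nothing like parsejson() around it, and indexing a string with a string key produces exactly the "substring operation ... Expected an Integer" error regardless of what the command returns, so it is worth diffing that file against stable/train puppet-tripleo before changing PtpInterface again.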
> > Regards > Anirudh Gupta > > On Tue, May 10, 2022 at 9:15 PM Takashi Kajinami > wrote: > >> >> >> On Wed, May 11, 2022 at 12:19 AM Anirudh Gupta >> wrote: >> >>> Hi Takashi, >>> >>> Thanks for your suggestion. >>> >>> I downloaded the updated Train Images and they had the ptp.pp file >>> available on the overcloud and undercloud machines >>> >>> [root at overcloud-controller-1 /]# find . -name "ptp.pp" >>> >>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>> >>> With this, I re-executed the deployment and got the below error on the >>> machines >>> >>> 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce | >>> FATAL | Wait for puppet host configuration to finish | >>> overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", >>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>> --detailed-exitcodes --summarize --color=false >>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>> puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 >>> 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": >>> "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", >>> "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' >>> is deprecated in favor of using 'lookup'. See >>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file >>> & line not available)\n<13>May 10 20:05:37 puppet-user: Warning: >>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>> should be converted to version 5\n<13>May 10 20:05:37 puppet-user: >>> (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: >>> Undefined variable '::deploy_config_name'; \\n (file & line not >>> available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module >>> 'tripleo' has unresolved dependencies - it will only see those that are >>> resolved. Use 'puppet module list --tree' to see information about >>> modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: >>> Error: Evaluation Error: A substring operation does not accept a String as >>> a character index. Expected an Integer (file: >>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>> column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": >>> ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is >>> deprecated in favor of using 'lookup'. See >>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file >>> & line not available)", "<13>May 10 20:05:37 puppet-user: Warning: >>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>> should be converted to version 5", "<13>May 10 20:05:37 puppet-user: >>> (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: >>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>> available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: >>> module 'tripleo' has unresolved dependencies - it will only see those that >>> are resolved. Use 'puppet module list --tree' to see information about >>> modules\\n (file & line not available)", "<13>May 10 20:05:37 >>> puppet-user: *Error: Evaluation Error: A substring operation does not >>> accept a String as a character index. 
Expected an Integer (file: >>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>> column: 46) *on node overcloud-controller-1.localdomain"], "stdout": >>> "", "stdout_lines": []} >>> >>> The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, >>> line: 41, column: 46 *had the following code: >>> 34 class tripleo::profile::base::time::ptp ( >>> 35 $ptp4l_interface = 'eth0', >>> 36 $ptp4l_conf_slaveonly = 1, >>> 37 $ptp4l_conf_network_transport = 'UDPv4', >>> 38 ) { >>> 39 >>> 40 $interface_mapping = generate('/bin/os-net-config', '-i', >>> $ptp4l_interface) >>> 41 *$ptp4l_interface_name = $interface_mapping[$ptp4l_interface]* >>> >>> >>> *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file >>> is as below: >>> >>> resource_registry: >>> # FIXME(bogdando): switch it, once it is containerized >>> OS::TripleO::Services::Ptp: >>> ../../deployment/time/ptp-baremetal-puppet.yaml >>> OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp >>> >>> parameter_defaults: >>> # PTP hardware interface name >>> *PtpInterface: 'nic1'* >>> >>> # Configure PTP clock in slave mode >>> PtpSlaveMode: 1 >>> >>> # Configure PTP message transport protocol >>> PtpMessageTransport: 'UDPv4' >>> >>> I have also tried modifying the entry as below: >>> *PtpInterface: 'nic1' #*(i.e. without quotes), but the error remains >>> the same. >>> >>> Queries: >>> >>> 1. Any pointers to resolve this? >>> >>> I'm not familiar with ptp but you'd need to use the actual interface name >> if you are not using the alias name. >> >> >> >>> >>> 1. You were mentioning something about the support of PTP not there >>> in the wallaby release. Can you please confirm? >>> >>> IIUC PTP is still supported even in master. What we removed is the >> implementation using Puppet >> which was replaced by ansible. >> >> The warning regarding OS::TripleO::Services::Ptp was added when we >> decided to merge >> all time sync services to the single service resource which is >> OS::TripleO::Services::Timesync[1]. >> It's related to how resources are defined in Heat and doesn't affect >> configuration support itself. >> >> [1] >> https://review.opendev.org/c/openstack/tripleo-heat-templates/+/586679 >> >> >> >>> It would be a great help if you could extend a little more support to >>> resolve the issues. >>> >>> Regards >>> Anirudh Gupta >>> >>> >>> On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta >>> wrote: >>> >>>> I'll check that well. >>>> By the way, I downloaded the images from the below link >>>> >>>> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >>>> >>>> They seem to be updated yesterday, I'll download and try the deployment >>>> with the latest images. >>>> >>>> Also are you pointing that the support for PTP would not be there in >>>> Wallaby Release? >>>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami >>>> wrote: >>>> >>>>> >>>>> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta >>>>> wrote: >>>>> >>>>>> Hi Takashi >>>>>> >>>>>> I have checked this in undercloud only. >>>>>> I don't find any such file in overcloud. Could this be a concern? >>>>>> >>>>> >>>>> The manifest should exist in overcloud nodes and the missing file is >>>>> the exact cause >>>>> of that puppet failure during deployment. >>>>> >>>>> Please check your overcloud images used to install overcloud nodes and >>>>> ensure that >>>>> you're using the right one. You might be using the image for a >>>>> different release. 
>>>>> We removed the manifest file during the Wallaby cycle. >>>>> >>>>> >>>>>> >>>>>> Regards >>>>>> Anirudh Gupta >>>>>> >>>>>> >>>>>> >>>>>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami < >>>>>>> tkajinam at redhat.com> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hi Takashi, >>>>>>>>> >>>>>>>>> Thanks for your reply. >>>>>>>>> >>>>>>>>> I have checked on my machine and the file "ptp.pp" do exist at >>>>>>>>> path " >>>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>>> " >>>>>>>>> >>>>>>>> Did you check this in your undercloud or overcloud ? >>>>>>>> During the deployment all configuration files are generated using >>>>>>>> puppet modules >>>>>>>> installed in overcloud nodes, so you should check this in overcloud >>>>>>>> nodes. >>>>>>>> >>>>>>>> Also, the deprecation warning is not implemented >>>>>>>> >>>>>>> Ignore this incomplete line. I was looking for the implementation >>>>>>> which shows the warning >>>>>>> but I found it in tripleoclient and it looks reasonable according to >>>>>>> what we have in >>>>>>> environments/services/ptp.yaml . >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>>>>>> for controller and compute *before rendering the templates, but >>>>>>>>> still I am getting the same issue on all the 3 Controllers and 1 Compute >>>>>>>>> >>>>>>>> >>>>>>>> IIUC you don't need this because OS::TripleO::Services::Timesync >>>>>>>> becomes an alias >>>>>>>> to the Ptp service resource when you use the ptp environment file. >>>>>>>> >>>>>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>>>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>> >>>>>>>>> Can you suggest any workarounds or any pointers to look further in >>>>>>>>> order to resolve this issue? >>>>>>>>> >>>>>>>> >>>>>>>>> Regards >>>>>>>>> Anirudh Gupta >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami < >>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>> >>>>>>>>>> I'm not familiar with PTP, but the error you pasted indicates >>>>>>>>>> that the required puppet manifest does not exist in your overcloud >>>>>>>>>> node/image. >>>>>>>>>> >>>>>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>>>>> >>>>>>>>>> This should not happen and the class should exist as long as you >>>>>>>>>> have puppet-tripleo from stable/train installed. >>>>>>>>>> >>>>>>>>>> I'd recommend you check installed tripleo/puppet packages and >>>>>>>>>> ensure everything is in the consistent release. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta < >>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi All >>>>>>>>>>> >>>>>>>>>>> Any update on this? 
>>>>>>>>>>> >>>>>>>>>>> Regards >>>>>>>>>>> Anirudh Gupta >>>>>>>>>>> >>>>>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Team, >>>>>>>>>>>> >>>>>>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>>>>>> >>>>>>>>>>>> When I was executing the Overcloud deployment script, passing >>>>>>>>>>>> the PTP yaml, it gave the following option at the starting >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>>>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>>>>> continue with deployment [y/N]* >>>>>>>>>>>> >>>>>>>>>>>> even if passing Y, it starts executing for sometime and the >>>>>>>>>>>> gives the following error >>>>>>>>>>>> >>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Can someone suggest some pointers in order to resolve this >>>>>>>>>>>> issue and move forward? >>>>>>>>>>>> >>>>>>>>>>>> Regards >>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta < >>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>> >>>>>>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>>>>>> successfully. >>>>>>>>>>>>> I need to enable PTP service while deploying the overcloud for >>>>>>>>>>>>> which I have included the service in my deployment >>>>>>>>>>>>> >>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>> \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>> \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>> \ >>>>>>>>>>>>> * -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>>>>> \* >>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>> >>>>>>>>>>>>> But it gives the following error >>>>>>>>>>>>> >>>>>>>>>>>>> 2022-05-06 11:30:10.707655 | >>>>>>>>>>>>> 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host >>>>>>>>>>>>> configuration to finish | overcloud-controller-0 | error={"ansible_job_id": >>>>>>>>>>>>> "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o >>>>>>>>>>>>> 
pipefail; puppet apply >>>>>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>>>>> 'lookup'. See >>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. 
>>>>>>>>>>>>> (file: >>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while evaluating >>>>>>>>>>>>> a Function Call, Could not find class ::tripleo::profile::base::time::ptp >>>>>>>>>>>>> for overcloud-controller-0.localdomain (file: >>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* >>>>>>>>>>>>> overcloud-controller-0.localdomain"], "stdout": "", "stdout_lines": []} >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Can someone please help in resolving this issue? >>>>>>>>>>>>> >>>>>>>>>>>>> Regards >>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>> >>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp.methot at planethoster.info Thu May 12 17:23:15 2022 From: jp.methot at planethoster.info (J-P Methot) Date: Thu, 12 May 2022 13:23:15 -0400 Subject: [neutron] Static route not added in namespace using DVR on Wallaby In-Reply-To: References: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> <8f613f51-4df1-7927-347d-c17e01a68055@planethoster.info> Message-ID: <30e014f2-0608-9d6e-02c1-fdbca0700fb9@planethoster.info> Hi, I got the debug logs. They were a bit too long so I put them in a txt file. Please tell me if you'd prefer a pastebin instead. On 5/10/22 02:38, Slawek Kaplonski wrote: > Hi, > > W?dniu pon, 9 maj 2022 o?13:49:22 -0400 u?ytkownik J-P Methot > napisa?: >> >> I tested this on my own DVR test environment with a random static >> route and I'm getting the same results as on production. Here's what >> I get in the logs : >> >> 2022-05-09 17:28:50.018 691 INFO neutron.agent.l3.agent [-] Starting >> processing update 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, >> priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait time >> elapsed: 0.001 >> 2022-05-09 17:28:50.019 691 INFO neutron.agent.l3.agent [-] Starting >> router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, >> priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait time >> elapsed: 0.002 >> 2022-05-09 17:28:51.640 691 INFO neutron.agent.l3.agent [-] Finished >> a router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, update_id >> 9e112de1-f538-4a41-9526-152aa3937129. Time elapsed: 1.622 >> >> As you can see, there was an attempt at updating the router and it >> did return as successful. However, there was no new route added in >> the router or floating ip namespace. No error either. >> > Can You do the same with debug logs enabled? > >> On 5/6/22 14:40, Slawek Kaplonski wrote: >>> Hi, >>> >>> W?dniu pi?, 6 maj 2022 o?14:14:47 -0400 u?ytkownik J-P Methot >>> napisa?: >>>> >>>> Hi, >>>> >>>> We're in this situation where we are going to move some instances >>>> from one openstack cluster to another. After this process, we want >>>> our instances on the new openstack cluster to keep the same >>>> floating IPs but also to be able to communicate with some instances >>>> that are in the same public IP range on the first cluster. >>>> >>>> To accomplish this, we want to add static routes like 'X.X.X.X/32 >>>> via Y.Y.Y.Y'. However, we're using DVR and when we add the static >>>> routes, they do not show up anywhere in any of the namespaces. Is >>>> there a different way to add static routes on DVR instead of using >>>> openstack router add route ? 
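To make the question concrete, the operations involved boil down to the commands below; the router name and the route values are placeholders, and with DVR the namespace to inspect lives on the compute nodes hosting the router, as the reply below also points out:

    # Define the static route on the (distributed) router
    openstack router set router1 --route destination=192.168.25.0/24,gateway=10.50.51.1

    # On a compute node hosting the router, the route is expected to
    # appear in the qrouter- namespace
    ROUTER_ID=$(openstack router show router1 -f value -c id)
    sudo ip netns exec "qrouter-${ROUTER_ID}" ip route show
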
>>>> >>> No, there is no other way to add static routes to the dvr router. I >>> don't have any DVR deployment now to check it but IIRC route should >>> be added in the qrouter namespace in the compute nodes where router >>> exists. If it's not there please check logs of the l3-agent on those >>> hosts, maybe there are some errors there. >>>> -- >>>> Jean-Philippe M?thot >>>> Senior Openstack system administrator >>>> Administrateur syst?me Openstack s?nior >>>> PlanetHoster inc. >>> -- >>> Slawek Kaplonski >>> Principal Software Engineer >>> Red Hat >> -- >> Jean-Philippe M?thot >> Senior Openstack system administrator >> Administrateur syst?me Openstack s?nior >> PlanetHoster inc. > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Jean-Philippe M?thot Senior Openstack system administrator Administrateur syst?me Openstack s?nior PlanetHoster inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- 2022-05-11 21:24:09.187 951 DEBUG neutron.agent.l3.agent [req-c0d9fa60-e145-4872-ad23-d3bc27453dc7 181b8917423847fba4ee2aab5100497d 884742392e414877a102240387f46823 - - -] Got routers updated notification :['41fcd10b-7db5-45d9-b23c-e22f34c45eec'] routers_updated /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/l3/agent.py:586 2022-05-11 21:24:09.189 951 INFO neutron.agent.l3.agent [-] Starting processing update 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, priority 1, update_id 0d6f9a8e-05d2-4150-ae99-0078c123c068. Wait time elapsed: 0.001 2022-05-11 21:24:09.190 951 INFO neutron.agent.l3.agent [-] Starting router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, priority 1, update_id 0d6f9a8e-05d2-4150-ae99-0078c123c068. Wait time elapsed: 0.002 2022-05-11 21:24:09.190 951 DEBUG neutron.common.utils [-] Time-cost: call f62f7e77-a67a-40dd-bb67-9aedb45f0240 function get_routers start wrapper /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/common/utils.py:934 2022-05-11 21:24:10.699 951 DEBUG neutron.common.utils [-] Time-cost: call f62f7e77-a67a-40dd-bb67-9aedb45f0240 function get_routers took 1.509s seconds to run wrapper /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/timeutils.py:386 2022-05-11 21:24:10.700 951 DEBUG neutron.agent.l3.agent [-] Router 41fcd10b-7db5-45d9-b23c-e22f34c45eec info in cache, will do the router update action. 
_process_router_if_compatible /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/l3/agent.py:644 2022-05-11 21:24:10.700 951 DEBUG neutron_lib.callbacks.manager [-] Notify callbacks [] for router, before_update _notify_loop /var/lib/kolla/venv/lib/python3.8/site-packages/neutron_lib/callbacks/manager.py:192 2022-05-11 21:24:10.773 951 DEBUG neutron.agent.l3.router_info [-] Process updates, router 41fcd10b-7db5-45d9-b23c-e22f34c45eec process /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/l3/router_info.py:1307 2022-05-11 21:24:10.795 951 DEBUG oslo_concurrency.lockutils [-] Lock "l3-agent-pd" acquired by "neutron.agent.linux.pd.PrefixDelegation.sync_router" :: waited 0.000s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:355 2022-05-11 21:24:10.796 951 DEBUG oslo_concurrency.lockutils [-] Lock "l3-agent-pd" released by "neutron.agent.linux.pd.PrefixDelegation.sync_router" :: held 0.000s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:367 2022-05-11 21:24:10.796 951 DEBUG oslo_concurrency.lockutils [-] Acquired lock "port-lock-fip-6636b4e5-16e2-482b-a0a7-5897ab0776c4-fg-0f0c96ae-7a" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:266 2022-05-11 21:24:10.847 951 DEBUG oslo_concurrency.lockutils [-] Releasing lock "port-lock-fip-6636b4e5-16e2-482b-a0a7-5897ab0776c4-fg-0f0c96ae-7a" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282 2022-05-11 21:24:10.859 951 DEBUG oslo_concurrency.lockutils [-] Acquired lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:266 2022-05-11 21:24:10.859 951 DEBUG neutron.common.coordination [-] Lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" acquired by "process_external" :: waited 0.000s _synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/common/coordination.py:73 2022-05-11 21:24:10.897 951 DEBUG oslo_concurrency.lockutils [-] Acquired lock "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:266 2022-05-11 21:24:10.898 951 DEBUG oslo_concurrency.lockutils [-] Acquired external semaphore "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:272 2022-05-11 21:24:10.943 951 DEBUG neutron.agent.linux.iptables_manager [-] IPTablesManager.apply completed with success. 
6 iptables commands were issued _apply_synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/linux/iptables_manager.py:625 2022-05-11 21:24:10.944 951 DEBUG oslo_concurrency.lockutils [-] Releasing lock "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282 2022-05-11 21:24:11.028 951 DEBUG oslo_concurrency.lockutils [-] Releasing lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282 2022-05-11 21:24:11.029 951 DEBUG neutron.common.coordination [-] Lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" released by "process_external" :: held 0.170s _synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/common/coordination.py:85 2022-05-11 21:24:11.029 951 DEBUG oslo_concurrency.lockutils [-] Acquired lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:266 2022-05-11 21:24:11.029 951 DEBUG neutron.common.coordination [-] Lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" acquired by "process_address_scope" :: waited 0.000s _synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/common/coordination.py:73 2022-05-11 21:24:11.057 951 DEBUG oslo_concurrency.lockutils [-] Acquired lock "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:266 2022-05-11 21:24:11.057 951 DEBUG oslo_concurrency.lockutils [-] Acquired external semaphore "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:272 2022-05-11 21:24:11.083 951 DEBUG neutron.agent.linux.iptables_manager [-] IPTablesManager.apply completed with success. 
0 iptables commands were issued _apply_synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/linux/iptables_manager.py:625 2022-05-11 21:24:11.083 951 DEBUG oslo_concurrency.lockutils [-] Releasing lock "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282 2022-05-11 21:24:11.083 951 DEBUG oslo_concurrency.lockutils [-] Releasing lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282 2022-05-11 21:24:11.083 951 DEBUG neutron.common.coordination [-] Lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" released by "process_address_scope" :: held 0.054s _synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/common/coordination.py:85 2022-05-11 21:24:11.084 951 DEBUG neutron.agent.l3.router_info [-] Removed route entry is '{'destination': '192.168.25.0/24', 'nexthop': '10.50.51.1'}' routes_updated /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/l3/router_info.py:245 2022-05-11 21:24:11.107 951 DEBUG neutron_lib.callbacks.manager [-] Notify callbacks ['neutron.agent.metadata.driver.after_router_updated-8778777282865', 'neutron.agent.linux.pd.update_router-8778777281118'] for router, after_update _notify_loop /var/lib/kolla/venv/lib/python3.8/site-packages/neutron_lib/callbacks/manager.py:192 2022-05-11 21:24:11.107 951 DEBUG oslo_concurrency.lockutils [-] Lock "l3-agent-pd" acquired by "neutron.agent.linux.pd.update_router" :: waited 0.000s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:355 2022-05-11 21:24:11.108 951 DEBUG oslo_concurrency.lockutils [-] Lock "l3-agent-pd" released by "neutron.agent.linux.pd.update_router" :: held 0.000s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:367 2022-05-11 21:24:11.108 951 DEBUG neutron.agent.l3.l3_agent_extensions_manager [-] L3 agent extension(s) finished router 41fcd10b-7db5-45d9-b23c-e22f34c45eec update action. update_router /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/l3/l3_agent_extensions_manager.py:63 2022-05-11 21:24:11.108 951 INFO neutron.agent.l3.agent [-] Finished a router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, update_id 0d6f9a8e-05d2-4150-ae99-0078c123c068. Time elapsed: 1.919 2022-05-11 21:24:22.707 951 DEBUG oslo_service.periodic_task [req-85c87e55-5d28-44a5-ba20-282ac842c18a - - - - -] Running periodic task L3NATAgentWithStateReport.periodic_sync_routers_task run_periodic_tasks /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_service/periodic_task.py:210 2022-05-11 21:24:36.777 951 DEBUG neutron.agent.l3.agent [req-9e76cefe-be04-426c-a8b9-6ca2d3788417 181b8917423847fba4ee2aab5100497d 884742392e414877a102240387f46823 - - -] Got routers updated notification :['41fcd10b-7db5-45d9-b23c-e22f34c45eec'] routers_updated /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/l3/agent.py:586 2022-05-11 21:24:36.779 951 INFO neutron.agent.l3.agent [-] Starting processing update 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, priority 1, update_id 22d78d78-ecc7-4173-a354-1388ab90e883. Wait time elapsed: 0.001 2022-05-11 21:24:36.779 951 INFO neutron.agent.l3.agent [-] Starting router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, priority 1, update_id 22d78d78-ecc7-4173-a354-1388ab90e883. 
Wait time elapsed: 0.001 2022-05-11 21:24:36.780 951 DEBUG neutron.common.utils [-] Time-cost: call f62f7e77-a67a-40dd-bb67-9aedb45f0240 function get_routers start wrapper /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/common/utils.py:934 2022-05-11 21:24:38.201 951 DEBUG neutron.common.utils [-] Time-cost: call f62f7e77-a67a-40dd-bb67-9aedb45f0240 function get_routers took 1.421s seconds to run wrapper /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/timeutils.py:386 2022-05-11 21:24:38.202 951 DEBUG neutron.agent.l3.agent [-] Router 41fcd10b-7db5-45d9-b23c-e22f34c45eec info in cache, will do the router update action. _process_router_if_compatible /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/l3/agent.py:644 2022-05-11 21:24:38.202 951 DEBUG neutron_lib.callbacks.manager [-] Notify callbacks [] for router, before_update _notify_loop /var/lib/kolla/venv/lib/python3.8/site-packages/neutron_lib/callbacks/manager.py:192 2022-05-11 21:24:38.296 951 DEBUG neutron.agent.l3.router_info [-] Process updates, router 41fcd10b-7db5-45d9-b23c-e22f34c45eec process /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/l3/router_info.py:1307 2022-05-11 21:24:38.319 951 DEBUG oslo_concurrency.lockutils [-] Lock "l3-agent-pd" acquired by "neutron.agent.linux.pd.PrefixDelegation.sync_router" :: waited 0.000s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:355 2022-05-11 21:24:38.320 951 DEBUG oslo_concurrency.lockutils [-] Lock "l3-agent-pd" released by "neutron.agent.linux.pd.PrefixDelegation.sync_router" :: held 0.001s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:367 2022-05-11 21:24:38.321 951 DEBUG oslo_concurrency.lockutils [-] Acquired lock "port-lock-fip-6636b4e5-16e2-482b-a0a7-5897ab0776c4-fg-0f0c96ae-7a" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:266 2022-05-11 21:24:38.375 951 DEBUG oslo_concurrency.lockutils [-] Releasing lock "port-lock-fip-6636b4e5-16e2-482b-a0a7-5897ab0776c4-fg-0f0c96ae-7a" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282 2022-05-11 21:24:38.389 951 DEBUG oslo_concurrency.lockutils [-] Acquired lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:266 2022-05-11 21:24:38.390 951 DEBUG neutron.common.coordination [-] Lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" acquired by "process_external" :: waited 0.001s _synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/common/coordination.py:73 2022-05-11 21:24:38.411 951 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:355 2022-05-11 21:24:38.413 951 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" released by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:367 2022-05-11 21:24:38.413 951 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:355 2022-05-11 
21:24:38.413 951 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" released by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:367 2022-05-11 21:24:38.433 951 DEBUG oslo_concurrency.lockutils [-] Acquired lock "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:266 2022-05-11 21:24:38.434 951 DEBUG oslo_concurrency.lockutils [-] Acquired external semaphore "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:272 2022-05-11 21:24:38.467 951 DEBUG neutron.agent.linux.iptables_manager [-] IPTablesManager.apply completed with success. 0 iptables commands were issued _apply_synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/linux/iptables_manager.py:625 2022-05-11 21:24:38.467 951 DEBUG oslo_concurrency.lockutils [-] Releasing lock "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282 2022-05-11 21:24:38.554 951 DEBUG oslo_concurrency.lockutils [-] Releasing lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282 2022-05-11 21:24:38.555 951 DEBUG neutron.common.coordination [-] Lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" released by "process_external" :: held 0.165s _synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/common/coordination.py:85 2022-05-11 21:24:38.555 951 DEBUG oslo_concurrency.lockutils [-] Acquired lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:266 2022-05-11 21:24:38.556 951 DEBUG neutron.common.coordination [-] Lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" acquired by "process_address_scope" :: waited 0.000s _synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/common/coordination.py:73 2022-05-11 21:24:38.573 951 DEBUG oslo_concurrency.lockutils [-] Acquired lock "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:266 2022-05-11 21:24:38.574 951 DEBUG oslo_concurrency.lockutils [-] Acquired external semaphore "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:272 2022-05-11 21:24:38.604 951 DEBUG neutron.agent.linux.iptables_manager [-] IPTablesManager.apply completed with success. 
0 iptables commands were issued _apply_synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/linux/iptables_manager.py:625 2022-05-11 21:24:38.604 951 DEBUG oslo_concurrency.lockutils [-] Releasing lock "iptables-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282 2022-05-11 21:24:38.604 951 DEBUG oslo_concurrency.lockutils [-] Releasing lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" lock /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282 2022-05-11 21:24:38.604 951 DEBUG neutron.common.coordination [-] Lock "router-lock-ns-qrouter-41fcd10b-7db5-45d9-b23c-e22f34c45eec" released by "process_address_scope" :: held 0.049s _synchronized /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/common/coordination.py:85 2022-05-11 21:24:38.605 951 DEBUG neutron.agent.l3.router_info [-] Added route entry is '{'destination': '192.168.25.0/24', 'nexthop': '10.50.51.1'}' routes_updated /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/l3/router_info.py:251 2022-05-11 21:24:38.640 951 DEBUG neutron_lib.callbacks.manager [-] Notify callbacks ['neutron.agent.metadata.driver.after_router_updated-8778777282865', 'neutron.agent.linux.pd.update_router-8778777281118'] for router, after_update _notify_loop /var/lib/kolla/venv/lib/python3.8/site-packages/neutron_lib/callbacks/manager.py:192 2022-05-11 21:24:38.641 951 DEBUG oslo_concurrency.lockutils [-] Lock "l3-agent-pd" acquired by "neutron.agent.linux.pd.update_router" :: waited 0.000s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:355 2022-05-11 21:24:38.641 951 DEBUG oslo_concurrency.lockutils [-] Lock "l3-agent-pd" released by "neutron.agent.linux.pd.update_router" :: held 0.000s inner /var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:367 2022-05-11 21:24:38.641 951 DEBUG neutron.agent.l3.l3_agent_extensions_manager [-] L3 agent extension(s) finished router 41fcd10b-7db5-45d9-b23c-e22f34c45eec update action. update_router /var/lib/kolla/venv/lib/python3.8/site-packages/neutron/agent/l3/l3_agent_extensions_manager.py:63 2022-05-11 21:24:38.642 951 INFO neutron.agent.l3.agent [-] Finished a router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, update_id 22d78d78-ecc7-4173-a354-1388ab90e883. Time elapsed: 1.863 From katonalala at gmail.com Thu May 12 17:40:57 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 12 May 2022 19:40:57 +0200 Subject: [Neutron][neutron-vpnaas] proposing Mohammed Naser for neutron-vpnaas core reviewer In-Reply-To: References: <2626416.mvXUDI8C0e@p1> Message-ID: Hi, Sorry for forgetting this topic for weeks. I think we have all the votes for mnaser and time for everybody to raise a hand. Welcome Mohammed :-) Lajos Miguel Lavalle ezt ?rta (id?pont: 2022. ?pr. 26., K, 16:29): > +1. Thanks mnaser and Dmitry for stepping up to the plate > > Cheers > > On Mon, Apr 25, 2022 at 2:17 PM Slawek Kaplonski > wrote: > >> Hi, >> >> On poniedzia?ek, 25 kwietnia 2022 17:18:19 CEST Lajos Katona wrote: >> > Hi, >> > I would like to propose Mohammed Naser (mnaser) as a core reviewer to >> > neutron-vpnaas. >> > He and his company uses neutron-vpnaas in production and volunteered to >> > help in the maintenance of it. >> > >> > You can vote/feedback in this email thread. >> > If there is no objection by 6th of May, we will add Mohammed to the core >> > list. 
>> > >> > Thanks >> > Lajos >> > >> >> +1 >> Great to see Mohammed stepping up to maintain neutron-vpnaas. Thanks >> Mohammed :) >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu May 12 17:48:12 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 12 May 2022 19:48:12 +0200 Subject: [neutron] Drivers meeting agenda - 13.05.2022. Message-ID: Hi Neutron Drivers, The agenda for tomorrow's drivers meeting is at [1]. [RFE][fwaas][OVN]support l3 firewall for ovn driver (#link https://bugs.launchpad.net/neutron/+bug/1971958) * On Demand agenda: (ralonsoh): https://bugs.launchpad.net/neutron/+bug/1973049: with the SQLAlchemy 2.0 compatibility patch in place (n-lib 2.21.0), there are some issues when an API call (e.g.: user command) updates the same resource as another RPC call (agent call). This is very frequent with port status and FIP status. Sometimes (I still need to investigate the reason), the retry command detects that there is another SQL transaction in progress ( https://paste.opendev.org/show/beHzMCKjVLmE5nX0ZXZX/). [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda See you at the meeting tomorrow. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Thu May 12 18:07:28 2022 From: helena at openstack.org (Helena Spease) Date: Thu, 12 May 2022 13:07:28 -0500 Subject: Meet the OpenStack community in Berlin, June 7-9 Message-ID: Hi everyone, OpenInfra Summit Berlin is around the corner, and in less than a month, hundreds of OpenStack community members will be in Berlin, Germany to present and share the latest upstream development, use cases and news in the OpenStack community. Here I have compiled all the OpenStack sessions that you can look forward to at the Summit. https://superuser.openstack.org/articles/what-can-you-expect-to-learn-about-openstack-at-the-openinfra-summit/ Don?t forget to register before the Summit prices increase on May 16 at 11:59 PM PT to save on your ticket purchase and learn all about the OpenStack project at the OpenInfra Summit Berlin, June 7-9. You can register for your ticket here: https://openinfrasummitberlin.eventbrite.com Cheers, Helena -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandeepggn93 at gmail.com Fri May 13 05:16:45 2022 From: sandeepggn93 at gmail.com (Sandeep Yadav) Date: Fri, 13 May 2022 10:46:45 +0530 Subject: [TripleO] Gate blocker on Wallaby/Victoria/Train - Content-provider job failing In-Reply-To: References: Message-ID: Hello All, This issue is resolved. The issue affecting the quay.ceph.io is resolved, Also we have moved from quay.ceph.io to quay.io in [1] to pull ceph containers for stable branches(Master branch was already using quay.io). [1] https://review.opendev.org/q/topic:ceph_monitoring_containers On Thu, May 12, 2022 at 10:52 AM Sandeep Yadav wrote: > Hello All, > > Currently, we have a check/gate blocker on Tripleo Content-provider job > for wallaby and earlier branches. The content-provider job cannot pull Ceph > related containers because quay.ceph.io is not accessible. The > details have been added to the launchpad bug[1]. > > Please hold rechecks while we are investigating the issue. In the > meantime, We are trying to switch the registry to pull ceph related > containers and waiting for the triple ceph team squad reviews on the > patch[2]. 
> > [1] https://bugs.launchpad.net/tripleo/+bug/1973115 > [2] https://review.opendev.org/c/openstack/tripleo-common/+/841512 > > Sandeep (on behalf of TripleO CI team) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Fri May 13 06:27:05 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Fri, 13 May 2022 11:57:05 +0530 Subject: ImportError: cannot import name 'ceph_spec'" | Openstack Wallaby | Tripleo Message-ID: Hi Team, I am deploying openstack wallaby using the deployed ceph method. I am currently facing this issue while deploying overcloud. Can someone please check ? "2022-05-13 14:17:51.022353 | 48d539a1-1679-730f-272a-0000000000cb | TASK | Create the RGW Daemon spec definition", "An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: cannot import name 'ceph_spec'", "2022-05-13 14:17:51.669288 | 48d539a1-1679-730f-272a-0000000000cb | FATAL | Create the RGW Daemon spec definition | overcloud-controller-0 | error={\"changed\": false, \"module_stderr\": \"Traceback (most recent call last):\\n File \\\"\\\", line 102, in \\n File \\\"\\\", line 94, in _ansiballz_main\\n File \\\"\\\", line 40, in invoke_module\\n File \\\"/usr/lib64/python3.6/runpy.py\\\", line 205, in run_module\\n return _run_module_code(code, init_globals, run_name, mod_spec)\\n File \\\"/usr/lib64/python3.6/runpy.py\\\", line 96, in _run_module_code\\n mod_name, mod_spec, pkg_name, script_name)\\n File \\\"/usr/lib64/python3.6/runpy.py\\\", line 85, in _run_code\\n exec(code, run_globals)\\n File \\\"/tmp/ansible_ceph_mkspec_payload_iwqyfs7g/ansible_ceph_mkspec_payload.zip/ansible/modules/ceph_mkspec.py\\\", line 24, in \\nImportError: cannot import name 'ceph_spec'\\n\", \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\\nSee stdout/stderr for the exact error\", \"rc\": 1}", With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fpantano at redhat.com Fri May 13 07:24:01 2022 From: fpantano at redhat.com (Francesco Pantano) Date: Fri, 13 May 2022 09:24:01 +0200 Subject: ImportError: cannot import name 'ceph_spec'" | Openstack Wallaby | Tripleo In-Reply-To: References: Message-ID: Hello, This is because of bug [1] which is already solved reverting the changes. The problem there is that a task, which is delegated_to mon[0], looks for a library present in the undercloud, but not in the overcloud. Not sure why you are still hitting that issue (older packages?), but if you apply [2] [3] you should be able to move forward. An alternative way to solve this problem is to (manually) install tripleo-common on mon[0] (which is your controller0). Hope this helps, Thanks, [1] https://bugs.launchpad.net/tripleo/+bug/1961325 [2] https://review.opendev.org/c/openstack/tripleo-common/+/830572 [3] https://review.opendev.org/c/openstack/tripleo-ansible/+/830573 On Fri, May 13, 2022 at 8:36 AM Swogat Pradhan wrote: > Hi Team, > I am deploying openstack wallaby using the deployed ceph method. > I am currently facing this issue while deploying overcloud. > Can someone please check ? > > "2022-05-13 14:17:51.022353 | 48d539a1-1679-730f-272a-0000000000cb > | TASK | Create the RGW Daemon spec definition", > "An exception occurred during task execution. To see the full > traceback, use -vvv. 
The error was: ImportError: cannot import name > 'ceph_spec'", > "2022-05-13 14:17:51.669288 | 48d539a1-1679-730f-272a-0000000000cb > | FATAL | Create the RGW Daemon spec definition | > overcloud-controller-0 | error={\"changed\": false, \"module_stderr\": > \"Traceback (most recent call last):\\n File \\\"\\\", line 102, in > \\n File \\\"\\\", line 94, in _ansiballz_main\\n File > \\\"\\\", line 40, in invoke_module\\n File > \\\"/usr/lib64/python3.6/runpy.py\\\", line 205, in run_module\\n return > _run_module_code(code, init_globals, run_name, mod_spec)\\n File > \\\"/usr/lib64/python3.6/runpy.py\\\", line 96, in _run_module_code\\n > mod_name, mod_spec, pkg_name, script_name)\\n File > \\\"/usr/lib64/python3.6/runpy.py\\\", line 85, in _run_code\\n > exec(code, run_globals)\\n File > \\\"/tmp/ansible_ceph_mkspec_payload_iwqyfs7g/ansible_ceph_mkspec_payload.zip/ansible/modules/ceph_mkspec.py\\\", > line 24, in \\nImportError: cannot import name 'ceph_spec'\\n\", > \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\\nSee stdout/stderr for > the exact error\", \"rc\": 1}", > > With regards, > Swogat Pradhan > -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Fri May 13 07:50:37 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Fri, 13 May 2022 13:20:37 +0530 Subject: ImportError: cannot import name 'ceph_spec'" | Openstack Wallaby | Tripleo In-Reply-To: References: Message-ID: Hi Francesco, Thank you for your reply. I already have installed the tripleo-common package. (have downgraded cryptography from 37 to 3.2.1) Still facing this issue. With regards, Swogat Pradhan On Fri, May 13, 2022 at 12:54 PM Francesco Pantano wrote: > Hello, > This is because of bug [1] which is already solved reverting the changes. > The problem there is that a task, which is delegated_to mon[0], looks for > a library > present in the undercloud, but not in the overcloud. > Not sure why you are still hitting that issue (older packages?), but if > you apply > [2] [3] you should be able to move forward. > An alternative way to solve this problem is to (manually) install > tripleo-common > on mon[0] (which is your controller0). > > Hope this helps, > Thanks, > > > [1] https://bugs.launchpad.net/tripleo/+bug/1961325 > [2] https://review.opendev.org/c/openstack/tripleo-common/+/830572 > [3] https://review.opendev.org/c/openstack/tripleo-ansible/+/830573 > > On Fri, May 13, 2022 at 8:36 AM Swogat Pradhan > wrote: > >> Hi Team, >> I am deploying openstack wallaby using the deployed ceph method. >> I am currently facing this issue while deploying overcloud. >> Can someone please check ? >> >> "2022-05-13 14:17:51.022353 | 48d539a1-1679-730f-272a-0000000000cb >> | TASK | Create the RGW Daemon spec definition", >> "An exception occurred during task execution. To see the full >> traceback, use -vvv. 
The error was: ImportError: cannot import name >> 'ceph_spec'", >> "2022-05-13 14:17:51.669288 | >> 48d539a1-1679-730f-272a-0000000000cb | FATAL | Create the RGW Daemon >> spec definition | overcloud-controller-0 | error={\"changed\": false, >> \"module_stderr\": \"Traceback (most recent call last):\\n File >> \\\"\\\", line 102, in \\n File \\\"\\\", line 94, >> in _ansiballz_main\\n File \\\"\\\", line 40, in invoke_module\\n >> File \\\"/usr/lib64/python3.6/runpy.py\\\", line 205, in run_module\\n >> return _run_module_code(code, init_globals, run_name, mod_spec)\\n File >> \\\"/usr/lib64/python3.6/runpy.py\\\", line 96, in _run_module_code\\n >> mod_name, mod_spec, pkg_name, script_name)\\n File >> \\\"/usr/lib64/python3.6/runpy.py\\\", line 85, in _run_code\\n >> exec(code, run_globals)\\n File >> \\\"/tmp/ansible_ceph_mkspec_payload_iwqyfs7g/ansible_ceph_mkspec_payload.zip/ansible/modules/ceph_mkspec.py\\\", >> line 24, in \\nImportError: cannot import name 'ceph_spec'\\n\", >> \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\\nSee stdout/stderr for >> the exact error\", \"rc\": 1}", >> >> With regards, >> Swogat Pradhan >> > > > -- > Francesco Pantano > GPG KEY: F41BD75C > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Fri May 13 09:14:37 2022 From: tobias.urdin at binero.com (Tobias Urdin) Date: Fri, 13 May 2022 09:14:37 +0000 Subject: [puppet] Gate blocker: Installation of the systemtap package fails In-Reply-To: References: Message-ID: <608A2389-C1AC-4FAC-AE34-81355E551831@binero.com> Thanks for handling all of these Takashi! Best regards On 9 May 2022, at 06:01, Takashi Kajinami > wrote: The new systemtap packages were released and stable/yoga should be unblocked now. I've reverted the temporal workaround and now integration jobs are passing without it. On Fri, May 6, 2022 at 11:41 AM Takashi Kajinami > wrote: Because the new systemtap packages have not yet been released and our master gate job has been broken for more than a week, I merged the temporal workaround[1]. I'd keep that patch master only atm, but it takes additional days then I'll backport that to stable/yoga to unblock stable/yoga CI. [1] https://review.opendev.org/c/openstack/puppet-openstack-integration/+/840188 On Wed, May 4, 2022 at 12:16 AM Takashi Kajinami > wrote: Hello, We are currently facing consistent failure in centos stream 9 integration jobs. which is caused by the new dyninst package. I've already reported the issue in bz[1] and we are currently waiting for the updated systemtap package. Please avoid rechecking until the package is released [1] https://bugzilla.redhat.com/show_bug.cgi?id=2079892 If the fix is not released for additional days then we can merge the temporal workaround to unblock our jobs. https://review.opendev.org/c/openstack/puppet-openstack-integration/+/840188 Last week we also saw centos stream 8 jobs consistently failing but it seems these jobs were already fixed by the new python3-qt5 package. [2] https://bugzilla.redhat.com/show_bug.cgi?id=2079895 Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Fri May 13 10:01:16 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Fri, 13 May 2022 15:31:16 +0530 Subject: ImportError: cannot import name 'ceph_spec'" | Openstack Wallaby | Tripleo In-Reply-To: References: Message-ID: Hi Francesco, As you mentioned, if i apply [2] and [3] I will be able to move forward. 
[2] is already in place as i have already setup tripleo_common coming to [3] you suggesting to install the tripleo_ansible package like mentioned here https://docs.openstack.org/tripleo-ansible/wallaby/installation.html in controller nodes? With regards, Swogat Pradhan On Fri, May 13, 2022 at 12:54 PM Francesco Pantano wrote: > Hello, > This is because of bug [1] which is already solved reverting the changes. > The problem there is that a task, which is delegated_to mon[0], looks for > a library > present in the undercloud, but not in the overcloud. > Not sure why you are still hitting that issue (older packages?), but if > you apply > [2] [3] you should be able to move forward. > An alternative way to solve this problem is to (manually) install > tripleo-common > on mon[0] (which is your controller0). > > Hope this helps, > Thanks, > > > [1] https://bugs.launchpad.net/tripleo/+bug/1961325 > [2] https://review.opendev.org/c/openstack/tripleo-common/+/830572 > [3] https://review.opendev.org/c/openstack/tripleo-ansible/+/830573 > > On Fri, May 13, 2022 at 8:36 AM Swogat Pradhan > wrote: > >> Hi Team, >> I am deploying openstack wallaby using the deployed ceph method. >> I am currently facing this issue while deploying overcloud. >> Can someone please check ? >> >> "2022-05-13 14:17:51.022353 | 48d539a1-1679-730f-272a-0000000000cb >> | TASK | Create the RGW Daemon spec definition", >> "An exception occurred during task execution. To see the full >> traceback, use -vvv. The error was: ImportError: cannot import name >> 'ceph_spec'", >> "2022-05-13 14:17:51.669288 | >> 48d539a1-1679-730f-272a-0000000000cb | FATAL | Create the RGW Daemon >> spec definition | overcloud-controller-0 | error={\"changed\": false, >> \"module_stderr\": \"Traceback (most recent call last):\\n File >> \\\"\\\", line 102, in \\n File \\\"\\\", line 94, >> in _ansiballz_main\\n File \\\"\\\", line 40, in invoke_module\\n >> File \\\"/usr/lib64/python3.6/runpy.py\\\", line 205, in run_module\\n >> return _run_module_code(code, init_globals, run_name, mod_spec)\\n File >> \\\"/usr/lib64/python3.6/runpy.py\\\", line 96, in _run_module_code\\n >> mod_name, mod_spec, pkg_name, script_name)\\n File >> \\\"/usr/lib64/python3.6/runpy.py\\\", line 85, in _run_code\\n >> exec(code, run_globals)\\n File >> \\\"/tmp/ansible_ceph_mkspec_payload_iwqyfs7g/ansible_ceph_mkspec_payload.zip/ansible/modules/ceph_mkspec.py\\\", >> line 24, in \\nImportError: cannot import name 'ceph_spec'\\n\", >> \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\\nSee stdout/stderr for >> the exact error\", \"rc\": 1}", >> >> With regards, >> Swogat Pradhan >> > > > -- > Francesco Pantano > GPG KEY: F41BD75C > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Fri May 13 05:45:10 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Fri, 13 May 2022 11:15:10 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: Hi Takashi, Thanks for your reply. I tried executing the suggested command and below is the output [heat-admin at overcloud-controller-1 ~]$ /bin/os-net-config -i eno1 {'eno1': 'eno1'} Regards Anirudh Gupta On Thu, May 12, 2022 at 11:03 AM Takashi Kajinami wrote: > The puppy implementation executes the following command to get the > interface information. > /bin/os-net-config -i > I'd recommend you check the command output in *all overcloud nodes *. 
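A rough way to run that check from the undercloud in one go is a loop like the one below; it assumes the default stack/heat-admin credentials and the eno1 name from the output above, so adjust both to the environment:

    # Sketch only: list the ctlplane IPs of the overcloud nodes and dump
    # the interface mapping os-net-config reports on each of them
    source ~/stackrc
    for ip in $(openstack server list -f value -c Networks | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'); do
        echo "== ${ip} =="
        ssh heat-admin@"${ip}" '/bin/os-net-config -i eno1'
    done
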
> > If you need to use different interfaces for different roles then you need > to define the parameter > as role specific one, defined under Parameters. > > On Wed, May 11, 2022 at 4:26 PM Anirudh Gupta wrote: > >> Hi Takashi, >> >> Thanks for clarifying my issues regarding the support of PTP in Wallaby >> Release. >> >> In Train, I have also tried passing the exact interface name and took 2 >> runs with and without quotes like below: >> >> >> *PtpInterface: eno1* >> >> *PtpInterface: 'eno1'* >> >> But in both the cases, the issue observed was similar >> >> 2022-05-11 10:33:20.189107 | 5254001f-9952-934d-e901-0000000030be | >> FATAL | Wait for puppet host configuration to finish | >> overcloud-controller-2 | error={"ansible_job_id": "526310775819.36650", >> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >> --detailed-exitcodes --summarize --color=false >> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >> puppet-user", "delta": "0:00:04.289208", "end": "2022-05-11 >> 10:33:08.195052", "failed_when_result": true, "finished": 1, "msg": >> "non-zero return code", "rc": 1, "start": "2022-05-11 10:33:03.905844", >> "stderr": "<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' >> is deprecated in favor of using 'lookup'. See >> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & >> line not available)\n<13>May 11 10:33:03 puppet-user: Warning: >> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >> should be converted to version 5\n<13>May 11 10:33:03 puppet-user: >> (file: /etc/puppet/hiera.yaml)\n<13>May 11 10:33:03 puppet-user: Warning: >> Undefined variable '::deploy_config_name'; \\n (file & line not >> available)\n<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: module >> 'tripleo' has unresolved dependencies - it will only see those that are >> resolved. Use 'puppet module list --tree' to see information about >> modules\\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: >> Error: Evaluation Error: A substring operation does not accept a String as >> a character index. Expected an Integer (file: >> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >> column: 46) on node overcloud-controller-2.localdomain", "stderr_lines": >> ["<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' is >> deprecated in favor of using 'lookup'. See >> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file & >> line not available)", "<13>May 11 10:33:03 puppet-user: Warning: >> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >> should be converted to version 5", "<13>May 11 10:33:03 puppet-user: >> (file: /etc/puppet/hiera.yaml)", "<13>May 11 10:33:03 puppet-user: >> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >> available)", "<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: >> module 'tripleo' has unresolved dependencies - it will only see those that >> are resolved. Use 'puppet module list --tree' to see information about >> modules\\n (file & line not available)", "<13>May 11 10:33:03 >> puppet-user: *Error: Evaluation Error: A substring operation does not >> accept a String as a character index. 
Expected an Integer (file: >> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >> column: 46) on node overcloud-controller-2.localdomain"], "stdout": "", >> "stdout_lines": []}* >> 2022-05-11 10:33:20.190263 | 5254001f-9952-934d-e901-0000000030be | >> TIMING | Wait for puppet host configuration to finish | >> overcloud-controller-2 | 0:12:41.268734 | 7.01s >> >> I'll be highly grateful if you could further extend your support to >> resolve the issue. >> >> Regards >> Anirudh Gupta >> >> On Tue, May 10, 2022 at 9:15 PM Takashi Kajinami >> wrote: >> >>> >>> >>> On Wed, May 11, 2022 at 12:19 AM Anirudh Gupta >>> wrote: >>> >>>> Hi Takashi, >>>> >>>> Thanks for your suggestion. >>>> >>>> I downloaded the updated Train Images and they had the ptp.pp file >>>> available on the overcloud and undercloud machines >>>> >>>> [root at overcloud-controller-1 /]# find . -name "ptp.pp" >>>> >>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>> >>>> With this, I re-executed the deployment and got the below error on the >>>> machines >>>> >>>> 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce | >>>> FATAL | Wait for puppet host configuration to finish | >>>> overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", >>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>> --detailed-exitcodes --summarize --color=false >>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>> puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 >>>> 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": >>>> "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", >>>> "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' >>>> is deprecated in favor of using 'lookup'. See >>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file >>>> & line not available)\n<13>May 10 20:05:37 puppet-user: Warning: >>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>> should be converted to version 5\n<13>May 10 20:05:37 puppet-user: >>>> (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: >>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>> available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module >>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>> resolved. Use 'puppet module list --tree' to see information about >>>> modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: >>>> Error: Evaluation Error: A substring operation does not accept a String as >>>> a character index. Expected an Integer (file: >>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>> column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": >>>> ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is >>>> deprecated in favor of using 'lookup'. See >>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file >>>> & line not available)", "<13>May 10 20:05:37 puppet-user: Warning: >>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It >>>> should be converted to version 5", "<13>May 10 20:05:37 puppet-user: >>>> (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: >>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>> available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: >>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>> are resolved. Use 'puppet module list --tree' to see information about >>>> modules\\n (file & line not available)", "<13>May 10 20:05:37 >>>> puppet-user: *Error: Evaluation Error: A substring operation does not >>>> accept a String as a character index. Expected an Integer (file: >>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>> column: 46) *on node overcloud-controller-1.localdomain"], "stdout": >>>> "", "stdout_lines": []} >>>> >>>> The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, >>>> line: 41, column: 46 *had the following code: >>>> 34 class tripleo::profile::base::time::ptp ( >>>> 35 $ptp4l_interface = 'eth0', >>>> 36 $ptp4l_conf_slaveonly = 1, >>>> 37 $ptp4l_conf_network_transport = 'UDPv4', >>>> 38 ) { >>>> 39 >>>> 40 $interface_mapping = generate('/bin/os-net-config', '-i', >>>> $ptp4l_interface) >>>> 41 *$ptp4l_interface_name = >>>> $interface_mapping[$ptp4l_interface]* >>>> >>>> >>>> *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file >>>> is as below: >>>> >>>> resource_registry: >>>> # FIXME(bogdando): switch it, once it is containerized >>>> OS::TripleO::Services::Ptp: >>>> ../../deployment/time/ptp-baremetal-puppet.yaml >>>> OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp >>>> >>>> parameter_defaults: >>>> # PTP hardware interface name >>>> *PtpInterface: 'nic1'* >>>> >>>> # Configure PTP clock in slave mode >>>> PtpSlaveMode: 1 >>>> >>>> # Configure PTP message transport protocol >>>> PtpMessageTransport: 'UDPv4' >>>> >>>> I have also tried modifying the entry as below: >>>> *PtpInterface: 'nic1' #*(i.e. without quotes), but the error remains >>>> the same. >>>> >>>> Queries: >>>> >>>> 1. Any pointers to resolve this? >>>> >>>> I'm not familiar with ptp but you'd need to use the actual interface >>> name >>> if you are not using the alias name. >>> >>> >>> >>>> >>>> 1. You were mentioning something about the support of PTP not there >>>> in the wallaby release. Can you please confirm? >>>> >>>> IIUC PTP is still supported even in master. What we removed is the >>> implementation using Puppet >>> which was replaced by ansible. >>> >>> The warning regarding OS::TripleO::Services::Ptp was added when we >>> decided to merge >>> all time sync services to the single service resource which is >>> OS::TripleO::Services::Timesync[1]. >>> It's related to how resources are defined in Heat and doesn't affect >>> configuration support itself. >>> >>> [1] >>> https://review.opendev.org/c/openstack/tripleo-heat-templates/+/586679 >>> >>> >>> >>>> It would be a great help if you could extend a little more support to >>>> resolve the issues. >>>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> >>>> On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta >>>> wrote: >>>> >>>>> I'll check that well. >>>>> By the way, I downloaded the images from the below link >>>>> >>>>> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >>>>> >>>>> They seem to be updated yesterday, I'll download and try the >>>>> deployment with the latest images. 
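Coming back to the ptp.pp excerpt quoted above, lines 40-41 read together with the error text suggest a data-shape problem rather than a wrong interface name: Puppet's generate() returns the command output as a plain String, and indexing a String with another String is exactly what raises "A substring operation does not accept a String as a character index". A quick way to see what line 40 actually captures on the failing node (nic1 here is just the value carried over from ptp.yaml):

    # Run on the overcloud node reporting the error; this is the command
    # that generate() wraps on line 40 of ptp.pp. It prints a Python-dict
    # style string such as {'nic1': '<real-nic-name>'}, not a Puppet Hash,
    # so the lookup on line 41 ends up being treated as a substring
    # operation on a String.
    /bin/os-net-config -i nic1
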
>>>>> >>>>> Also are you pointing that the support for PTP would not be there in >>>>> Wallaby Release? >>>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami >>>>> wrote: >>>>> >>>>>> >>>>>> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta >>>>>> wrote: >>>>>> >>>>>>> Hi Takashi >>>>>>> >>>>>>> I have checked this in undercloud only. >>>>>>> I don't find any such file in overcloud. Could this be a concern? >>>>>>> >>>>>> >>>>>> The manifest should exist in overcloud nodes and the missing file is >>>>>> the exact cause >>>>>> of that puppet failure during deployment. >>>>>> >>>>>> Please check your overcloud images used to install overcloud nodes >>>>>> and ensure that >>>>>> you're using the right one. You might be using the image for a >>>>>> different release. >>>>>> We removed the manifest file during the Wallaby cycle. >>>>>> >>>>>> >>>>>>> >>>>>>> Regards >>>>>>> Anirudh Gupta >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami < >>>>>>> tkajinam at redhat.com> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami < >>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Hi Takashi, >>>>>>>>>> >>>>>>>>>> Thanks for your reply. >>>>>>>>>> >>>>>>>>>> I have checked on my machine and the file "ptp.pp" do exist at >>>>>>>>>> path " >>>>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>>>> " >>>>>>>>>> >>>>>>>>> Did you check this in your undercloud or overcloud ? >>>>>>>>> During the deployment all configuration files are generated using >>>>>>>>> puppet modules >>>>>>>>> installed in overcloud nodes, so you should check this in >>>>>>>>> overcloud nodes. >>>>>>>>> >>>>>>>>> Also, the deprecation warning is not implemented >>>>>>>>> >>>>>>>> Ignore this incomplete line. I was looking for the implementation >>>>>>>> which shows the warning >>>>>>>> but I found it in tripleoclient and it looks reasonable according >>>>>>>> to what we have in >>>>>>>> environments/services/ptp.yaml . >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>>>>>>> for controller and compute *before rendering the templates, but >>>>>>>>>> still I am getting the same issue on all the 3 Controllers and 1 Compute >>>>>>>>>> >>>>>>>>> >>>>>>>>> IIUC you don't need this because OS::TripleO::Services::Timesync >>>>>>>>> becomes an alias >>>>>>>>> to the Ptp service resource when you use the ptp environment file. >>>>>>>>> >>>>>>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function Call, >>>>>>>>>> Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>> >>>>>>>>>> Can you suggest any workarounds or any pointers to look further >>>>>>>>>> in order to resolve this issue? 
>>>>>>>>>> >>>>>>>>> >>>>>>>>>> Regards >>>>>>>>>> Anirudh Gupta >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami < >>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>> >>>>>>>>>>> I'm not familiar with PTP, but the error you pasted indicates >>>>>>>>>>> that the required puppet manifest does not exist in your overcloud >>>>>>>>>>> node/image. >>>>>>>>>>> >>>>>>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>>>>>> >>>>>>>>>>> This should not happen and the class should exist as long as you >>>>>>>>>>> have puppet-tripleo from stable/train installed. >>>>>>>>>>> >>>>>>>>>>> I'd recommend you check installed tripleo/puppet packages and >>>>>>>>>>> ensure everything is in the consistent release. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta < >>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi All >>>>>>>>>>>> >>>>>>>>>>>> Any update on this? >>>>>>>>>>>> >>>>>>>>>>>> Regards >>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>> >>>>>>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>> >>>>>>>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>>>>>>> >>>>>>>>>>>>> When I was executing the Overcloud deployment script, passing >>>>>>>>>>>>> the PTP yaml, it gave the following option at the starting >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>>>>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>>>>>> continue with deployment [y/N]* >>>>>>>>>>>>> >>>>>>>>>>>>> even if passing Y, it starts executing for sometime and the >>>>>>>>>>>>> gives the following error >>>>>>>>>>>>> >>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Can someone suggest some pointers in order to resolve this >>>>>>>>>>>>> issue and move forward? >>>>>>>>>>>>> >>>>>>>>>>>>> Regards >>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta < >>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>> >>>>>>>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>>>>>>> successfully. 
>>>>>>>>>>>>>> I need to enable PTP service while deploying the overcloud >>>>>>>>>>>>>> for which I have included the service in my deployment >>>>>>>>>>>>>> >>>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>>> \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>>> \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>>> \ >>>>>>>>>>>>>> * -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>>>>>> \* >>>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>>> >>>>>>>>>>>>>> But it gives the following error >>>>>>>>>>>>>> >>>>>>>>>>>>>> 2022-05-06 11:30:10.707655 | >>>>>>>>>>>>>> 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host >>>>>>>>>>>>>> configuration to finish | overcloud-controller-0 | error={"ansible_job_id": >>>>>>>>>>>>>> "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o >>>>>>>>>>>>>> pipefail; puppet apply >>>>>>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. 
(file: >>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>>>>>> 'lookup'. See >>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>>>>>>>>> (file: >>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while >>>>>>>>>>>>>> evaluating a Function Call, Could not find class >>>>>>>>>>>>>> ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain >>>>>>>>>>>>>> (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) >>>>>>>>>>>>>> on node* overcloud-controller-0.localdomain"], "stdout": "", >>>>>>>>>>>>>> "stdout_lines": []} >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Can someone please help in resolving this issue? >>>>>>>>>>>>>> >>>>>>>>>>>>>> Regards >>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>> >>>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Fri May 13 06:29:46 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Fri, 13 May 2022 11:59:46 +0530 Subject: ImportError: cannot import name 'ceph_spec'" | Openstack Wallaby | Tripleo In-Reply-To: References: Message-ID: Hi, Please find the attached log. With regards Swogat pradhan On Fri, May 13, 2022 at 11:57 AM Swogat Pradhan wrote: > Hi Team, > I am deploying openstack wallaby using the deployed ceph method. > I am currently facing this issue while deploying overcloud. > Can someone please check ? > > "2022-05-13 14:17:51.022353 | 48d539a1-1679-730f-272a-0000000000cb > | TASK | Create the RGW Daemon spec definition", > "An exception occurred during task execution. To see the full > traceback, use -vvv. 
The error was: ImportError: cannot import name > 'ceph_spec'", > "2022-05-13 14:17:51.669288 | 48d539a1-1679-730f-272a-0000000000cb > | FATAL | Create the RGW Daemon spec definition | > overcloud-controller-0 | error={\"changed\": false, \"module_stderr\": > \"Traceback (most recent call last):\\n File \\\"\\\", line 102, in > \\n File \\\"\\\", line 94, in _ansiballz_main\\n File > \\\"\\\", line 40, in invoke_module\\n File > \\\"/usr/lib64/python3.6/runpy.py\\\", line 205, in run_module\\n return > _run_module_code(code, init_globals, run_name, mod_spec)\\n File > \\\"/usr/lib64/python3.6/runpy.py\\\", line 96, in _run_module_code\\n > mod_name, mod_spec, pkg_name, script_name)\\n File > \\\"/usr/lib64/python3.6/runpy.py\\\", line 85, in _run_code\\n > exec(code, run_globals)\\n File > \\\"/tmp/ansible_ceph_mkspec_payload_iwqyfs7g/ansible_ceph_mkspec_payload.zip/ansible/modules/ceph_mkspec.py\\\", > line 24, in \\nImportError: cannot import name 'ceph_spec'\\n\", > \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\\nSee stdout/stderr for > the exact error\", \"rc\": 1}", > > With regards, > Swogat Pradhan > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cephadm_command.log Type: application/octet-stream Size: 311284 bytes Desc: not available URL: From fpantano at redhat.com Fri May 13 12:36:14 2022 From: fpantano at redhat.com (Francesco Pantano) Date: Fri, 13 May 2022 14:36:14 +0200 Subject: ImportError: cannot import name 'ceph_spec'" | Openstack Wallaby | Tripleo In-Reply-To: References: Message-ID: Hi, [2] and [3] are enough to move forward, so ... if you already applied the two patches (in the undercloud) you don't have to do anything else. I mentioned "installing tripleo_common" on the controller-0 node as a _not_ recommended workaround that avoid you to change the code applying [2] [3]. Thanks, [2] https://review.opendev.org/c/openstack/tripleo-common/+/830572 [3] https://review.opendev.org/c/openstack/tripleo-ansible/+/830573 On Fri, May 13, 2022 at 12:01 PM Swogat Pradhan wrote: > Hi Francesco, > As you mentioned, if i apply [2] and [3] I will be able to move forward. > [2] is already in place as i have already setup tripleo_common > coming to [3] you suggesting to install the tripleo_ansible package like > mentioned here > https://docs.openstack.org/tripleo-ansible/wallaby/installation.html in > controller nodes? > > With regards, > Swogat Pradhan > > On Fri, May 13, 2022 at 12:54 PM Francesco Pantano > wrote: > >> Hello, >> This is because of bug [1] which is already solved reverting the changes. >> The problem there is that a task, which is delegated_to mon[0], looks for >> a library >> present in the undercloud, but not in the overcloud. >> Not sure why you are still hitting that issue (older packages?), but if >> you apply >> [2] [3] you should be able to move forward. >> An alternative way to solve this problem is to (manually) install >> tripleo-common >> on mon[0] (which is your controller0). >> >> Hope this helps, >> Thanks, >> >> >> [1] https://bugs.launchpad.net/tripleo/+bug/1961325 >> [2] https://review.opendev.org/c/openstack/tripleo-common/+/830572 >> [3] https://review.opendev.org/c/openstack/tripleo-ansible/+/830573 >> >> On Fri, May 13, 2022 at 8:36 AM Swogat Pradhan >> wrote: >> >>> Hi Team, >>> I am deploying openstack wallaby using the deployed ceph method. >>> I am currently facing this issue while deploying overcloud. 
>>> Can someone please check ? >>> >>> "2022-05-13 14:17:51.022353 | >>> 48d539a1-1679-730f-272a-0000000000cb | TASK | Create the RGW Daemon >>> spec definition", >>> "An exception occurred during task execution. To see the full >>> traceback, use -vvv. The error was: ImportError: cannot import name >>> 'ceph_spec'", >>> "2022-05-13 14:17:51.669288 | >>> 48d539a1-1679-730f-272a-0000000000cb | FATAL | Create the RGW Daemon >>> spec definition | overcloud-controller-0 | error={\"changed\": false, >>> \"module_stderr\": \"Traceback (most recent call last):\\n File >>> \\\"\\\", line 102, in \\n File \\\"\\\", line 94, >>> in _ansiballz_main\\n File \\\"\\\", line 40, in invoke_module\\n >>> File \\\"/usr/lib64/python3.6/runpy.py\\\", line 205, in run_module\\n >>> return _run_module_code(code, init_globals, run_name, mod_spec)\\n File >>> \\\"/usr/lib64/python3.6/runpy.py\\\", line 96, in _run_module_code\\n >>> mod_name, mod_spec, pkg_name, script_name)\\n File >>> \\\"/usr/lib64/python3.6/runpy.py\\\", line 85, in _run_code\\n >>> exec(code, run_globals)\\n File >>> \\\"/tmp/ansible_ceph_mkspec_payload_iwqyfs7g/ansible_ceph_mkspec_payload.zip/ansible/modules/ceph_mkspec.py\\\", >>> line 24, in \\nImportError: cannot import name 'ceph_spec'\\n\", >>> \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\\nSee stdout/stderr for >>> the exact error\", \"rc\": 1}", >>> >>> With regards, >>> Swogat Pradhan >>> >> >> >> -- >> Francesco Pantano >> GPG KEY: F41BD75C >> > -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Fri May 13 17:11:50 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 13 May 2022 19:11:50 +0200 Subject: [release] Release countdown for week R-20, May 16 - 20 Message-ID: <34ed0cab-4a5c-4ddc-88be-8070ee1ecf37@est.tech> Development Focus ----------------- The Zed-1 milestone is next week, on May 19th, 2022! Project team plans for the Zed cycle should now be solidified. General Information ------------------- Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library which had changes but has not been otherwise released since the Yoga release. PTL's or release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well, by posting a -1. If we do not hear anything at all by the end of the week, we will assume things are OK to proceed. NB: If one of your libraries is still releasing 0.x versions, start thinking about when it will be appropriate to do a 1.0 version. The version number does signal the state, real or perceived, of the library, so we strongly encourage going to a full major version once things are in a good and usable state. Upcoming Deadlines & Dates -------------------------- Zed-1 milestone: May 19th, 2022 OpenInfra Summit: June 7-9, 2022 (Berlin) El?d Ill?s irc: elodilles From ces.eduardo98 at gmail.com Fri May 13 18:57:20 2022 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Fri, 13 May 2022 15:57:20 -0300 Subject: [manila] Manila Zed hack-a-thon Message-ID: Hi Zorillas and fellow stackers, Over the past cycles we had a good effort to add OSC support to Manila. We are quite close to parity now (thanks to all the Zorillas involved). In those changes, we didn't focus on automated functional testing. 
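As a rough illustration, the missing functional coverage would exercise OSC calls along these lines (assuming a cloud with manila enabled and the OSC plugin from python-manilaclient; the protocol, size and name below are only examples):

    openstack share create NFS 1 --name func-test-share
    openstack share list
    openstack share show func-test-share
    openstack share delete func-test-share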
Over PTG we came up with the idea to host a hack-a-thon, so we could enhance our functional test coverage, helping to ensure the quality of our OSC. We will be doing that from *May 16th to May 23rd.* The work on OSC is currently being tracked here: https://tree.taiga.io/project/gouthampacha-openstack-manila-osc-integration-with-python-manilaclient/kanban What are the plans for the hack-a-thon: - There are few examples of recent additions [1][2] that could serve as inspiration. You're free to create, claim and edit Taiga cards for the implementation you're chasing. Please reach out to me (carloss) or Goutham (gouthamr) if you have to be added to the taiga board. Also, Add yourself as a "watcher" to two other Taiga cards that someone else is implementing, this will mean that you will be a code reviewer on these implementations! - You can work in teams - min size: 1, max size: 2 - You can submit code between May 16th 1500 UTC and May 23rd 1500 UTC to count towards this hack-a-thon - During the hack-a-thon, we'll use #openstack-manila on OFTC to chat, and hop onto a meeting room [3] - We'll have a one hour kick-off session on May 16th at 1500 UTC - On this session we will also cover the environment setup - We'll use the manila community meeting hour as a mid-point status check (May 19th, 1500 UTC) - We'll have a half-hour close out meeting to discuss reviews, AIs and futures (May 23rd, 1500 UTC) All sessions will take place on the [3] bridge. Please feel free to reach out here or on OFTC's #openstack-manila if you have any questions! Huge thank you to Maari for helping us out with the backlog. I hope to see you there and that we accomplish good things with this Hackathon! [1] https://review.opendev.org/c/openstack/python-manilaclient/+/707577 [2] https://review.opendev.org/c/openstack/python-manilaclient/+/836834 [3] https://bluejeans.com/371190441 Regards, carloss -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat May 14 01:43:57 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 13 May 2022 20:43:57 -0500 Subject: [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> Message-ID: <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> Hello Everyone, Writing on the top for quick reading as we have a consensus on this. In today's TC meeting [3], we discussed this topic with Foundation staff and we all agreed to give the release name process handling to Foundation. TC will not be involved in the process. The release name will be mainly used for marketing purposes and we will use the release number as the primary identifier in our release scheduling, automated processes, directory structure etc. I have proposed the patch to document it in the TC release naming process. - https://review.opendev.org/c/openstack/governance/+/841800 [1] https://meetings.opendev.org/meetings/tc/2022/tc.2022-05-12-15.00.log.html#l-245 -gmann ---- On Fri, 29 Apr 2022 14:47:31 -0500 Ghanshyam Mann wrote ---- > > ---- On Fri, 29 Apr 2022 14:11:12 -0500 Goutham Pacha Ravi wrote ---- > > On Fri, Apr 29, 2022 at 8:36 PM Slawek Kaplonski wrote: > > > > > > Hi, > > > > > > > > > During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1]. 
> > > > > > It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages. > > > > > > > > > Finally we decided that now, after Zed release, when we will go all round through alphabet it is very good time to change this policy and use only numeric version with "year"."release in the year". It is proposed in [2]. > > > > > > This is also good timing for such change because in the same release we are going to start our "Tick Tock" release cadence which means that every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.). > > > > Beloved TC, > > > > I'm highly disappointed in this 'decision', and would like for you to > > reconsider. I see the reasons you cite, but I feel like we're throwing > > the baby out with the bathwater here. Disagreements need not be > > feared, why not allow them to be aired publicly? That's a tenet of > > this open community. Allow names to be downvoted with reason during > > the proposal phase, and they'll organically fall-off from favor. > > > > Release names have always been a bonding factor. I've been happy to > > drum up contributor morale with our release names and the > > stories/anecdotes behind them. Release naming will not hurt/help the > > tick-tock release IMHO. We can append the release number to the name, > > and call it a day if you want. > > I agree with the disagrement ratio and that should not stop us doing the things. > But here we need to understand what type of disagrement we have and on what. > Most of the disagrement were cutural or historical where people has shown it > emotinally. And I personally as well as a TC or communitiy member does not feel > goot to ignore them or give them any reasoning not to listen them (because I do not > have any reasoning on these cultural/historical disagrement). > > Zed cycle was one good example of such thing when it was brought up in TC > channel about war thing[1] and asked about change the Zed name. I will be happy > to know what is best solution for this. > > 1. Change Zed name: it involve lot of technical work and communciation too. If yes then > let's do this now. > > 2. Do not listen to these emotional request to change name: We did this at the end and I > do not feel ok to do that. At least I do not want to ignore such request in future. > > Those are main reason we in TC decided to remvoe the name as they are culturally, emotionally > tied. That is main reason of droping those not any techncial or work wise issue. > > [1] https://meetings.opendev.org/irclogs/%23openstack-tc/%23openstack-tc.2022-03-08.log.html#t2022-03-08T14:35:26 > > -gmann > > > > > I do believe our current release naming process is a step out of the > > TC's perceived charter. There are many technical challenges that the > > TC is tackling, and coordinating a vote/slugfest about names isn't as > > important as those. > > As Allison suggests, we could seek help from the foundation to run the > > community voting and vetting for the release naming process - and > > expect the same level of transparency as the 4 opens that the > > OpenStack community espouses. > > Yes we will offcourse open to that but at the same time we will be waiting > for the foudnation proposal to sovle such issue irespective of who is doing name > selection. So let's wait for that. 
> > -gmann > > > > > > > > > > > > > > > > > > [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 > > > > > > [2] https://review.opendev.org/c/openstack/governance/+/839897 > > > > > > > > > -- > > > > > > Slawek Kaplonski > > > > > > Principal Software Engineer > > > > > > Red Hat > > > > > > From gmann at ghanshyammann.com Sat May 14 01:47:14 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 13 May 2022 20:47:14 -0500 Subject: [all][tc] Release cadence terminology (Was: Change OpenStack release naming...) In-Reply-To: <20220429184651.q4crwkkam74qjheu@yuggoth.org> References: <2175937.irdbgypaU6@p1> <3ef1f910-5f87-fad2-9bee-79df2abced07@linaro.org> <20220429184651.q4crwkkam74qjheu@yuggoth.org> Message-ID: <180c03f9833.103316537148495.2271003896993195726@ghanshyammann.com> ---- On Fri, 29 Apr 2022 13:46:51 -0500 Jeremy Stanley wrote ---- > On 2022-04-29 17:43:25 +0200 (+0200), Marcin Juszkiewicz wrote: > [...] > > Also suggestion: drop tick/tock from naming documentation please. > > I never remember which is major and which is minor. > > This is a good point. From an internationalization perspective, the > choice of wording could be especially confusing as it's an analogy > for English onomatopoeia related to mechanical clocks. I doubt it > would translate well (if at all). > > In retrospect, adjusting the terminology to make 2023.1 the > "primary" release of 2023, with 2023.2 as the "secondary" release of > that year, makes it a bit more clear as to their relationship to one > another. We can say that consumers are able to upgrade directly from > one primary release to another, skipping secondary releases. With the outcome of the legal check, it is not clear to us that tick-tock words are ok to use for the new release cadence terminology and in what form/combination. In TC, we decided to use a different name and 'SLURP' (Skip Level Upgrade Release Process) is the choice of the majority [1]. I have pushed the patch to reflect the same in the TC resolution - https://review.opendev.org/c/openstack/governance/+/840354 [1] https://meetings.opendev.org/meetings/tc/2022/tc.2022-05-12-15.00.log.html#l-191 -gmann > -- > Jeremy Stanley > From gmann at ghanshyammann.com Sat May 14 02:31:16 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 13 May 2022 21:31:16 -0500 Subject: [all][tc] What's happening in Technical Committee: summary May 13, 2022: Reading: 5 min Message-ID: <180c067e7fe.11e044334148639.7936761392502230792@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on May 12. Most of the meeting discussions are summarized in this email. Meeting summary logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-05-12-15.00.log.html * Next TC weekly meeting will be on May 19 Thursday at 15:00 UTC, feel free to add the topic on the agenda[1] by May 18. 2. What we completed this week: ========================= * FIPS goal is selected community-wide goal now[2]. 3. Activities In progress: ================== TC Tracker for Zed cycle ------------------------------ * Zed tracker etherpad includes the TC working items[3], we have started the many items. Open Reviews ----------------- * Nine open reviews for ongoing activities[4]. 
Change OpenStack release naming policy proposal ----------------------------------------------------------- In today's TC meeting [5], we discussed this topic with Foundation staff and we all agreed to give the release name process handling to Foundation. TC will not be involved in the process. The release name will be mainly used for marketing purposes and we will use the release number as the primary identifier in our release scheduling, automated processes, directory structure etc. The agreed proposal is up for review[6] New release cadence open items --------------------------------------- 1. release notes strategy: Brian presented the different possible ways to reflect the tick-tock ( hoping this is the last time I am using these names as we agree to change them, see below ) release notes combination[7] and we agreed with the 'simple' approach[8]. 2. release cadence terminology With the outcome of the legal check, it is not clear to us that tick-tock words are ok to use for the new release cadence terminology and in what form/combination. In TC, we decided to use a different name and 'SLURP' (Skip Level Upgrade Release Process) is the choice of the majority. I have pushed the patch to reflect the same in the TC resolution[9]. Improve project governance --------------------------------- Slawek has the proposal the framework up and it is under review[10]. New ELK service dashboard: e-r service ----------------------------------------------- Tripleo team joined the TC meeting and we agreed to merge the master and rdo branches in the elastic-recheck repo. It is completely fine to keep both devstack-based and tripleo focus things and maintain them collaboratively. The hosting part will be investigated by the dpawlik Consistent and Secure Default RBAC ------------------------------------------- We had our weekly call on Tuesday 10 May and we agreed on the 'service' role part. I will update the spec for this. The heat open item is still in discussion and Takashi sent an email asking the feedback about the 'split stack' approach, please respond with your thought[11] While we are discussing the heat/scope things, we agreed to work on the 'new defaults' at least (separate from scope which will see later). Please find the summary/plan in etherpad L97[12] 2021 User Survey TC Question Analysis ----------------------------------------------- No update on this. The survey summary is up for review[13]. Feel free to check and provide feedback. Zed cycle Leaderless projects ---------------------------------- No updates on this. Only Adjutant project is leaderless/maintainer-less. We will check Adjutant's the situation again on ML and hope Braden will be ready with their company side permission[14]. Fixing Zuul config error ---------------------------- Requesting projects with zuul config error to look into those and fix them which should not take much time[15]. Project updates ------------------- * None 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[16]. [2] https://governance.openstack.org/tc/goals/selected/fips.html 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [17] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [3] https://etherpad.opendev.org/p/tc-zed-tracker [4] https://review.opendev.org/q/projects:openstack/governance+status:open [5] https://meetings.opendev.org/meetings/tc/2022/tc.2022-05-12-15.00.log.html#l-245 [6] https://review.opendev.org/c/openstack/governance/+/841800 [7] https://review.opendev.org/c/openstack/cinder/+/840996 [8] https://review.opendev.org/c/openstack/cinder/+/840996/1/releasenotes/source/tick2-simple.rst [9] https://review.opendev.org/c/openstack/governance/+/840354 [10] https://review.opendev.org/c/openstack/governance/+/839880 [11] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028490.html [12] https://etherpad.opendev.org/p/rbac-zed-ptg#L97 [13] https://review.opendev.org/c/openstack/governance/+/836888 [14] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html [15] https://etherpad.opendev.org/p/zuul-config-error-openstack [16] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [17] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From zaitcev at redhat.com Sat May 14 04:37:51 2022 From: zaitcev at redhat.com (Pete Zaitcev) Date: Fri, 13 May 2022 23:37:51 -0500 Subject: Devstack and "The service catalog is empty." Message-ID: <20220513233751.0be1a075@niphredil.zaitcev.lan> Hello: For certain reasons I tried to run devstack for the first time. Worked on OpenStack since 2011 and always used something else, like Packstack. Anyway, it was painless, just a git checkout and ./stack.sh, done. But then, any access with openstack CLI says: $ openstack --os-auth-url http://192.168.128.11/identity/v3/ user list The service catalog is empty. Looks like Keystone wasn't populated... Is it how it's supposed to be? -- Pete From swogatpradhan22 at gmail.com Sun May 15 03:34:24 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Sun, 15 May 2022 09:04:24 +0530 Subject: ImportError: cannot import name 'ceph_spec'" | Openstack Wallaby | Tripleo In-Reply-To: References: Message-ID: Hi Francesco, Thank you so much for your help. my cluster is up and running now. With regards, Swogat Pradhan On Fri, May 13, 2022 at 6:06 PM Francesco Pantano wrote: > Hi, > [2] and [3] are enough to move forward, so ... if you already applied the > two patches (in the undercloud) > you don't have to do anything else. > I mentioned "installing tripleo_common" on the controller-0 node as a > _not_ recommended workaround that > avoid you to change the code applying [2] [3]. > > Thanks, > > [2] https://review.opendev.org/c/openstack/tripleo-common/+/830572 > [3] https://review.opendev.org/c/openstack/tripleo-ansible/+/830573 > > > On Fri, May 13, 2022 at 12:01 PM Swogat Pradhan > wrote: > >> Hi Francesco, >> As you mentioned, if i apply [2] and [3] I will be able to move forward. >> [2] is already in place as i have already setup tripleo_common >> coming to [3] you suggesting to install the tripleo_ansible package like >> mentioned here >> https://docs.openstack.org/tripleo-ansible/wallaby/installation.html in >> controller nodes? >> >> With regards, >> Swogat Pradhan >> >> On Fri, May 13, 2022 at 12:54 PM Francesco Pantano >> wrote: >> >>> Hello, >>> This is because of bug [1] which is already solved reverting the changes. >>> The problem there is that a task, which is delegated_to mon[0], looks >>> for a library >>> present in the undercloud, but not in the overcloud. 
>>> Not sure why you are still hitting that issue (older packages?), but if >>> you apply >>> [2] [3] you should be able to move forward. >>> An alternative way to solve this problem is to (manually) install >>> tripleo-common >>> on mon[0] (which is your controller0). >>> >>> Hope this helps, >>> Thanks, >>> >>> >>> [1] https://bugs.launchpad.net/tripleo/+bug/1961325 >>> [2] https://review.opendev.org/c/openstack/tripleo-common/+/830572 >>> [3] https://review.opendev.org/c/openstack/tripleo-ansible/+/830573 >>> >>> On Fri, May 13, 2022 at 8:36 AM Swogat Pradhan < >>> swogatpradhan22 at gmail.com> wrote: >>> >>>> Hi Team, >>>> I am deploying openstack wallaby using the deployed ceph method. >>>> I am currently facing this issue while deploying overcloud. >>>> Can someone please check ? >>>> >>>> "2022-05-13 14:17:51.022353 | >>>> 48d539a1-1679-730f-272a-0000000000cb | TASK | Create the RGW Daemon >>>> spec definition", >>>> "An exception occurred during task execution. To see the full >>>> traceback, use -vvv. The error was: ImportError: cannot import name >>>> 'ceph_spec'", >>>> "2022-05-13 14:17:51.669288 | >>>> 48d539a1-1679-730f-272a-0000000000cb | FATAL | Create the RGW Daemon >>>> spec definition | overcloud-controller-0 | error={\"changed\": false, >>>> \"module_stderr\": \"Traceback (most recent call last):\\n File >>>> \\\"\\\", line 102, in \\n File \\\"\\\", line 94, >>>> in _ansiballz_main\\n File \\\"\\\", line 40, in invoke_module\\n >>>> File \\\"/usr/lib64/python3.6/runpy.py\\\", line 205, in run_module\\n >>>> return _run_module_code(code, init_globals, run_name, mod_spec)\\n File >>>> \\\"/usr/lib64/python3.6/runpy.py\\\", line 96, in _run_module_code\\n >>>> mod_name, mod_spec, pkg_name, script_name)\\n File >>>> \\\"/usr/lib64/python3.6/runpy.py\\\", line 85, in _run_code\\n >>>> exec(code, run_globals)\\n File >>>> \\\"/tmp/ansible_ceph_mkspec_payload_iwqyfs7g/ansible_ceph_mkspec_payload.zip/ansible/modules/ceph_mkspec.py\\\", >>>> line 24, in \\nImportError: cannot import name 'ceph_spec'\\n\", >>>> \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\\nSee stdout/stderr for >>>> the exact error\", \"rc\": 1}", >>>> >>>> With regards, >>>> Swogat Pradhan >>>> >>> >>> >>> -- >>> Francesco Pantano >>> GPG KEY: F41BD75C >>> >> > > -- > Francesco Pantano > GPG KEY: F41BD75C > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vineshnellaiappan at gmail.com Sun May 15 04:31:47 2022 From: vineshnellaiappan at gmail.com (Vinesh N) Date: Sun, 15 May 2022 10:01:47 +0530 Subject: Fwd: unable to connect instance to external network In-Reply-To: References: Message-ID: hi, openstack - tripleo installed and was able to launch the instances. Machine has two interfaces, eth2 for external(WAN 192.168.35.0/24) and eth3 for internal(LAN 10.0.20.0/24). issue: unable to reach route interface by following command. 
(in this case, my route ipaddres 10.0.1.73) * openstack network create public * openstack subnet create --network public public --subnet-range 10.0.1.0/24 --allocation-pool start=10.0.1.40,end=10.0.1.80 --dns-nameserver 8.8.8.8 --gateway 10.0.1.1 --no-dhcp * openstack network create public --provider-physical-network datacentre --provider-network-type vlan --provider-segment 1 --external --share here is my ml2_conf.ini ******************************************************** [ml2] type_drivers=geneve,vlan,flat tenant_network_types=geneve mechanism_drivers=ovn path_mtu=0 extension_drivers=qos,port_security,dns [ml2_type_geneve] max_header_size=38 vni_ranges=1:65536 [ml2_type_vlan] network_vlan_ranges=datacentre:1:1000 [ml2_type_flat] flat_networks=datacentre #[linux_bridge] #physical_interface_mappings = datacentre:br-ex #physical_interface_mappings = datacentre:ens20f2 [ovn] ovn_nb_connection=tcp:10.0.1.158:6641 ovn_sb_connection=tcp:10.0.1.158:6642 ovsdb_connection_timeout=180 neutron_sync_mode=log ovn_l3_mode=True vif_type=ovs ovn_metadata_enabled=True enable_distributed_floating_ip=True dns_servers= ovn_emit_need_to_frag=False **************************************************************** here is my controller network: (ens2042 external and ens2043 internal interfaces) [root at overcloud-controller-0 ~]# ip a |more 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens20f0: mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000 link/ether 00:25:90:fe:0f:dc brd ff:ff:ff:ff:ff:ff inet6 fe80::225:90ff:fefe:fdc/64 scope link valid_lft forever preferred_lft forever 3: ens20f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 00:25:90:fe:0f:dd brd ff:ff:ff:ff:ff:ff 4: ens20f2: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 00:25:90:fe:0f:de brd ff:ff:ff:ff:ff:ff inet 192.168.35.86/24 brd 192.168.35.255 scope global dynamic noprefixroute ens20f2 valid_lft 39528sec preferred_lft 39528sec inet6 fe80::f2d5:4cdf:ab4f:4f14/64 scope link noprefixroute valid_lft forever preferred_lft forever 5: ens20f3: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 00:25:90:fe:0f:df brd ff:ff:ff:ff:ff:ff inet6 fe80::225:90ff:fefe:fdf/64 scope link valid_lft forever preferred_lft forever 6: ib0: mtu 4092 qdisc fq_codel state DOWN group default qlen 256 link/infiniband 80:00:02:08:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:21:eb:51 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff 7: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether fa:8f:18:e9:44:e6 brd ff:ff:ff:ff:ff:ff 8: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether 00:25:90:fe:0f:dc brd ff:ff:ff:ff:ff:ff inet 10.0.1.179/24 brd 10.0.1.255 scope global br-ex valid_lft forever preferred_lft forever inet6 fe80::225:90ff:fefe:fdc/64 scope link valid_lft forever preferred_lft forever 9: vlan1: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether 42:f0:35:f5:cf:5f brd ff:ff:ff:ff:ff:ff inet 10.0.1.179/24 brd 10.0.1.255 scope global vlan1 valid_lft forever preferred_lft forever inet 10.0.1.141/32 brd 10.0.1.255 scope global vlan1 valid_lft forever preferred_lft forever inet 10.0.1.158/32 brd 10.0.1.255 scope global vlan1 valid_lft forever preferred_lft forever inet6 fe80::40f0:35ff:fef5:cf5f/64 scope link valid_lft 
forever preferred_lft forever 10: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 76:36:79:8f:4f:b7 brd ff:ff:ff:ff:ff:ff 11: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000 link/ether 0e:59:d6:99:85:78 brd ff:ff:ff:ff:ff:ff inet6 fe80::c59:d6ff:fe99:8578/64 scope link valid_lft forever preferred_lft forever Note: I tried with 192.168.35.0/24/200.168.35.0/24 for the external network topology as well. Thanks, Vinesh --------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Sun May 15 16:26:32 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Sun, 15 May 2022 21:56:32 +0530 Subject: octavia_rsyslog service fails/keeps restarting | Openstack Wallaby | Tripleo Message-ID: Hi, I am currently trying to deploy octavia in openstack wallaby, but the octavia_rsyslog service is malfunctioning it seems. And I am checking the logs but am not sure how to fix the issue. Attached log for reference. With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: octavia_rsyslog.log Type: application/octet-stream Size: 21745 bytes Desc: not available URL: From katonalala at gmail.com Mon May 16 09:46:20 2022 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 16 May 2022 11:46:20 +0200 Subject: [neutron] Bug deputy report for week of May 9 Message-ID: Hi Neutron Team I was bug deputy in neutron last week. Needs attention ================= * OVN revision_number infinite update loop ( https://bugs.launchpad.net/neutron/+bug/1973347 ) In Progress ============ * Slow queries after upgrade to Xena ( https://bugs.launchpad.net/neutron/+bug/1973349 ) Done ======= * [Wallaby] OVNPortForwarding._handle_lb_on_ls fails with TypeError: _handle_lb_on_ls() got an unexpected keyword argument 'context' ( https://bugs.launchpad.net/neutron/+bug/1972764) Question ======== * Does FWaaS v2 support linuxbridge-agent ? ( https://bugs.launchpad.net/neutron/+bug/1973039) Invalid / Won't fix ================= * Skip DB retry when update on "standardattributes" fails ( https://bugs.launchpad.net/neutron/+bug/1973049 ) * FWaaS rules lost on l3 agent restart ( https://bugs.launchpad.net/neutron/+bug/1973035 ) * Unable to delete FWaaS v2 firewall group stuck ACTIVE ( https://bugs.launchpad.net/neutron/+bug/1972746 ) * [neutron-dynamic-routing] Train CI is broken ( https://bugs.launchpad.net/neutron/+bug/1972854 ) ** https://review.opendev.org/c/openstack/releases/+/841292 train will be EOL-d Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Mon May 16 12:24:42 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Mon, 16 May 2022 09:24:42 -0300 Subject: Devstack and "The service catalog is empty." In-Reply-To: <20220513233751.0be1a075@niphredil.zaitcev.lan> References: <20220513233751.0be1a075@niphredil.zaitcev.lan> Message-ID: Hey Pete, I'm not sure but I think you need to source openrc [1] in order to interact with your cloud via CLI: `source openrc demo demo` or `source openrc admin admin` Cheers, Sofi [1] https://opendev.org/openstack/devstack#start-a-dev-cloud On Sat, May 14, 2022 at 1:41 AM Pete Zaitcev wrote: > Hello: > > For certain reasons I tried to run devstack for the first time. 
Worked > on OpenStack since 2011 and always used something else, like Packstack. > Anyway, it was painless, just a git checkout and ./stack.sh, done. > > But then, any access with openstack CLI says: > > $ openstack --os-auth-url http://192.168.128.11/identity/v3/ user list > The service catalog is empty. > > Looks like Keystone wasn't populated... Is it how it's supposed to be? > > -- Pete > > > -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon May 16 13:00:26 2022 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 16 May 2022 10:00:26 -0300 Subject: [ironic] Group Dinner in Berlin - OIS Message-ID: Hello Ironicers and fellow stackers! Since some of us will be at the Open Infrastructure Summit in Berlin, I think this would be a great opportunity to have a group dinner in Berlin with our friends (like the one we had during the Ironic Mid-Cycle at CERN). I've created an etherpad to track who would be interested in it [1] and also choose the best day. [1] https://etherpad.opendev.org/p/ironic-dinner-ois2022Berlin -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Senior Software Engineer at Red Hat Brasil* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Mon May 16 12:17:46 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Mon, 16 May 2022 17:47:46 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: Hi Takashi Could you infer anything from the output and the issue being faced? Is it because the output has keys and values as string? Any workaround to resolve this issue? Regards Anirudh On Fri, 13 May, 2022, 11:15 Anirudh Gupta, wrote: > Hi Takashi, > > Thanks for your reply. > > I tried executing the suggested command and below is the output > > [heat-admin at overcloud-controller-1 ~]$ /bin/os-net-config -i eno1 > {'eno1': 'eno1'} > > Regards > Anirudh Gupta > > On Thu, May 12, 2022 at 11:03 AM Takashi Kajinami > wrote: > >> The puppy implementation executes the following command to get the >> interface information. >> /bin/os-net-config -i >> I'd recommend you check the command output in *all overcloud nodes *. >> >> If you need to use different interfaces for different roles then you need >> to define the parameter >> as role specific one, defined under Parameters. >> >> On Wed, May 11, 2022 at 4:26 PM Anirudh Gupta >> wrote: >> >>> Hi Takashi, >>> >>> Thanks for clarifying my issues regarding the support of PTP in Wallaby >>> Release. 
>>> >>> In Train, I have also tried passing the exact interface name and took 2 >>> runs with and without quotes like below: >>> >>> >>> *PtpInterface: eno1* >>> >>> *PtpInterface: 'eno1'* >>> >>> But in both the cases, the issue observed was similar >>> >>> 2022-05-11 10:33:20.189107 | 5254001f-9952-934d-e901-0000000030be | >>> FATAL | Wait for puppet host configuration to finish | >>> overcloud-controller-2 | error={"ansible_job_id": "526310775819.36650", >>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>> --detailed-exitcodes --summarize --color=false >>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>> puppet-user", "delta": "0:00:04.289208", "end": "2022-05-11 >>> 10:33:08.195052", "failed_when_result": true, "finished": 1, "msg": >>> "non-zero return code", "rc": 1, "start": "2022-05-11 10:33:03.905844", >>> "stderr": "<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' >>> is deprecated in favor of using 'lookup'. See >>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file >>> & line not available)\n<13>May 11 10:33:03 puppet-user: Warning: >>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>> should be converted to version 5\n<13>May 11 10:33:03 puppet-user: >>> (file: /etc/puppet/hiera.yaml)\n<13>May 11 10:33:03 puppet-user: Warning: >>> Undefined variable '::deploy_config_name'; \\n (file & line not >>> available)\n<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: module >>> 'tripleo' has unresolved dependencies - it will only see those that are >>> resolved. Use 'puppet module list --tree' to see information about >>> modules\\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: >>> Error: Evaluation Error: A substring operation does not accept a String as >>> a character index. Expected an Integer (file: >>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>> column: 46) on node overcloud-controller-2.localdomain", "stderr_lines": >>> ["<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' is >>> deprecated in favor of using 'lookup'. See >>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file >>> & line not available)", "<13>May 11 10:33:03 puppet-user: Warning: >>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>> should be converted to version 5", "<13>May 11 10:33:03 puppet-user: >>> (file: /etc/puppet/hiera.yaml)", "<13>May 11 10:33:03 puppet-user: >>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>> available)", "<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: >>> module 'tripleo' has unresolved dependencies - it will only see those that >>> are resolved. Use 'puppet module list --tree' to see information about >>> modules\\n (file & line not available)", "<13>May 11 10:33:03 >>> puppet-user: *Error: Evaluation Error: A substring operation does not >>> accept a String as a character index. 
Expected an Integer (file: >>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>> column: 46) on node overcloud-controller-2.localdomain"], "stdout": "", >>> "stdout_lines": []}* >>> 2022-05-11 10:33:20.190263 | 5254001f-9952-934d-e901-0000000030be | >>> TIMING | Wait for puppet host configuration to finish | >>> overcloud-controller-2 | 0:12:41.268734 | 7.01s >>> >>> I'll be highly grateful if you could further extend your support to >>> resolve the issue. >>> >>> Regards >>> Anirudh Gupta >>> >>> On Tue, May 10, 2022 at 9:15 PM Takashi Kajinami >>> wrote: >>> >>>> >>>> >>>> On Wed, May 11, 2022 at 12:19 AM Anirudh Gupta >>>> wrote: >>>> >>>>> Hi Takashi, >>>>> >>>>> Thanks for your suggestion. >>>>> >>>>> I downloaded the updated Train Images and they had the ptp.pp file >>>>> available on the overcloud and undercloud machines >>>>> >>>>> [root at overcloud-controller-1 /]# find . -name "ptp.pp" >>>>> >>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>> >>>>> With this, I re-executed the deployment and got the below error on the >>>>> machines >>>>> >>>>> 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce | >>>>> FATAL | Wait for puppet host configuration to finish | >>>>> overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", >>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>> --detailed-exitcodes --summarize --color=false >>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>> puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 >>>>> 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": >>>>> "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", >>>>> "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' >>>>> is deprecated in favor of using 'lookup'. See >>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>> (file & line not available)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>> should be converted to version 5\n<13>May 10 20:05:37 puppet-user: >>>>> (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>>> available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module >>>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>>> resolved. Use 'puppet module list --tree' to see information about >>>>> modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: >>>>> Error: Evaluation Error: A substring operation does not accept a String as >>>>> a character index. Expected an Integer (file: >>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>> column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": >>>>> ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is >>>>> deprecated in favor of using 'lookup'. See >>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>> (file & line not available)", "<13>May 10 20:05:37 puppet-user: Warning: >>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It >>>>> should be converted to version 5", "<13>May 10 20:05:37 puppet-user: >>>>> (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: >>>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>>> available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: >>>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>>> are resolved. Use 'puppet module list --tree' to see information about >>>>> modules\\n (file & line not available)", "<13>May 10 20:05:37 >>>>> puppet-user: *Error: Evaluation Error: A substring operation does not >>>>> accept a String as a character index. Expected an Integer (file: >>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>> column: 46) *on node overcloud-controller-1.localdomain"], "stdout": >>>>> "", "stdout_lines": []} >>>>> >>>>> The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, >>>>> line: 41, column: 46 *had the following code: >>>>> 34 class tripleo::profile::base::time::ptp ( >>>>> 35 $ptp4l_interface = 'eth0', >>>>> 36 $ptp4l_conf_slaveonly = 1, >>>>> 37 $ptp4l_conf_network_transport = 'UDPv4', >>>>> 38 ) { >>>>> 39 >>>>> 40 $interface_mapping = generate('/bin/os-net-config', '-i', >>>>> $ptp4l_interface) >>>>> 41 *$ptp4l_interface_name = >>>>> $interface_mapping[$ptp4l_interface]* >>>>> >>>>> >>>>> *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file >>>>> is as below: >>>>> >>>>> resource_registry: >>>>> # FIXME(bogdando): switch it, once it is containerized >>>>> OS::TripleO::Services::Ptp: >>>>> ../../deployment/time/ptp-baremetal-puppet.yaml >>>>> OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp >>>>> >>>>> parameter_defaults: >>>>> # PTP hardware interface name >>>>> *PtpInterface: 'nic1'* >>>>> >>>>> # Configure PTP clock in slave mode >>>>> PtpSlaveMode: 1 >>>>> >>>>> # Configure PTP message transport protocol >>>>> PtpMessageTransport: 'UDPv4' >>>>> >>>>> I have also tried modifying the entry as below: >>>>> *PtpInterface: 'nic1' #*(i.e. without quotes), but the error remains >>>>> the same. >>>>> >>>>> Queries: >>>>> >>>>> 1. Any pointers to resolve this? >>>>> >>>>> I'm not familiar with ptp but you'd need to use the actual interface >>>> name >>>> if you are not using the alias name. >>>> >>>> >>>> >>>>> >>>>> 1. You were mentioning something about the support of PTP not >>>>> there in the wallaby release. Can you please confirm? >>>>> >>>>> IIUC PTP is still supported even in master. What we removed is the >>>> implementation using Puppet >>>> which was replaced by ansible. >>>> >>>> The warning regarding OS::TripleO::Services::Ptp was added when we >>>> decided to merge >>>> all time sync services to the single service resource which is >>>> OS::TripleO::Services::Timesync[1]. >>>> It's related to how resources are defined in Heat and doesn't affect >>>> configuration support itself. >>>> >>>> [1] >>>> https://review.opendev.org/c/openstack/tripleo-heat-templates/+/586679 >>>> >>>> >>>> >>>>> It would be a great help if you could extend a little more support to >>>>> resolve the issues. >>>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> >>>>> On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta >>>>> wrote: >>>>> >>>>>> I'll check that well. 
>>>>>> By the way, I downloaded the images from the below link >>>>>> >>>>>> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >>>>>> >>>>>> They seem to be updated yesterday, I'll download and try the >>>>>> deployment with the latest images. >>>>>> >>>>>> Also are you pointing that the support for PTP would not be there in >>>>>> Wallaby Release? >>>>>> >>>>>> Regards >>>>>> Anirudh Gupta >>>>>> >>>>>> On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta >>>>>>> wrote: >>>>>>> >>>>>>>> Hi Takashi >>>>>>>> >>>>>>>> I have checked this in undercloud only. >>>>>>>> I don't find any such file in overcloud. Could this be a concern? >>>>>>>> >>>>>>> >>>>>>> The manifest should exist in overcloud nodes and the missing file is >>>>>>> the exact cause >>>>>>> of that puppet failure during deployment. >>>>>>> >>>>>>> Please check your overcloud images used to install overcloud nodes >>>>>>> and ensure that >>>>>>> you're using the right one. You might be using the image for a >>>>>>> different release. >>>>>>> We removed the manifest file during the Wallaby cycle. >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> Regards >>>>>>>> Anirudh Gupta >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami < >>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami < >>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta < >>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi Takashi, >>>>>>>>>>> >>>>>>>>>>> Thanks for your reply. >>>>>>>>>>> >>>>>>>>>>> I have checked on my machine and the file "ptp.pp" do exist at >>>>>>>>>>> path " >>>>>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>>>>> " >>>>>>>>>>> >>>>>>>>>> Did you check this in your undercloud or overcloud ? >>>>>>>>>> During the deployment all configuration files are generated using >>>>>>>>>> puppet modules >>>>>>>>>> installed in overcloud nodes, so you should check this in >>>>>>>>>> overcloud nodes. >>>>>>>>>> >>>>>>>>>> Also, the deprecation warning is not implemented >>>>>>>>>> >>>>>>>>> Ignore this incomplete line. I was looking for the implementation >>>>>>>>> which shows the warning >>>>>>>>> but I found it in tripleoclient and it looks reasonable according >>>>>>>>> to what we have in >>>>>>>>> environments/services/ptp.yaml . >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>>>>>>>> for controller and compute *before rendering the templates, but >>>>>>>>>>> still I am getting the same issue on all the 3 Controllers and 1 Compute >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> IIUC you don't need this because OS::TripleO::Services::Timesync >>>>>>>>>> becomes an alias >>>>>>>>>> to the Ptp service resource when you use the ptp environment file. 
>>>>>>>>>> >>>>>>>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>> >>>>>>>>>>> Can you suggest any workarounds or any pointers to look further >>>>>>>>>>> in order to resolve this issue? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Regards >>>>>>>>>>> Anirudh Gupta >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami < >>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> I'm not familiar with PTP, but the error you pasted indicates >>>>>>>>>>>> that the required puppet manifest does not exist in your overcloud >>>>>>>>>>>> node/image. >>>>>>>>>>>> >>>>>>>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>>>>>>> >>>>>>>>>>>> This should not happen and the class should exist as long as >>>>>>>>>>>> you have puppet-tripleo from stable/train installed. >>>>>>>>>>>> >>>>>>>>>>>> I'd recommend you check installed tripleo/puppet packages and >>>>>>>>>>>> ensure everything is in the consistent release. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta < >>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi All >>>>>>>>>>>>> >>>>>>>>>>>>> Any update on this? >>>>>>>>>>>>> >>>>>>>>>>>>> Regards >>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>> >>>>>>>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>> >>>>>>>>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>>>>>>>> >>>>>>>>>>>>>> When I was executing the Overcloud deployment script, passing >>>>>>>>>>>>>> the PTP yaml, it gave the following option at the starting >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. >>>>>>>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>>>>>>> continue with deployment [y/N]* >>>>>>>>>>>>>> >>>>>>>>>>>>>> even if passing Y, it starts executing for sometime and the >>>>>>>>>>>>>> gives the following error >>>>>>>>>>>>>> >>>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Can someone suggest some pointers in order to resolve this >>>>>>>>>>>>>> issue and move forward? >>>>>>>>>>>>>> >>>>>>>>>>>>>> Regards >>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta < >>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>>>>>>>> successfully. 
>>>>>>>>>>>>>>> I need to enable PTP service while deploying the overcloud >>>>>>>>>>>>>>> for which I have included the service in my deployment >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>> * -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>>>>>>> \* >>>>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> But it gives the following error >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 2022-05-06 11:30:10.707655 | >>>>>>>>>>>>>>> 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host >>>>>>>>>>>>>>> configuration to finish | overcloud-controller-0 | error={"ansible_job_id": >>>>>>>>>>>>>>> "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o >>>>>>>>>>>>>>> pipefail; puppet apply >>>>>>>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. 
(file: >>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>>>>>>> 'lookup'. See >>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>>>>>>>>>> (file: >>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while >>>>>>>>>>>>>>> evaluating a Function Call, Could not find class >>>>>>>>>>>>>>> ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain >>>>>>>>>>>>>>> (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) >>>>>>>>>>>>>>> on node* overcloud-controller-0.localdomain"], "stdout": >>>>>>>>>>>>>>> "", "stdout_lines": []} >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Can someone please help in resolving this issue? >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Mon May 16 14:49:31 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Mon, 16 May 2022 16:49:31 +0200 Subject: [infra][stable] reopened old, EOL'd branches Message-ID: <8a5cb2e6-0434-3b50-fad4-a03164ff10c4@est.tech> Hi Infra team, I need some help / advice from you. Some time ago I realized that there are multiple open stable/newton (and some older) branches as well for some projects. I see there multiple kind of open EOL'd branches: 1. where there were no *-eol tag at all and thus branch is still open 2. where there exist *-eol tag, but the branch was not deleted 3. where there exist *-eol tag, the branch was probably deleted but then reopened and some generated patch created a new branch from the same branching point where the original stable/* branch was branched from. ??? (some example generated patches: ?????? * OpenDev Migration Patch ?????? * Replace openstack.org git:// URLs with https:// ???? 
see specific example [1]) To clean up this I think we need some decision and manual tagging and branch deletion, because our tooling (openstack/releases) does not allow to tag on a branch that has an EOL state. For this I would - push release patches where that is missing from openstack/release (this will not do the work, as mentioned above, but needs some manual tagging / branch deletion afterwards) - this is only to have a better view of the tags and branches from openstack/releases repository - tag branches with *-eol where that is missing - delete branches that have already *-eol tag, even if that means we lose some patches (like the above mentioned generated patches) Is this acceptable? What do you think? (Or should the two latter be done by Infra team via a list that I could collect?) [1] https://paste.opendev.org/show/bCnJaKgdTrlZwDzcP3VM/ Thanks in advance, El?d (irc: elodilles) From fungi at yuggoth.org Mon May 16 15:07:01 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 16 May 2022 15:07:01 +0000 Subject: [infra][stable] reopened old, EOL'd branches In-Reply-To: <8a5cb2e6-0434-3b50-fad4-a03164ff10c4@est.tech> References: <8a5cb2e6-0434-3b50-fad4-a03164ff10c4@est.tech> Message-ID: <20220516150701.fhj54q5m3po7gvrc@yuggoth.org> On 2022-05-16 16:49:31 +0200 (+0200), El?d Ill?s wrote: [...] > where there exist *-eol tag, the branch was probably deleted but then > reopened and some generated patch created a new branch from the same > branching point where the original stable/* branch was branched from. > ??? (some example generated patches: > ?????? * OpenDev Migration Patch > ?????? * Replace openstack.org git:// URLs with https:// > ???? see specific example [1]) [...] Note that Gerrit doesn't allow creation of changes for nonexistent branches, so the branch had to be recreated somehow independent of those changes being pushed. > For this I would > - push release patches where that is missing from openstack/release (this > will not do the work, as mentioned above, but needs some manual tagging / > branch deletion afterwards) - this is only to have a better view of the tags > and branches from openstack/releases repository > - tag branches with *-eol where that is missing > - delete branches that have already *-eol tag, even if that means we lose > some patches (like the above mentioned generated patches) > > Is this acceptable? What do you think? (Or should the two latter be done by > Infra team via a list that I could collect?) [...] What you propose sounds reasonable to me. If the branch already has a corresponding eol tag, I agree that (re)deleting the branch is the thing to do. Any changes which merged to the branch after the eol tag was created won't be "lost" since they still have named refs in the Git repository, they just won't appear in the history of any branch or tag. I have no problem with you doing batch branch deletion for this purpose, same as normal EOL process. I don't see any reason the Gerrit sysadmins would need to handle it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tkajinam at redhat.com Mon May 16 14:47:13 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Mon, 16 May 2022 23:47:13 +0900 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: If my observations are correct, there are two bugs causing the error and there is no feasible workaround. 1. 
puppet-tripleo is not parsing output of the os-net-config command properly. 2. os-net-config it is not formatting the dict value properly and the output can't be parsed by json decoder. I've reported the 2nd bug here. https://bugs.launchpad.net/os-net-config/+bug/1973566 Once the 2nd bug is fixed in master and stable branches back to train, then we'd be able to consider how we can implement a proper parsing but I can't guarantee any timeline atm. The puppet implementation was replaced by ansible and is no longer used in recent versions like wallaby and the above problems do not exist. It might be an option to try ptp deployment in Wallaby. On Mon, May 16, 2022 at 9:18 PM Anirudh Gupta wrote: > Hi Takashi > > Could you infer anything from the output and the issue being faced? > > Is it because the output has keys and values as string? Any workaround to > resolve this issue? > > Regards > Anirudh > > On Fri, 13 May, 2022, 11:15 Anirudh Gupta, wrote: > >> Hi Takashi, >> >> Thanks for your reply. >> >> I tried executing the suggested command and below is the output >> >> [heat-admin at overcloud-controller-1 ~]$ /bin/os-net-config -i eno1 >> {'eno1': 'eno1'} >> >> Regards >> Anirudh Gupta >> >> On Thu, May 12, 2022 at 11:03 AM Takashi Kajinami >> wrote: >> >>> The puppy implementation executes the following command to get the >>> interface information. >>> /bin/os-net-config -i >>> I'd recommend you check the command output in *all overcloud nodes *. >>> >>> If you need to use different interfaces for different roles then you >>> need to define the parameter >>> as role specific one, defined under Parameters. >>> >>> On Wed, May 11, 2022 at 4:26 PM Anirudh Gupta >>> wrote: >>> >>>> Hi Takashi, >>>> >>>> Thanks for clarifying my issues regarding the support of PTP in Wallaby >>>> Release. >>>> >>>> In Train, I have also tried passing the exact interface name and took 2 >>>> runs with and without quotes like below: >>>> >>>> >>>> *PtpInterface: eno1* >>>> >>>> *PtpInterface: 'eno1'* >>>> >>>> But in both the cases, the issue observed was similar >>>> >>>> 2022-05-11 10:33:20.189107 | 5254001f-9952-934d-e901-0000000030be | >>>> FATAL | Wait for puppet host configuration to finish | >>>> overcloud-controller-2 | error={"ansible_job_id": "526310775819.36650", >>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>> --detailed-exitcodes --summarize --color=false >>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>> puppet-user", "delta": "0:00:04.289208", "end": "2022-05-11 >>>> 10:33:08.195052", "failed_when_result": true, "finished": 1, "msg": >>>> "non-zero return code", "rc": 1, "start": "2022-05-11 10:33:03.905844", >>>> "stderr": "<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' >>>> is deprecated in favor of using 'lookup'. See >>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file >>>> & line not available)\n<13>May 11 10:33:03 puppet-user: Warning: >>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>> should be converted to version 5\n<13>May 11 10:33:03 puppet-user: >>>> (file: /etc/puppet/hiera.yaml)\n<13>May 11 10:33:03 puppet-user: Warning: >>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>> available)\n<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: module >>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>> resolved. 
Use 'puppet module list --tree' to see information about >>>> modules\\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: >>>> Error: Evaluation Error: A substring operation does not accept a String as >>>> a character index. Expected an Integer (file: >>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>> column: 46) on node overcloud-controller-2.localdomain", "stderr_lines": >>>> ["<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' is >>>> deprecated in favor of using 'lookup'. See >>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n (file >>>> & line not available)", "<13>May 11 10:33:03 puppet-user: Warning: >>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>> should be converted to version 5", "<13>May 11 10:33:03 puppet-user: >>>> (file: /etc/puppet/hiera.yaml)", "<13>May 11 10:33:03 puppet-user: >>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>> available)", "<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: >>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>> are resolved. Use 'puppet module list --tree' to see information about >>>> modules\\n (file & line not available)", "<13>May 11 10:33:03 >>>> puppet-user: *Error: Evaluation Error: A substring operation does not >>>> accept a String as a character index. Expected an Integer (file: >>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>> column: 46) on node overcloud-controller-2.localdomain"], "stdout": "", >>>> "stdout_lines": []}* >>>> 2022-05-11 10:33:20.190263 | 5254001f-9952-934d-e901-0000000030be | >>>> TIMING | Wait for puppet host configuration to finish | >>>> overcloud-controller-2 | 0:12:41.268734 | 7.01s >>>> >>>> I'll be highly grateful if you could further extend your support to >>>> resolve the issue. >>>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> On Tue, May 10, 2022 at 9:15 PM Takashi Kajinami >>>> wrote: >>>> >>>>> >>>>> >>>>> On Wed, May 11, 2022 at 12:19 AM Anirudh Gupta >>>>> wrote: >>>>> >>>>>> Hi Takashi, >>>>>> >>>>>> Thanks for your suggestion. >>>>>> >>>>>> I downloaded the updated Train Images and they had the ptp.pp file >>>>>> available on the overcloud and undercloud machines >>>>>> >>>>>> [root at overcloud-controller-1 /]# find . -name "ptp.pp" >>>>>> >>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>> >>>>>> With this, I re-executed the deployment and got the below error on >>>>>> the machines >>>>>> >>>>>> 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce | >>>>>> FATAL | Wait for puppet host configuration to finish | >>>>>> overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", >>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>> --detailed-exitcodes --summarize --color=false >>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>> puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 >>>>>> 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": >>>>>> "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", >>>>>> "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' >>>>>> is deprecated in favor of using 'lookup'. 
See >>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>> (file & line not available)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>> should be converted to version 5\n<13>May 10 20:05:37 puppet-user: >>>>>> (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>> available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module >>>>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>>>> resolved. Use 'puppet module list --tree' to see information about >>>>>> modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: >>>>>> Error: Evaluation Error: A substring operation does not accept a String as >>>>>> a character index. Expected an Integer (file: >>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>> column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": >>>>>> ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is >>>>>> deprecated in favor of using 'lookup'. See >>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>> (file & line not available)", "<13>May 10 20:05:37 puppet-user: Warning: >>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>> should be converted to version 5", "<13>May 10 20:05:37 puppet-user: >>>>>> (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: >>>>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>> available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: >>>>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>>>> are resolved. Use 'puppet module list --tree' to see information about >>>>>> modules\\n (file & line not available)", "<13>May 10 20:05:37 >>>>>> puppet-user: *Error: Evaluation Error: A substring operation does >>>>>> not accept a String as a character index. Expected an Integer (file: >>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>> column: 46) *on node overcloud-controller-1.localdomain"], "stdout": >>>>>> "", "stdout_lines": []} >>>>>> >>>>>> The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, >>>>>> line: 41, column: 46 *had the following code: >>>>>> 34 class tripleo::profile::base::time::ptp ( >>>>>> 35 $ptp4l_interface = 'eth0', >>>>>> 36 $ptp4l_conf_slaveonly = 1, >>>>>> 37 $ptp4l_conf_network_transport = 'UDPv4', >>>>>> 38 ) { >>>>>> 39 >>>>>> 40 $interface_mapping = generate('/bin/os-net-config', '-i', >>>>>> $ptp4l_interface) >>>>>> 41 *$ptp4l_interface_name = >>>>>> $interface_mapping[$ptp4l_interface]* >>>>>> >>>>>> >>>>>> *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file >>>>>> is as below: >>>>>> >>>>>> resource_registry: >>>>>> # FIXME(bogdando): switch it, once it is containerized >>>>>> OS::TripleO::Services::Ptp: >>>>>> ../../deployment/time/ptp-baremetal-puppet.yaml >>>>>> OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp >>>>>> >>>>>> parameter_defaults: >>>>>> # PTP hardware interface name >>>>>> *PtpInterface: 'nic1'* >>>>>> >>>>>> # Configure PTP clock in slave mode >>>>>> PtpSlaveMode: 1 >>>>>> >>>>>> # Configure PTP message transport protocol >>>>>> PtpMessageTransport: 'UDPv4' >>>>>> >>>>>> I have also tried modifying the entry as below: >>>>>> *PtpInterface: 'nic1' #*(i.e. 
without quotes), but the error >>>>>> remains the same. >>>>>> >>>>>> Queries: >>>>>> >>>>>> 1. Any pointers to resolve this? >>>>>> >>>>>> I'm not familiar with ptp but you'd need to use the actual interface >>>>> name >>>>> if you are not using the alias name. >>>>> >>>>> >>>>> >>>>>> >>>>>> 1. You were mentioning something about the support of PTP not >>>>>> there in the wallaby release. Can you please confirm? >>>>>> >>>>>> IIUC PTP is still supported even in master. What we removed is the >>>>> implementation using Puppet >>>>> which was replaced by ansible. >>>>> >>>>> The warning regarding OS::TripleO::Services::Ptp was added when we >>>>> decided to merge >>>>> all time sync services to the single service resource which is >>>>> OS::TripleO::Services::Timesync[1]. >>>>> It's related to how resources are defined in Heat and doesn't affect >>>>> configuration support itself. >>>>> >>>>> [1] >>>>> https://review.opendev.org/c/openstack/tripleo-heat-templates/+/586679 >>>>> >>>>> >>>>> >>>>>> It would be a great help if you could extend a little more support to >>>>>> resolve the issues. >>>>>> >>>>>> Regards >>>>>> Anirudh Gupta >>>>>> >>>>>> >>>>>> On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta >>>>>> wrote: >>>>>> >>>>>>> I'll check that well. >>>>>>> By the way, I downloaded the images from the below link >>>>>>> >>>>>>> >>>>>>> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >>>>>>> >>>>>>> They seem to be updated yesterday, I'll download and try the >>>>>>> deployment with the latest images. >>>>>>> >>>>>>> Also are you pointing that the support for PTP would not be there in >>>>>>> Wallaby Release? >>>>>>> >>>>>>> Regards >>>>>>> Anirudh Gupta >>>>>>> >>>>>>> On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami < >>>>>>> tkajinam at redhat.com> wrote: >>>>>>> >>>>>>>> >>>>>>>> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hi Takashi >>>>>>>>> >>>>>>>>> I have checked this in undercloud only. >>>>>>>>> I don't find any such file in overcloud. Could this be a concern? >>>>>>>>> >>>>>>>> >>>>>>>> The manifest should exist in overcloud nodes and the missing file >>>>>>>> is the exact cause >>>>>>>> of that puppet failure during deployment. >>>>>>>> >>>>>>>> Please check your overcloud images used to install overcloud nodes >>>>>>>> and ensure that >>>>>>>> you're using the right one. You might be using the image for a >>>>>>>> different release. >>>>>>>> We removed the manifest file during the Wallaby cycle. >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> Regards >>>>>>>>> Anirudh Gupta >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami < >>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami < >>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta < >>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Takashi, >>>>>>>>>>>> >>>>>>>>>>>> Thanks for your reply. >>>>>>>>>>>> >>>>>>>>>>>> I have checked on my machine and the file "ptp.pp" do exist at >>>>>>>>>>>> path " >>>>>>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>>>>>> " >>>>>>>>>>>> >>>>>>>>>>> Did you check this in your undercloud or overcloud ? 
>>>>>>>>>>> During the deployment all configuration files are generated >>>>>>>>>>> using puppet modules >>>>>>>>>>> installed in overcloud nodes, so you should check this in >>>>>>>>>>> overcloud nodes. >>>>>>>>>>> >>>>>>>>>>> Also, the deprecation warning is not implemented >>>>>>>>>>> >>>>>>>>>> Ignore this incomplete line. I was looking for the implementation >>>>>>>>>> which shows the warning >>>>>>>>>> but I found it in tripleoclient and it looks reasonable according >>>>>>>>>> to what we have in >>>>>>>>>> environments/services/ptp.yaml . >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>>>>>>>>> for controller and compute *before rendering the templates, >>>>>>>>>>>> but still I am getting the same issue on all the 3 Controllers and 1 Compute >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> IIUC you don't need this because OS::TripleO::Services::Timesync >>>>>>>>>>> becomes an alias >>>>>>>>>>> to the Ptp service resource when you use the ptp environment >>>>>>>>>>> file. >>>>>>>>>>> >>>>>>>>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>> >>>>>>>>>>>> Can you suggest any workarounds or any pointers to look further >>>>>>>>>>>> in order to resolve this issue? >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Regards >>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami < >>>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> I'm not familiar with PTP, but the error you pasted indicates >>>>>>>>>>>>> that the required puppet manifest does not exist in your overcloud >>>>>>>>>>>>> node/image. >>>>>>>>>>>>> >>>>>>>>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>>>>>>>> >>>>>>>>>>>>> This should not happen and the class should exist as long as >>>>>>>>>>>>> you have puppet-tripleo from stable/train installed. >>>>>>>>>>>>> >>>>>>>>>>>>> I'd recommend you check installed tripleo/puppet packages and >>>>>>>>>>>>> ensure everything is in the consistent release. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta < >>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi All >>>>>>>>>>>>>> >>>>>>>>>>>>>> Any update on this? >>>>>>>>>>>>>> >>>>>>>>>>>>>> Regards >>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, < >>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> When I was executing the Overcloud deployment script, >>>>>>>>>>>>>>> passing the PTP yaml, it gave the following option at the starting >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. 
>>>>>>>>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>>>>>>>> continue with deployment [y/N]* >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> even if passing Y, it starts executing for sometime and the >>>>>>>>>>>>>>> gives the following error >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Can someone suggest some pointers in order to resolve this >>>>>>>>>>>>>>> issue and move forward? >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta < >>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>>>>>>>>> successfully. >>>>>>>>>>>>>>>> I need to enable PTP service while deploying the overcloud >>>>>>>>>>>>>>>> for which I have included the service in my deployment >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>> * -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>>>>>>>> \* >>>>>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> But it gives the following error >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> 2022-05-06 11:30:10.707655 | >>>>>>>>>>>>>>>> 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host >>>>>>>>>>>>>>>> configuration to finish | overcloud-controller-0 | error={"ansible_job_id": >>>>>>>>>>>>>>>> "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o >>>>>>>>>>>>>>>> pipefail; puppet apply >>>>>>>>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>>>>>>>>> 11:30:12.685508", 
"failed_when_result": true, "finished": 1, "msg": >>>>>>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>>>>>>>> 'lookup'. See >>>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>>>>>>>>>>> (file: >>>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while >>>>>>>>>>>>>>>> evaluating a Function Call, Could not find class >>>>>>>>>>>>>>>> ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain >>>>>>>>>>>>>>>> (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) >>>>>>>>>>>>>>>> on node* overcloud-controller-0.localdomain"], "stdout": >>>>>>>>>>>>>>>> "", "stdout_lines": []} >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Can someone please help in resolving this issue? >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnsomor at gmail.com Mon May 16 17:52:12 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 16 May 2022 10:52:12 -0700 Subject: [kolla] octavia_rsyslog service fails/keeps restarting | Openstack Wallaby | Tripleo In-Reply-To: References: Message-ID: The key line seems to say it all: 2022-05-15T16:20:55.599266842+00:00 stderr F /usr/local/bin/kolla_start: line 18: /usr/sbin/rsyslogd: No such file or directory It looks like you need to install rsyslog there. dnf install rsyslog or apt-get install rsyslog I would have thought kolla/tripleo would have installed that package. Michael On Sun, May 15, 2022 at 9:41 AM Swogat Pradhan wrote: > > Hi, > I am currently trying to deploy octavia in openstack wallaby, but the octavia_rsyslog service is malfunctioning it seems. And I am checking the logs but am not sure how to fix the issue. > > Attached log for reference. > > With regards, > Swogat Pradhan From dhana.sys at gmail.com Mon May 16 19:08:21 2022 From: dhana.sys at gmail.com (Dhanasekar Kandasamy) Date: Tue, 17 May 2022 00:38:21 +0530 Subject: [neutron] OpenStack Port Creation process Message-ID: Hi, I want to understand the Port Creation process in OpenStack. 1. What happens in the background when we create a Port with Security Groups? Does it create the port in the Control Plane level or is neutron involved in this? 2. How does the port obtain the IP address from DHCP and Who initiates this DHCP Process? 3. Is there any docs/links available for the same? Thanks, Dhana -------------- next part -------------- An HTML attachment was scrubbed... URL: From duc.openstack at gmail.com Mon May 16 22:41:28 2022 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 16 May 2022 15:41:28 -0700 Subject: OpenTelemetry integration with OSProfiler Message-ID: Hi, I'm curious if anybody has tried integrating OpenTelemetry with OSProfiler. OSProfiler has a Jaeger driver, which uses the Jaeger client. But the Jaeger client has been deprecated in favor of OpenTelemetry [1]. Related to this, is anybody currently using OSProfiler to get distributed traces in OpenStack? -- Duc [1] https://github.com/jaegertracing/jaeger-client-python From tkajinam at redhat.com Mon May 16 23:13:42 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 17 May 2022 08:13:42 +0900 Subject: [puppet] Removal of novajoin support In-Reply-To: References: Message-ID: Hello, Because we haven't heard any objections, I've removed WIP from the removal patch. If anybody still has concerns then please share your concerns in the review. https://review.opendev.org/c/openstack/puppet-nova/+/840802 Thank you, Takashi On Fri, May 6, 2022 at 12:04 PM Takashi Kajinami wrote: > Hello, > > > Because the novajoin project[1] has been unmaintained for a while, > we deprecated support for the service during the previous Yoga cycle[2]. > [1] https://opendev.org/x/novajoin > [2] https://review.opendev.org/c/openstack/puppet-nova/+/833507 > > I've submitted the removal patch[3] so that we can drop the whole > implementation from Zed release. > [3] https://review.opendev.org/c/openstack/puppet-nova/+/840802 > > In case anyone has any concern about the removal, please share your > thoughts in the above review. > > I'll keep WIP on the patch for one week to be open for any feedback. > > Thank you, > Takashi > -------------- next part -------------- An HTML attachment was scrubbed... 
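Coming back to the [neutron] OpenStack Port Creation process question raised earlier in this digest (the replies follow further down), here is a rough openstacksdk sketch of that flow; the cloud entry, network, security group, image and flavor names are placeholders, and the attribute names are best checked against the SDK version in use:

import openstack

conn = openstack.connect(cloud="mycloud")               # placeholder cloud entry
network = conn.network.find_network("private")          # placeholder names
secgroup = conn.network.find_security_group("default")

# Creating the port only writes a record in the Neutron DB and allocates an
# IP address from the network's subnet(s); nothing is provisioned on any host.
port = conn.network.create_port(
    network_id=network.id,
    security_group_ids=[secgroup.id],
)
print(port.fixed_ips)          # IP already allocated at creation time
print(port.binding_vif_type)   # "unbound" until Nova sets binding:host_id

# Attaching the port to a server is what makes Nova set binding:host_id;
# Neutron then binds the port and the backend (DHCP agent/dnsmasq for
# ML2/OVS, the built-in responder for ML2/OVN) starts answering DHCP for it.
image = conn.compute.find_image("cirros")               # placeholder
flavor = conn.compute.find_flavor("m1.tiny")            # placeholder
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"port": port.id}],
)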
URL: From gmann at ghanshyammann.com Mon May 16 23:47:25 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 16 May 2022 18:47:25 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 19, 2022 at 1500 UTC Message-ID: <180cf44f986.bc798a6f80942.2327562905416045598@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for May 19, 2022 at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, May 18, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From mnaser at vexxhost.com Tue May 17 00:36:42 2022 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 16 May 2022 20:36:42 -0400 Subject: [neutron][neutron-vpnaas] Reviews needed Message-ID: Hi team, We've got a few things that need to be looked at, so appreciate reviews from the Neutron team on this: - Functional tests fixes for stable/yoga: https://review.opendev.org/c/openstack/neutron-vpnaas/+/840330 (and the rest of the chain) - TOX_CONSTRAINTS_FILE cleanup: https://review.opendev.org/c/openstack/neutron-vpnaas/+/822926 - README file cleanup: https://review.opendev.org/c/openstack/neutron-vpnaas/+/574679 - Backport of "ipsec rereadsecrets" for stable/wallaby: https://review.opendev.org/c/openstack/neutron-vpnaas/+/795884 I would appreciate if the Neutron core team can help move some of those patches since I'd like to get VPNaaS in a nice shape to make a release with those fixes :) Thanks Mohammed -- Mohammed Naser VEXXHOST, Inc. From gagehugo at gmail.com Tue May 17 05:41:24 2022 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 17 May 2022 00:41:24 -0500 Subject: [openstack-helm] No meeting tomorrow Message-ID: Hey team, Since there's nothing on the agenda, the meeting for tomorrow is cancelled. We will meet next week at the regular time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue May 17 05:53:32 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 17 May 2022 07:53:32 +0200 Subject: [neutron] OpenStack Port Creation process In-Reply-To: References: Message-ID: <1815765.r1U1N1eJm3@p1> Hi, Dnia poniedzia?ek, 16 maja 2022 21:08:21 CEST Dhanasekar Kandasamy pisze: > Hi, > > I want to understand the Port Creation process in OpenStack. > > 1. What happens in the background when we create a Port with Security > Groups? Does it create the port in the Control Plane level or is > neutron involved in this? When You call neutron API e.g. with CLI like: openstack port create then Neutron creates port in given network. It is just entry in the Neutron DB, nothing else. Neutron tries then to allocate IP address for the port from one of the subnets existing in that network (or from 2 subnets if there is IPv4 and IPv6 subnet available). When port is later used by e.g. Nova for some VM, Nova asks neutron to update port and to bind it on specific host. Neutron then tries to bind port with one of the available mechanism drivers. When port is bound, nova (os-vif) plugs port on the host (L1 provisioning) and then Neutron agent, or ovn-controller in case of ML2/OVN backend provisions port on the host. > > 2. How does the port obtain the IP address from DHCP and Who initiates this > DHCP Process? It depends on the used backend. In case of ML2/OVN it's OVN who provides DHCP service so DHCP entries are configured in OVN and OVN locally replies to the DHCP requests from the VM. 
In case of ML2/OVS (or Linuxbridge) there is DHCP agent which spawns and configures dnsmasq for each network. When You create port in Neutron and it allocates IP address, thennew lease is added in leases file so that dnsmasq knows what IP address should be given for specific MAC address. > > 3. Is there any docs/links available for the same? I'm not sure if there is something exactly like Your questions but You can find a lot of information in docs https://docs.openstack.org/neutron/latest/contributor/index.html#neutron-internals > > Thanks, > Dhana > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Tue May 17 05:58:23 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 17 May 2022 07:58:23 +0200 Subject: [neutron][neutron-vpnaas] Reviews needed In-Reply-To: References: Message-ID: <1683442.SlMUAugEhR@p1> Hi, Dnia wtorek, 17 maja 2022 02:36:42 CEST Mohammed Naser pisze: > Hi team, > > We've got a few things that need to be looked at, so appreciate > reviews from the Neutron team on this: > > - Functional tests fixes for stable/yoga: > https://review.opendev.org/c/openstack/neutron-vpnaas/+/840330 (and > the rest of the chain) > - TOX_CONSTRAINTS_FILE cleanup: > https://review.opendev.org/c/openstack/neutron-vpnaas/+/822926 > - README file cleanup: > https://review.opendev.org/c/openstack/neutron-vpnaas/+/574679 > - Backport of "ipsec rereadsecrets" for stable/wallaby: > https://review.opendev.org/c/openstack/neutron-vpnaas/+/795884 > > I would appreciate if the Neutron core team can help move some of > those patches since I'd like to get VPNaaS in a nice shape to make a > release with those fixes :) > > Thanks > Mohammed > > -- > Mohammed Naser > VEXXHOST, Inc. > > Done. Thx a lot for taking care of the neutron-vpnaas project :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Tue May 17 08:24:45 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 17 May 2022 10:24:45 +0200 Subject: [neutron] CI meeting 17.05.2022 Message-ID: <6242396.uVdP62szxH@p1> Hi, Sorry for late notice but this week I have internal meeting on which I have to be. That meeting overlaps with Neutron CI meeting so lets cancel CI meeting this week. See You on the CI meeting next week. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From smooney at redhat.com Tue May 17 09:27:50 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 17 May 2022 10:27:50 +0100 Subject: [neutron] OpenStack Port Creation process In-Reply-To: References: Message-ID: <22049e9f45f4be177299d90e480591cb744f024a.camel@redhat.com> On Tue, 2022-05-17 at 00:38 +0530, Dhanasekar Kandasamy wrote: > Hi, > > I want to understand the Port Creation process in OpenStack. > > 1. What happens in the background when we create a Port with Security > Groups? Does it create the port in the Control Plane level or is > neutron involved in this? 
it depend on what you mean by control plane. when you create a port at the neutron api then basically it just creates a a db record. an ip address will be assigend if the ip policy is not deffered. for routed networks since the ip of the prot depends on the host/segment it is bound too the ip is not asigned until the port port is bound. for normal prot the ip is assigned at port creation but not configured in the dhcp agent until the point it attached to a vm. > 2. How does the port obtain the IP address from DHCP and Who initiates this > DHCP Process? when a vm or ironic server is allocate a port either via the server create command or a port attach nova/ironic will start a process call port-binding. in the case fo a new vm boot after teh schduler has selected a host nova will call neutron to set the binding:host-id filed with the hostname of the selected host. this will triger port binding in neutron which will result in the dhcp agent or dhcp plugin driver if you are using a odl/ovn ectra to actully configure dhcp for the port. in the case of the agent the dhcp agent will configure the mac/ip pair in the dnsmasq config. once port binding is complete in the boot workflow nova will use that prot infomaiton to create the vm and attach the interfaces to the network backends dataplane. when the instance boots it will run its first boot configuration i.e. cloud-init which will configure the OS in teh guest for dhcp or otherwise depening on if you overrode teh default behavrio with user data. assuming you did not cloud-init would typicly configure the instance for dhcp and the instance will use a dhcpclent to send a dhcp request. with ml2/ovs that dhcp request will be a broadcast packet that will propagate and then eventuraly reach the dhcp server which will respond. for ml2/ovn the dhcp request will be matched by an openflow rules and the local dhcp reponder built in to ovn will repond. > > 3. Is there any docs/links available for the same? im not sure i think there are some high level docs but this partly depends on your network backend. > > Thanks, > Dhana From skaplons at redhat.com Tue May 17 12:59:28 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 17 May 2022 14:59:28 +0200 Subject: [neutron] Static route not added in namespace using DVR on Wallaby In-Reply-To: <30e014f2-0608-9d6e-02c1-fdbca0700fb9@planethoster.info> References: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> <30e014f2-0608-9d6e-02c1-fdbca0700fb9@planethoster.info> Message-ID: <2110973.yFWlo4NoJW@p1> Hi, Dnia czwartek, 12 maja 2022 19:23:15 CEST J-P Methot pisze: > Hi, > > I got the debug logs. They were a bit too long so I put them in a txt > file. Please tell me if you'd prefer a pastebin instead. > > On 5/10/22 02:38, Slawek Kaplonski wrote: > > Hi, > > > > W dniu pon, 9 maj 2022 o 13:49:22 -0400 u?ytkownik J-P Methot > > napisa?: > >> > >> I tested this on my own DVR test environment with a random static > >> route and I'm getting the same results as on production. Here's what > >> I get in the logs : > >> > >> 2022-05-09 17:28:50.018 691 INFO neutron.agent.l3.agent [-] Starting > >> processing update 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, > >> priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait time > >> elapsed: 0.001 > >> 2022-05-09 17:28:50.019 691 INFO neutron.agent.l3.agent [-] Starting > >> router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, > >> priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. 
Wait time > >> elapsed: 0.002 > >> 2022-05-09 17:28:51.640 691 INFO neutron.agent.l3.agent [-] Finished > >> a router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, update_id > >> 9e112de1-f538-4a41-9526-152aa3937129. Time elapsed: 1.622 > >> > >> As you can see, there was an attempt at updating the router and it > >> did return as successful. However, there was no new route added in > >> the router or floating ip namespace. No error either. > >> > > Can You do the same with debug logs enabled? > > > >> On 5/6/22 14:40, Slawek Kaplonski wrote: > >>> Hi, > >>> > >>> W dniu pi?, 6 maj 2022 o 14:14:47 -0400 u?ytkownik J-P Methot > >>> napisa?: > >>>> > >>>> Hi, > >>>> > >>>> We're in this situation where we are going to move some instances > >>>> from one openstack cluster to another. After this process, we want > >>>> our instances on the new openstack cluster to keep the same > >>>> floating IPs but also to be able to communicate with some instances > >>>> that are in the same public IP range on the first cluster. > >>>> > >>>> To accomplish this, we want to add static routes like 'X.X.X.X/32 > >>>> via Y.Y.Y.Y'. However, we're using DVR and when we add the static > >>>> routes, they do not show up anywhere in any of the namespaces. Is > >>>> there a different way to add static routes on DVR instead of using > >>>> openstack router add route ? > >>>> > >>> No, there is no other way to add static routes to the dvr router. I > >>> don't have any DVR deployment now to check it but IIRC route should > >>> be added in the qrouter namespace in the compute nodes where router > >>> exists. If it's not there please check logs of the l3-agent on those > >>> hosts, maybe there are some errors there. > >>>> -- > >>>> Jean-Philippe M?thot > >>>> Senior Openstack system administrator > >>>> Administrateur syst?me Openstack s?nior > >>>> PlanetHoster inc. > >>> -- > >>> Slawek Kaplonski > >>> Principal Software Engineer > >>> Red Hat > >> -- > >> Jean-Philippe M?thot > >> Senior Openstack system administrator > >> Administrateur syst?me Openstack s?nior > >> PlanetHoster inc. > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > -- > Jean-Philippe M?thot > Senior Openstack system administrator > Administrateur syst?me Openstack s?nior > PlanetHoster inc. > I just tested it today on my local env and everything works fine for me. When I added extra route to some external IP address it was added in snat-XXX namespace, When I tested dvr router only with private networks, extra route was added in the qrouter-XXX namespaces. Also, we have scenario test https://github.com/openstack/neutron-tempest-plugin/blob/6dcc0e81b5f3c656181091025f351eb479cdde21/neutron_tempest_plugin/scenario/test_connectivity.py#L73[1] which is creating such extra routes and uses them to connect between VMs. And this test is running fine AFAIK in our CI. -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://github.com/openstack/neutron-tempest-plugin/blob/6dcc0e81b5f3c656181091025f351eb479cdde21/neutron_tempest_plugin/scenario/test_connectivity.py#L73 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From zaitcev at redhat.com Tue May 17 13:31:29 2022 From: zaitcev at redhat.com (Pete Zaitcev) Date: Tue, 17 May 2022 08:31:29 -0500 Subject: Devstack and "The service catalog is empty." In-Reply-To: References: <20220513233751.0be1a075@niphredil.zaitcev.lan> Message-ID: <20220517083129.4a816069@niphredil.zaitcev.lan> Dear Sofia: Thanks for the key hint. I looked at the environment variables that openrc sets and found that region must be provided. So openstack CLI works as expected with either sourcing openrc or --os-region=RegionOne. Thanks again, -- Pete On Mon, 16 May 2022 09:24:42 -0300 Sofia Enriquez wrote: > Hey Pete, > > I'm not sure but I think you need to source openrc [1] in order to interact > with your cloud via CLI: > `source openrc demo demo` > or > `source openrc admin admin` > > Cheers, > Sofi > > [1] https://opendev.org/openstack/devstack#start-a-dev-cloud > > > On Sat, May 14, 2022 at 1:41 AM Pete Zaitcev wrote: > > > Hello: > > > > For certain reasons I tried to run devstack for the first time. Worked > > on OpenStack since 2011 and always used something else, like Packstack. > > Anyway, it was painless, just a git checkout and ./stack.sh, done. > > > > But then, any access with openstack CLI says: > > > > $ openstack --os-auth-url http://192.168.128.11/identity/v3/ user list > > The service catalog is empty. > > > > Looks like Keystone wasn't populated... Is it how it's supposed to be? > > > > -- Pete From jp.methot at planethoster.info Tue May 17 15:32:39 2022 From: jp.methot at planethoster.info (J-P Methot) Date: Tue, 17 May 2022 11:32:39 -0400 Subject: [neutron] Static route not added in namespace using DVR on Wallaby In-Reply-To: <2110973.yFWlo4NoJW@p1> References: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> <30e014f2-0608-9d6e-02c1-fdbca0700fb9@planethoster.info> <2110973.yFWlo4NoJW@p1> Message-ID: Is your local environment on Wallaby? I might consider upgrading if it's more recent than that. We use Kolla to deploy so it might also play into this. On 5/17/22 08:59, Slawek Kaplonski wrote: > > Hi, > > > Dnia czwartek, 12 maja 2022 19:23:15 CEST J-P Methot pisze: > > > Hi, > > > > > > I got the debug logs. They were a bit too long so I put them in a txt > > > file. Please tell me if you'd prefer a pastebin instead. > > > > > > On 5/10/22 02:38, Slawek Kaplonski wrote: > > > > Hi, > > > > > > > > W?dniu pon, 9 maj 2022 o?13:49:22 -0400 u?ytkownik J-P Methot > > > > napisa?: > > > >> > > > >> I tested this on my own DVR test environment with a random static > > > >> route and I'm getting the same results as on production. Here's what > > > >> I get in the logs : > > > >> > > > >> 2022-05-09 17:28:50.018 691 INFO neutron.agent.l3.agent [-] Starting > > > >> processing update 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, > > > >> priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait > time > > > >> elapsed: 0.001 > > > >> 2022-05-09 17:28:50.019 691 INFO neutron.agent.l3.agent [-] Starting > > > >> router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, > > > >> priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait > time > > > >> elapsed: 0.002 > > > >> 2022-05-09 17:28:51.640 691 INFO neutron.agent.l3.agent [-] Finished > > > >> a router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, update_id > > > >> 9e112de1-f538-4a41-9526-152aa3937129. Time elapsed: 1.622 > > > >> > > > >> As you can see, there was an attempt at updating the router and it > > > >> did return as successful. 
However, there was no new route added in > > > >> the router or floating ip namespace. No error either. > > > >> > > > > Can You do the same with debug logs enabled? > > > > > > > >> On 5/6/22 14:40, Slawek Kaplonski wrote: > > > >>> Hi, > > > >>> > > > >>> W?dniu pi?, 6 maj 2022 o?14:14:47 -0400 u?ytkownik J-P Methot > > > >>> napisa?: > > > >>>> > > > >>>> Hi, > > > >>>> > > > >>>> We're in this situation where we are going to move some instances > > > >>>> from one openstack cluster to another. After this process, we want > > > >>>> our instances on the new openstack cluster to keep the same > > > >>>> floating IPs but also to be able to communicate with some > instances > > > >>>> that are in the same public IP range on the first cluster. > > > >>>> > > > >>>> To accomplish this, we want to add static routes like 'X.X.X.X/32 > > > >>>> via Y.Y.Y.Y'. However, we're using DVR and when we add the static > > > >>>> routes, they do not show up anywhere in any of the namespaces. Is > > > >>>> there a different way to add static routes on DVR instead of using > > > >>>> openstack router add route ? > > > >>>> > > > >>> No, there is no other way to add static routes to the dvr router. I > > > >>> don't have any DVR deployment now to check it but IIRC route should > > > >>> be added in the qrouter namespace in the compute nodes where router > > > >>> exists. If it's not there please check logs of the l3-agent on > those > > > >>> hosts, maybe there are some errors there. > > > >>>> -- > > > >>>> Jean-Philippe M?thot > > > >>>> Senior Openstack system administrator > > > >>>> Administrateur syst?me Openstack s?nior > > > >>>> PlanetHoster inc. > > > >>> -- > > > >>> Slawek Kaplonski > > > >>> Principal Software Engineer > > > >>> Red Hat > > > >> -- > > > >> Jean-Philippe M?thot > > > >> Senior Openstack system administrator > > > >> Administrateur syst?me Openstack s?nior > > > >> PlanetHoster inc. > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > -- > > > Jean-Philippe M?thot > > > Senior Openstack system administrator > > > Administrateur syst?me Openstack s?nior > > > PlanetHoster inc. > > > > > > I just tested it today on my local env and everything works fine for > me. When I added extra route to some external IP address it was added > in snat-XXX namespace, > > When I tested dvr router only with private networks, extra route was > added in the qrouter-XXX namespaces. > > Also, we have scenario test > https://github.com/openstack/neutron-tempest-plugin/blob/6dcc0e81b5f3c656181091025f351eb479cdde21/neutron_tempest_plugin/scenario/test_connectivity.py#L73?which > is creating such extra routes and uses them to connect between VMs. > And this test is running fine AFAIK in our CI. > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -- Jean-Philippe M?thot Senior Openstack system administrator Administrateur syst?me Openstack s?nior PlanetHoster inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue May 17 15:46:13 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 17 May 2022 17:46:13 +0200 Subject: [neutron] Static route not added in namespace using DVR on Wallaby In-Reply-To: References: <61b8df37-3bd9-3968-3352-fa47ab75aad3@planethoster.info> <2110973.yFWlo4NoJW@p1> Message-ID: <4895453.RP0LhrZYZ6@p1> Hi, Dnia wtorek, 17 maja 2022 17:32:39 CEST J-P Methot pisze: > Is your local environment on Wallaby? 
I might consider upgrading if it's > more recent than that. We use Kolla to deploy so it might also play into > this. No, I was testing on master (as I usually do in my dev env) > > On 5/17/22 08:59, Slawek Kaplonski wrote: > > > > Hi, > > > > > > Dnia czwartek, 12 maja 2022 19:23:15 CEST J-P Methot pisze: > > > > > Hi, > > > > > > > > > > I got the debug logs. They were a bit too long so I put them in a txt > > > > > file. Please tell me if you'd prefer a pastebin instead. > > > > > > > > > > On 5/10/22 02:38, Slawek Kaplonski wrote: > > > > > > Hi, > > > > > > > > > > > > W dniu pon, 9 maj 2022 o 13:49:22 -0400 u?ytkownik J-P Methot > > > > > > napisa?: > > > > > >> > > > > > >> I tested this on my own DVR test environment with a random static > > > > > >> route and I'm getting the same results as on production. Here's what > > > > > >> I get in the logs : > > > > > >> > > > > > >> 2022-05-09 17:28:50.018 691 INFO neutron.agent.l3.agent [-] Starting > > > > > >> processing update 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, > > > > > >> priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait > > time > > > > > >> elapsed: 0.001 > > > > > >> 2022-05-09 17:28:50.019 691 INFO neutron.agent.l3.agent [-] Starting > > > > > >> router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, action 3, > > > > > >> priority 1, update_id 9e112de1-f538-4a41-9526-152aa3937129. Wait > > time > > > > > >> elapsed: 0.002 > > > > > >> 2022-05-09 17:28:51.640 691 INFO neutron.agent.l3.agent [-] Finished > > > > > >> a router update for 41fcd10b-7db5-45d9-b23c-e22f34c45eec, update_id > > > > > >> 9e112de1-f538-4a41-9526-152aa3937129. Time elapsed: 1.622 > > > > > >> > > > > > >> As you can see, there was an attempt at updating the router and it > > > > > >> did return as successful. However, there was no new route added in > > > > > >> the router or floating ip namespace. No error either. > > > > > >> > > > > > > Can You do the same with debug logs enabled? > > > > > > > > > > > >> On 5/6/22 14:40, Slawek Kaplonski wrote: > > > > > >>> Hi, > > > > > >>> > > > > > >>> W dniu pi?, 6 maj 2022 o 14:14:47 -0400 u?ytkownik J-P Methot > > > > > >>> napisa?: > > > > > >>>> > > > > > >>>> Hi, > > > > > >>>> > > > > > >>>> We're in this situation where we are going to move some instances > > > > > >>>> from one openstack cluster to another. After this process, we want > > > > > >>>> our instances on the new openstack cluster to keep the same > > > > > >>>> floating IPs but also to be able to communicate with some > > instances > > > > > >>>> that are in the same public IP range on the first cluster. > > > > > >>>> > > > > > >>>> To accomplish this, we want to add static routes like 'X.X.X.X/32 > > > > > >>>> via Y.Y.Y.Y'. However, we're using DVR and when we add the static > > > > > >>>> routes, they do not show up anywhere in any of the namespaces. Is > > > > > >>>> there a different way to add static routes on DVR instead of using > > > > > >>>> openstack router add route ? > > > > > >>>> > > > > > >>> No, there is no other way to add static routes to the dvr router. I > > > > > >>> don't have any DVR deployment now to check it but IIRC route should > > > > > >>> be added in the qrouter namespace in the compute nodes where router > > > > > >>> exists. If it's not there please check logs of the l3-agent on > > those > > > > > >>> hosts, maybe there are some errors there. 
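As an aside for anyone reproducing this: the debug logs requested earlier in the thread are a one-line change on the affected nodes. A sketch, assuming a plain package-based or Kolla layout (adjust the path for your installer):

# /etc/neutron/l3_agent.ini (or neutron.conf) on the network/compute nodes
[DEFAULT]
debug = True

Then restart the L3 agent (the neutron_l3_agent container under Kolla) and re-run "openstack router add route" to capture the full router update in the logs.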
> > > > > >>>> -- > > > > > >>>> Jean-Philippe M?thot > > > > > >>>> Senior Openstack system administrator > > > > > >>>> Administrateur syst?me Openstack s?nior > > > > > >>>> PlanetHoster inc. > > > > > >>> -- > > > > > >>> Slawek Kaplonski > > > > > >>> Principal Software Engineer > > > > > >>> Red Hat > > > > > >> -- > > > > > >> Jean-Philippe M?thot > > > > > >> Senior Openstack system administrator > > > > > >> Administrateur syst?me Openstack s?nior > > > > > >> PlanetHoster inc. > > > > > > > > > > > > -- > > > > > > Slawek Kaplonski > > > > > > Principal Software Engineer > > > > > > Red Hat > > > > > > > > > > -- > > > > > Jean-Philippe M?thot > > > > > Senior Openstack system administrator > > > > > Administrateur syst?me Openstack s?nior > > > > > PlanetHoster inc. > > > > > > > > > > > I just tested it today on my local env and everything works fine for > > me. When I added extra route to some external IP address it was added > > in snat-XXX namespace, > > > > When I tested dvr router only with private networks, extra route was > > added in the qrouter-XXX namespaces. > > > > Also, we have scenario test > > https://github.com/openstack/neutron-tempest-plugin/blob/6dcc0e81b5f3c656181091025f351eb479cdde21/neutron_tempest_plugin/scenario/test_connectivity.py#L73 which > > is creating such extra routes and uses them to connect between VMs. > > And this test is running fine AFAIK in our CI. > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > -- > Jean-Philippe M?thot > Senior Openstack system administrator > Administrateur syst?me Openstack s?nior > PlanetHoster inc. > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From dtantsur at redhat.com Tue May 17 17:56:03 2022 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 17 May 2022 19:56:03 +0200 Subject: [ironic] [requirements] broken libvirt-python on yoga on stream 9 Message-ID: Hi all, It is happening again, the Bifrost CI is broken because libvirt-python cannot be built from source, this time on Stream 9. Missing type converters: int *:1 ERROR: failed virDomainQemuMonitorCommandWithFiles I created a gist with a reproducer: https://gist.github.com/dtantsur/835303c6a68ed77157016f5955183115. I cannot count how many times we had to deal with similar errors. I assume, libvirt-python has to be newer than the installed Python (8.2.0 in CS9, 8.0.0 in constraints). Should we stop constraining libvirt-python? Any other ideas? Dmitry -- Red Hat GmbH , Registered seat: Werner von Siemens Ring 14, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue May 17 18:17:39 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 17 May 2022 18:17:39 +0000 Subject: [ironic] [requirements] broken libvirt-python on yoga on stream 9 In-Reply-To: References: Message-ID: <20220517181739.dck2u3kxkj6r4ujh@yuggoth.org> On 2022-05-17 19:56:03 +0200 (+0200), Dmitry Tantsur wrote: [...] > I cannot count how many times we had to deal with similar errors. > I assume, libvirt-python has to be newer than the installed Python > (8.2.0 in CS9, 8.0.0 in constraints). 
Should we stop constraining > libvirt-python? Any other ideas? If being able to install constrained versions on CentOS is important to projects, then having openstack/requirements run CentOS-based jobs for changes to upper-constraints.txt would be a logical addition. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Tue May 17 18:17:08 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 17 May 2022 11:17:08 -0700 Subject: [ironic] [requirements] broken libvirt-python on yoga on stream 9 In-Reply-To: References: Message-ID: <0548adf4-e274-4ad8-a1cf-c531ca688e39@www.fastmail.com> On Tue, May 17, 2022, at 10:56 AM, Dmitry Tantsur wrote: > Hi all, > > It is happening again, the Bifrost CI is broken because libvirt-python > cannot be built from source, this time on Stream 9. > > Missing type converters: > int *:1 > ERROR: failed virDomainQemuMonitorCommandWithFiles > > I created a gist with a reproducer: > https://gist.github.com/dtantsur/835303c6a68ed77157016f5955183115. > > I cannot count how many times we had to deal with similar errors. I > assume, libvirt-python has to be newer than the installed Python (8.2.0 > in CS9, 8.0.0 in constraints). Should we stop constraining > libvirt-python? Any other ideas? Your libvirt-python version needs to be at least as new as your libvirt version. New libvirt-python versions are expected to continue to work with old libvirt versions as well (though it may need to be built against the specific libvirt?). In this case it looks like CentOS Stream 9 libvirt is newer than what was in constraints. Generally constraints should update quickly. Looking at master upper-constraints libvirt-python was updated to 8.3.0 on May 4 and according to pypi the package updated on May 2 which seems reasonable. The problem here appears to be that you want this to work on a stable branch (yoga) and stable branches do not update constraints. My suggestion is that we use stable platforms for testing stable releases. The CentOS Stream releases seem to get updates that break stable software expectations far more than our other platforms. When working against master and trying to chase the latest and greatest this is probably a feature, but is problematic when you want rate of change to fall to near zero. I would consider not using Stream on stable branches if these problems persist. > > Dmitry > > -- > Red Hat GmbH , Registered seat: > Werner von Siemens Ring 14, D-85630 Grasbrunn, Germany > Commercial register: Amtsgericht Muenchen/Munich, HRB 153243, > Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, > Amy Ross From radoslaw.piliszek at gmail.com Tue May 17 18:18:10 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 17 May 2022 20:18:10 +0200 Subject: [ironic] [requirements] broken libvirt-python on yoga on stream 9 In-Reply-To: References: Message-ID: In Nova and Masakari (and Kolla for that matter), the relied upon is the packaged version. Using libvirt from PyPI is simply too much PITA to care about. Kind regards, -yoctozepto On Tue, 17 May 2022 at 19:57, Dmitry Tantsur wrote: > > Hi all, > > It is happening again, the Bifrost CI is broken because libvirt-python cannot be built from source, this time on Stream 9. 
> > Missing type converters: > int *:1 > ERROR: failed virDomainQemuMonitorCommandWithFiles > > I created a gist with a reproducer: https://gist.github.com/dtantsur/835303c6a68ed77157016f5955183115. > > I cannot count how many times we had to deal with similar errors. I assume, libvirt-python has to be newer than the installed Python (8.2.0 in CS9, 8.0.0 in constraints). Should we stop constraining libvirt-python? Any other ideas? > > Dmitry > > -- > > Red Hat GmbH, Registered seat: Werner von Siemens Ring 14, D-85630 Grasbrunn, Germany > Commercial register: Amtsgericht Muenchen/Munich, HRB 153243, > Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross From smooney at redhat.com Tue May 17 19:32:02 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 17 May 2022 20:32:02 +0100 Subject: [ironic] [requirements] broken libvirt-python on yoga on stream 9 In-Reply-To: References: Message-ID: On Tue, 2022-05-17 at 20:18 +0200, Rados?aw Piliszek wrote: > In Nova and Masakari (and Kolla for that matter), the relied upon is > the packaged version. Using libvirt from PyPI is simply too much PITA > to care about. we do not depend on libvirt-python in our unti or funtional tests as by design unit and fucnitonal tests should mock external depencices like that. we have libvirt fixutres we use when we need to fake interactions with libvirt. so libvirt-python is very deliberatly not part of nova's requirement.txt or test-requirement.txt its a runtime dep that can be provieed by pypi or the disto but its not managed by upper constraits since its not installed via pypi in our any our tox environments. devstack has the lovely hack https://github.com/openstack/devstack/blob/3155217fb6a14b9c7d9c9a6f1bf11e9580c949c5/lib/nova_plugins/functions-libvirt#L59-L69= which remvoes libvirt from uc but its actully installed form the disto pacages in our devstack envionments https://github.com/openstack/devstack/blob/3155217fb6a14b9c7d9c9a6f1bf11e9580c949c5/lib/nova_plugins/functions-libvirt#L71-L95= it proably should be in our bindep.txt https://github.com/openstack/nova/blob/master/bindep.txt or incldued in extras like the other virt dirver libs https://github.com/openstack/nova/blob/master/setup.cfg#L28= but as i said we make sure its not a depency for our tox jobs and its installed by devstack from the disto repos for our integration testing jobs. > > Kind regards, > -yoctozepto > > On Tue, 17 May 2022 at 19:57, Dmitry Tantsur wrote: > > > > Hi all, > > > > It is happening again, the Bifrost CI is broken because libvirt-python cannot be built from source, this time on Stream 9. > > > > Missing type converters: > > int *:1 > > ERROR: failed virDomainQemuMonitorCommandWithFiles > > > > I created a gist with a reproducer: https://gist.github.com/dtantsur/835303c6a68ed77157016f5955183115. > > > > I cannot count how many times we had to deal with similar errors. I assume, libvirt-python has to be newer than the installed Python (8.2.0 in CS9, 8.0.0 in constraints). Should we stop constraining libvirt-python? Any other ideas? 
> > > > Dmitry > > > > -- > > > > Red Hat GmbH, Registered seat: Werner von Siemens Ring 14, D-85630 Grasbrunn, Germany > > Commercial register: Amtsgericht Muenchen/Munich, HRB 153243, > > Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross > From tkajinam at redhat.com Wed May 18 04:12:04 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 18 May 2022 13:12:04 +0900 Subject: [vmware-nsx] Missing releases and project status Message-ID: Hello, I recently noticed that the vmware-nsx repo doesn't have the stable/wallaby branch and the stable/yoga branch. Also, no release has been created since victoria release. There is a bug fix recently merged directly to stable/xena, and this was backported to ussuri and train with victoria skipped. https://review.opendev.org/c/x/vmware-nsx/+/836034 I'm quite confused with this because it looks like master and victoria are no longer maintained. May I ask for some clarification about the current status of the project, especially regarding the following points. - Is there any plan to create stable/wallaby and stable/yoga ? - Is there any plan to create a release for stable/wallaby and later ? - May I know the maintenance status of each branch ? - What is the future plan of that repo? Will you have stable/zed release ? I'm asking this because the plugin is supported by puppet-neutron but we are thinking of deprecating/removing the implementation if the plugin will be unmaintained. I'd appreciate your input on this. Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed May 18 06:24:44 2022 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 18 May 2022 08:24:44 +0200 Subject: [ironic] [requirements] broken libvirt-python on yoga on stream 9 In-Reply-To: <0548adf4-e274-4ad8-a1cf-c531ca688e39@www.fastmail.com> References: <0548adf4-e274-4ad8-a1cf-c531ca688e39@www.fastmail.com> Message-ID: On Tue, May 17, 2022 at 8:24 PM Clark Boylan wrote: > On Tue, May 17, 2022, at 10:56 AM, Dmitry Tantsur wrote: > > Hi all, > > > > It is happening again, the Bifrost CI is broken because libvirt-python > > cannot be built from source, this time on Stream 9. > > > > Missing type converters: > > int *:1 > > ERROR: failed virDomainQemuMonitorCommandWithFiles > > > > I created a gist with a reproducer: > > https://gist.github.com/dtantsur/835303c6a68ed77157016f5955183115. > > > > I cannot count how many times we had to deal with similar errors. I > > assume, libvirt-python has to be newer than the installed Python (8.2.0 > > in CS9, 8.0.0 in constraints). Should we stop constraining > > libvirt-python? Any other ideas? > > Your libvirt-python version needs to be at least as new as your libvirt > version. New libvirt-python versions are expected to continue to work with > old libvirt versions as well (though it may need to be built against the > specific libvirt?). In this case it looks like CentOS Stream 9 libvirt is > newer than what was in constraints. > > Generally constraints should update quickly. Looking at master > upper-constraints libvirt-python was updated to 8.3.0 on May 4 and > according to pypi the package updated on May 2 which seems reasonable. The > problem here appears to be that you want this to work on a stable branch > (yoga) and stable branches do not update constraints. > > My suggestion is that we use stable platforms for testing stable releases. 
> The CentOS Stream releases seem to get updates that break stable software > expectations far more than our other platforms. When working against master > and trying to chase the latest and greatest this is probably a feature, but > is problematic when you want rate of change to fall to near zero. I would > consider not using Stream on stable branches if these problems persist. > What would you suggest to use to test Red Hat systems then? Dmitry > > > > > Dmitry > > > > -- > > Red Hat GmbH , Registered seat: > > Werner von Siemens Ring 14, D-85630 Grasbrunn, Germany > > Commercial register: Amtsgericht Muenchen/Munich, HRB 153243, > > Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, > > Amy Ross > > -- Red Hat GmbH , Registered seat: Werner von Siemens Ring 14, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed May 18 07:46:49 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 18 May 2022 09:46:49 +0200 Subject: [ironic] [requirements] broken libvirt-python on yoga on stream 9 In-Reply-To: References: <0548adf4-e274-4ad8-a1cf-c531ca688e39@www.fastmail.com> Message-ID: On Wed, 18 May 2022 at 08:27, Dmitry Tantsur wrote: > > > > On Tue, May 17, 2022 at 8:24 PM Clark Boylan wrote: >> >> On Tue, May 17, 2022, at 10:56 AM, Dmitry Tantsur wrote: >> > Hi all, >> > >> > It is happening again, the Bifrost CI is broken because libvirt-python >> > cannot be built from source, this time on Stream 9. >> > >> > Missing type converters: >> > int *:1 >> > ERROR: failed virDomainQemuMonitorCommandWithFiles >> > >> > I created a gist with a reproducer: >> > https://gist.github.com/dtantsur/835303c6a68ed77157016f5955183115. >> > >> > I cannot count how many times we had to deal with similar errors. I >> > assume, libvirt-python has to be newer than the installed Python (8.2.0 >> > in CS9, 8.0.0 in constraints). Should we stop constraining >> > libvirt-python? Any other ideas? >> >> Your libvirt-python version needs to be at least as new as your libvirt version. New libvirt-python versions are expected to continue to work with old libvirt versions as well (though it may need to be built against the specific libvirt?). In this case it looks like CentOS Stream 9 libvirt is newer than what was in constraints. >> >> Generally constraints should update quickly. Looking at master upper-constraints libvirt-python was updated to 8.3.0 on May 4 and according to pypi the package updated on May 2 which seems reasonable. The problem here appears to be that you want this to work on a stable branch (yoga) and stable branches do not update constraints. >> >> My suggestion is that we use stable platforms for testing stable releases. The CentOS Stream releases seem to get updates that break stable software expectations far more than our other platforms. When working against master and trying to chase the latest and greatest this is probably a feature, but is problematic when you want rate of change to fall to near zero. I would consider not using Stream on stable branches if these problems persist. > > > What would you suggest to use to test Red Hat systems then? Eh, come on. Stream breaks but so will the next minor RHEL release. The real issue is that libvirt python relies on the correct C bindings too heavily. 
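In practice that means the binding has to line up with the libvirt that is actually installed, one way or another. A rough sketch of the two usual approaches (package names assume CentOS Stream 9; none of this is taken from the Bifrost code):

# Option 1: take the binding from the distro, already built against its libvirt
sudo dnf install -y python3-libvirt

# Option 2: if it has to come from pip, skip the constrained version and pin
# against the installed libvirt instead (building the sdist needs libvirt-devel
# and a C toolchain); per the discussion above, the binding only has to be at
# least as new as the installed libvirt
LIBVIRT_VERSION=$(rpm -q --qf '%{VERSION}' libvirt-libs)
pip install "libvirt-python>=${LIBVIRT_VERSION}"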
Long-term I would love to see pure Python bindings but short-term I suggest Ironic follows the steps of Nova, Masakari, Kolla and DevStack. -yoctozepto From lokendrarathour at gmail.com Wed May 18 07:57:13 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Wed, 18 May 2022 13:27:13 +0530 Subject: [ Triple0 - NETWORKS ] Openstack Network getting created from wrong network segment range Message-ID: Hi Team, We have deployed tripl0 Train using two tenant networks by configuring the parameters for additional networks. environments.yaml: *NeutronBridgeMappings: datacentre:br-tenant,datacentre2:br-extratenant* in environments/network-environment.yaml * NeutronNetworkVLANRanges: 'datacentre:1:500,datacentre2:501:1000'* and have also isolated and separated the physical nic configuration for ovs in network/config/bond-with-vlans/compute.yaml with this setting, overcloud is deployed and the network is getting created, Checking further, in Controller configs(/etc/neutron/plugins/ml2/) we see the changes as below, which looks fine: *[ml2_type_vlan]* *network_vlan_ranges=datacentre:1:500,datacentre2:501:1000* *but it is also allowing* the network to be created from the wrong network segment range. For example, while creating a network: - openstack network create --share --provider-network-type vlan --provider-physical-network datacentre2 --provider-segment 420 datacenter_2_420 -provider-physical-network- *"datacentre2"* is having VLAN range from 501-1000 and if I am passing the provider segment as 420(which is out of range) then *also a network is getting created.* This does not look fine. please help share any inputs on the same. -- ~ Lokendra skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: From sharath.madhava at gmail.com Wed May 18 08:29:50 2022 From: sharath.madhava at gmail.com (Sharath Ck) Date: Wed, 18 May 2022 13:59:50 +0530 Subject: [keystone][swift] audit logs Message-ID: Hi, I am currently trying to add keystone audit middleware in Swift. Middleware is managed in swift proxy server, hence I have added the audit filter in proxy server conf and have mentioned audit_middleware_notifications driver as log in swift.conf . I can see REST API call flow reaching audit middleware and constructing the audit event with minimal data as Swift is not loading service catalog information. But the audit event is not getting notified as per audit_middleware_notifications. I tried adding oslo_messaging_notifications with the driver as log, but audit events are not getting notified. Below are the changes in swift_proxy_server container, proxy-server.conf [pipeline:main] pipeline = catch_errors gatekeeper healthcheck cache container_sync bulk tempurl ratelimit formpost authtoken keystoneauth audit container_quotas account_quotas slo dlo keymaster encryption proxy-server [filter:audit] paste.filter_factory = keystonemiddleware.audit:filter_factory audit_map_file = /etc/swift/api_audit_map.conf swift.conf [oslo_messaging_notifications] driver = log [audit_middleware_notifications] driver = log Kindly confirm whether the configuration changes are enough or need more changes. Regards, Sharath -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed May 18 11:00:00 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 18 May 2022 08:00:00 -0300 Subject: [cinder] Bug deputy report for week of 05-18-2022 Message-ID: This is a bug report from 05-11-2022 to 05-18-2022. 
Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- No meeting today! Medium - https://bugs.launchpad.net/cinder/+bug/1973228 "Volume can be deleted during downloading." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1973625 "Cinder doesn't work for IBM 3700." Unassigned. Low - https://bugs.launchpad.net/cinder/+bug/1973787 "Huawei driver references constants that don't exist." Unassigned. Cheers, -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From manchandavishal143 at gmail.com Wed May 18 11:09:20 2022 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Wed, 18 May 2022 16:39:20 +0530 Subject: [horizon] Cancelling Today's Weekly meeting Message-ID: Hello team, Since there is no topic added for discussion in the agenda etherpad[1] for today's meeting. So let's cancel it. If anyone like to discuss anything, ping me on IRC channel(#openstack-horizon). See you next week! Thanks & regards, Vishal Manchanda(irc:vishalmanchanda) [1] https://etherpad.opendev.org/p/horizon-release-priorities#L38 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Wed May 18 07:50:29 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 18 May 2022 13:20:29 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: Hi Takashi, Thanks for your reply. We had initially started with a Wallaby release only, but we faced some issues even without PTP which did not get resolved and ultimately we had to come back to Train. Can you look into the issue and suggest some pointer, so that we can come back on wallaby *heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron *>* is not available for resource type *>* OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network *>* endpoint is not in service catalog.* The issue was also posted on the Openstack Discuss Forum. http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028401.html Regards Anirudh Gupta On Mon, May 16, 2022 at 8:17 PM Takashi Kajinami wrote: > If my observations are correct, there are two bugs causing the error and > there is no feasible workaround. > > 1. puppet-tripleo is not parsing output of the os-net-config command > properly. > 2. os-net-config it is not formatting the dict value properly and the > output can't be parsed by json decoder. > > I've reported the 2nd bug here. > https://bugs.launchpad.net/os-net-config/+bug/1973566 > > Once the 2nd bug is fixed in master and stable branches back to train, > then we'd be able to consider > how we can implement a proper parsing but I can't guarantee any timeline > atm. > > The puppet implementation was replaced by ansible and is no longer used in > recent versions like > wallaby and the above problems do not exist. It might be an option to try > ptp deployment in Wallaby. > > On Mon, May 16, 2022 at 9:18 PM Anirudh Gupta wrote: > >> Hi Takashi >> >> Could you infer anything from the output and the issue being faced? >> >> Is it because the output has keys and values as string? Any workaround to >> resolve this issue? >> >> Regards >> Anirudh >> >> On Fri, 13 May, 2022, 11:15 Anirudh Gupta, wrote: >> >>> Hi Takashi, >>> >>> Thanks for your reply. 
>>> >>> I tried executing the suggested command and below is the output >>> >>> [heat-admin at overcloud-controller-1 ~]$ /bin/os-net-config -i eno1 >>> {'eno1': 'eno1'} >>> >>> Regards >>> Anirudh Gupta >>> >>> On Thu, May 12, 2022 at 11:03 AM Takashi Kajinami >>> wrote: >>> >>>> The puppy implementation executes the following command to get the >>>> interface information. >>>> /bin/os-net-config -i >>>> I'd recommend you check the command output in *all overcloud nodes *. >>>> >>>> If you need to use different interfaces for different roles then you >>>> need to define the parameter >>>> as role specific one, defined under Parameters. >>>> >>>> On Wed, May 11, 2022 at 4:26 PM Anirudh Gupta >>>> wrote: >>>> >>>>> Hi Takashi, >>>>> >>>>> Thanks for clarifying my issues regarding the support of PTP in >>>>> Wallaby Release. >>>>> >>>>> In Train, I have also tried passing the exact interface name and took >>>>> 2 runs with and without quotes like below: >>>>> >>>>> >>>>> *PtpInterface: eno1* >>>>> >>>>> *PtpInterface: 'eno1'* >>>>> >>>>> But in both the cases, the issue observed was similar >>>>> >>>>> 2022-05-11 10:33:20.189107 | 5254001f-9952-934d-e901-0000000030be | >>>>> FATAL | Wait for puppet host configuration to finish | >>>>> overcloud-controller-2 | error={"ansible_job_id": "526310775819.36650", >>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>> --detailed-exitcodes --summarize --color=false >>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>> puppet-user", "delta": "0:00:04.289208", "end": "2022-05-11 >>>>> 10:33:08.195052", "failed_when_result": true, "finished": 1, "msg": >>>>> "non-zero return code", "rc": 1, "start": "2022-05-11 10:33:03.905844", >>>>> "stderr": "<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' >>>>> is deprecated in favor of using 'lookup'. See >>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>> (file & line not available)\n<13>May 11 10:33:03 puppet-user: Warning: >>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>> should be converted to version 5\n<13>May 11 10:33:03 puppet-user: >>>>> (file: /etc/puppet/hiera.yaml)\n<13>May 11 10:33:03 puppet-user: Warning: >>>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>>> available)\n<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: module >>>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>>> resolved. Use 'puppet module list --tree' to see information about >>>>> modules\\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: >>>>> Error: Evaluation Error: A substring operation does not accept a String as >>>>> a character index. Expected an Integer (file: >>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>> column: 46) on node overcloud-controller-2.localdomain", "stderr_lines": >>>>> ["<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' is >>>>> deprecated in favor of using 'lookup'. See >>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>> (file & line not available)", "<13>May 11 10:33:03 puppet-user: Warning: >>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It >>>>> should be converted to version 5", "<13>May 11 10:33:03 puppet-user: >>>>> (file: /etc/puppet/hiera.yaml)", "<13>May 11 10:33:03 puppet-user: >>>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>>> available)", "<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: >>>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>>> are resolved. Use 'puppet module list --tree' to see information about >>>>> modules\\n (file & line not available)", "<13>May 11 10:33:03 >>>>> puppet-user: *Error: Evaluation Error: A substring operation does not >>>>> accept a String as a character index. Expected an Integer (file: >>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>> column: 46) on node overcloud-controller-2.localdomain"], "stdout": "", >>>>> "stdout_lines": []}* >>>>> 2022-05-11 10:33:20.190263 | 5254001f-9952-934d-e901-0000000030be | >>>>> TIMING | Wait for puppet host configuration to finish | >>>>> overcloud-controller-2 | 0:12:41.268734 | 7.01s >>>>> >>>>> I'll be highly grateful if you could further extend your support to >>>>> resolve the issue. >>>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> On Tue, May 10, 2022 at 9:15 PM Takashi Kajinami >>>>> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Wed, May 11, 2022 at 12:19 AM Anirudh Gupta >>>>>> wrote: >>>>>> >>>>>>> Hi Takashi, >>>>>>> >>>>>>> Thanks for your suggestion. >>>>>>> >>>>>>> I downloaded the updated Train Images and they had the ptp.pp file >>>>>>> available on the overcloud and undercloud machines >>>>>>> >>>>>>> [root at overcloud-controller-1 /]# find . -name "ptp.pp" >>>>>>> >>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>> >>>>>>> With this, I re-executed the deployment and got the below error on >>>>>>> the machines >>>>>>> >>>>>>> 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce | >>>>>>> FATAL | Wait for puppet host configuration to finish | >>>>>>> overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", >>>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>> puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 >>>>>>> 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": >>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", >>>>>>> "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' >>>>>>> is deprecated in favor of using 'lookup'. See >>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>> (file & line not available)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>> should be converted to version 5\n<13>May 10 20:05:37 puppet-user: >>>>>>> (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>> available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module >>>>>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>>>>> resolved. 
Use 'puppet module list --tree' to see information about >>>>>>> modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: >>>>>>> Error: Evaluation Error: A substring operation does not accept a String as >>>>>>> a character index. Expected an Integer (file: >>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>> column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": >>>>>>> ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is >>>>>>> deprecated in favor of using 'lookup'. See >>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>> (file & line not available)", "<13>May 10 20:05:37 puppet-user: Warning: >>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>> should be converted to version 5", "<13>May 10 20:05:37 puppet-user: >>>>>>> (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: >>>>>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>> available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: >>>>>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>>>>> are resolved. Use 'puppet module list --tree' to see information about >>>>>>> modules\\n (file & line not available)", "<13>May 10 20:05:37 >>>>>>> puppet-user: *Error: Evaluation Error: A substring operation does >>>>>>> not accept a String as a character index. Expected an Integer (file: >>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>> column: 46) *on node overcloud-controller-1.localdomain"], >>>>>>> "stdout": "", "stdout_lines": []} >>>>>>> >>>>>>> The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, >>>>>>> line: 41, column: 46 *had the following code: >>>>>>> 34 class tripleo::profile::base::time::ptp ( >>>>>>> 35 $ptp4l_interface = 'eth0', >>>>>>> 36 $ptp4l_conf_slaveonly = 1, >>>>>>> 37 $ptp4l_conf_network_transport = 'UDPv4', >>>>>>> 38 ) { >>>>>>> 39 >>>>>>> 40 $interface_mapping = generate('/bin/os-net-config', '-i', >>>>>>> $ptp4l_interface) >>>>>>> 41 *$ptp4l_interface_name = >>>>>>> $interface_mapping[$ptp4l_interface]* >>>>>>> >>>>>>> >>>>>>> *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file >>>>>>> is as below: >>>>>>> >>>>>>> resource_registry: >>>>>>> # FIXME(bogdando): switch it, once it is containerized >>>>>>> OS::TripleO::Services::Ptp: >>>>>>> ../../deployment/time/ptp-baremetal-puppet.yaml >>>>>>> OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp >>>>>>> >>>>>>> parameter_defaults: >>>>>>> # PTP hardware interface name >>>>>>> *PtpInterface: 'nic1'* >>>>>>> >>>>>>> # Configure PTP clock in slave mode >>>>>>> PtpSlaveMode: 1 >>>>>>> >>>>>>> # Configure PTP message transport protocol >>>>>>> PtpMessageTransport: 'UDPv4' >>>>>>> >>>>>>> I have also tried modifying the entry as below: >>>>>>> *PtpInterface: 'nic1' #*(i.e. without quotes), but the error >>>>>>> remains the same. >>>>>>> >>>>>>> Queries: >>>>>>> >>>>>>> 1. Any pointers to resolve this? >>>>>>> >>>>>>> I'm not familiar with ptp but you'd need to use the actual interface >>>>>> name >>>>>> if you are not using the alias name. >>>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> 1. You were mentioning something about the support of PTP not >>>>>>> there in the wallaby release. Can you please confirm? >>>>>>> >>>>>>> IIUC PTP is still supported even in master. 
What we removed is the >>>>>> implementation using Puppet >>>>>> which was replaced by ansible. >>>>>> >>>>>> The warning regarding OS::TripleO::Services::Ptp was added when we >>>>>> decided to merge >>>>>> all time sync services to the single service resource which is >>>>>> OS::TripleO::Services::Timesync[1]. >>>>>> It's related to how resources are defined in Heat and doesn't affect >>>>>> configuration support itself. >>>>>> >>>>>> [1] >>>>>> https://review.opendev.org/c/openstack/tripleo-heat-templates/+/586679 >>>>>> >>>>>> >>>>>> >>>>>>> It would be a great help if you could extend a little more support >>>>>>> to resolve the issues. >>>>>>> >>>>>>> Regards >>>>>>> Anirudh Gupta >>>>>>> >>>>>>> >>>>>>> On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta >>>>>>> wrote: >>>>>>> >>>>>>>> I'll check that well. >>>>>>>> By the way, I downloaded the images from the below link >>>>>>>> >>>>>>>> >>>>>>>> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >>>>>>>> >>>>>>>> They seem to be updated yesterday, I'll download and try the >>>>>>>> deployment with the latest images. >>>>>>>> >>>>>>>> Also are you pointing that the support for PTP would not be there >>>>>>>> in Wallaby Release? >>>>>>>> >>>>>>>> Regards >>>>>>>> Anirudh Gupta >>>>>>>> >>>>>>>> On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami < >>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Hi Takashi >>>>>>>>>> >>>>>>>>>> I have checked this in undercloud only. >>>>>>>>>> I don't find any such file in overcloud. Could this be a concern? >>>>>>>>>> >>>>>>>>> >>>>>>>>> The manifest should exist in overcloud nodes and the missing file >>>>>>>>> is the exact cause >>>>>>>>> of that puppet failure during deployment. >>>>>>>>> >>>>>>>>> Please check your overcloud images used to install overcloud nodes >>>>>>>>> and ensure that >>>>>>>>> you're using the right one. You might be using the image for a >>>>>>>>> different release. >>>>>>>>> We removed the manifest file during the Wallaby cycle. >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Regards >>>>>>>>>> Anirudh Gupta >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami < >>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami < >>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta < >>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi Takashi, >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks for your reply. >>>>>>>>>>>>> >>>>>>>>>>>>> I have checked on my machine and the file "ptp.pp" do exist at >>>>>>>>>>>>> path " >>>>>>>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>>>>>>> " >>>>>>>>>>>>> >>>>>>>>>>>> Did you check this in your undercloud or overcloud ? >>>>>>>>>>>> During the deployment all configuration files are generated >>>>>>>>>>>> using puppet modules >>>>>>>>>>>> installed in overcloud nodes, so you should check this in >>>>>>>>>>>> overcloud nodes. >>>>>>>>>>>> >>>>>>>>>>>> Also, the deprecation warning is not implemented >>>>>>>>>>>> >>>>>>>>>>> Ignore this incomplete line. I was looking for the >>>>>>>>>>> implementation which shows the warning >>>>>>>>>>> but I found it in tripleoclient and it looks reasonable >>>>>>>>>>> according to what we have in >>>>>>>>>>> environments/services/ptp.yaml . 
>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>>>>>>>>>> for controller and compute *before rendering the templates, >>>>>>>>>>>>> but still I am getting the same issue on all the 3 Controllers and 1 Compute >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> IIUC you don't need this because >>>>>>>>>>>> OS::TripleO::Services::Timesync becomes an alias >>>>>>>>>>>> to the Ptp service resource when you use the ptp environment >>>>>>>>>>>> file. >>>>>>>>>>>> >>>>>>>>>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>> >>>>>>>>>>>>> Can you suggest any workarounds or any pointers to look >>>>>>>>>>>>> further in order to resolve this issue? >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> Regards >>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami < >>>>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> I'm not familiar with PTP, but the error you pasted indicates >>>>>>>>>>>>>> that the required puppet manifest does not exist in your overcloud >>>>>>>>>>>>>> node/image. >>>>>>>>>>>>>> >>>>>>>>>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>>>>>>>>> >>>>>>>>>>>>>> This should not happen and the class should exist as long as >>>>>>>>>>>>>> you have puppet-tripleo from stable/train installed. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I'd recommend you check installed tripleo/puppet packages and >>>>>>>>>>>>>> ensure everything is in the consistent release. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta < >>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi All >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Any update on this? >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, < >>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> When I was executing the Overcloud deployment script, >>>>>>>>>>>>>>>> passing the PTP yaml, it gave the following option at the starting >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. 
>>>>>>>>>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>>>>>>>>> continue with deployment [y/N]* >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> even if passing Y, it starts executing for sometime and the >>>>>>>>>>>>>>>> gives the following error >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Can someone suggest some pointers in order to resolve this >>>>>>>>>>>>>>>> issue and move forward? >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta < >>>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>>>>>>>>>> successfully. >>>>>>>>>>>>>>>>> I need to enable PTP service while deploying the overcloud >>>>>>>>>>>>>>>>> for which I have included the service in my deployment >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>> /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>> * -e >>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>>>>>>>>> \* >>>>>>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> But it gives the following error >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> 2022-05-06 11:30:10.707655 | >>>>>>>>>>>>>>>>> 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host >>>>>>>>>>>>>>>>> configuration to finish | overcloud-controller-0 | error={"ansible_job_id": >>>>>>>>>>>>>>>>> "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o >>>>>>>>>>>>>>>>> pipefail; puppet apply >>>>>>>>>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>>>>>>>>> puppet-user", "delta": 
"0:00:04.440700", "end": "2022-05-06 >>>>>>>>>>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>>>>>>>>> 'lookup'. See >>>>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>>>>>>>>>>>> (file: >>>>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while >>>>>>>>>>>>>>>>> evaluating a Function Call, Could not find class >>>>>>>>>>>>>>>>> ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain >>>>>>>>>>>>>>>>> (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) >>>>>>>>>>>>>>>>> on node* overcloud-controller-0.localdomain"], "stdout": >>>>>>>>>>>>>>>>> "", "stdout_lines": []} >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Can someone please help in resolving this issue? 
>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandeepggn93 at gmail.com Wed May 18 12:45:08 2022 From: sandeepggn93 at gmail.com (Sandeep Yadav) Date: Wed, 18 May 2022 18:15:08 +0530 Subject: [TripleO] Gate blocker on Master - standalone-on-multinode-ipa failing Message-ID: Hello All, We have a check/gate blocker on Master branch - standalone-on-multinode-ipa[0] job is failing with bug[1]. Please hold rechecks on master till [1] resolves. [0] https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-9-standalone-on-multinode-ipa [1] https://bugs.launchpad.net/bugs/1973863 Thank you Sandeep(on behalf of tripleo-ci) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Wed May 18 13:27:31 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 18 May 2022 15:27:31 +0200 Subject: [ Triple0 - NETWORKS ] Openstack Network getting created from wrong network segment range In-Reply-To: References: Message-ID: Hi Lokendra: The only place I found a documented reference is [1]. If the "--provider-segment" is given to the network creation command, that will override the VLAN ranges defined per physical network in "ml2_type_vlan:network_vlan_ranges". If no pre-allocated segment is found (this process is done during the Neutron server initialization), a new one is created with the parameters the user provided [2]. Regards. [1] https://www.oreilly.com/library/view/learning-openstack-networking/9781788392495/e0069ba3-9b46-41d3-bf4f-7520779cb38d.xhtml [2] https://github.com/openstack/neutron/blob/db83514d052ceede559894a1439c2b45eeebe933/neutron/plugins/ml2/drivers/helpers.py#L107-L113 On Wed, May 18, 2022 at 10:22 AM Lokendra Rathour wrote: > Hi Team, > We have deployed tripl0 Train using two tenant networks by configuring the > parameters for additional networks. > environments.yaml: > *NeutronBridgeMappings: datacentre:br-tenant,datacentre2:br-extratenant* > > in environments/network-environment.yaml > > > * NeutronNetworkVLANRanges: 'datacentre:1:500,datacentre2:501:1000'* > > and have also isolated and separated the physical nic configuration for > ovs in network/config/bond-with-vlans/compute.yaml > > with this setting, overcloud is deployed and the network is getting > created, Checking further, in Controller > configs(/etc/neutron/plugins/ml2/) we see the changes as below, which > looks fine: > > > *[ml2_type_vlan]* > *network_vlan_ranges=datacentre:1:500,datacentre2:501:1000* > > > *but it is also allowing* the network to be created from the wrong > network segment range. > For example, while creating a network: > > - openstack network create --share --provider-network-type vlan > --provider-physical-network datacentre2 --provider-segment 420 > datacenter_2_420 > > -provider-physical-network- *"datacentre2"* is having VLAN range from > 501-1000 and if I am passing the provider segment as 420(which is out of > range) then *also a network is getting created.* > > This does not look fine. please help share any inputs on the same. > -- > ~ Lokendra > skype: lokendrarathour > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gthiemonge at redhat.com Wed May 18 13:44:58 2022 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Wed, 18 May 2022 15:44:58 +0200 Subject: octavia_rsyslog service fails/keeps restarting | Openstack Wallaby | Tripleo In-Reply-To: References: Message-ID: Hi, There is an error with the name of the octavia_rsyslog image; Brent Eagles proposed a fix for this issue: https://review.opendev.org/c/openstack/tripleo-heat-templates/+/842351 On Sun, May 15, 2022 at 6:39 PM Swogat Pradhan wrote: > Hi, > I am currently trying to deploy octavia in openstack wallaby, but the > octavia_rsyslog service is malfunctioning it seems. And I am checking the > logs but am not sure how to fix the issue. > > Attached log for reference. > > With regards, > Swogat Pradhan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From milan.paul010 at gmail.com Wed May 18 13:28:21 2022 From: milan.paul010 at gmail.com (Milan Paul) Date: Wed, 18 May 2022 18:58:21 +0530 Subject: Can external API endpoints be registered in Openstack Keystone service catalogue/registry? Message-ID: Hi All, We are developing few wrapper APIs (in SpringBoot and Python) on top of Openstack components like Nova, Neutron and glance. As Keystone works as a service registry for internal Openstack APIs, is it possible to register our custom wrapper APIs in keystone registry? Basically, we want to use keystone as an API registry without having another API gateway. Any help would be highly appreciated. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed May 18 13:59:33 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 18 May 2022 14:59:33 +0100 Subject: Can external API endpoints be registered in Openstack Keystone service catalogue/registry? In-Reply-To: References: Message-ID: <650b4abcc0477d4686eb62d10b898d96a3ceb013.camel@redhat.com> On Wed, 2022-05-18 at 18:58 +0530, Milan Paul wrote: > Hi All, > > We are developing few wrapper APIs (in SpringBoot and Python) on top of > Openstack components like Nova, Neutron and glance. > > As Keystone works as a service registry for internal Openstack APIs, is it > possible to register our custom wrapper APIs in keystone registry? > Basically, we want to use keystone as an API registry without having > another API gateway. I have not done that personally, but I have heard of others doing it in the past. When we add optional API services to OpenStack in the big tent, they do not need to modify keystone to add their endpoints, so you should be able to register an external API endpoint and use it as your catalog, if I understand how this works correctly. It would be good to get someone who works on keystone to respond, but you should be able to do this. > > Any help would be highly appreciated. > > Regards. From elod.illes at est.tech Wed May 18 14:58:57 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Wed, 18 May 2022 16:58:57 +0200 Subject: [infra][stable] reopened old, EOL'd branches In-Reply-To: <20220516150701.fhj54q5m3po7gvrc@yuggoth.org> References: <8a5cb2e6-0434-3b50-fad4-a03164ff10c4@est.tech> <20220516150701.fhj54q5m3po7gvrc@yuggoth.org> Message-ID: <8dbe0a33-5182-1739-ec04-7d9209246ae5@est.tech> Thanks Jeremy, then I'll set up the list and start doing the steps I described. Cheers, Előd On 2022. 05. 16. 17:07, Jeremy Stanley wrote: > On 2022-05-16 16:49:31 +0200 (+0200), Előd Illés wrote: > [...]
>> where there exist *-eol tag, the branch was probably deleted but then >> reopened and some generated patch created a new branch from the same >> branching point where the original stable/* branch was branched from. >> ??? (some example generated patches: >> ?????? * OpenDev Migration Patch >> ?????? * Replace openstack.org git:// URLs with https:// >> ???? see specific example [1]) > [...] > > Note that Gerrit doesn't allow creation of changes for nonexistent > branches, so the branch had to be recreated somehow independent of > those changes being pushed. > >> For this I would >> - push release patches where that is missing from openstack/release (this >> will not do the work, as mentioned above, but needs some manual tagging / >> branch deletion afterwards) - this is only to have a better view of the tags >> and branches from openstack/releases repository >> - tag branches with *-eol where that is missing >> - delete branches that have already *-eol tag, even if that means we lose >> some patches (like the above mentioned generated patches) >> >> Is this acceptable? What do you think? (Or should the two latter be done by >> Infra team via a list that I could collect?) > [...] > > What you propose sounds reasonable to me. If the branch already has > a corresponding eol tag, I agree that (re)deleting the branch is the > thing to do. Any changes which merged to the branch after the eol > tag was created won't be "lost" since they still have named refs in > the Git repository, they just won't appear in the history of any > branch or tag. > > I have no problem with you doing batch branch deletion for this > purpose, same as normal EOL process. I don't see any reason the > Gerrit sysadmins would need to handle it. From gmann at ghanshyammann.com Thu May 19 04:23:36 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 18 May 2022 23:23:36 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 19, 2022 at 1500 UTC In-Reply-To: <180cf44f986.bc798a6f80942.2327562905416045598@ghanshyammann.com> References: <180cf44f986.bc798a6f80942.2327562905416045598@ghanshyammann.com> Message-ID: <180da8e8ad5.12994d58d238155.2899789998182864598@ghanshyammann.com> Hello Everyone, Below is the agenda for Tomorrow's TC IRC meeting schedule at 1500 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check ** Fixing Zuul config error in OpenStack *** https://etherpad.opendev.org/p/zuul-config-error-openstack * Zed cycle tracker checks ** https://etherpad.opendev.org/p/tc-zed-tracker * TC meeting with Board of Directors ** https://etherpad.opendev.org/p/tc-board-meeting-2022 * New ELK service dashboard: e-r service ** https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global ** https://review.opendev.org/c/openstack/governance-sigs/+/835838 * 'SLURP' as release cadence terminology ** https://review.opendev.org/c/openstack/governance/+/840354 * release notes discussion ** Use release number or name in development process/cycle ** https://review.opendev.org/c/openstack/governance/+/841800 * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 16 May 2022 18:47:25 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for May 19, 2022 at 1500 UTC. 
> > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, May 18, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From akekane at redhat.com Thu May 19 05:54:38 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 19 May 2022 11:24:38 +0530 Subject: [stable][glance] Proposing Cyril for stable branch core Message-ID: Hi All, Cyril Roelandt is helping a lot in the review process on master as well as stable branches. I would like to have him in our 'glance-stable-maint' team. Could you please help me to add 'cyril at redhat.com' to the stable core team. Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From sharath.madhava at gmail.com Thu May 19 06:10:34 2022 From: sharath.madhava at gmail.com (Sharath Ck) Date: Thu, 19 May 2022 11:40:34 +0530 Subject: [keystone][nova][placement] audit logs Message-ID: Hi, Placement does not have an api-paste.ini configuration file. Also Placement does not have a placeholder to mention or add middleware. Hence keystone audit middleware is not supported in Placement ? Need to collect the audit events from Nova itself. Can anyone please confirm this. Regards, Sharath -------------- next part -------------- An HTML attachment was scrubbed... URL: From nsitlani03 at gmail.com Thu May 19 06:19:53 2022 From: nsitlani03 at gmail.com (Namrata Sitlani) Date: Thu, 19 May 2022 11:49:53 +0530 Subject: [magnum] [xena] [IPv6] Message-ID: Hello All, We run release Xena, and we have deployed Kubernetes version 1.21.10 on Magnum with Fedora CoreOS version 34 and we are trying to have IPv6 support for that Kubernetes cluster, as Kubernetes 1.21 and later versions have dual-stack(IPv4/IPv6) support enabled by default. To achieve the same, we used a network with both IPv4 and IPv6 subnets and we got a successful creation of the cluster. The compute instances of the cluster get both IPv4 and IPv6 addresses, but the master and minion nodes get IPv4 as external IP, and also the container does not get an IPv6 address. Can someone please help us with the information, what changes are required to be made at the magnum client-side to have an IPv6 supported Kubernetes Cluster? Thanks, Namrata -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Thu May 19 08:57:50 2022 From: mkopec at redhat.com (Martin Kopec) Date: Thu, 19 May 2022 10:57:50 +0200 Subject: [qa][openeuler] QA wants to drop openeuler job Message-ID: Hi everyone, we're considering dropping the devstack-platform-openEuler-20.03-SP2 job. It's been failing for a long time and based on the fact that no one has complained about it, we assume that there isn't much interest in having the job around. If that's not the case, please, reach out and let's discuss updating the job. Thanks, -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA IM: kopecmartin -------------- next part -------------- An HTML attachment was scrubbed... URL: From sharath.madhava at gmail.com Thu May 19 09:24:55 2022 From: sharath.madhava at gmail.com (Sharath Ck) Date: Thu, 19 May 2022 14:54:55 +0530 Subject: [keystone][nova][placement] audit logs In-Reply-To: References: Message-ID: Hi Balazs, Thanks for the confirmation. Since audit is not part of the hardcoded list. I am concluding that audit cannot be enabled in Placement. 
Regards, Sharath On Thu, May 19, 2022 at 2:41 PM Balazs Gibizer wrote: > > > On Thu, May 19 2022 at 11:40:34 AM +05:30:00, Sharath Ck > wrote: > > Hi, > > > > Placement does not have an api-paste.ini configuration file. Also > > Placement does not have a placeholder to mention or add middleware. > > Hence keystone audit middleware is not supported in Placement ? Need > > to collect the audit events from Nova itself. Can anyone please > > confirm this. > > You are right there is no api-paste.ini support in Placement. The used > middlewares are hard coded in [1]. > > Cheers, > gibi > > [1] > > https://github.com/openstack/placement/blob/03d567928e31d3dc85d4dd3f5617785e7380b6b1/placement/deploy.py#L108-L117 > > > > > Regards, > > Sharath > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandeepggn93 at gmail.com Thu May 19 10:19:41 2022 From: sandeepggn93 at gmail.com (Sandeep Yadav) Date: Thu, 19 May 2022 15:49:41 +0530 Subject: [TripleO] Gate blocker on Master - standalone-on-multinode-ipa failing In-Reply-To: References: Message-ID: Hello All, We reverted the patch that introduced this issue[1] for a proper fix in a follow-up and also updated the files filter[2] on ipa job so that it gets triggered on similar keystone related t-h-t changes. Earlier affected job[3] is back to green. [1] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/842320 [2] https://review.opendev.org/c/openstack/tripleo-ci/+/842356 [3] https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-9-standalone-on-multinode-ipa On Wed, May 18, 2022 at 6:15 PM Sandeep Yadav wrote: > Hello All, > > We have a check/gate blocker on Master branch - > standalone-on-multinode-ipa[0] job > is failing with bug[1]. > > Please hold rechecks on master till [1] resolves. > > [0] > https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-9-standalone-on-multinode-ipa > [1] https://bugs.launchpad.net/bugs/1973863 > > Thank you > Sandeep(on behalf of tripleo-ci) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgibizer at redhat.com Thu May 19 09:11:54 2022 From: bgibizer at redhat.com (Balazs Gibizer) Date: Thu, 19 May 2022 11:11:54 +0200 Subject: [keystone][nova][placement] audit logs In-Reply-To: References: Message-ID: On Thu, May 19 2022 at 11:40:34 AM +05:30:00, Sharath Ck wrote: > Hi, > > Placement does not have an api-paste.ini configuration file. Also > Placement does not have a placeholder to mention or add middleware. > Hence keystone audit middleware is not supported in Placement ? Need > to collect the audit events from Nova itself. Can anyone please > confirm this. You are right there is no api-paste.ini support in Placement. The used middlewares are hard coded in [1]. Cheers, gibi [1] https://github.com/openstack/placement/blob/03d567928e31d3dc85d4dd3f5617785e7380b6b1/placement/deploy.py#L108-L117 > > Regards, > Sharath From anyrude10 at gmail.com Thu May 19 10:41:16 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Thu, 19 May 2022 16:11:16 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: Hi Takashi Did you get a chance to look into the wallaby issue which we are facing? Request you to please provide some pointers to move ahead? Regards Anirudh Gupta On Wed, May 18, 2022 at 1:20 PM Anirudh Gupta wrote: > Hi Takashi, > > Thanks for your reply. 
We had initially started with a Wallaby release > only, but we faced some issues even without PTP which did not get resolved > and ultimately we had to come back to Train. > > Can you look into the issue and suggest some pointer, so that we can come > back on wallaby > > *heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron > *>* is not available for resource type > *>* OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network > *>* endpoint is not in service catalog.* > > The issue was also posted on the Openstack Discuss Forum. > > http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028401.html > > Regards > Anirudh Gupta > > On Mon, May 16, 2022 at 8:17 PM Takashi Kajinami > wrote: > >> If my observations are correct, there are two bugs causing the error and >> there is no feasible workaround. >> >> 1. puppet-tripleo is not parsing output of the os-net-config command >> properly. >> 2. os-net-config it is not formatting the dict value properly and the >> output can't be parsed by json decoder. >> >> I've reported the 2nd bug here. >> https://bugs.launchpad.net/os-net-config/+bug/1973566 >> >> Once the 2nd bug is fixed in master and stable branches back to train, >> then we'd be able to consider >> how we can implement a proper parsing but I can't guarantee any timeline >> atm. >> >> The puppet implementation was replaced by ansible and is no longer used >> in recent versions like >> wallaby and the above problems do not exist. It might be an option to >> try ptp deployment in Wallaby. >> >> On Mon, May 16, 2022 at 9:18 PM Anirudh Gupta >> wrote: >> >>> Hi Takashi >>> >>> Could you infer anything from the output and the issue being faced? >>> >>> Is it because the output has keys and values as string? Any workaround >>> to resolve this issue? >>> >>> Regards >>> Anirudh >>> >>> On Fri, 13 May, 2022, 11:15 Anirudh Gupta, wrote: >>> >>>> Hi Takashi, >>>> >>>> Thanks for your reply. >>>> >>>> I tried executing the suggested command and below is the output >>>> >>>> [heat-admin at overcloud-controller-1 ~]$ /bin/os-net-config -i eno1 >>>> {'eno1': 'eno1'} >>>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> On Thu, May 12, 2022 at 11:03 AM Takashi Kajinami >>>> wrote: >>>> >>>>> The puppy implementation executes the following command to get the >>>>> interface information. >>>>> /bin/os-net-config -i >>>>> I'd recommend you check the command output in *all overcloud nodes *. >>>>> >>>>> If you need to use different interfaces for different roles then you >>>>> need to define the parameter >>>>> as role specific one, defined under Parameters. >>>>> >>>>> On Wed, May 11, 2022 at 4:26 PM Anirudh Gupta >>>>> wrote: >>>>> >>>>>> Hi Takashi, >>>>>> >>>>>> Thanks for clarifying my issues regarding the support of PTP in >>>>>> Wallaby Release. 
>>>>>> >>>>>> In Train, I have also tried passing the exact interface name and took >>>>>> 2 runs with and without quotes like below: >>>>>> >>>>>> >>>>>> *PtpInterface: eno1* >>>>>> >>>>>> *PtpInterface: 'eno1'* >>>>>> >>>>>> But in both the cases, the issue observed was similar >>>>>> >>>>>> 2022-05-11 10:33:20.189107 | 5254001f-9952-934d-e901-0000000030be | >>>>>> FATAL | Wait for puppet host configuration to finish | >>>>>> overcloud-controller-2 | error={"ansible_job_id": "526310775819.36650", >>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>> --detailed-exitcodes --summarize --color=false >>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>> puppet-user", "delta": "0:00:04.289208", "end": "2022-05-11 >>>>>> 10:33:08.195052", "failed_when_result": true, "finished": 1, "msg": >>>>>> "non-zero return code", "rc": 1, "start": "2022-05-11 10:33:03.905844", >>>>>> "stderr": "<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' >>>>>> is deprecated in favor of using 'lookup'. See >>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>> (file & line not available)\n<13>May 11 10:33:03 puppet-user: Warning: >>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>> should be converted to version 5\n<13>May 11 10:33:03 puppet-user: >>>>>> (file: /etc/puppet/hiera.yaml)\n<13>May 11 10:33:03 puppet-user: Warning: >>>>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>> available)\n<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: module >>>>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>>>> resolved. Use 'puppet module list --tree' to see information about >>>>>> modules\\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: >>>>>> Error: Evaluation Error: A substring operation does not accept a String as >>>>>> a character index. Expected an Integer (file: >>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>> column: 46) on node overcloud-controller-2.localdomain", "stderr_lines": >>>>>> ["<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' is >>>>>> deprecated in favor of using 'lookup'. See >>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>> (file & line not available)", "<13>May 11 10:33:03 puppet-user: Warning: >>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>> should be converted to version 5", "<13>May 11 10:33:03 puppet-user: >>>>>> (file: /etc/puppet/hiera.yaml)", "<13>May 11 10:33:03 puppet-user: >>>>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>> available)", "<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: >>>>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>>>> are resolved. Use 'puppet module list --tree' to see information about >>>>>> modules\\n (file & line not available)", "<13>May 11 10:33:03 >>>>>> puppet-user: *Error: Evaluation Error: A substring operation does >>>>>> not accept a String as a character index. 
Expected an Integer (file: >>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>> column: 46) on node overcloud-controller-2.localdomain"], "stdout": "", >>>>>> "stdout_lines": []}* >>>>>> 2022-05-11 10:33:20.190263 | 5254001f-9952-934d-e901-0000000030be | >>>>>> TIMING | Wait for puppet host configuration to finish | >>>>>> overcloud-controller-2 | 0:12:41.268734 | 7.01s >>>>>> >>>>>> I'll be highly grateful if you could further extend your support to >>>>>> resolve the issue. >>>>>> >>>>>> Regards >>>>>> Anirudh Gupta >>>>>> >>>>>> On Tue, May 10, 2022 at 9:15 PM Takashi Kajinami >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, May 11, 2022 at 12:19 AM Anirudh Gupta >>>>>>> wrote: >>>>>>> >>>>>>>> Hi Takashi, >>>>>>>> >>>>>>>> Thanks for your suggestion. >>>>>>>> >>>>>>>> I downloaded the updated Train Images and they had the ptp.pp file >>>>>>>> available on the overcloud and undercloud machines >>>>>>>> >>>>>>>> [root at overcloud-controller-1 /]# find . -name "ptp.pp" >>>>>>>> >>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>> >>>>>>>> With this, I re-executed the deployment and got the below error on >>>>>>>> the machines >>>>>>>> >>>>>>>> 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce | >>>>>>>> FATAL | Wait for puppet host configuration to finish | >>>>>>>> overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", >>>>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>> puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 >>>>>>>> 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": >>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", >>>>>>>> "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' >>>>>>>> is deprecated in favor of using 'lookup'. See >>>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>>> (file & line not available)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>>> should be converted to version 5\n<13>May 10 20:05:37 puppet-user: >>>>>>>> (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>>>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>>> available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module >>>>>>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>>>>>> resolved. Use 'puppet module list --tree' to see information about >>>>>>>> modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: >>>>>>>> Error: Evaluation Error: A substring operation does not accept a String as >>>>>>>> a character index. Expected an Integer (file: >>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>>> column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": >>>>>>>> ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is >>>>>>>> deprecated in favor of using 'lookup'. 
See >>>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>>> (file & line not available)", "<13>May 10 20:05:37 puppet-user: Warning: >>>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>>> should be converted to version 5", "<13>May 10 20:05:37 puppet-user: >>>>>>>> (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: >>>>>>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>>> available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: >>>>>>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>>>>>> are resolved. Use 'puppet module list --tree' to see information about >>>>>>>> modules\\n (file & line not available)", "<13>May 10 20:05:37 >>>>>>>> puppet-user: *Error: Evaluation Error: A substring operation does >>>>>>>> not accept a String as a character index. Expected an Integer (file: >>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>>> column: 46) *on node overcloud-controller-1.localdomain"], >>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>> >>>>>>>> The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, >>>>>>>> line: 41, column: 46 *had the following code: >>>>>>>> 34 class tripleo::profile::base::time::ptp ( >>>>>>>> 35 $ptp4l_interface = 'eth0', >>>>>>>> 36 $ptp4l_conf_slaveonly = 1, >>>>>>>> 37 $ptp4l_conf_network_transport = 'UDPv4', >>>>>>>> 38 ) { >>>>>>>> 39 >>>>>>>> 40 $interface_mapping = generate('/bin/os-net-config', '-i', >>>>>>>> $ptp4l_interface) >>>>>>>> 41 *$ptp4l_interface_name = >>>>>>>> $interface_mapping[$ptp4l_interface]* >>>>>>>> >>>>>>>> >>>>>>>> *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file >>>>>>>> is as below: >>>>>>>> >>>>>>>> resource_registry: >>>>>>>> # FIXME(bogdando): switch it, once it is containerized >>>>>>>> OS::TripleO::Services::Ptp: >>>>>>>> ../../deployment/time/ptp-baremetal-puppet.yaml >>>>>>>> OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp >>>>>>>> >>>>>>>> parameter_defaults: >>>>>>>> # PTP hardware interface name >>>>>>>> *PtpInterface: 'nic1'* >>>>>>>> >>>>>>>> # Configure PTP clock in slave mode >>>>>>>> PtpSlaveMode: 1 >>>>>>>> >>>>>>>> # Configure PTP message transport protocol >>>>>>>> PtpMessageTransport: 'UDPv4' >>>>>>>> >>>>>>>> I have also tried modifying the entry as below: >>>>>>>> *PtpInterface: 'nic1' #*(i.e. without quotes), but the error >>>>>>>> remains the same. >>>>>>>> >>>>>>>> Queries: >>>>>>>> >>>>>>>> 1. Any pointers to resolve this? >>>>>>>> >>>>>>>> I'm not familiar with ptp but you'd need to use the actual >>>>>>> interface name >>>>>>> if you are not using the alias name. >>>>>>> >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> 1. You were mentioning something about the support of PTP not >>>>>>>> there in the wallaby release. Can you please confirm? >>>>>>>> >>>>>>>> IIUC PTP is still supported even in master. What we removed is the >>>>>>> implementation using Puppet >>>>>>> which was replaced by ansible. >>>>>>> >>>>>>> The warning regarding OS::TripleO::Services::Ptp was added when we >>>>>>> decided to merge >>>>>>> all time sync services to the single service resource which is >>>>>>> OS::TripleO::Services::Timesync[1]. >>>>>>> It's related to how resources are defined in Heat and doesn't affect >>>>>>> configuration support itself. 
>>>>>>> >>>>>>> [1] >>>>>>> https://review.opendev.org/c/openstack/tripleo-heat-templates/+/586679 >>>>>>> >>>>>>> >>>>>>> >>>>>>>> It would be a great help if you could extend a little more support >>>>>>>> to resolve the issues. >>>>>>>> >>>>>>>> Regards >>>>>>>> Anirudh Gupta >>>>>>>> >>>>>>>> >>>>>>>> On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta >>>>>>>> wrote: >>>>>>>> >>>>>>>>> I'll check that well. >>>>>>>>> By the way, I downloaded the images from the below link >>>>>>>>> >>>>>>>>> >>>>>>>>> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >>>>>>>>> >>>>>>>>> They seem to be updated yesterday, I'll download and try the >>>>>>>>> deployment with the latest images. >>>>>>>>> >>>>>>>>> Also are you pointing that the support for PTP would not be there >>>>>>>>> in Wallaby Release? >>>>>>>>> >>>>>>>>> Regards >>>>>>>>> Anirudh Gupta >>>>>>>>> >>>>>>>>> On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami < >>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta < >>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi Takashi >>>>>>>>>>> >>>>>>>>>>> I have checked this in undercloud only. >>>>>>>>>>> I don't find any such file in overcloud. Could this be a concern? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> The manifest should exist in overcloud nodes and the missing file >>>>>>>>>> is the exact cause >>>>>>>>>> of that puppet failure during deployment. >>>>>>>>>> >>>>>>>>>> Please check your overcloud images used to install overcloud >>>>>>>>>> nodes and ensure that >>>>>>>>>> you're using the right one. You might be using the image for a >>>>>>>>>> different release. >>>>>>>>>> We removed the manifest file during the Wallaby cycle. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Regards >>>>>>>>>>> Anirudh Gupta >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami < >>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami < >>>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta < >>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi Takashi, >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks for your reply. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I have checked on my machine and the file "ptp.pp" do exist >>>>>>>>>>>>>> at path " >>>>>>>>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>>>>>>>> " >>>>>>>>>>>>>> >>>>>>>>>>>>> Did you check this in your undercloud or overcloud ? >>>>>>>>>>>>> During the deployment all configuration files are generated >>>>>>>>>>>>> using puppet modules >>>>>>>>>>>>> installed in overcloud nodes, so you should check this in >>>>>>>>>>>>> overcloud nodes. >>>>>>>>>>>>> >>>>>>>>>>>>> Also, the deprecation warning is not implemented >>>>>>>>>>>>> >>>>>>>>>>>> Ignore this incomplete line. I was looking for the >>>>>>>>>>>> implementation which shows the warning >>>>>>>>>>>> but I found it in tripleoclient and it looks reasonable >>>>>>>>>>>> according to what we have in >>>>>>>>>>>> environments/services/ptp.yaml . 
>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>>>>>>>>>>> for controller and compute *before rendering the templates, >>>>>>>>>>>>>> but still I am getting the same issue on all the 3 Controllers and 1 Compute >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> IIUC you don't need this because >>>>>>>>>>>>> OS::TripleO::Services::Timesync becomes an alias >>>>>>>>>>>>> to the Ptp service resource when you use the ptp environment >>>>>>>>>>>>> file. >>>>>>>>>>>>> >>>>>>>>>>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>> >>>>>>>>>>>>>> Can you suggest any workarounds or any pointers to look >>>>>>>>>>>>>> further in order to resolve this issue? >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> Regards >>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami < >>>>>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> I'm not familiar with PTP, but the error you pasted >>>>>>>>>>>>>>> indicates that the required puppet manifest does not exist in your >>>>>>>>>>>>>>> overcloud node/image. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> This should not happen and the class should exist as long as >>>>>>>>>>>>>>> you have puppet-tripleo from stable/train installed. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I'd recommend you check installed tripleo/puppet packages >>>>>>>>>>>>>>> and ensure everything is in the consistent release. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta < >>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi All >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Any update on this? >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, < >>>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> When I was executing the Overcloud deployment script, >>>>>>>>>>>>>>>>> passing the PTP yaml, it gave the following option at the starting >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. 
>>>>>>>>>>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>>>>>>>>>> continue with deployment [y/N]* >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> even if passing Y, it starts executing for sometime and >>>>>>>>>>>>>>>>> the gives the following error >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a >>>>>>>>>>>>>>>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Can someone suggest some pointers in order to resolve this >>>>>>>>>>>>>>>>> issue and move forward? >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta < >>>>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>>>>>>>>>>> successfully. >>>>>>>>>>>>>>>>>> I need to enable PTP service while deploying the >>>>>>>>>>>>>>>>>> overcloud for which I have included the service in my deployment >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>> /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>>> * -e >>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>>>>>>>>>> \* >>>>>>>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> But it gives the following error >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> 2022-05-06 11:30:10.707655 | >>>>>>>>>>>>>>>>>> 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host >>>>>>>>>>>>>>>>>> configuration to finish | overcloud-controller-0 | error={"ansible_job_id": >>>>>>>>>>>>>>>>>> "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o >>>>>>>>>>>>>>>>>> pipefail; puppet apply >>>>>>>>>>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 
| logger -s -t >>>>>>>>>>>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>>>>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>>>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>>>>>>>>>> 'lookup'. See >>>>>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. >>>>>>>>>>>>>>>>>> (file: >>>>>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while >>>>>>>>>>>>>>>>>> evaluating a Function Call, Could not find class >>>>>>>>>>>>>>>>>> ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain >>>>>>>>>>>>>>>>>> (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) >>>>>>>>>>>>>>>>>> on node* overcloud-controller-0.localdomain"], "stdout": >>>>>>>>>>>>>>>>>> "", "stdout_lines": []} >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Can someone please help in resolving this issue? 
>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Thu May 19 14:47:58 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Thu, 19 May 2022 23:47:58 +0900 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: I myself has quite limited experience about networking v2 thing we have in wallaby but I roughly guess you need to include environments/deployed-server-deployed-neutron-ports.yaml to disable Neutron resource associated with that vip resource by default. On Thu, May 19, 2022 at 7:41 PM Anirudh Gupta wrote: > Hi Takashi > > Did you get a chance to look into the wallaby issue which we are facing? > > Request you to please provide some pointers to move ahead? > > Regards > Anirudh Gupta > > > > On Wed, May 18, 2022 at 1:20 PM Anirudh Gupta wrote: > >> Hi Takashi, >> >> Thanks for your reply. We had initially started with a Wallaby release >> only, but we faced some issues even without PTP which did not get resolved >> and ultimately we had to come back to Train. >> >> Can you look into the issue and suggest some pointer, so that we can come >> back on wallaby >> >> *heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron >> *>* is not available for resource type >> *>* OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network >> *>* endpoint is not in service catalog.* >> >> The issue was also posted on the Openstack Discuss Forum. >> >> >> http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028401.html >> >> Regards >> Anirudh Gupta >> >> On Mon, May 16, 2022 at 8:17 PM Takashi Kajinami >> wrote: >> >>> If my observations are correct, there are two bugs causing the error and >>> there is no feasible workaround. >>> >>> 1. puppet-tripleo is not parsing output of the os-net-config command >>> properly. >>> 2. os-net-config it is not formatting the dict value properly and the >>> output can't be parsed by json decoder. >>> >>> I've reported the 2nd bug here. >>> https://bugs.launchpad.net/os-net-config/+bug/1973566 >>> >>> Once the 2nd bug is fixed in master and stable branches back to train, >>> then we'd be able to consider >>> how we can implement a proper parsing but I can't guarantee any timeline >>> atm. >>> >>> The puppet implementation was replaced by ansible and is no longer used >>> in recent versions like >>> wallaby and the above problems do not exist. It might be an option to >>> try ptp deployment in Wallaby. >>> >>> On Mon, May 16, 2022 at 9:18 PM Anirudh Gupta >>> wrote: >>> >>>> Hi Takashi >>>> >>>> Could you infer anything from the output and the issue being faced? >>>> >>>> Is it because the output has keys and values as string? Any workaround >>>> to resolve this issue? >>>> >>>> Regards >>>> Anirudh >>>> >>>> On Fri, 13 May, 2022, 11:15 Anirudh Gupta, wrote: >>>> >>>>> Hi Takashi, >>>>> >>>>> Thanks for your reply. >>>>> >>>>> I tried executing the suggested command and below is the output >>>>> >>>>> [heat-admin at overcloud-controller-1 ~]$ /bin/os-net-config -i eno1 >>>>> {'eno1': 'eno1'} >>>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> On Thu, May 12, 2022 at 11:03 AM Takashi Kajinami >>>>> wrote: >>>>> >>>>>> The puppy implementation executes the following command to get the >>>>>> interface information. 
>>>>>> /bin/os-net-config -i >>>>>> I'd recommend you check the command output in *all overcloud nodes *. >>>>>> >>>>>> If you need to use different interfaces for different roles then you >>>>>> need to define the parameter >>>>>> as role specific one, defined under Parameters. >>>>>> >>>>>> On Wed, May 11, 2022 at 4:26 PM Anirudh Gupta >>>>>> wrote: >>>>>> >>>>>>> Hi Takashi, >>>>>>> >>>>>>> Thanks for clarifying my issues regarding the support of PTP in >>>>>>> Wallaby Release. >>>>>>> >>>>>>> In Train, I have also tried passing the exact interface name and >>>>>>> took 2 runs with and without quotes like below: >>>>>>> >>>>>>> >>>>>>> *PtpInterface: eno1* >>>>>>> >>>>>>> *PtpInterface: 'eno1'* >>>>>>> >>>>>>> But in both the cases, the issue observed was similar >>>>>>> >>>>>>> 2022-05-11 10:33:20.189107 | 5254001f-9952-934d-e901-0000000030be | >>>>>>> FATAL | Wait for puppet host configuration to finish | >>>>>>> overcloud-controller-2 | error={"ansible_job_id": "526310775819.36650", >>>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>> puppet-user", "delta": "0:00:04.289208", "end": "2022-05-11 >>>>>>> 10:33:08.195052", "failed_when_result": true, "finished": 1, "msg": >>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-11 10:33:03.905844", >>>>>>> "stderr": "<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' >>>>>>> is deprecated in favor of using 'lookup'. See >>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>> (file & line not available)\n<13>May 11 10:33:03 puppet-user: Warning: >>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>> should be converted to version 5\n<13>May 11 10:33:03 puppet-user: >>>>>>> (file: /etc/puppet/hiera.yaml)\n<13>May 11 10:33:03 puppet-user: Warning: >>>>>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>> available)\n<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: module >>>>>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>>>>> resolved. Use 'puppet module list --tree' to see information about >>>>>>> modules\\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: >>>>>>> Error: Evaluation Error: A substring operation does not accept a String as >>>>>>> a character index. Expected an Integer (file: >>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>> column: 46) on node overcloud-controller-2.localdomain", "stderr_lines": >>>>>>> ["<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' is >>>>>>> deprecated in favor of using 'lookup'. See >>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>> (file & line not available)", "<13>May 11 10:33:03 puppet-user: Warning: >>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>> should be converted to version 5", "<13>May 11 10:33:03 puppet-user: >>>>>>> (file: /etc/puppet/hiera.yaml)", "<13>May 11 10:33:03 puppet-user: >>>>>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>> available)", "<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: >>>>>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>>>>> are resolved. 
Use 'puppet module list --tree' to see information about >>>>>>> modules\\n (file & line not available)", "<13>May 11 10:33:03 >>>>>>> puppet-user: *Error: Evaluation Error: A substring operation does >>>>>>> not accept a String as a character index. Expected an Integer (file: >>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>> column: 46) on node overcloud-controller-2.localdomain"], "stdout": "", >>>>>>> "stdout_lines": []}* >>>>>>> 2022-05-11 10:33:20.190263 | 5254001f-9952-934d-e901-0000000030be | >>>>>>> TIMING | Wait for puppet host configuration to finish | >>>>>>> overcloud-controller-2 | 0:12:41.268734 | 7.01s >>>>>>> >>>>>>> I'll be highly grateful if you could further extend your support to >>>>>>> resolve the issue. >>>>>>> >>>>>>> Regards >>>>>>> Anirudh Gupta >>>>>>> >>>>>>> On Tue, May 10, 2022 at 9:15 PM Takashi Kajinami < >>>>>>> tkajinam at redhat.com> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Wed, May 11, 2022 at 12:19 AM Anirudh Gupta >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hi Takashi, >>>>>>>>> >>>>>>>>> Thanks for your suggestion. >>>>>>>>> >>>>>>>>> I downloaded the updated Train Images and they had the ptp.pp file >>>>>>>>> available on the overcloud and undercloud machines >>>>>>>>> >>>>>>>>> [root at overcloud-controller-1 /]# find . -name "ptp.pp" >>>>>>>>> >>>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>>> >>>>>>>>> With this, I re-executed the deployment and got the below error on >>>>>>>>> the machines >>>>>>>>> >>>>>>>>> 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce >>>>>>>>> | FATAL | Wait for puppet host configuration to finish | >>>>>>>>> overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", >>>>>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>> puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 >>>>>>>>> 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", >>>>>>>>> "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' >>>>>>>>> is deprecated in favor of using 'lookup'. See >>>>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>>>> (file & line not available)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>>>> should be converted to version 5\n<13>May 10 20:05:37 puppet-user: >>>>>>>>> (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>>>>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>>>> available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module >>>>>>>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>>>>>>> resolved. Use 'puppet module list --tree' to see information about >>>>>>>>> modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: >>>>>>>>> Error: Evaluation Error: A substring operation does not accept a String as >>>>>>>>> a character index. 
Expected an Integer (file: >>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>>>> column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": >>>>>>>>> ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is >>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>>>> (file & line not available)", "<13>May 10 20:05:37 puppet-user: Warning: >>>>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>>>> should be converted to version 5", "<13>May 10 20:05:37 puppet-user: >>>>>>>>> (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: >>>>>>>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>>>> available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: >>>>>>>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>>>>>>> are resolved. Use 'puppet module list --tree' to see information about >>>>>>>>> modules\\n (file & line not available)", "<13>May 10 20:05:37 >>>>>>>>> puppet-user: *Error: Evaluation Error: A substring operation does >>>>>>>>> not accept a String as a character index. Expected an Integer (file: >>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>>>> column: 46) *on node overcloud-controller-1.localdomain"], >>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>> >>>>>>>>> The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, >>>>>>>>> line: 41, column: 46 *had the following code: >>>>>>>>> 34 class tripleo::profile::base::time::ptp ( >>>>>>>>> 35 $ptp4l_interface = 'eth0', >>>>>>>>> 36 $ptp4l_conf_slaveonly = 1, >>>>>>>>> 37 $ptp4l_conf_network_transport = 'UDPv4', >>>>>>>>> 38 ) { >>>>>>>>> 39 >>>>>>>>> 40 $interface_mapping = generate('/bin/os-net-config', >>>>>>>>> '-i', $ptp4l_interface) >>>>>>>>> 41 *$ptp4l_interface_name = >>>>>>>>> $interface_mapping[$ptp4l_interface]* >>>>>>>>> >>>>>>>>> >>>>>>>>> *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file >>>>>>>>> is as below: >>>>>>>>> >>>>>>>>> resource_registry: >>>>>>>>> # FIXME(bogdando): switch it, once it is containerized >>>>>>>>> OS::TripleO::Services::Ptp: >>>>>>>>> ../../deployment/time/ptp-baremetal-puppet.yaml >>>>>>>>> OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp >>>>>>>>> >>>>>>>>> parameter_defaults: >>>>>>>>> # PTP hardware interface name >>>>>>>>> *PtpInterface: 'nic1'* >>>>>>>>> >>>>>>>>> # Configure PTP clock in slave mode >>>>>>>>> PtpSlaveMode: 1 >>>>>>>>> >>>>>>>>> # Configure PTP message transport protocol >>>>>>>>> PtpMessageTransport: 'UDPv4' >>>>>>>>> >>>>>>>>> I have also tried modifying the entry as below: >>>>>>>>> *PtpInterface: 'nic1' #*(i.e. without quotes), but the error >>>>>>>>> remains the same. >>>>>>>>> >>>>>>>>> Queries: >>>>>>>>> >>>>>>>>> 1. Any pointers to resolve this? >>>>>>>>> >>>>>>>>> I'm not familiar with ptp but you'd need to use the actual >>>>>>>> interface name >>>>>>>> if you are not using the alias name. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> 1. You were mentioning something about the support of PTP not >>>>>>>>> there in the wallaby release. Can you please confirm? >>>>>>>>> >>>>>>>>> IIUC PTP is still supported even in master. What we removed is the >>>>>>>> implementation using Puppet >>>>>>>> which was replaced by ansible. 
>>>>>>>> >>>>>>>> The warning regarding OS::TripleO::Services::Ptp was added when we >>>>>>>> decided to merge >>>>>>>> all time sync services to the single service resource which is >>>>>>>> OS::TripleO::Services::Timesync[1]. >>>>>>>> It's related to how resources are defined in Heat and doesn't >>>>>>>> affect configuration support itself. >>>>>>>> >>>>>>>> [1] >>>>>>>> https://review.opendev.org/c/openstack/tripleo-heat-templates/+/586679 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> It would be a great help if you could extend a little more support >>>>>>>>> to resolve the issues. >>>>>>>>> >>>>>>>>> Regards >>>>>>>>> Anirudh Gupta >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> I'll check that well. >>>>>>>>>> By the way, I downloaded the images from the below link >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >>>>>>>>>> >>>>>>>>>> They seem to be updated yesterday, I'll download and try the >>>>>>>>>> deployment with the latest images. >>>>>>>>>> >>>>>>>>>> Also are you pointing that the support for PTP would not be there >>>>>>>>>> in Wallaby Release? >>>>>>>>>> >>>>>>>>>> Regards >>>>>>>>>> Anirudh Gupta >>>>>>>>>> >>>>>>>>>> On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami < >>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta < >>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Takashi >>>>>>>>>>>> >>>>>>>>>>>> I have checked this in undercloud only. >>>>>>>>>>>> I don't find any such file in overcloud. Could this be a >>>>>>>>>>>> concern? >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> The manifest should exist in overcloud nodes and the missing >>>>>>>>>>> file is the exact cause >>>>>>>>>>> of that puppet failure during deployment. >>>>>>>>>>> >>>>>>>>>>> Please check your overcloud images used to install overcloud >>>>>>>>>>> nodes and ensure that >>>>>>>>>>> you're using the right one. You might be using the image for a >>>>>>>>>>> different release. >>>>>>>>>>> We removed the manifest file during the Wallaby cycle. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Regards >>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami < >>>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami < >>>>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta < >>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi Takashi, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks for your reply. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I have checked on my machine and the file "ptp.pp" do exist >>>>>>>>>>>>>>> at path " >>>>>>>>>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>>>>>>>>> " >>>>>>>>>>>>>>> >>>>>>>>>>>>>> Did you check this in your undercloud or overcloud ? >>>>>>>>>>>>>> During the deployment all configuration files are generated >>>>>>>>>>>>>> using puppet modules >>>>>>>>>>>>>> installed in overcloud nodes, so you should check this in >>>>>>>>>>>>>> overcloud nodes. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Also, the deprecation warning is not implemented >>>>>>>>>>>>>> >>>>>>>>>>>>> Ignore this incomplete line. 
I was looking for the >>>>>>>>>>>>> implementation which shows the warning >>>>>>>>>>>>> but I found it in tripleoclient and it looks reasonable >>>>>>>>>>>>> according to what we have in >>>>>>>>>>>>> environments/services/ptp.yaml . >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data "*ServicesDefault" >>>>>>>>>>>>>>> for controller and compute *before rendering the templates, >>>>>>>>>>>>>>> but still I am getting the same issue on all the 3 Controllers and 1 Compute >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> IIUC you don't need this because >>>>>>>>>>>>>> OS::TripleO::Services::Timesync becomes an alias >>>>>>>>>>>>>> to the Ptp service resource when you use the ptp environment >>>>>>>>>>>>>> file. >>>>>>>>>>>>>> >>>>>>>>>>>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Can you suggest any workarounds or any pointers to look >>>>>>>>>>>>>>> further in order to resolve this issue? >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami < >>>>>>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I'm not familiar with PTP, but the error you pasted >>>>>>>>>>>>>>>> indicates that the required puppet manifest does not exist in your >>>>>>>>>>>>>>>> overcloud node/image. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> This should not happen and the class should exist as long >>>>>>>>>>>>>>>> as you have puppet-tripleo from stable/train installed. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I'd recommend you check installed tripleo/puppet packages >>>>>>>>>>>>>>>> and ensure everything is in the consistent release. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta < >>>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Hi All >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Any update on this? >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, < >>>>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> When I was executing the Overcloud deployment script, >>>>>>>>>>>>>>>>>> passing the PTP yaml, it gave the following option at the starting >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. 
>>>>>>>>>>>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>>>>>>>>>>> continue with deployment [y/N]* >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> even if passing Y, it starts executing for sometime and >>>>>>>>>>>>>>>>>> the gives the following error >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a >>>>>>>>>>>>>>>>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Can someone suggest some pointers in order to resolve >>>>>>>>>>>>>>>>>> this issue and move forward? >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta < >>>>>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> I have installed Undercloud with Openstack Train Release >>>>>>>>>>>>>>>>>>> successfully. >>>>>>>>>>>>>>>>>>> I need to enable PTP service while deploying the >>>>>>>>>>>>>>>>>>> overcloud for which I have included the service in my deployment >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>> /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>>>> * -e >>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>>>>>>>>>>> \* >>>>>>>>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> But it gives the following error >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> 2022-05-06 11:30:10.707655 | >>>>>>>>>>>>>>>>>>> 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host >>>>>>>>>>>>>>>>>>> configuration to finish | overcloud-controller-0 | error={"ansible_job_id": >>>>>>>>>>>>>>>>>>> "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o >>>>>>>>>>>>>>>>>>> pipefail; puppet apply >>>>>>>>>>>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>>>>>>>>>>> --detailed-exitcodes --summarize --color=false 
>>>>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>>>>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>>>>>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>>>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>>>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>>>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>>>>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>>>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>>>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>>>>>>>>>>> 'lookup'. See >>>>>>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>>>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>>>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. 
>>>>>>>>>>>>>>>>>>> (file: >>>>>>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while >>>>>>>>>>>>>>>>>>> evaluating a Function Call, Could not find class >>>>>>>>>>>>>>>>>>> ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain >>>>>>>>>>>>>>>>>>> (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) >>>>>>>>>>>>>>>>>>> on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Can someone please help in resolving this issue? >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From zaitcev at redhat.com Thu May 19 15:23:14 2022 From: zaitcev at redhat.com (Pete Zaitcev) Date: Thu, 19 May 2022 10:23:14 -0500 Subject: [keystone][swift] audit logs In-Reply-To: References: Message-ID: <20220519102314.6c8e6070@niphredil.zaitcev.lan> I looked briefly at keystonemiddleware.audit here https://github.com/openstack/keystonemiddleware/tree/master/keystonemiddleware/audit And I highly doubt that it can work in Swift's pipeline. For one thing, it gets its configuration with oslo_config, and I don't know if that's compatible. -- Pete On Wed, 18 May 2022 13:59:50 +0530 Sharath Ck wrote: > Hi, > > I am currently trying to add keystone audit middleware in Swift. Middleware > is managed in swift proxy server, hence I have added the audit filter in > proxy server conf and have mentioned audit_middleware_notifications driver > as log in swift.conf . > I can see REST API call flow reaching audit middleware and constructing the > audit event with minimal data as Swift is not loading service catalog > information. But the audit event is not getting notified as per > audit_middleware_notifications. I tried adding oslo_messaging_notifications > with the driver as log, but audit events are not getting notified. > > Below are the changes in swift_proxy_server container, > > proxy-server.conf > > [pipeline:main] > pipeline = catch_errors gatekeeper healthcheck cache container_sync bulk > tempurl ratelimit formpost authtoken keystoneauth audit container_quotas > account_quotas slo dlo keymaster encryption proxy-server > > [filter:audit] > paste.filter_factory = keystonemiddleware.audit:filter_factory > audit_map_file = /etc/swift/api_audit_map.conf > > swift.conf > > [oslo_messaging_notifications] > driver = log > > [audit_middleware_notifications] > driver = log > > Kindly confirm whether the configuration changes are enough or need more > changes. > > Regards, > Sharath From sharath.madhava at gmail.com Thu May 19 15:27:37 2022 From: sharath.madhava at gmail.com (Sharath Ck) Date: Thu, 19 May 2022 20:57:37 +0530 Subject: [keystone][swift] audit logs In-Reply-To: <20220519102314.6c8e6070@niphredil.zaitcev.lan> References: <20220519102314.6c8e6070@niphredil.zaitcev.lan> Message-ID: Hi Pete, That?s correct. Audit map file path is picked from proxy_server.conf but notification details are not. Is this a known issue? Or Audit is not supported in Swift ? 
Regards, Sharath On Thu, 19 May 2022 at 8:53 PM, Pete Zaitcev wrote: > I looked briefly at keystonemiddleware.audit here > > https://github.com/openstack/keystonemiddleware/tree/master/keystonemiddleware/audit > > And I highly doubt that it can work in Swift's pipeline. > For one thing, it gets its configuration with oslo_config, > and I don't know if that's compatible. > > -- Pete > > On Wed, 18 May 2022 13:59:50 +0530 > Sharath Ck wrote: > > > Hi, > > > > I am currently trying to add keystone audit middleware in Swift. > Middleware > > is managed in swift proxy server, hence I have added the audit filter in > > proxy server conf and have mentioned audit_middleware_notifications > driver > > as log in swift.conf . > > I can see REST API call flow reaching audit middleware and constructing > the > > audit event with minimal data as Swift is not loading service catalog > > information. But the audit event is not getting notified as per > > audit_middleware_notifications. I tried adding > oslo_messaging_notifications > > with the driver as log, but audit events are not getting notified. > > > > Below are the changes in swift_proxy_server container, > > > > proxy-server.conf > > > > [pipeline:main] > > pipeline = catch_errors gatekeeper healthcheck cache container_sync bulk > > tempurl ratelimit formpost authtoken keystoneauth audit container_quotas > > account_quotas slo dlo keymaster encryption proxy-server > > > > [filter:audit] > > paste.filter_factory = keystonemiddleware.audit:filter_factory > > audit_map_file = /etc/swift/api_audit_map.conf > > > > swift.conf > > > > [oslo_messaging_notifications] > > driver = log > > > > [audit_middleware_notifications] > > driver = log > > > > Kindly confirm whether the configuration changes are enough or need more > > changes. > > > > Regards, > > Sharath > > -- Regards, Sharath -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu May 19 15:49:10 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 19 May 2022 08:49:10 -0700 Subject: [ironic] [requirements] broken libvirt-python on yoga on stream 9 In-Reply-To: References: <0548adf4-e274-4ad8-a1cf-c531ca688e39@www.fastmail.com> Message-ID: <1e1ecf65-3778-4fcc-b24f-c2caa42885cd@www.fastmail.com> On Wed, May 18, 2022, at 12:46 AM, Rados?aw Piliszek wrote: > On Wed, 18 May 2022 at 08:27, Dmitry Tantsur wrote: >> >> >> >> On Tue, May 17, 2022 at 8:24 PM Clark Boylan wrote: >>> >>> On Tue, May 17, 2022, at 10:56 AM, Dmitry Tantsur wrote: >>> > Hi all, >>> > >>> > It is happening again, the Bifrost CI is broken because libvirt-python >>> > cannot be built from source, this time on Stream 9. >>> > >>> > Missing type converters: >>> > int *:1 >>> > ERROR: failed virDomainQemuMonitorCommandWithFiles >>> > >>> > I created a gist with a reproducer: >>> > https://gist.github.com/dtantsur/835303c6a68ed77157016f5955183115. >>> > >>> > I cannot count how many times we had to deal with similar errors. I >>> > assume, libvirt-python has to be newer than the installed Python (8.2.0 >>> > in CS9, 8.0.0 in constraints). Should we stop constraining >>> > libvirt-python? Any other ideas? >>> >>> Your libvirt-python version needs to be at least as new as your libvirt version. New libvirt-python versions are expected to continue to work with old libvirt versions as well (though it may need to be built against the specific libvirt?). In this case it looks like CentOS Stream 9 libvirt is newer than what was in constraints. 
>>> >>> Generally constraints should update quickly. Looking at master upper-constraints libvirt-python was updated to 8.3.0 on May 4 and according to pypi the package updated on May 2 which seems reasonable. The problem here appears to be that you want this to work on a stable branch (yoga) and stable branches do not update constraints. >>> >>> My suggestion is that we use stable platforms for testing stable releases. The CentOS Stream releases seem to get updates that break stable software expectations far more than our other platforms. When working against master and trying to chase the latest and greatest this is probably a feature, but is problematic when you want rate of change to fall to near zero. I would consider not using Stream on stable branches if these problems persist. >> >> >> What would you suggest to use to test Red Hat systems then? We have both Rocky and OpenEuler images available. I believe both attempt to be RHEL downstreams similar to what CentOS once provided. > > Eh, come on. Stream breaks but so will the next minor RHEL release. > The real issue is that libvirt python relies on the correct C bindings > too heavily. Long-term I would love to see pure Python bindings but > short-term I suggest Ironic follows the steps of Nova, Masakari, Kolla > and DevStack. I disagree with several aspects of this. While old CentOS point releases did occasionally break things, the pain was nowhere near what stream has been like. I've had to debug a number of stream problems that are assumed to be infra problems but it is just the upstream moving quickly and breaking but not also fixing quickly. Ping was broken for about a month for example. Fixing Rocky for example, once a year or whatever the update cadence is, would be nothing like the Stream treadmill we have been running since the beginning of the year. As far as libvirt python bindings go the issue that caused you to update devstack was because CentOS stream was using CentOS !stream wheels. That issue is completely separate to whether or not you should consume libvirt-python from properly built for that distro wheels and we hadn't had problems doing that for many years that I know of. We fixed the mixup in wheel sources for CentOS Stream and you could've continued to consume the wheels from pypi as far as I know. However, it is possible that recent updates that have broken bifrost would've also forced us to rebuild those wheels, but we don't know because we basically stopped doing that in devstack. > > -yoctozepto From katonalala at gmail.com Thu May 19 16:35:32 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 19 May 2022 18:35:32 +0200 Subject: [neutron] Drivers meeting agenda - 20.05.2022. Message-ID: Hi Neutron Drivers, The agenda for tomorrow's drivers meeting is at [1]. We have one RFE to discuss tomorrow: * [RFE] Allow setting --dst-port for all port based protocols at once (#link https://bugs.launchpad.net/neutron/+bug/1973487) [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda See you at the meeting tomorrow. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From erin at openstack.org Thu May 19 16:40:12 2022 From: erin at openstack.org (Erin Disney) Date: Thu, 19 May 2022 11:40:12 -0500 Subject: Save the Date: PTG October 2022 Message-ID: <869EBAAE-B174-4EAA-BA99-77A2920EBF33@openstack.org> We are very excited to announce our first in-person Project Teams Gathering (PTG) since Shanghai in 2019! 
Can?t wait to get everyone back together again this October 17-20th at the Hyatt Regency in lovely Columbus, Ohio. The venue is located in the heart of downtown, within walking distance of local sports arenas and the Short North Arts District that hosts dozens of restaurants, coffee shops, bars, art galleries and shops. Kendall Nelson will be reaching out soon to start collecting team sign ups so everyone knows who is planning on meeting in Columbus. We will also have registration, reduced rate hotel block, and sponsorship information coming soon, all of which will be posted to openinfra.dev/ptg once available. Stay tuned and we can?t wait to see you all in Columbus! Erin Disney Event Marketing Open Infrastructure Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmendiza at redhat.com Thu May 19 17:06:54 2022 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Thu, 19 May 2022 12:06:54 -0500 Subject: [keystone] Gerrit Group Updates Message-ID: Hi openstack-discuss, The Keystone team is doing a big of housekeeping with our Gerrit groups: For keystone-core we'll be removing folks who have not reviewed any patches in 60+ days. [1] This is going to leave us with 3 active core reviewers, so we are actively looking for folks who want to help review patches. Let us know if you're interested! For keystone-stable-int [2], I need to figure out who owns this group to get myself added as current PTL. Unfortunately, Colleen is no longer contributing to the project and Lance has not been able to review stable branch patches recently. Please let me know if you can help me get this group updated. Thanks, - Douglas Mendiz?bal Keystone PTL [1] https://www.stackalytics.io/report/contribution?module=keystone-group&project_type=openstack&days=60 [2] https://review.opendev.org/admin/groups/ed7e98cd14f0ec4dfd09d10bbb64e1bd3ac1fa15,members From cboylan at sapwetik.org Thu May 19 17:10:43 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 19 May 2022 10:10:43 -0700 Subject: [all] Deprecated Zuul queue syntax Message-ID: <657e015c-1da4-4bc6-891b-f4373b93a001@www.fastmail.com> Hello, Zuul deprecated declaring shared queues at a pipeline level with release 4.1.0 [0]. Zuul's current plan is to stop accepting this form of configuration with a future 7.0.0 release. More details on what needs to change can be found in Zuul's reminder notice on the Zuul mailing list [1]. I expect we have about a month before the v7 release which should give us plenty of time to fix this. If we don't fix this before the v7 release the expected fallout will be that these project branches will not have any pipelines defined, which means no Zuul jobs will run against that project branch until the configs are updated. I've marked this for everyone because 98 repositories are affected, and I'm not sure listing them all in the subject would be productive. 
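For anyone updating their repos, the migration is usually a small change to the
project stanza in .zuul.yaml / zuul.d: the queue name simply moves from inside a
pipeline up to the project level. A rough sketch only (the queue name "example"
here is made up -- keep whatever queue name your repo already declares):

    # Deprecated form: queue declared inside a pipeline
    - project:
        check:
          jobs:
            - my-job
        gate:
          queue: example
          jobs:
            - my-job

    # Current form: queue declared at the project level
    - project:
        queue: example
        check:
          jobs:
            - my-job
        gate:
          jobs:
            - my-job

The Zuul release notes linked at the bottom of this message are the
authoritative description of the change.
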
Here is the list of affected repos: openstack/adjutant openstack/ansible-collection-kolla openstack/ansible-role-collect-logs openstack/ansible-role-tripleo-modify-image openstack/barbican openstack/barbican-tempest-plugin openstack/barbican-ui openstack/blazar openstack/blazar-tempest-plugin openstack/ceilometer openstack/cinderlib openstack/cloudkitty openstack/cloudkitty-tempest-plugin openstack/cyborg-tempest-plugin openstack/designate openstack/designate-dashboard openstack/designate-tempest-plugin openstack/devstack-gate openstack/ec2-api openstack/ec2api-tempest-plugin openstack/freezer openstack/freezer-tempest-plugin openstack/heat openstack/heat-dashboard openstack/heat-tempest-plugin openstack/horizon openstack/ironic openstack/ironic-inspector openstack/ironic-lib openstack/ironic-prometheus-exporter openstack/ironic-python-agent openstack/ironic-tempest-plugin openstack/kayobe openstack/kayobe-config openstack/kayobe-config-dev openstack/kolla openstack/kolla-ansible openstack/kuryr openstack/magnum openstack/magnum-tempest-plugin openstack/manila openstack/manila-tempest-plugin openstack/masakari openstack/masakari-monitors openstack/mistral openstack/mistral-tempest-plugin openstack/monasca-api openstack/monasca-common openstack/monasca-events-api openstack/monasca-log-api openstack/monasca-tempest-plugin openstack/murano openstack/murano-tempest-plugin openstack/networking-generic-switch openstack/octavia openstack/octavia-tempest-plugin openstack/openstack-chef openstack/os-net-config openstack/os-win openstack/oswin-tempest-plugin openstack/paunch openstack/project-config openstack/puppet-pacemaker openstack/puppet-tripleo openstack/python-designateclient openstack/python-ironicclient openstack/python-saharaclient openstack/python-tripleoclient openstack/python-troveclient openstack/sahara openstack/sahara-tests openstack/senlin openstack/senlin-tempest-plugin openstack/solum openstack/solum-tempest-plugin openstack/telemetry-tempest-plugin openstack/tenks openstack/tripleo-ci openstack/tripleo-ci-health-queries openstack/tripleo-common openstack/tripleo-heat-templates openstack/tripleo-quickstart openstack/tripleo-quickstart-extras openstack/tripleo-upgrade openstack/trove openstack/trove-tempest-plugin openstack/vitrage openstack/vitrage-tempest-plugin openstack/watcher openstack/watcher-tempest-plugin openstack/zaqar openstack/zaqar-tempest-plugin openstack/zun openstack/zun-tempest-plugin windmill/windmill windmill/windmill-backup x/neutron-classifier x/vmware-nsx I'll go ahead and push changes to windmill/* and x/* as those are leftovers from splitting things up into tenants and haven't ended up in their own tenants yet. But it would be great if OpenStack can work on cleaning up this deprecated syntax in the OpenStack repos. I've attached a file with a listing that includes more details for each repo (branches and specific config files) to make it easier for people to find where the old syntax is located so that it can be replaced with the new syntax. [0] https://zuul-ci.org/docs/zuul/latest/releasenotes.html#relnotes-4-1-0-deprecation-notes [1] https://lists.zuul-ci.org/pipermail/zuul-discuss/2022-May/001801.html -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: deprecated_zuul_queue.txt URL: From elod.illes at est.tech Thu May 19 17:34:26 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 19 May 2022 19:34:26 +0200 Subject: [keystone] Gerrit Group Updates In-Reply-To: References: Message-ID: Hi, Since TC already decided that teams can manage their own stable-maint groups I can only just encourage you and any other stable maintainers to read and follow the Stable Policy [1] :) I've added you now to the group [2]. Thanks in advance for keeping Keystone stable branches maintained! [1] https://docs.openstack.org/project-team-guide/stable-branches.html [2] https://review.opendev.org/admin/groups/ed7e98cd14f0ec4dfd09d10bbb64e1bd3ac1fa15,members Cheers, El?d On 2022. 05. 19. 19:06, Douglas Mendizabal wrote: > Hi openstack-discuss, > > The Keystone team is doing a big of housekeeping with our Gerrit groups: > > For keystone-core we'll be removing folks who have not reviewed any > patches in 60+ days. [1]? This is going to leave us with 3 active core > reviewers, so we are actively looking for folks who want to help > review patches.? Let us know if you're interested! > > For keystone-stable-int [2], I need to figure out who owns this group > to get myself added as current PTL.? Unfortunately, Colleen is no > longer contributing to the project and Lance has not been able to > review stable branch patches recently.? Please let me know if you can > help me get this group updated. > > Thanks, > - Douglas Mendiz?bal > ? Keystone PTL > > [1] > https://www.stackalytics.io/report/contribution?module=keystone-group&project_type=openstack&days=60 > > [2] > https://review.opendev.org/admin/groups/ed7e98cd14f0ec4dfd09d10bbb64e1bd3ac1fa15,members > > From elod.illes at est.tech Thu May 19 17:50:25 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 19 May 2022 19:50:25 +0200 Subject: [stable][glance] Proposing Cyril for stable branch core In-Reply-To: References: Message-ID: <88a1f254-cc0e-ae51-55c6-284ec2a7e04e@est.tech> Hi, I've added Cyril to the glance-stable-maint group [1]. And as always (and as in my other mail a couple of minutes ago :D), I'd like to encourage any stable reviewer to *read* and *follow* the Stable Policy [2] when reviewing stable patches. Welcome Cyril among stable maintainers, and thanks for keeping things stable! +1: feel free to ping me if a patch is not clear whether it is appropriate according to the policy, or not. [1] https://review.opendev.org/admin/groups/6a290a73668d7cdefb7bdfdc5a85f9adb61bbaa5,members [2] https://docs.openstack.org/project-team-guide/stable-branches.html Cheers, El?d (irc: elodilles) On 2022. 05. 19. 7:54, Abhishek Kekane wrote: > Hi All, > > Cyril Roelandt is helping a lot in the review process on master as > well as stable branches. I would like to have him in our > 'glance-stable-maint' team. > > Could you please help me to add 'cyril at redhat.com' to the stable core > team. > > Thanks & Best Regards, > Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu May 19 18:07:44 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 19 May 2022 23:37:44 +0530 Subject: [stable][glance] Proposing Cyril for stable branch core In-Reply-To: <88a1f254-cc0e-ae51-55c6-284ec2a7e04e@est.tech> References: <88a1f254-cc0e-ae51-55c6-284ec2a7e04e@est.tech> Message-ID: Thank you Elod! 
Abhishek Kekane On Thu, May 19, 2022 at 11:24 PM El?d Ill?s wrote: > Hi, > > I've added Cyril to the glance-stable-maint group [1]. And as always (and > as in my other mail a couple of minutes ago :D), I'd like to encourage any > stable reviewer to *read* and *follow* the Stable Policy [2] when reviewing > stable patches. > > Welcome Cyril among stable maintainers, and thanks for keeping things > stable! > +1: feel free to ping me if a patch is not clear whether it is appropriate > according to the policy, or not. > > [1] > https://review.opendev.org/admin/groups/6a290a73668d7cdefb7bdfdc5a85f9adb61bbaa5,members > [2] https://docs.openstack.org/project-team-guide/stable-branches.html > > Cheers, > > El?d > (irc: elodilles) > > On 2022. 05. 19. 7:54, Abhishek Kekane wrote: > > Hi All, > > Cyril Roelandt is helping a lot in the review process on master as well > as stable branches. I would like to have him in our 'glance-stable-maint' > team. > > Could you please help me to add 'cyril at redhat.com' to the stable core > team. > > Thanks & Best Regards, > Abhishek Kekane > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu May 19 20:00:58 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 19 May 2022 20:00:58 +0000 Subject: [all][infra][tact-sig] Deprecated Zuul queue syntax In-Reply-To: <657e015c-1da4-4bc6-891b-f4373b93a001@www.fastmail.com> References: <657e015c-1da4-4bc6-891b-f4373b93a001@www.fastmail.com> Message-ID: <20220519200057.tjdjnliz6atva4sc@yuggoth.org> On 2022-05-19 10:10:43 -0700 (-0700), Clark Boylan wrote: [...] > If we don't fix this before the v7 release the expected fallout > will be that these project branches will not have any pipelines > defined, which means no Zuul jobs will run against that project > branch until the configs are updated. [...] Well, almost anyway. Per further discussion in Zuul's Matrix channel, what we expect would happen is that Zuul is unable to parse the project section of the in-branch configuration and so ignores it, but would still apply any configuration for that project in openstack/project-config's zuul.d/projects.yaml file. The likely end result is that only the centrally-applied jobs run for changes to those branches, and then merge if those jobs are successful (without running any of the additional jobs the project wanted but were in the broken in-branch config). So, potentially worse than just not running any jobs there and ignoring the changes. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Thu May 19 21:43:57 2022 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 19 May 2022 17:43:57 -0400 Subject: [stable][glance] backport exception request for external HTTP API response code change Message-ID: Hello stable maintenance core team, We recently noticed that an Image API call implemented in Yoga, which returns no content, is returning a 200 instead of a 202 (as the design [0] had specified). We would like to correct this in master (Zed development) and in the stable/yoga branch. Here are our reasons why we think this is a legitimate exception to the no-http-api-changes-backport policy: (1) Even ignoring the lack of a response body, a 200 (OK) is misleading because operation initiated by the PUT /v2/cache/{image_id} request may not succeed. 
A 202 (Accepted) signals more clearly to the operator that some followup is necessary to be sure the image has been cached. (2) This is intended as an admin-only API call; the default policy is admin-only. So it has not been exposed to end-users. (3) The change [1] was first released in Yoga and there has not been much time for admins to begin consuming this feature. Further, there has not yet been a release from stable/yoga after glance 24.0.0. For these reasons, we request that the glance team be allowed to make this change. Thank you for considering this proposal! [0] https://opendev.org/openstack/glance-specs/blame/commit/2638ada23d92f714f54b71db00330e4a6c921beb/specs/xena/approved/glance/cache-api.rst#L153 [1] https://review.opendev.org/c/openstack/glance/+/792022 From dms at danplanet.com Thu May 19 22:05:00 2022 From: dms at danplanet.com (Dan Smith) Date: Thu, 19 May 2022 15:05:00 -0700 Subject: [stable][glance] backport exception request for external HTTP API response code change In-Reply-To: (Brian Rosmaita's message of "Thu, 19 May 2022 17:43:57 -0400") References: Message-ID: > We recently noticed that an Image API call implemented in Yoga, which > returns no content, is returning a 200 instead of a 202 (as the design > [0] had specified). We would like to correct this in master (Zed > development) and in the stable/yoga branch. Here are our reasons why > we think this is a legitimate exception to the > no-http-api-changes-backport policy: > > (1) Even ignoring the lack of a response body, a 200 (OK) is > misleading because operation initiated by the PUT /v2/cache/{image_id} > request may not succeed. A 202 (Accepted) signals more clearly to the > operator that some followup is necessary to be sure the image has been > cached. ...may not succeed, will be processed in the background, and may never actually happen. In other words, the textbook definition of 202 :) > (2) This is intended as an admin-only API call; the default policy is > admin-only. So it has not been exposed to end-users. > > (3) The change [1] was first released in Yoga and there has not been > much time for admins to begin consuming this feature. Further, there > has not yet been a release from stable/yoga after glance 24.0.0. > > For these reasons, we request that the glance team be allowed to make > this change. Yep, my feeling is that we caught this very early and that it's a very limited-scope operation which means the number of people potentially impacted is very small. It's unlikely many people are yet running Yoga in production, and of those, only some admins and scripts could be affected, and only those that were super excited to use this brand new rather minor feature right away. It's an API for a function that was already being handled by a different tool, so people would have had to *convert* their existing stuff to this already, which is even less likely than some new exciting functionality that didn't exist before. I think the calculus comes down to: If we do backport it, a small number of super bleeding-edge admins *may* have already jumped on writing against this API and may notice. The vast majority of people that end up deploying Yoga will never experience pain. If we don't backport it, we'll ensure that many more people will be affected. People _will_ eventually deploy Yoga, and then they _will_ deploy Zed. Those people _will_ experience a change, and I think it's clear that this class of people is larger than the one described above. 
It sucks, but I think it's less pain overall to backport ASAP, reno the mea culpa clearly, and limit the number of affected people to the much smaller number. --Dan From ygk.kmr at gmail.com Fri May 20 06:51:36 2022 From: ygk.kmr at gmail.com (Gk Gk) Date: Fri, 20 May 2022 12:21:36 +0530 Subject: Need help Message-ID: Hi All, In this link https://docs.openstack.org/nova/queens/user/aggregates.html , there is a warning section about availability zones: "That last rule can be very error-prone. Since the user can see the list of availability zones, they have no way to know whether the default availability zone name (currently *nova*) is provided because an host belongs to an aggregate whose AZ metadata key is set to *nova*, or because there are at least one host belonging to no aggregate. Consequently, it is highly recommended for users to never ever ask for booting an instance by specifying an explicit AZ named *nova* and for operators to never set the AZ metadata for an aggregate to *nova*. That leads to some problems due to the fact that the instance AZ information is explicitly attached to *nova* which could break further move operations when either the host is moved to another aggregate or when the user would like to migrate the instance." My questions are, 1. If a host belongs to an aggregate whose AZ metadata key is set to nova, why is it not possible to know if a default AZ is provided or not ? 2. Why is it a problem when the host is moved to another aggregate ? We can move hosts from one aggregate to another. Right ? 3. Why is it a problem when we migrate the instance ? Thanks Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri May 20 07:24:50 2022 From: marios at redhat.com (Marios Andreou) Date: Fri, 20 May 2022 10:24:50 +0300 Subject: [tripleo] hold recheck tripleo-heat-templates gate blocker tox-py36 - opendev infra mirrors? Message-ID: Hello Unfortunately instead of happy friday \o/ we get a tripleo-heat-templates gate blocker on the tox-py36 job (bug is at [1]). We cannot reproduce this locally and suspect there may be an issue with pypi mirrors in opendev infra? We are digging here but please refrain from recheck until we find a workaround thank you for your help o/ regards, marios (for the tripleo-ci team) [1] https://bugs.launchpad.net/tripleo/+bug/1974244 From marios at redhat.com Fri May 20 07:49:40 2022 From: marios at redhat.com (Marios Andreou) Date: Fri, 20 May 2022 10:49:40 +0300 Subject: [tripleo] hold recheck tripleo-heat-templates gate blocker tox-py36 - opendev infra mirrors? In-Reply-To: References: Message-ID: On Fri, May 20, 2022 at 10:24 AM Marios Andreou wrote: > > Hello > > Unfortunately instead of happy friday \o/ we get a > tripleo-heat-templates gate blocker on the tox-py36 job (bug is at > [1]). > > We cannot reproduce this locally and suspect there may be an issue > with pypi mirrors in opendev infra? 
We are digging here but please > refrain from recheck until we find a workaround chkumar++ tkajinam++ trending resolved - more info in the bug but we are going with https://review.opendev.org/c/openstack/tripleo-heat-templates/+/842675/ to update the python jobs template to the z version > > thank you for your help o/ > > regards, marios (for the tripleo-ci team) > > [1] https://bugs.launchpad.net/tripleo/+bug/1974244 From anyrude10 at gmail.com Fri May 20 10:08:29 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Fri, 20 May 2022 15:38:29 +0530 Subject: [TripleO] Support of PTP in Openstack Train In-Reply-To: References: Message-ID: Hi Takashi Thanks for your suggestion. I have installed Undercloud wallaby successfully. I have added the *deployed-server-deployed-neutron-ports.yaml *file and the command I am using to deploy Overcloud is: openstack overcloud deploy --templates \ -n /home/stack/templates/network_data.yaml \ -r /home/stack/templates/roles_data.yaml \ -e /home/stack/templates/environment.yaml \ -e /home/stack/templates/environments/network-isolation.yaml \ -e /home/stack/templates/environments/network-environment.yaml \ * -e /home/stack/templates/environments/deployed-server-deployed-neutron-ports.yaml \* -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \ -e /home/stack/templates/ironic-config.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ -e /home/stack/containers-prepare-parameter.yaml But the error encountered is: *HEAT-E99001 Service neutron is not available for resource type OS::Neutron::Net, reason: neutron network endpoint is not in service catalog.\n'].* 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud During handling of the above exception, another exception occurred: 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent call last): 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, self).run(parsed_args) 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in run 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, self).run(parsed_args) 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = self.take_action(parsed_args) or 0 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 1226, in take_action 2022-05-20 15:03:52.640 298110 ERROR 
tripleoclient.v1.overcloud_deploy.DeployOvercloud user_tht_root, created_env_files) 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 357, in deploy_tripleo_heat_templates 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud deployment_options=deployment_options) 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 397, in _try_overcloud_deploy_with_compat_yaml 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud raise ValueError(messages) 2022-05-20 15:03:52.640 298110 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud ValueError: Failed to deploy: ERROR: Internal Error Any idea regarding this? Do you someone who can help in this? Regards Anirudh Gupta On Thu, May 19, 2022 at 8:18 PM Takashi Kajinami wrote: > I myself has quite limited experience about networking v2 thing we have in > wallaby > but I roughly guess you need to include > environments/deployed-server-deployed-neutron-ports.yaml to > disable Neutron resource associated with that vip resource by default. > > On Thu, May 19, 2022 at 7:41 PM Anirudh Gupta wrote: > >> Hi Takashi >> >> Did you get a chance to look into the wallaby issue which we are facing? >> >> Request you to please provide some pointers to move ahead? >> >> Regards >> Anirudh Gupta >> >> >> >> On Wed, May 18, 2022 at 1:20 PM Anirudh Gupta >> wrote: >> >>> Hi Takashi, >>> >>> Thanks for your reply. We had initially started with a Wallaby release >>> only, but we faced some issues even without PTP which did not get resolved >>> and ultimately we had to come back to Train. >>> >>> Can you look into the issue and suggest some pointer, so that we can >>> come back on wallaby >>> >>> *heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron >>> *>* is not available for resource type >>> *>* OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network >>> *>* endpoint is not in service catalog.* >>> >>> The issue was also posted on the Openstack Discuss Forum. >>> >>> >>> http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028401.html >>> >>> Regards >>> Anirudh Gupta >>> >>> On Mon, May 16, 2022 at 8:17 PM Takashi Kajinami >>> wrote: >>> >>>> If my observations are correct, there are two bugs causing the error >>>> and there is no feasible workaround. >>>> >>>> 1. puppet-tripleo is not parsing output of the os-net-config command >>>> properly. >>>> 2. os-net-config it is not formatting the dict value properly and the >>>> output can't be parsed by json decoder. >>>> >>>> I've reported the 2nd bug here. >>>> https://bugs.launchpad.net/os-net-config/+bug/1973566 >>>> >>>> Once the 2nd bug is fixed in master and stable branches back to train, >>>> then we'd be able to consider >>>> how we can implement a proper parsing but I can't guarantee any >>>> timeline atm. >>>> >>>> The puppet implementation was replaced by ansible and is no longer used >>>> in recent versions like >>>> wallaby and the above problems do not exist. It might be an option to >>>> try ptp deployment in Wallaby. >>>> >>>> On Mon, May 16, 2022 at 9:18 PM Anirudh Gupta >>>> wrote: >>>> >>>>> Hi Takashi >>>>> >>>>> Could you infer anything from the output and the issue being faced? >>>>> >>>>> Is it because the output has keys and values as string? 
Any workaround >>>>> to resolve this issue? >>>>> >>>>> Regards >>>>> Anirudh >>>>> >>>>> On Fri, 13 May, 2022, 11:15 Anirudh Gupta, >>>>> wrote: >>>>> >>>>>> Hi Takashi, >>>>>> >>>>>> Thanks for your reply. >>>>>> >>>>>> I tried executing the suggested command and below is the output >>>>>> >>>>>> [heat-admin at overcloud-controller-1 ~]$ /bin/os-net-config -i eno1 >>>>>> {'eno1': 'eno1'} >>>>>> >>>>>> Regards >>>>>> Anirudh Gupta >>>>>> >>>>>> On Thu, May 12, 2022 at 11:03 AM Takashi Kajinami < >>>>>> tkajinam at redhat.com> wrote: >>>>>> >>>>>>> The puppy implementation executes the following command to get the >>>>>>> interface information. >>>>>>> /bin/os-net-config -i >>>>>>> I'd recommend you check the command output in *all overcloud nodes * >>>>>>> . >>>>>>> >>>>>>> If you need to use different interfaces for different roles then you >>>>>>> need to define the parameter >>>>>>> as role specific one, defined under Parameters. >>>>>>> >>>>>>> On Wed, May 11, 2022 at 4:26 PM Anirudh Gupta >>>>>>> wrote: >>>>>>> >>>>>>>> Hi Takashi, >>>>>>>> >>>>>>>> Thanks for clarifying my issues regarding the support of PTP in >>>>>>>> Wallaby Release. >>>>>>>> >>>>>>>> In Train, I have also tried passing the exact interface name and >>>>>>>> took 2 runs with and without quotes like below: >>>>>>>> >>>>>>>> >>>>>>>> *PtpInterface: eno1* >>>>>>>> >>>>>>>> *PtpInterface: 'eno1'* >>>>>>>> >>>>>>>> But in both the cases, the issue observed was similar >>>>>>>> >>>>>>>> 2022-05-11 10:33:20.189107 | 5254001f-9952-934d-e901-0000000030be | >>>>>>>> FATAL | Wait for puppet host configuration to finish | >>>>>>>> overcloud-controller-2 | error={"ansible_job_id": "526310775819.36650", >>>>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>> puppet-user", "delta": "0:00:04.289208", "end": "2022-05-11 >>>>>>>> 10:33:08.195052", "failed_when_result": true, "finished": 1, "msg": >>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-11 10:33:03.905844", >>>>>>>> "stderr": "<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' >>>>>>>> is deprecated in favor of using 'lookup'. See >>>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>>> (file & line not available)\n<13>May 11 10:33:03 puppet-user: Warning: >>>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>>> should be converted to version 5\n<13>May 11 10:33:03 puppet-user: >>>>>>>> (file: /etc/puppet/hiera.yaml)\n<13>May 11 10:33:03 puppet-user: Warning: >>>>>>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>>> available)\n<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: module >>>>>>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>>>>>> resolved. Use 'puppet module list --tree' to see information about >>>>>>>> modules\\n (file & line not available)\n<13>May 11 10:33:03 puppet-user: >>>>>>>> Error: Evaluation Error: A substring operation does not accept a String as >>>>>>>> a character index. 
Expected an Integer (file: >>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>>> column: 46) on node overcloud-controller-2.localdomain", "stderr_lines": >>>>>>>> ["<13>May 11 10:33:03 puppet-user: Warning: The function 'hiera' is >>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>>> (file & line not available)", "<13>May 11 10:33:03 puppet-user: Warning: >>>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>>> should be converted to version 5", "<13>May 11 10:33:03 puppet-user: >>>>>>>> (file: /etc/puppet/hiera.yaml)", "<13>May 11 10:33:03 puppet-user: >>>>>>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>>> available)", "<13>May 11 10:33:03 puppet-user: Warning: ModuleLoader: >>>>>>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>>>>>> are resolved. Use 'puppet module list --tree' to see information about >>>>>>>> modules\\n (file & line not available)", "<13>May 11 10:33:03 >>>>>>>> puppet-user: *Error: Evaluation Error: A substring operation does >>>>>>>> not accept a String as a character index. Expected an Integer (file: >>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>>> column: 46) on node overcloud-controller-2.localdomain"], "stdout": "", >>>>>>>> "stdout_lines": []}* >>>>>>>> 2022-05-11 10:33:20.190263 | 5254001f-9952-934d-e901-0000000030be | >>>>>>>> TIMING | Wait for puppet host configuration to finish | >>>>>>>> overcloud-controller-2 | 0:12:41.268734 | 7.01s >>>>>>>> >>>>>>>> I'll be highly grateful if you could further extend your support to >>>>>>>> resolve the issue. >>>>>>>> >>>>>>>> Regards >>>>>>>> Anirudh Gupta >>>>>>>> >>>>>>>> On Tue, May 10, 2022 at 9:15 PM Takashi Kajinami < >>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, May 11, 2022 at 12:19 AM Anirudh Gupta < >>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Takashi, >>>>>>>>>> >>>>>>>>>> Thanks for your suggestion. >>>>>>>>>> >>>>>>>>>> I downloaded the updated Train Images and they had the ptp.pp >>>>>>>>>> file available on the overcloud and undercloud machines >>>>>>>>>> >>>>>>>>>> [root at overcloud-controller-1 /]# find . -name "ptp.pp" >>>>>>>>>> >>>>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>>>> >>>>>>>>>> With this, I re-executed the deployment and got the below error >>>>>>>>>> on the machines >>>>>>>>>> >>>>>>>>>> 2022-05-10 20:05:53.133423 | 5254001f-9952-0364-51a1-0000000030ce >>>>>>>>>> | FATAL | Wait for puppet host configuration to finish | >>>>>>>>>> overcloud-controller-1 | error={"ansible_job_id": "321785316135.36755", >>>>>>>>>> "attempts": 3, "changed": true, "cmd": "set -o pipefail; puppet apply >>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules >>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>> puppet-user", "delta": "0:00:04.279435", "end": "2022-05-10 >>>>>>>>>> 20:05:41.355328", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-10 20:05:37.075893", >>>>>>>>>> "stderr": "<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' >>>>>>>>>> is deprecated in favor of using 'lookup'. 
See >>>>>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>>>>> (file & line not available)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>>>>> should be converted to version 5\n<13>May 10 20:05:37 puppet-user: >>>>>>>>>> (file: /etc/puppet/hiera.yaml)\n<13>May 10 20:05:37 puppet-user: Warning: >>>>>>>>>> Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>>>>> available)\n<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: module >>>>>>>>>> 'tripleo' has unresolved dependencies - it will only see those that are >>>>>>>>>> resolved. Use 'puppet module list --tree' to see information about >>>>>>>>>> modules\\n (file & line not available)\n<13>May 10 20:05:37 puppet-user: >>>>>>>>>> Error: Evaluation Error: A substring operation does not accept a String as >>>>>>>>>> a character index. Expected an Integer (file: >>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>>>>> column: 46) on node overcloud-controller-1.localdomain", "stderr_lines": >>>>>>>>>> ["<13>May 10 20:05:37 puppet-user: Warning: The function 'hiera' is >>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>> https://puppet.com/docs/puppet/5.5/deprecated_language.html\\n >>>>>>>>>> (file & line not available)", "<13>May 10 20:05:37 puppet-user: Warning: >>>>>>>>>> /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It >>>>>>>>>> should be converted to version 5", "<13>May 10 20:05:37 puppet-user: >>>>>>>>>> (file: /etc/puppet/hiera.yaml)", "<13>May 10 20:05:37 puppet-user: >>>>>>>>>> Warning: Undefined variable '::deploy_config_name'; \\n (file & line not >>>>>>>>>> available)", "<13>May 10 20:05:37 puppet-user: Warning: ModuleLoader: >>>>>>>>>> module 'tripleo' has unresolved dependencies - it will only see those that >>>>>>>>>> are resolved. Use 'puppet module list --tree' to see information about >>>>>>>>>> modules\\n (file & line not available)", "<13>May 10 20:05:37 >>>>>>>>>> puppet-user: *Error: Evaluation Error: A substring operation >>>>>>>>>> does not accept a String as a character index. 
Expected an Integer (file: >>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, line: 41, >>>>>>>>>> column: 46) *on node overcloud-controller-1.localdomain"], >>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>> >>>>>>>>>> The file */etc/puppet/modules/tripleo/manifests/profile/base/time/ptp.pp, >>>>>>>>>> line: 41, column: 46 *had the following code: >>>>>>>>>> 34 class tripleo::profile::base::time::ptp ( >>>>>>>>>> 35 $ptp4l_interface = 'eth0', >>>>>>>>>> 36 $ptp4l_conf_slaveonly = 1, >>>>>>>>>> 37 $ptp4l_conf_network_transport = 'UDPv4', >>>>>>>>>> 38 ) { >>>>>>>>>> 39 >>>>>>>>>> 40 $interface_mapping = generate('/bin/os-net-config', >>>>>>>>>> '-i', $ptp4l_interface) >>>>>>>>>> 41 *$ptp4l_interface_name = >>>>>>>>>> $interface_mapping[$ptp4l_interface]* >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> *"/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml"* file >>>>>>>>>> is as below: >>>>>>>>>> >>>>>>>>>> resource_registry: >>>>>>>>>> # FIXME(bogdando): switch it, once it is containerized >>>>>>>>>> OS::TripleO::Services::Ptp: >>>>>>>>>> ../../deployment/time/ptp-baremetal-puppet.yaml >>>>>>>>>> OS::TripleO::Services::Timesync: OS::TripleO::Services::Ptp >>>>>>>>>> >>>>>>>>>> parameter_defaults: >>>>>>>>>> # PTP hardware interface name >>>>>>>>>> *PtpInterface: 'nic1'* >>>>>>>>>> >>>>>>>>>> # Configure PTP clock in slave mode >>>>>>>>>> PtpSlaveMode: 1 >>>>>>>>>> >>>>>>>>>> # Configure PTP message transport protocol >>>>>>>>>> PtpMessageTransport: 'UDPv4' >>>>>>>>>> >>>>>>>>>> I have also tried modifying the entry as below: >>>>>>>>>> *PtpInterface: 'nic1' #*(i.e. without quotes), but the error >>>>>>>>>> remains the same. >>>>>>>>>> >>>>>>>>>> Queries: >>>>>>>>>> >>>>>>>>>> 1. Any pointers to resolve this? >>>>>>>>>> >>>>>>>>>> I'm not familiar with ptp but you'd need to use the actual >>>>>>>>> interface name >>>>>>>>> if you are not using the alias name. >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> 1. You were mentioning something about the support of PTP not >>>>>>>>>> there in the wallaby release. Can you please confirm? >>>>>>>>>> >>>>>>>>>> IIUC PTP is still supported even in master. What we removed is >>>>>>>>> the implementation using Puppet >>>>>>>>> which was replaced by ansible. >>>>>>>>> >>>>>>>>> The warning regarding OS::TripleO::Services::Ptp was added when we >>>>>>>>> decided to merge >>>>>>>>> all time sync services to the single service resource which is >>>>>>>>> OS::TripleO::Services::Timesync[1]. >>>>>>>>> It's related to how resources are defined in Heat and doesn't >>>>>>>>> affect configuration support itself. >>>>>>>>> >>>>>>>>> [1] >>>>>>>>> https://review.opendev.org/c/openstack/tripleo-heat-templates/+/586679 >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> It would be a great help if you could extend a little more >>>>>>>>>> support to resolve the issues. >>>>>>>>>> >>>>>>>>>> Regards >>>>>>>>>> Anirudh Gupta >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, May 10, 2022 at 6:07 PM Anirudh Gupta < >>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> I'll check that well. >>>>>>>>>>> By the way, I downloaded the images from the below link >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >>>>>>>>>>> >>>>>>>>>>> They seem to be updated yesterday, I'll download and try the >>>>>>>>>>> deployment with the latest images. >>>>>>>>>>> >>>>>>>>>>> Also are you pointing that the support for PTP would not be >>>>>>>>>>> there in Wallaby Release? 
>>>>>>>>>>> >>>>>>>>>>> Regards >>>>>>>>>>> Anirudh Gupta >>>>>>>>>>> >>>>>>>>>>> On Tue, May 10, 2022 at 5:44 PM Takashi Kajinami < >>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Tue, May 10, 2022 at 8:57 PM Anirudh Gupta < >>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi Takashi >>>>>>>>>>>>> >>>>>>>>>>>>> I have checked this in undercloud only. >>>>>>>>>>>>> I don't find any such file in overcloud. Could this be a >>>>>>>>>>>>> concern? >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> The manifest should exist in overcloud nodes and the missing >>>>>>>>>>>> file is the exact cause >>>>>>>>>>>> of that puppet failure during deployment. >>>>>>>>>>>> >>>>>>>>>>>> Please check your overcloud images used to install overcloud >>>>>>>>>>>> nodes and ensure that >>>>>>>>>>>> you're using the right one. You might be using the image for a >>>>>>>>>>>> different release. >>>>>>>>>>>> We removed the manifest file during the Wallaby cycle. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Regards >>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, May 10, 2022 at 5:08 PM Takashi Kajinami < >>>>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Tue, May 10, 2022 at 8:33 PM Takashi Kajinami < >>>>>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Tue, May 10, 2022 at 6:58 PM Anirudh Gupta < >>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi Takashi, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks for your reply. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I have checked on my machine and the file "ptp.pp" do exist >>>>>>>>>>>>>>>> at path " >>>>>>>>>>>>>>>> *./usr/share/openstack-puppet/modules/tripleo/manifests/profile/base/time/ptp.pp* >>>>>>>>>>>>>>>> " >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Did you check this in your undercloud or overcloud ? >>>>>>>>>>>>>>> During the deployment all configuration files are generated >>>>>>>>>>>>>>> using puppet modules >>>>>>>>>>>>>>> installed in overcloud nodes, so you should check this in >>>>>>>>>>>>>>> overcloud nodes. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Also, the deprecation warning is not implemented >>>>>>>>>>>>>>> >>>>>>>>>>>>>> Ignore this incomplete line. I was looking for the >>>>>>>>>>>>>> implementation which shows the warning >>>>>>>>>>>>>> but I found it in tripleoclient and it looks reasonable >>>>>>>>>>>>>> according to what we have in >>>>>>>>>>>>>> environments/services/ptp.yaml . >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I tried putting OS::TripleO::Services::Ptp in my roles_data >>>>>>>>>>>>>>>> "*ServicesDefault" for controller and compute *before >>>>>>>>>>>>>>>> rendering the templates, but still I am getting the same issue on all the 3 >>>>>>>>>>>>>>>> Controllers and 1 Compute >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> IIUC you don't need this because >>>>>>>>>>>>>>> OS::TripleO::Services::Timesync becomes an alias >>>>>>>>>>>>>>> to the Ptp service resource when you use the ptp environment >>>>>>>>>>>>>>> file. 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/ptp.yaml#L5-L6 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a Function >>>>>>>>>>>>>>>> Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Can you suggest any workarounds or any pointers to look >>>>>>>>>>>>>>>> further in order to resolve this issue? >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Tue, May 10, 2022 at 2:18 PM Takashi Kajinami < >>>>>>>>>>>>>>>> tkajinam at redhat.com> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I'm not familiar with PTP, but the error you pasted >>>>>>>>>>>>>>>>> indicates that the required puppet manifest does not exist in your >>>>>>>>>>>>>>>>> overcloud node/image. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> https://github.com/openstack/puppet-tripleo/blob/stable/train/manifests/profile/base/time/ptp.pp >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> This should not happen and the class should exist as long >>>>>>>>>>>>>>>>> as you have puppet-tripleo from stable/train installed. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I'd recommend you check installed tripleo/puppet packages >>>>>>>>>>>>>>>>> and ensure everything is in the consistent release. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Tue, May 10, 2022 at 5:28 AM Anirudh Gupta < >>>>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Hi All >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Any update on this? >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Mon, 9 May, 2022, 17:21 Anirudh Gupta, < >>>>>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Is there any Support for PTP in Openstack TripleO ? >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> When I was executing the Overcloud deployment script, >>>>>>>>>>>>>>>>>>> passing the PTP yaml, it gave the following option at the starting >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> *service OS::TripleO::Services::Ptp is enabled in >>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml. 
>>>>>>>>>>>>>>>>>>> Deprecated in favour of OS::TripleO::Services::TimesyncDo you still wish to >>>>>>>>>>>>>>>>>>> continue with deployment [y/N]* >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> even if passing Y, it starts executing for sometime and >>>>>>>>>>>>>>>>>>> the gives the following error >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> *Error: Evaluation Error: Error while evaluating a >>>>>>>>>>>>>>>>>>> Function Call, Could not find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Can someone suggest some pointers in order to resolve >>>>>>>>>>>>>>>>>>> this issue and move forward? >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Fri, May 6, 2022 at 3:50 PM Anirudh Gupta < >>>>>>>>>>>>>>>>>>> anyrude10 at gmail.com> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> I have installed Undercloud with Openstack Train >>>>>>>>>>>>>>>>>>>> Release successfully. >>>>>>>>>>>>>>>>>>>> I need to enable PTP service while deploying the >>>>>>>>>>>>>>>>>>>> overcloud for which I have included the service in my deployment >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>>>>>>>>> -n /home/stack/templates/network_data.yaml \ >>>>>>>>>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>>> /home/stack/templates/environments/network-isolation.yaml \ >>>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>>> /home/stack/templates/environments/network-environment.yaml \ >>>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>>>>>> * -e >>>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml >>>>>>>>>>>>>>>>>>>> \* >>>>>>>>>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> But it gives the following error >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> 2022-05-06 11:30:10.707655 | >>>>>>>>>>>>>>>>>>>> 5254001f-9952-7fed-4a6d-000000002fde | FATAL | Wait for puppet host >>>>>>>>>>>>>>>>>>>> configuration to finish | overcloud-controller-0 | error={"ansible_job_id": >>>>>>>>>>>>>>>>>>>> "5188783868.37685", "attempts": 3, "changed": true, "cmd": "set -o >>>>>>>>>>>>>>>>>>>> pipefail; puppet apply >>>>>>>>>>>>>>>>>>>> --modulepath=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules 
>>>>>>>>>>>>>>>>>>>> --detailed-exitcodes --summarize --color=false >>>>>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp 2>&1 | logger -s -t >>>>>>>>>>>>>>>>>>>> puppet-user", "delta": "0:00:04.440700", "end": "2022-05-06 >>>>>>>>>>>>>>>>>>>> 11:30:12.685508", "failed_when_result": true, "finished": 1, "msg": >>>>>>>>>>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2022-05-06 11:30:08.244808", >>>>>>>>>>>>>>>>>>>> "stderr": "<13>May 6 11:30:08 puppet-user: Warning: The function 'hiera' is >>>>>>>>>>>>>>>>>>>> deprecated in favor of using 'lookup'. See >>>>>>>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html\n<13>May >>>>>>>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>>> puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 >>>>>>>>>>>>>>>>>>>> is deprecated. It should be converted to version 5\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>>> puppet-user: (file: /etc/puppet/hiera.yaml)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; \n<13>May >>>>>>>>>>>>>>>>>>>> 6 11:30:08 puppet-user: (file & line not available)\n<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>>> puppet-user: Warning: Unknown variable: '::deployment_type'. (file: >>>>>>>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>>>>>>> line: 89, column: 8)\n<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>>>>>>> connect to controller: Connection refused\n<13>May 6 11:30:08 puppet-user: >>>>>>>>>>>>>>>>>>>> Error: Evaluation Error: Error while evaluating a Function Call, Could not >>>>>>>>>>>>>>>>>>>> find class ::tripleo::profile::base::time::ptp for >>>>>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain (file: >>>>>>>>>>>>>>>>>>>> /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) on node >>>>>>>>>>>>>>>>>>>> overcloud-controller-0.localdomain", "stderr_lines": ["<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>>> puppet-user: Warning: The function 'hiera' is deprecated in favor of using >>>>>>>>>>>>>>>>>>>> 'lookup'. See >>>>>>>>>>>>>>>>>>>> https://puppet.com/docs/puppet/6.14/deprecated_language.html", >>>>>>>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' >>>>>>>>>>>>>>>>>>>> version 3 is deprecated. It should be converted to version 5", "<13>May 6 >>>>>>>>>>>>>>>>>>>> 11:30:08 puppet-user: (file: /etc/puppet/hiera.yaml)", "<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>>> puppet-user: Warning: Undefined variable '::deploy_config_name'; ", >>>>>>>>>>>>>>>>>>>> "<13>May 6 11:30:08 puppet-user: (file & line not available)", "<13>May 6 >>>>>>>>>>>>>>>>>>>> 11:30:08 puppet-user: Warning: Unknown variable: '::deployment_type'. 
>>>>>>>>>>>>>>>>>>>> (file: >>>>>>>>>>>>>>>>>>>> /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, >>>>>>>>>>>>>>>>>>>> line: 89, column: 8)", "<13>May 6 11:30:08 puppet-user: error: Could not >>>>>>>>>>>>>>>>>>>> connect to controller: Connection refused", "<13>May 6 11:30:08 >>>>>>>>>>>>>>>>>>>> puppet-user: *Error: Evaluation Error: Error while >>>>>>>>>>>>>>>>>>>> evaluating a Function Call, Could not find class >>>>>>>>>>>>>>>>>>>> ::tripleo::profile::base::time::ptp for overcloud-controller-0.localdomain >>>>>>>>>>>>>>>>>>>> (file: /var/lib/tripleo-config/puppet_step_config.pp, line: 41, column: 1) >>>>>>>>>>>>>>>>>>>> on node* overcloud-controller-0.localdomain"], >>>>>>>>>>>>>>>>>>>> "stdout": "", "stdout_lines": []} >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Can someone please help in resolving this issue? >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>>>>> Anirudh Gupta >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From nsitlani03 at gmail.com Fri May 20 12:28:31 2022 From: nsitlani03 at gmail.com (Namrata Sitlani) Date: Fri, 20 May 2022 17:58:31 +0530 Subject: [magnum] [xena] [IPv6] What is needed to make dual-stack IPv4/6 work in Magnum-managed Kubernetes 1.21+? Message-ID: Hello All, We run release Xena, and we have deployed Kubernetes version 1.21.10 on Magnum with Fedora CoreOS version 34 and we are trying to have IPv6 support for that Kubernetes cluster, as Kubernetes 1.21 and later versions have dual-stack(IPv4/IPv6) support enabled by default. To achieve the same, we used a network with both IPv4 and IPv6 subnets and we got a successful creation of the cluster. The compute instances of the cluster get both IPv4 and IPv6 addresses, but the master and minion nodes get IPv4 as external IP, and also the container does not get an IPv6 address. Can someone please help us with the information, what changes are required to be made at the magnum client-side to have an IPv6 supported Kubernetes Cluster? Thanks, Namrata -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at etc.gen.nz Fri May 20 13:22:54 2022 From: andrew at etc.gen.nz (Andrew Ruthven) Date: Sat, 21 May 2022 01:22:54 +1200 Subject: [magnum] [xena] [IPv6] What is needed to make dual-stack IPv4/6 work in Magnum-managed Kubernetes 1.21+? In-Reply-To: References: Message-ID: <218cc6fc19116ebafd69c824217cc4e175e4ef23.camel@etc.gen.nz> Hi Nimrata, On Fri, 2022-05-20 at 17:58 +0530, Namrata Sitlani wrote: > Can someone please help us with the information, what changes are > required to be made at the magnum client-side to have an IPv6 > supported Kubernetes Cluster? Purely because I happened to be looking at the patches currently pending for Magnum just now, I spotted this one which will be of interest to you: https://review.opendev.org/c/openstack/magnum/+/802235 It is to add IPv6 support in Magnum. So it appears it isn't currently supported. Cheers, Andrew -- Andrew Ruthven, Wellington, New Zealand andrew at etc.gen.nz | Catalyst Cloud: | This space intentionally left blank https://catalystcloud.nz | -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elod.illes at est.tech Fri May 20 14:28:07 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 20 May 2022 16:28:07 +0200 Subject: [release] Release countdown for week R-19, May 23 - 27 Message-ID: Development Focus ----------------- We are now past the Zed-1 milestone. Teams should now be focused on feature development. General Information ------------------- Our next milestone in this development cycle will be Zed-2, on July 14th, 2022. This milestone is when we freeze the list of deliverables that will be included in the Zed final release, so if you plan to introduce new deliverables in this release, please propose a change to add an empty deliverable file in the deliverables/zed directory of the openstack/releases repository. Now is also generally a good time to look at bugfixes that were introduced in the master branch that might make sense to be backported and released in a stable release. If you have any question around the OpenStack release process, feel free to ask on this mailing-list or on the #openstack-release channel on IRC. Upcoming Deadlines & Dates -------------------------- Zed-2 Milestone: July 14th, 2022 OpenInfra Summit: June 7-9, 2022 (Berlin) El?d Ill?s irc: elodilles From elod.illes at est.tech Fri May 20 17:20:11 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 20 May 2022 19:20:11 +0200 Subject: [stable][glance] backport exception request for external HTTP API response code change In-Reply-To: References: Message-ID: <9572a95a-785d-ab96-28d3-3c63d186e5d2@est.tech> Thanks for the clear explanation Brian. Based on what you described + Dan's comment, this sounds logical as an exception to the policy and be backported to stable/yoga. It's good that this was cought in this fairly early phase. :) El?d (irc: elodilles) On 2022. 05. 20. 0:05, Dan Smith wrote: >> We recently noticed that an Image API call implemented in Yoga, which >> returns no content, is returning a 200 instead of a 202 (as the design >> [0] had specified). We would like to correct this in master (Zed >> development) and in the stable/yoga branch. Here are our reasons why >> we think this is a legitimate exception to the >> no-http-api-changes-backport policy: >> >> (1) Even ignoring the lack of a response body, a 200 (OK) is >> misleading because operation initiated by the PUT /v2/cache/{image_id} >> request may not succeed. A 202 (Accepted) signals more clearly to the >> operator that some followup is necessary to be sure the image has been >> cached. > ...may not succeed, will be processed in the background, and may never > actually happen. In other words, the textbook definition of 202 :) > >> (2) This is intended as an admin-only API call; the default policy is >> admin-only. So it has not been exposed to end-users. >> >> (3) The change [1] was first released in Yoga and there has not been >> much time for admins to begin consuming this feature. Further, there >> has not yet been a release from stable/yoga after glance 24.0.0. >> >> For these reasons, we request that the glance team be allowed to make >> this change. > Yep, my feeling is that we caught this very early and that it's a very > limited-scope operation which means the number of people potentially > impacted is very small. It's unlikely many people are yet running Yoga > in production, and of those, only some admins and scripts could be > affected, and only those that were super excited to use this brand new > rather minor feature right away. 
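The call being discussed is the admin-only PUT /v2/cache/{image_id} request quoted above. A hedged sketch of exercising it follows; GLANCE_ENDPOINT and IMAGE_ID are placeholders for your environment, and the only difference between glance 24.0.0 and the proposed fix is the status line:

    # Queue an image for caching through the Yoga cache API (admin-only).
    TOKEN=$(openstack token issue -f value -c id)
    curl -i -X PUT -H "X-Auth-Token: $TOKEN" "$GLANCE_ENDPOINT/v2/cache/$IMAGE_ID"
    # glance 24.0.0 answers "HTTP/1.1 200 OK"; with the proposed change the
    # status line becomes "HTTP/1.1 202 Accepted" (no response body either way).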
It's an API for a function that was > already being handled by a different tool, so people would have had to > *convert* their existing stuff to this already, which is even less > likely than some new exciting functionality that didn't exist before. > > I think the calculus comes down to: > > If we do backport it, a small number of super bleeding-edge admins *may* > have already jumped on writing against this API and may notice. The vast > majority of people that end up deploying Yoga will never experience > pain. > > If we don't backport it, we'll ensure that many more people will be > affected. People _will_ eventually deploy Yoga, and then they _will_ > deploy Zed. Those people _will_ experience a change, and I think it's > clear that this class of people is larger than the one described above. > > It sucks, but I think it's less pain overall to backport ASAP, reno the > mea culpa clearly, and limit the number of affected people to the much > smaller number. > > --Dan > From gmann at ghanshyammann.com Fri May 20 17:42:28 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 20 May 2022 12:42:28 -0500 Subject: [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> Message-ID: <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> ---- On Fri, 13 May 2022 20:43:57 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Writing on the top for quick reading as we have a consensus on this. > > In today's TC meeting [3], we discussed this topic with Foundation staff and we all > agreed to give the release name process handling to Foundation. TC will not be involved > in the process. The release name will be mainly used for marketing purposes and we will > use the release number as the primary identifier in our release scheduling, automated > processes, directory structure etc. > > I have proposed the patch to document it in the TC release naming process. > > - https://review.opendev.org/c/openstack/governance/+/841800 As the next step, we will have a voice call to figure out how to use the number/name in our development cycle/tooling. Meeting details are below: * When Tue May 24, 2022 9am ? 10am Pacific Time (16:00 - 17:00 UTC) * Joining info Join with Google Meet = meet.google.com/gwi-cphb-rya * Join by phone: (US) +1 563-447-5079 (PIN: 263292866) -gmann > > [1] https://meetings.opendev.org/meetings/tc/2022/tc.2022-05-12-15.00.log.html#l-245 > > -gmann > > > ---- On Fri, 29 Apr 2022 14:47:31 -0500 Ghanshyam Mann wrote ---- > > > > ---- On Fri, 29 Apr 2022 14:11:12 -0500 Goutham Pacha Ravi wrote ---- > > > On Fri, Apr 29, 2022 at 8:36 PM Slawek Kaplonski wrote: > > > > > > > > Hi, > > > > > > > > > > > > During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1]. > > > > > > > > It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages. > > > > > > > > > > > > Finally we decided that now, after Zed release, when we will go all round through alphabet it is very good time to change this policy and use only numeric version with "year"."release in the year". It is proposed in [2]. 
> > > > > > > > This is also good timing for such change because in the same release we are going to start our "Tick Tock" release cadence which means that every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.). > > > > > > Beloved TC, > > > > > > I'm highly disappointed in this 'decision', and would like for you to > > > reconsider. I see the reasons you cite, but I feel like we're throwing > > > the baby out with the bathwater here. Disagreements need not be > > > feared, why not allow them to be aired publicly? That's a tenet of > > > this open community. Allow names to be downvoted with reason during > > > the proposal phase, and they'll organically fall-off from favor. > > > > > > Release names have always been a bonding factor. I've been happy to > > > drum up contributor morale with our release names and the > > > stories/anecdotes behind them. Release naming will not hurt/help the > > > tick-tock release IMHO. We can append the release number to the name, > > > and call it a day if you want. > > > > I agree with the disagrement ratio and that should not stop us doing the things. > > But here we need to understand what type of disagrement we have and on what. > > Most of the disagrement were cutural or historical where people has shown it > > emotinally. And I personally as well as a TC or communitiy member does not feel > > goot to ignore them or give them any reasoning not to listen them (because I do not > > have any reasoning on these cultural/historical disagrement). > > > > Zed cycle was one good example of such thing when it was brought up in TC > > channel about war thing[1] and asked about change the Zed name. I will be happy > > to know what is best solution for this. > > > > 1. Change Zed name: it involve lot of technical work and communciation too. If yes then > > let's do this now. > > > > 2. Do not listen to these emotional request to change name: We did this at the end and I > > do not feel ok to do that. At least I do not want to ignore such request in future. > > > > Those are main reason we in TC decided to remvoe the name as they are culturally, emotionally > > tied. That is main reason of droping those not any techncial or work wise issue. > > > > [1] https://meetings.opendev.org/irclogs/%23openstack-tc/%23openstack-tc.2022-03-08.log.html#t2022-03-08T14:35:26 > > > > -gmann > > > > > > > > I do believe our current release naming process is a step out of the > > > TC's perceived charter. There are many technical challenges that the > > > TC is tackling, and coordinating a vote/slugfest about names isn't as > > > important as those. > > > As Allison suggests, we could seek help from the foundation to run the > > > community voting and vetting for the release naming process - and > > > expect the same level of transparency as the 4 opens that the > > > OpenStack community espouses. > > > > Yes we will offcourse open to that but at the same time we will be waiting > > for the foudnation proposal to sovle such issue irespective of who is doing name > > selection. So let's wait for that. 
> > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 > > > > > > > > [2] https://review.opendev.org/c/openstack/governance/+/839897 > > > > > > > > > > > > -- > > > > > > > > Slawek Kaplonski > > > > > > > > Principal Software Engineer > > > > > > > > Red Hat > > > > > > > > > > > > From gmann at ghanshyammann.com Fri May 20 18:07:33 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 20 May 2022 13:07:33 -0500 Subject: [all][tc] What's happening in Technical Committee: summary May 20, 2022: Reading: 5 min Message-ID: <180e2a740c5.f736664017388.5326238419604810055@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on May 19. Most of the meeting discussions are summarized in this email. Meeting summary logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-05-19-15.00.log.html * Next TC weekly meeting will be on May 26 Thursday at 15:00 UTC, feel free to add the topic on the agenda[1] by May 25. 2. What we completed this week: ========================= * Changed new release cadence terminology from tick-tock to "SLURP" [2][3]. 3. Activities In progress: ================== TC Tracker for Zed cycle ------------------------------ * Zed tracker etherpad includes the TC working items[4], we have started the many items. Open Reviews ----------------- * Seven open reviews for ongoing activities[5]. Change OpenStack release naming policy proposal ----------------------------------------------------------- As the next step, we will have a voice call to figure out how to use the number/name in our development cycle/tooling. Meeting detail is shared in ML[6] New release cadence "SLURP" open items -------------------------------------------------- 1. release notes strategy: Brian will be proposing the agreed approach for release notes for the "SLURP" releases. Improve project governance --------------------------------- Slawek has the proposal the framework up and it is under review[7]. New ELK service dashboard: e-r service ----------------------------------------------- Not much update in this than last week. TripleO team and dpawlik are working on the e-r repo merge and hosting part. Consistent and Secure Default RBAC ------------------------------------------- We will have our next call on May 24[8], feel free to add the topic you would like to discuss in etherpad[9]. We will try to cover the heat topic if there is feedback on Takashi sent an email asking the feedback about the 'split stack' approach[10] 2021 User Survey TC Question Analysis ----------------------------------------------- No update on this. The survey summary is up for review[11]. Feel free to check and provide feedback. Zed cycle Leaderless projects ---------------------------------- No updates on this. Only Adjutant project is leaderless/maintainer-less. We will check Adjutant's the situation again on ML and hope Braden will be ready with their company side permission[12]. Fixing Zuul config error ---------------------------- Requesting projects with zuul config error to look into those and fix them which should not take much time[13][14]. Project updates ------------------- * None 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[15]. 
[2] https://governance.openstack.org/tc/goals/selected/fips.html 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [16] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/840354 [3] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028540.html [4] https://etherpad.opendev.org/p/tc-zed-tracker [5] https://review.opendev.org/q/projects:openstack/governance+status:open [6] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028618.html [7] https://review.opendev.org/c/openstack/governance/+/839880 [8] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting [9] https://etherpad.opendev.org/p/rbac-zed-ptg#L97 [10] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028490.html [11] https://review.opendev.org/c/openstack/governance/+/836888 [12] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html [13] https://etherpad.opendev.org/p/zuul-config-error-openstack [14] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028603.html [15] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [16] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From sunny at openstack.org Fri May 20 16:23:27 2022 From: sunny at openstack.org (Sunny Cai) Date: Fri, 20 May 2022 09:23:27 -0700 Subject: Call for OpenStack projects virtual updates In-Reply-To: <5d6721a4-7e29-458a-a275-09223b3d5ecb@Spark> References: <261c32d7-bf30-4573-943d-811964c0e9e8@Spark> <5d6721a4-7e29-458a-a275-09223b3d5ecb@Spark> Message-ID: Hi all, I?d like to remind all OpenStack PTLs to submit your project recordings by?Friday, May 27?so we can promote them at the upcoming Berlin Summit and add them on the project navigator. Please let me know if you have any questions! Thanks, Sunny Cai OpenInfra Foundation Marketing & Community On May 9, 2022, 10:27 AM -0700, Sunny Cai , wrote: > Hi everyone, > > As the Berlin Summit is approaching in June, we are collecting project update recordings from each OpenInfa community to showcase what each project has accomplished in the past year/release. Please let me know if you?re interested in doing a prerecorded video of your project?s most recent updates. We will post all recordings on the OpenInfra Foundation YouTube channel and promote them at the Summit. Recordings will also be posted in the project navigator. > > We recommend the recordings to be less than 10 minutes long. If you would like to present with slides, here is a slide deck template if you need [1]. > > If you can submit your project recording to me by?Friday,?May 27, we?d love to promote them at the upcoming Berlin Summit. If you prefer to submit it after the Summit, I?ll send out another reminder on June 15 to collect any reminding recordings. > > Please let me know if you have any questions. > > [1]?https://docs.google.com/presentation/d/1SlWayfGc9CYAsKnS43UxVjO8NYr1pR-0Gi4R6bLCjUs/edit?usp=sharing > > Thanks, > > Sunny Cai > OpenInfra Foundation > Marketing & Community > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zigo at debian.org Fri May 20 22:57:26 2022 From: zigo at debian.org (Thomas Goirand) Date: Sat, 21 May 2022 00:57:26 +0200 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> Message-ID: <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> On 5/20/22 19:42, Ghanshyam Mann wrote: > * Joining info Join with Google Meet = meet.google.com/gwi-cphb-rya What would it take to make everyone switch to free software? We're moving from one non-free platform (zoom) to another (google meet), even if we have Jitsi that works very well... :/ Cheers, Thomas Goirand (zigo) From gmann at ghanshyammann.com Sat May 21 01:13:52 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 20 May 2022 20:13:52 -0500 Subject: [qa][requirements] Tempest Gate blocked for py3.6|7 support Message-ID: <180e42d8e9c.d7f7f37622699.5997519070259133028@ghanshyammann.com> Hello Everyone, As we know, in Zed cycle we have dropped the support of py3.6|7 from OpenStack which is why projects, lib like oslo started dropping it and made new releases. For example, oslo.log 5.0.0 does not support py3.6|7. Tempest being branchless and still supporting stable/victoria onwards stable branches thought of keeping the py36|7 support. But with the oslo dropped py3.6|7 and upper constraint has been updated to the oslo lib latest version made Tempest unit py3.6|7 test jobs failed. - https://bugs.launchpad.net/tempest/+bug/1975036 We have two options here: 1. requirements repo maintain different constraints for py3.6|7 and py3.8|9 which fix the Tempest py3.6| jobs and we can keep the python3.6|7 support in Tempest. Example: oslo.log which fixed the gate[1] but we might need to do the same for Tempest's other deps - https://review.opendev.org/c/openstack/requirements/+/842820 2. Drop the support of py3.6|7 from Tempest If the requirement team is not ok with the above solution then we can think of dropping the py3.6|7 support from Tempest too. This will stop Tempest to run on py3.6|7 but it will not block Tempest to test the OpenStack running on py3.6|7 as that can be done by running the Tempest in virtual env. Opinion? 
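For operators, option 2 would look roughly like the sketch below; it assumes a newer interpreter (python3.8 here, purely illustrative) is available on the host running the tests, so Tempest is isolated from the distro Python that runs the cloud itself:

    # Run Tempest from its own virtualenv instead of the distro Python:
    python3.8 -m venv ~/tempest-venv
    source ~/tempest-venv/bin/activate
    pip install tempest
    tempest init ~/tempest-workspace
    # Edit ~/tempest-workspace/etc/tempest.conf for the cloud under test, then:
    cd ~/tempest-workspace
    tempest run --smoke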
NOTE: Until we figure this out, I have proposed to temporarily stop running py3.6|7 on tempest gate, (especially to get c9s volume detach failure fix to merged otherwise that will block other projects gate too) - https://review.opendev.org/c/openstack/tempest/+/842821 [1] https://review.opendev.org/c/openstack/tempest/+/842819 -gmann From emccormick at cirrusseven.com Sat May 21 01:56:27 2022 From: emccormick at cirrusseven.com (Erik McCormick) Date: Fri, 20 May 2022 21:56:27 -0400 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> Message-ID: On Fri, May 20, 2022, 7:03 PM Thomas Goirand wrote: > On 5/20/22 19:42, Ghanshyam Mann wrote: > > * Joining info Join with Google Meet = meet.google.com/gwi-cphb-rya > > What would it take to make everyone switch to free software? We're > moving from one non-free platform (zoom) to another (google meet), even > if we have Jitsi that works very well... :/ > Jitsi degrades beyond 16 or so interactive users. I love it and wish it well, but it's just not there yet. If the situation has changed in recent months, great then let's take a look. > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat May 21 04:07:15 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 20 May 2022 23:07:15 -0500 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> Message-ID: <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com> ---- On Fri, 20 May 2022 20:56:27 -0500 Erik McCormick wrote ---- > > On Fri, May 20, 2022, 7:03 PM Thomas Goirand wrote: > On 5/20/22 19:42, Ghanshyam Mann wrote: > > * Joining info Join with Google Meet = meet.google.com/gwi-cphb-rya > > What would it take to make everyone switch to free software? We're > moving from one non-free platform (zoom) to another (google meet), even > if we have Jitsi that works very well... :/ > > Jitsi degrades beyond 16 or so interactive users. I love it and wish it well, but it's just not there yet. If the situation has changed in recent months, great then let's take a look. Yeah, we try to use that as our first preference but it did not work as expected. In a recent usage in RBAC discussion a few weeks ago, many attendees complained about audio, joining issues or so. That is why we are using non-free tooling. @zigo, If you have tried any free and stable platforms for video calls, please suggest and we will love to use that. 
-gmann > > > Cheers, > > Thomas Goirand (zigo) > > From andrew at etc.gen.nz Sat May 21 04:29:35 2022 From: andrew at etc.gen.nz (Andrew Ruthven) Date: Sat, 21 May 2022 16:29:35 +1200 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com> Message-ID: <7848ecb68f5d5dae64c7101b73d95c40374e5ece.camel@etc.gen.nz> On Fri, 2022-05-20 at 23:07 -0500, Ghanshyam Mann wrote: > @zigo, If you have tried any free and stable platforms for video > calls, please suggest and we will love > to use that. Hey, Here at Catalyst Cloud we use Big Blue Button - ?https://bigbluebutton.org/?- which we host ourselves (on our OpenStack public cloud!). We regularly have 100 people in meetings. You can try BBB on their website, although I'm not sure what the demo service is like. I'm sure that Catalyst could host a meeting if the demo service isn't suitable. Cheers, Andrew -- Andrew Ruthven, Wellington, New Zealand andrew at etc.gen.nz | Catalyst Cloud: | This space intentionally left blank https://catalystcloud.nz | -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Sat May 21 05:38:54 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Sat, 21 May 2022 11:08:54 +0530 Subject: updating the overcloud domain in undercloud.conf | Wallaby | Tripleo Message-ID: Hi, I am currently deploying openstack wallaby using tripleo method, and my overcloud is up, now i want to change the overcloud domain specified in the udercloud domain and then redeploy the overcloud again. So if I change the undercloud.conf to the new domain and run openstack undercloud install, will it affect the already inspected nodes, provisioned network, ceph and nodes? Do I have to do introspection and other activities again? My plan was to : 1. change the domain in undercloud.conf 2. run openstack undercloud install (hoping it doesn't affect anything else and just updates the domain name) 3. redeploy overcloud stack only using openstack overcloud deploy (no introspection, no node provision and other activities ). With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From Russell.Stather at ignitiongroup.co.za Sat May 21 08:48:54 2022 From: Russell.Stather at ignitiongroup.co.za (Russell Stather) Date: Sat, 21 May 2022 08:48:54 +0000 Subject: Issue creating octavia loadbalancer Message-ID: Hi Trying to create a load balancer. Get the following error. igadmin at ig-umh-maas:~$ openstack loadbalancer create -f value -c vip_port_id --name lb1 --vip-subnet-id 66690e99-44b2-4233-8f16-d539c8a05090 Provider 'amphora' reports error: Port security must be enabled on the VIP network. (HTTP 500) (Request-ID: req-642a4b4d-0d45-4a08-ba47-d4232a350703) igadmin at ig-umh-maas:~$ !1067 openstack network set --enable-port-security int_net BadRequestException: 400: Client Error for url: https://10.0.110.231:9696/v2.0/networks/be5f8494-1964-466b-a47a-8e5430976009, Unrecognized attribute(s) 'port_security_enabled' Anyone come across this issue? 
Thanks Russell -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat May 21 15:33:13 2022 From: zigo at debian.org (Thomas Goirand) Date: Sat, 21 May 2022 17:33:13 +0200 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com> Message-ID: <94458541-bccb-4fe8-7ad4-93bdc14065b5@debian.org> On 5/21/22 06:07, Ghanshyam Mann wrote: > ---- On Fri, 20 May 2022 20:56:27 -0500 Erik McCormick wrote ---- > > > > On Fri, May 20, 2022, 7:03 PM Thomas Goirand wrote: > > On 5/20/22 19:42, Ghanshyam Mann wrote: > > > * Joining info Join with Google Meet = meet.google.com/gwi-cphb-rya > > > > What would it take to make everyone switch to free software? We're > > moving from one non-free platform (zoom) to another (google meet), even > > if we have Jitsi that works very well... :/ > > > > Jitsi degrades beyond 16 or so interactive users. I love it and wish it well, but it's just not there yet. If the situation has changed in recent months, great then let's take a look. > > Yeah, we try to use that as our first preference but it did not work as expected. In a recent usage in RBAC discussion a few weeks ago, > many attendees complained about audio, joining issues or so. That is why we are using non-free tooling. > > @zigo, If you have tried any free and stable platforms for video calls, please suggest and we will love > to use that. > > -gmann As I understand, the point isn't to make more than 16 persons talk in the meeting. Let me know if I'm wrong. In such case, a setup similar to Debconf (which used Jitsi for Debconf 2020 and 2021) can be used. You get just a few people in the meeting, and everyone else just listens to the broacast (ie: a regular online video in your browser or VLC). The full setup is well documented [1], and we have free software ansible scripts [2]. This allows thousands of attendees, plus recording and reviewing of the videos. I very much would love if there was some efforts put in this direction (or some similar setup, as long as it's fully free). Hoping this helps, Cheers, Thomas Goirand (zigo) [1] https://video.debconf.org [2] https://salsa.debian.org/debconf-video-team From mthode at mthode.org Sat May 21 16:23:10 2022 From: mthode at mthode.org (Matthew Thode) Date: Sat, 21 May 2022 11:23:10 -0500 Subject: [qa][requirements] Tempest Gate blocked for py3.6|7 support In-Reply-To: <180e42d8e9c.d7f7f37622699.5997519070259133028@ghanshyammann.com> References: <180e42d8e9c.d7f7f37622699.5997519070259133028@ghanshyammann.com> Message-ID: <20220521162310.z5csf5gytmtkenaa@mthode.org> On 22-05-20 20:13:52, Ghanshyam Mann wrote: > Hello Everyone, > > As we know, in Zed cycle we have dropped the support of py3.6|7 from OpenStack which is why > projects, lib like oslo started dropping it and made new releases. For example, oslo.log 5.0.0 does not > support py3.6|7. > > Tempest being branchless and still supporting stable/victoria onwards stable branches thought of > keeping the py36|7 support. 
But with the oslo dropped py3.6|7 and upper constraint has been updated > to the oslo lib latest version made Tempest unit py3.6|7 test jobs failed. > > - https://bugs.launchpad.net/tempest/+bug/1975036 > > We have two options here: > > 1. requirements repo maintain different constraints for py3.6|7 and py3.8|9 which fix the Tempest py3.6| jobs > and we can keep the python3.6|7 support in Tempest. Example: oslo.log which fixed the gate[1] but we might > need to do the same for Tempest's other deps > - https://review.opendev.org/c/openstack/requirements/+/842820 > If we go this route, I think I'd like to have a separate file per python version. At that point 'unmaintained' versions of python/constraints could have their maintanece migrated to another team if needed. Also, the targets that are not for the current development cycle should have a comment stating such at the top of the file. A problem with this is the sprawl of tests that could be needed. > 2. Drop the support of py3.6|7 from Tempest > If the requirement team is not ok with the above solution then we can think of dropping the py3.6|7 support > from Tempest too. This will stop Tempest to run on py3.6|7 but it will not block Tempest to test the OpenStack > running on py3.6|7 as that can be done by running the Tempest in virtual env. > One option is to generate th 36/37 constraints and putting the file in the tempest repo. > Opinion? > > NOTE: Until we figure this out, I have proposed to temporarily stop running py3.6|7 on tempest gate, (especially > to get c9s volume detach failure fix to merged otherwise that will block other projects gate too) > > - https://review.opendev.org/c/openstack/tempest/+/842821 > > [1] https://review.opendev.org/c/openstack/tempest/+/842819 > > -gmann > -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From oliver.weinmann at me.com Sat May 21 17:11:55 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Sat, 21 May 2022 19:11:55 +0200 Subject: Magnum CSI Cinder Plugin broken in Yoga? In-Reply-To: References: Message-ID: Hi Ammad, sorry for the late response. I reinstalled my Yoga cluster and now all seems to be fine: kubectl get pods -n kube-system | grep -i csi csi-cinder-controllerplugin-0??????????????? 5/5???? Running 0????????? 4h50m csi-cinder-nodeplugin-b7pcq????????????????? 2/2???? Running 0????????? 4h50m csi-cinder-nodeplugin-bd5sb????????????????? 2/2???? Running 0????????? 4h46m csi-cinder-nodeplugin-kghsb????????????????? 2/2???? Running 0????????? 4h45m Cheers, Oliver Am 12.05.2022 um 07:04 schrieb Oliver Weinmann: > kubectl get pods -n kube-system | grep -i csi From johnsomor at gmail.com Sat May 21 18:37:11 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Sat, 21 May 2022 11:37:11 -0700 Subject: [neutron] Re: Issue creating octavia loadbalancer In-Reply-To: References: Message-ID: Hi Russell, The error from Octavia is saying that the "port_security" extension is not enabled in neutron. This is required to allow Octavia to manage security groups and create a secondary IP for the VIP. 
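A minimal sketch of what enabling that extension usually looks like, assuming the ML2 plugin; the documentation and example configuration linked in the next paragraph are the authoritative reference:

    # /etc/neutron/plugins/ml2/ml2_conf.ini (sketch; keep any extension drivers
    # you already use in the same comma-separated list, then restart neutron-server)
    [ml2]
    extension_drivers = port_security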
You can confirm that your neutron is missing the extension by running this command: openstack extension list --network|grep port-security The configuration setting is documented here: https://docs.openstack.org/neutron/yoga/configuration/ml2-conf.html#ml2.extension_drivers There is an example configuration file from the neutron test jobs here: https://zuul.opendev.org/t/openstack/build/29a7c6b716db4bb08471cc27c4baef22/log/controller/logs/etc/neutron/plugins/ml2/ml2_conf.ini#156 Michael On Sat, May 21, 2022 at 1:57 AM Russell Stather wrote: > > Hi > > Trying to create a load balancer. Get the following error. > > igadmin at ig-umh-maas:~$ openstack loadbalancer create -f value -c vip_port_id --name lb1 --vip-subnet-id 66690e99-44b2-4233-8f16-d539c8a05090 > Provider 'amphora' reports error: Port security must be enabled on the VIP network. (HTTP 500) (Request-ID: req-642a4b4d-0d45-4a08-ba47-d4232a350703) > igadmin at ig-umh-maas:~$ !1067 > openstack network set --enable-port-security int_net > BadRequestException: 400: Client Error for url: https://10.0.110.231:9696/v2.0/networks/be5f8494-1964-466b-a47a-8e5430976009, Unrecognized attribute(s) 'port_security_enabled' > > Anyone come across this issue? > > Thanks > > Russell From hanguangyu2 at gmail.com Sun May 22 09:27:00 2022 From: hanguangyu2 at gmail.com (=?UTF-8?B?6Z+p5YWJ5a6H?=) Date: Sun, 22 May 2022 17:27:00 +0800 Subject: [dev][nova] How to add a column in table of Nova Database Message-ID: Hi, I'm a beginner in OpenStack development. I would like to try modifying by adding a property to the 'Instances' database table. But I didn't find description in the documentation of the data model mechanism and how to extend the database. https://docs.openstack.org/nova/latest/ Now, I know that it involves versined object model and alembic. My question is: What is the process of adding a column to a table in the database? Could someone show me the process of modifying a database table or recommend me the relevant documentation Best wishes, Han From Russell.Stather at ignitiongroup.co.za Sun May 22 12:41:34 2022 From: Russell.Stather at ignitiongroup.co.za (Russell Stather) Date: Sun, 22 May 2022 12:41:34 +0000 Subject: [neutron] Re: Issue creating octavia loadbalancer In-Reply-To: References: Message-ID: Hi Thanks, I added the config in the file and I have now got past that hurdle :) Thanks Russell ________________________________ From: Michael Johnson Sent: 21 May 2022 20:37 To: Russell Stather Cc: openstack-discuss at lists.openstack.org Subject: [neutron] Re: Issue creating octavia loadbalancer Hi Russell, The error from Octavia is saying that the "port_security" extension is not enabled in neutron. This is required to allow Octavia to manage security groups and create a secondary IP for the VIP. You can confirm that your neutron is missing the extension by running this command: openstack extension list --network|grep port-security The configuration setting is documented here: https://docs.openstack.org/neutron/yoga/configuration/ml2-conf.html#ml2.extension_drivers There is an example configuration file from the neutron test jobs here: https://zuul.opendev.org/t/openstack/build/29a7c6b716db4bb08471cc27c4baef22/log/controller/logs/etc/neutron/plugins/ml2/ml2_conf.ini#156 Michael On Sat, May 21, 2022 at 1:57 AM Russell Stather wrote: > > Hi > > Trying to create a load balancer. Get the following error. 
> > igadmin at ig-umh-maas:~$ openstack loadbalancer create -f value -c vip_port_id --name lb1 --vip-subnet-id 66690e99-44b2-4233-8f16-d539c8a05090 > Provider 'amphora' reports error: Port security must be enabled on the VIP network. (HTTP 500) (Request-ID: req-642a4b4d-0d45-4a08-ba47-d4232a350703) > igadmin at ig-umh-maas:~$ !1067 > openstack network set --enable-port-security int_net > BadRequestException: 400: Client Error for url: https://10.0.110.231:9696/v2.0/networks/be5f8494-1964-466b-a47a-8e5430976009, Unrecognized attribute(s) 'port_security_enabled' > > Anyone come across this issue? > > Thanks > > Russell -------------- next part -------------- An HTML attachment was scrubbed... URL: From Russell.Stather at ignitiongroup.co.za Sun May 22 12:42:57 2022 From: Russell.Stather at ignitiongroup.co.za (Russell Stather) Date: Sun, 22 May 2022 12:42:57 +0000 Subject: Octavia Amphora staying in PENDING_CREATE forever. Message-ID: I have uploaded an amphora image, but it sits in pending_create forever. Any ideas what it could be waiting for? Any ideas appreciated. Russell igadmin at ig-umh-maas:~$ openstack loadbalancer show lb1 +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | availability_zone | None | | created_at | 2022-05-22T12:20:34 | | description | | | flavor_id | None | | id | 66cd63ca-9fb2-42c9-b393-18cb3662b530 | | listeners | | | name | lb1 | | operating_status | OFFLINE | | pools | | | project_id | bfb1cf98939f4885b908f8667a66907e | | provider | amphora | | provisioning_status | PENDING_CREATE | | updated_at | None | | vip_address | 192.168.0.87 | | vip_network_id | be5f8494-1964-466b-a47a-8e5430976009 | | vip_port_id | 00dcde13-7227-429f-b0e8-dc3af470f1ae | | vip_qos_policy_id | None | | vip_subnet_id | 66690e99-44b2-4233-8f16-d539c8a05090 | | tags | | +---------------------+--------------------------------------+ -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Sun May 22 14:18:36 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Sun, 22 May 2022 16:18:36 +0200 Subject: Octavia Amphora staying in PENDING_CREATE forever. In-Reply-To: References: Message-ID: <63E98A61-555B-4534-9A64-030502CAFB85@me.com> Hi Russel, Usually this happened to me when the Octavia network was not reachable from the controller nodes. Can you run: Openstack loadbalancer amphora list This should show you the assigned ip of the amphora instance. Try to ping this from your controller nodes. Von meinem iPhone gesendet > Am 22.05.2022 um 14:45 schrieb Russell Stather : > > ? > > I have uploaded an amphora image, but it sits in pending_create forever. Any ideas what it could be waiting for? > > Any ideas appreciated. 
> > Russell > > igadmin at ig-umh-maas:~$ openstack loadbalancer show lb1 > +---------------------+--------------------------------------+ > | Field | Value | > +---------------------+--------------------------------------+ > | admin_state_up | True | > | availability_zone | None | > | created_at | 2022-05-22T12:20:34 | > | description | | > | flavor_id | None | > | id | 66cd63ca-9fb2-42c9-b393-18cb3662b530 | > | listeners | | > | name | lb1 | > | operating_status | OFFLINE | > | pools | | > | project_id | bfb1cf98939f4885b908f8667a66907e | > | provider | amphora | > | provisioning_status | PENDING_CREATE | > | updated_at | None | > | vip_address | 192.168.0.87 | > | vip_network_id | be5f8494-1964-466b-a47a-8e5430976009 | > | vip_port_id | 00dcde13-7227-429f-b0e8-dc3af470f1ae | > | vip_qos_policy_id | None | > | vip_subnet_id | 66690e99-44b2-4233-8f16-d539c8a05090 | > | tags | | > +---------------------+--------------------------------------+ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Russell.Stather at ignitiongroup.co.za Sun May 22 14:54:14 2022 From: Russell.Stather at ignitiongroup.co.za (Russell Stather) Date: Sun, 22 May 2022 14:54:14 +0000 Subject: Octavia Amphora staying in PENDING_CREATE forever. In-Reply-To: <63E98A61-555B-4534-9A64-030502CAFB85@me.com> References: <63E98A61-555B-4534-9A64-030502CAFB85@me.com> Message-ID: Hi igadmin at ig-umh-maas:~$ openstack loadbalancer amphora list +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ | id | loadbalancer_id | status | role | lb_network_ip | ha_ip | +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ | 5364e993-0f3b-477f-ac03-07fb6767480f | 962204f0-516a-4bc5-886e-6ed27f98efad | BOOTING | None | fc00:e3bf:6573:9272:f816:3eff:fe57:6acf | None | +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ This gives me an IPv6 address. What do you mean by the controller nodes? The node running octavia itself? ________________________________ From: Oliver Weinmann Sent: 22 May 2022 16:18 To: Russell Stather Cc: openstack-discuss at lists.openstack.org Subject: Re: Octavia Amphora staying in PENDING_CREATE forever. Hi Russel, Usually this happened to me when the Octavia network was not reachable from the controller nodes. Can you run: Openstack loadbalancer amphora list This should show you the assigned ip of the amphora instance. Try to ping this from your controller nodes. Von meinem iPhone gesendet Am 22.05.2022 um 14:45 schrieb Russell Stather : ? I have uploaded an amphora image, but it sits in pending_create forever. Any ideas what it could be waiting for? Any ideas appreciated. 
Russell igadmin at ig-umh-maas:~$ openstack loadbalancer show lb1 +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | availability_zone | None | | created_at | 2022-05-22T12:20:34 | | description | | | flavor_id | None | | id | 66cd63ca-9fb2-42c9-b393-18cb3662b530 | | listeners | | | name | lb1 | | operating_status | OFFLINE | | pools | | | project_id | bfb1cf98939f4885b908f8667a66907e | | provider | amphora | | provisioning_status | PENDING_CREATE | | updated_at | None | | vip_address | 192.168.0.87 | | vip_network_id | be5f8494-1964-466b-a47a-8e5430976009 | | vip_port_id | 00dcde13-7227-429f-b0e8-dc3af470f1ae | | vip_qos_policy_id | None | | vip_subnet_id | 66690e99-44b2-4233-8f16-d539c8a05090 | | tags | | +---------------------+--------------------------------------+ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sun May 22 16:43:20 2022 From: zigo at debian.org (Thomas Goirand) Date: Sun, 22 May 2022 18:43:20 +0200 Subject: Octavia Amphora staying in PENDING_CREATE forever. In-Reply-To: References: <63E98A61-555B-4534-9A64-030502CAFB85@me.com> Message-ID: <8c773219-129e-4d0e-fa9a-cf1ff698ccdc@debian.org> On 5/22/22 16:54, Russell Stather wrote: > Hi > > igadmin at ig-umh-maas:~$ openstack loadbalancer amphora list > +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ > | id ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? | loadbalancer_id > ? ? ?| status ?| role | lb_network_ip ? ? ? ? ? ? ? ? ? ? ? ? ? | ha_ip | > +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ > | 5364e993-0f3b-477f-ac03-07fb6767480f | > 962204f0-516a-4bc5-886e-6ed27f98efad | BOOTING | None | > fc00:e3bf:6573:9272:f816:3eff:fe57:6acf | None ?| > +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ > > This gives me an IPv6 address. What do you mean by the controller nodes? > The node running octavia itself? OpenStack has no "controller" but this therm is usually used for the servers running the API and workers. In this case, you want the nodes running octavia-worker. In the logs of the workers, you should be able to see that it cannot ssh the amphora VMs. The IPv6 that you saw is the ha_ip, ie the VRRP port. This is *not* the IP of the amphora VMs that are booting. These are supposed to be in "loadbalancer_ip". However, you have nothing in there. So probably you haven't configured Octavia correctly. Did you create a network especially for octavia, and did you write its ID in /etc/octavia/octavia.conf? Also, did you: - create an ssh key for Octavia? - create a PKI for Octavia? I created this script for the Octavia PKI, that you can simply run on one controller, and then copy the certs in the other nodes running the Octavia services: https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/yoga/utils/usr/bin/oci-octavia-certs This script can be used (though you may want to customize it, especially the IP addresses, vlan, etc.) 
to create the ssh key, networking, etc.: https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/yoga/utils/usr/bin/oci-octavia-amphora-secgroups-sshkey-lbrole-and-network I hope this helps, Cheers, Thomas Goirand (zigo) From oliver.weinmann at me.com Sun May 22 18:01:15 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Sun, 22 May 2022 20:01:15 +0200 Subject: Octavia Amphora staying in PENDING_CREATE forever. In-Reply-To: <8c773219-129e-4d0e-fa9a-cf1ff698ccdc@debian.org> References: <8c773219-129e-4d0e-fa9a-cf1ff698ccdc@debian.org> Message-ID: <242B88C0-7D8F-4DCD-ACE8-C24AEC1C39BD@me.com> Sorry I meant control nodes. This term is used in kolla-ansible. I never deployed Openstack from scratch. I always used some tool like packstack, tripleo or now kolla-ansible. To me it seems a lot easier as kolla-ansible can do all the magic. You simply enable Octavia, set some parameters and you are set. Von meinem iPhone gesendet > Am 22.05.2022 um 18:47 schrieb Thomas Goirand : > > ?On 5/22/22 16:54, Russell Stather wrote: >> Hi >> igadmin at ig-umh-maas:~$ openstack loadbalancer amphora list >> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ >> | id | loadbalancer_id | status | role | lb_network_ip | ha_ip | >> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ >> | 5364e993-0f3b-477f-ac03-07fb6767480f | 962204f0-516a-4bc5-886e-6ed27f98efad | BOOTING | None | fc00:e3bf:6573:9272:f816:3eff:fe57:6acf | None | >> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ >> This gives me an IPv6 address. What do you mean by the controller nodes? The node running octavia itself? > > OpenStack has no "controller" but this therm is usually used for the servers running the API and workers. > > In this case, you want the nodes running octavia-worker. In the logs of the workers, you should be able to see that it cannot ssh the amphora VMs. > > The IPv6 that you saw is the ha_ip, ie the VRRP port. This is *not* the IP of the amphora VMs that are booting. These are supposed to be in "loadbalancer_ip". However, you have nothing in there. So probably you haven't configured Octavia correctly. > > Did you create a network especially for octavia, and did you write its ID in /etc/octavia/octavia.conf? > > Also, did you: > - create an ssh key for Octavia? > - create a PKI for Octavia? > > I created this script for the Octavia PKI, that you can simply run on one controller, and then copy the certs in the other nodes running the Octavia services: > https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/yoga/utils/usr/bin/oci-octavia-certs > > This script can be used (though you may want to customize it, especially the IP addresses, vlan, etc.) to create the ssh key, networking, etc.: > > https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/yoga/utils/usr/bin/oci-octavia-amphora-secgroups-sshkey-lbrole-and-network > > I hope this helps, > Cheers, > > Thomas Goirand (zigo) > From Russell.Stather at ignitiongroup.co.za Sun May 22 18:34:00 2022 From: Russell.Stather at ignitiongroup.co.za (Russell Stather) Date: Sun, 22 May 2022 18:34:00 +0000 Subject: Octavia Amphora staying in PENDING_CREATE forever. 
In-Reply-To: <242B88C0-7D8F-4DCD-ACE8-C24AEC1C39BD@me.com> References: <8c773219-129e-4d0e-fa9a-cf1ff698ccdc@debian.org> <242B88C0-7D8F-4DCD-ACE8-C24AEC1C39BD@me.com> Message-ID: aha, your script helped a lot ? I see that the lb management lan needs to be external, and routable from the machines running openstack control nodes. I need my network guys to allocate an extra external network, so I can put that as the lb management network. I'll try tomorrow and let you know if I succeed. ________________________________ From: Oliver Weinmann Sent: 22 May 2022 20:01 To: Thomas Goirand Cc: openstack-discuss at lists.openstack.org Subject: Re: Octavia Amphora staying in PENDING_CREATE forever. Sorry I meant control nodes. This term is used in kolla-ansible. I never deployed Openstack from scratch. I always used some tool like packstack, tripleo or now kolla-ansible. To me it seems a lot easier as kolla-ansible can do all the magic. You simply enable Octavia, set some parameters and you are set. Von meinem iPhone gesendet > Am 22.05.2022 um 18:47 schrieb Thomas Goirand : > > ?On 5/22/22 16:54, Russell Stather wrote: >> Hi >> igadmin at ig-umh-maas:~$ openstack loadbalancer amphora list >> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ >> | id | loadbalancer_id | status | role | lb_network_ip | ha_ip | >> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ >> | 5364e993-0f3b-477f-ac03-07fb6767480f | 962204f0-516a-4bc5-886e-6ed27f98efad | BOOTING | None | fc00:e3bf:6573:9272:f816:3eff:fe57:6acf | None | >> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ >> This gives me an IPv6 address. What do you mean by the controller nodes? The node running octavia itself? > > OpenStack has no "controller" but this therm is usually used for the servers running the API and workers. > > In this case, you want the nodes running octavia-worker. In the logs of the workers, you should be able to see that it cannot ssh the amphora VMs. > > The IPv6 that you saw is the ha_ip, ie the VRRP port. This is *not* the IP of the amphora VMs that are booting. These are supposed to be in "loadbalancer_ip". However, you have nothing in there. So probably you haven't configured Octavia correctly. > > Did you create a network especially for octavia, and did you write its ID in /etc/octavia/octavia.conf? > > Also, did you: > - create an ssh key for Octavia? > - create a PKI for Octavia? > > I created this script for the Octavia PKI, that you can simply run on one controller, and then copy the certs in the other nodes running the Octavia services: > https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/yoga/utils/usr/bin/oci-octavia-certs > > This script can be used (though you may want to customize it, especially the IP addresses, vlan, etc.) to create the ssh key, networking, etc.: > > https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/yoga/utils/usr/bin/oci-octavia-amphora-secgroups-sshkey-lbrole-and-network > > I hope this helps, > Cheers, > > Thomas Goirand (zigo) > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnsomor at gmail.com Sun May 22 22:37:51 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Sun, 22 May 2022 15:37:51 -0700 Subject: Octavia Amphora staying in PENDING_CREATE forever. In-Reply-To: References: <8c773219-129e-4d0e-fa9a-cf1ff698ccdc@debian.org> <242B88C0-7D8F-4DCD-ACE8-C24AEC1C39BD@me.com> Message-ID: Also an FYI, they will not stay in PENDING_* forever. They are only in that state when a controller is working on the LB, like retrying to connect. The worker process logs will show the retrying warning messages that point to the issue. After your configured retry timeouts expire, the controller will mark the LB in provisioning status of ERROR when it gives up retrying. At that point you can delete it or failover the load balancer to try again. I don't know what the timeouts are configured for when deployed with kolla, but you might adjust them for your organization's preference (i.e retry a really long time or fail quickly). Michael On Sun, May 22, 2022 at 11:41 AM Russell Stather wrote: > > aha, your script helped a lot ? > > I see that the lb management lan needs to be external, and routable from the machines running openstack control nodes. > > I need my network guys to allocate an extra external network, so I can put that as the lb management network. I'll try tomorrow and let you know if I succeed. > ________________________________ > From: Oliver Weinmann > Sent: 22 May 2022 20:01 > To: Thomas Goirand > Cc: openstack-discuss at lists.openstack.org > Subject: Re: Octavia Amphora staying in PENDING_CREATE forever. > > Sorry I meant control nodes. This term is used in kolla-ansible. I never deployed Openstack from scratch. I always used some tool like packstack, tripleo or now kolla-ansible. To me it seems a lot easier as kolla-ansible can do all the magic. You simply enable Octavia, set some parameters and you are set. > > Von meinem iPhone gesendet > > > Am 22.05.2022 um 18:47 schrieb Thomas Goirand : > > > > ?On 5/22/22 16:54, Russell Stather wrote: > >> Hi > >> igadmin at ig-umh-maas:~$ openstack loadbalancer amphora list > >> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ > >> | id | loadbalancer_id | status | role | lb_network_ip | ha_ip | > >> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ > >> | 5364e993-0f3b-477f-ac03-07fb6767480f | 962204f0-516a-4bc5-886e-6ed27f98efad | BOOTING | None | fc00:e3bf:6573:9272:f816:3eff:fe57:6acf | None | > >> +--------------------------------------+--------------------------------------+---------+------+-----------------------------------------+-------+ > >> This gives me an IPv6 address. What do you mean by the controller nodes? The node running octavia itself? > > > > OpenStack has no "controller" but this therm is usually used for the servers running the API and workers. > > > > In this case, you want the nodes running octavia-worker. In the logs of the workers, you should be able to see that it cannot ssh the amphora VMs. > > > > The IPv6 that you saw is the ha_ip, ie the VRRP port. This is *not* the IP of the amphora VMs that are booting. These are supposed to be in "loadbalancer_ip". However, you have nothing in there. So probably you haven't configured Octavia correctly. > > > > Did you create a network especially for octavia, and did you write its ID in /etc/octavia/octavia.conf? 
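As an illustration only, the Octavia settings being referred to usually sit in /etc/octavia/octavia.conf and look roughly like the sketch below (section and option names are from a stock octavia.conf as far as I recall them; every ID, name and value here is a placeholder, and a kolla-ansible deployment generates its own):

[controller_worker]
amp_boot_network_list = <lb-mgmt-net UUID>
amp_secgroup_list = <lb-mgmt security group ID>
amp_flavor_id = <amphora flavor ID>
amp_image_tag = amphora
amp_ssh_key_name = <octavia ssh key name>

[haproxy_amphora]
# These two roughly control how long the worker keeps retrying the
# amphora before the load balancer is finally marked ERROR.
connection_max_retries = 120
connection_retry_interval = 5

Once the retries are exhausted and the load balancer goes to ERROR, it can be retried or cleaned up with, for example:

$ openstack loadbalancer failover <lb-id>
$ openstack loadbalancer delete --cascade <lb-id>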
> > > > Also, did you: > > - create an ssh key for Octavia? > > - create a PKI for Octavia? > > > > I created this script for the Octavia PKI, that you can simply run on one controller, and then copy the certs in the other nodes running the Octavia services: > > https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/yoga/utils/usr/bin/oci-octavia-certs > > > > This script can be used (though you may want to customize it, especially the IP addresses, vlan, etc.) to create the ssh key, networking, etc.: > > > > https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/yoga/utils/usr/bin/oci-octavia-amphora-secgroups-sshkey-lbrole-and-network > > > > I hope this helps, > > Cheers, > > > > Thomas Goirand (zigo) > > > From skaplons at redhat.com Mon May 23 06:32:52 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 23 May 2022 08:32:52 +0200 Subject: [neutron] Bug deputy report - week of May 16th Message-ID: <1739410.UJQ5dUPfrc@p1> Hi, I was Neutron bug deputy last week. It was pretty busy week this time. Below is summary of new bugs opened during that time: ## High * https://bugs.launchpad.net/neutron/+bug/1973783 - [devstack] Segment plugin reports Traceback as placement client not configured - assigned to Yatin, in progress * https://bugs.launchpad.net/neutron/+bug/1974144 - [sqlalchemy-20] Missing DB context in "_get_dvr_hosts_for_subnets" - assigned to Rodolfo, in progress * https://bugs.launchpad.net/neutron/+bug/1974142 - [sqlalchemy-20] Missing DB context in "_update_subnet_postcommit" - assigned to Rodolfo, in progress * https://bugs.launchpad.net/neutron/+bug/1974057 - [neutron-dynamic-routing] Plugin RPC queue should be consumed by RPC workers - assigned to Renat Nurgaliyev, in progress ## Medium * https://bugs.launchpad.net/neutron/+bug/1973726 - [DB] "SubnetPool" exact match queries to non-indexed columns - *needs assignment* * https://bugs.launchpad.net/neutron/+bug/1973765 - LB creation failed due to address already in use - assigned to Fernando, fix proposed https://review.opendev.org/c/openstack/ovn-octavia-provider/+/842107 * https://bugs.launchpad.net/neutron/+bug/1974183 - Neutron allows creation of trunks for VIF_TYPE_HW_VEB ports - *needs assignment* * https://bugs.launchpad.net/neutron/+bug/1974149 - [FT] Test "test_agent_updated_at_use_nb_cfg_timestamp" failing - assigned to Rodolfo, in progress * https://bugs.launchpad.net/neutron/+bug/1974052 - Load Balancer remains in PENDING_CREATE - assigned to Fernando Royo, in progress ## Low * https://bugs.launchpad.net/neutron/+bug/1973656 - meaning of option "router_auto_schedule" is ambiguous - *needs assignment *but I think we need to discuss about it in the team meeting first to decide what naming/behaviour we really want to have, * https://bugs.launchpad.net/neutron/+bug/1973731 - [UT] Error output in "test_metadata_port_on_network_delete" - assigned to Rodolfo * https://bugs.launchpad.net/neutron/+bug/1973780 - PEP 632 ? Deprecate distutils module - assigned to Rodolfo, fixes already proposed https://review.opendev.org/c/openstack/neutron/+/842133 and https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/842135 ## Whishlist (RFEs) * https://bugs.launchpad.net/neutron/+bug/1973487 - [RFE] Allow setting --dst-port for all port based protocols at once And that's all. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From sharath.madhava at gmail.com Mon May 23 06:33:48 2022 From: sharath.madhava at gmail.com (Sharath Ck) Date: Mon, 23 May 2022 12:03:48 +0530 Subject: [keystone][swift] audit logs In-Reply-To: References: <20220519102314.6c8e6070@niphredil.zaitcev.lan> Message-ID: Hi Pete, everyone, Kindly confirm the audit support for Swift. If there is a document with a support matrix for keystone audit middleware, It will help a lot. Kindly point to any supporting document. Regards, Sharath On Thu, May 19, 2022 at 8:57 PM Sharath Ck wrote: > Hi Pete, > > That?s correct. Audit map file path is picked from proxy_server.conf but > notification details are not. Is this a known issue? Or Audit is not > supported in Swift ? > > Regards, > Sharath > > On Thu, 19 May 2022 at 8:53 PM, Pete Zaitcev wrote: > >> I looked briefly at keystonemiddleware.audit here >> >> https://github.com/openstack/keystonemiddleware/tree/master/keystonemiddleware/audit >> >> And I highly doubt that it can work in Swift's pipeline. >> For one thing, it gets its configuration with oslo_config, >> and I don't know if that's compatible. >> >> -- Pete >> >> On Wed, 18 May 2022 13:59:50 +0530 >> Sharath Ck wrote: >> >> > Hi, >> > >> > I am currently trying to add keystone audit middleware in Swift. >> Middleware >> > is managed in swift proxy server, hence I have added the audit filter in >> > proxy server conf and have mentioned audit_middleware_notifications >> driver >> > as log in swift.conf . >> > I can see REST API call flow reaching audit middleware and constructing >> the >> > audit event with minimal data as Swift is not loading service catalog >> > information. But the audit event is not getting notified as per >> > audit_middleware_notifications. I tried adding >> oslo_messaging_notifications >> > with the driver as log, but audit events are not getting notified. >> > >> > Below are the changes in swift_proxy_server container, >> > >> > proxy-server.conf >> > >> > [pipeline:main] >> > pipeline = catch_errors gatekeeper healthcheck cache container_sync bulk >> > tempurl ratelimit formpost authtoken keystoneauth audit container_quotas >> > account_quotas slo dlo keymaster encryption proxy-server >> > >> > [filter:audit] >> > paste.filter_factory = keystonemiddleware.audit:filter_factory >> > audit_map_file = /etc/swift/api_audit_map.conf >> > >> > swift.conf >> > >> > [oslo_messaging_notifications] >> > driver = log >> > >> > [audit_middleware_notifications] >> > driver = log >> > >> > Kindly confirm whether the configuration changes are enough or need more >> > changes. >> > >> > Regards, >> > Sharath >> >> -- > Regards, > Sharath > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon May 23 08:10:09 2022 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 23 May 2022 10:10:09 +0200 Subject: [largescale-sig] Next meeting: May 25th, 15utc Message-ID: Hi everyone, The Large Scale SIG will be meeting this Wednesday in #openstack-operators on OFTC IRC, at 15UTC. 
You can check how that time translates locally at:
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20220525T15

Feel free to add topics to the agenda:
https://etherpad.openstack.org/p/large-scale-sig-meeting

Regards,

--
Thierry Carrez

From smooney at redhat.com  Mon May 23 09:54:23 2022
From: smooney at redhat.com (Sean Mooney)
Date: Mon, 23 May 2022 10:54:23 +0100
Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal)
In-Reply-To: <94458541-bccb-4fe8-7ad4-93bdc14065b5@debian.org>
References: <2175937.irdbgypaU6@p1>
 <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com>
 <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com>
 <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com>
 <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org>
 <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com>
 <94458541-bccb-4fe8-7ad4-93bdc14065b5@debian.org>
Message-ID: <8642f998298488fea7a490bcd3345dde07e524ca.camel@redhat.com>

On Sat, 2022-05-21 at 17:33 +0200, Thomas Goirand wrote:
> On 5/21/22 06:07, Ghanshyam Mann wrote:
> > ---- On Fri, 20 May 2022 20:56:27 -0500 Erik McCormick wrote ----
> > >
> > > On Fri, May 20, 2022, 7:03 PM Thomas Goirand wrote:
> > > On 5/20/22 19:42, Ghanshyam Mann wrote:
> > > > * Joining info Join with Google Meet = meet.google.com/gwi-cphb-rya
> > >
> > > What would it take to make everyone switch to free software? We're
> > > moving from one non-free platform (zoom) to another (google meet), even
> > > if we have Jitsi that works very well... :/
> > >
> > > Jitsi degrades beyond 16 or so interactive users. I love it and wish it well, but it's just not there yet. If the situation has changed in recent months, great then let's take a look.
> >
> > Yeah, we try to use that as our first preference but it did not work as expected. In a recent usage in RBAC discussion a few weeks ago,
> > many attendees complained about audio, joining issues or so. That is why we are using non-free tooling.
> >
> > @zigo, If you have tried any free and stable platforms for video calls, please suggest and we will love
> > to use that.
> >
> > -gmann
>
> As I understand, the point isn't to make more than 16 persons talk in
> the meeting. Let me know if I'm wrong.
>
> In such case, a setup similar to Debconf (which used Jitsi for Debconf
> 2020 and 2021) can be used. You get just a few people in the meeting,
> and everyone else just listens to the broacast (ie: a regular online
> video in your browser or VLC).

In general, when we have a video meeting for things like the RBAC
discussions, we do want to allow most or all attendees to be able to
talk, so using VLC to have everyone else just listen would fail that
requirement. If it was a presentation, sure, but for a meeting that
would not really be useful.

>
> The full setup is well documented [1], and we have free software ansible
> scripts [2]. This allows thousands of attendees, plus recording and
> reviewing of the videos.
>
> I very much would love if there was some efforts put in this direction
> (or some similar setup, as long as it's fully free).
> > Hoping this helps, > Cheers, > > Thomas Goirand (zigo) > > [1] https://video.debconf.org > [2] https://salsa.debian.org/debconf-video-team > From swogatpradhan22 at gmail.com Mon May 23 05:56:46 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Mon, 23 May 2022 11:26:46 +0530 Subject: SSL verify failed | Overcloud deploy step 4 | Wallaby | tripleo Message-ID: Hi, I am currently stuck in deploy step 4 with the following error: FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv3', 'service_type': 'volume'} | error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover available identity versions when contacting https://overcloud.bdxworld.com:13000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 677, in urlopen\n chunked=chunked,\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 381, in make_request\n self._validate_conn(conn)\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 978, in _validate_conn\n conn.connect()\n File \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 371, in connect\n ssl_context=context,\n File \"/usr/lib/python3.6/site-packages/urllib3/util/ssl.py\", line 384, in ssl_wrap_socket\n return context.wrap_socket(sock, server_hostname=server_hostname)\n File \"/usr/lib64/python3.6/ssl.py\", line 365, in wrap_socket\n context=self, _session=session)\n File \"/usr/lib64/python3.6/ssl.py\", line 776, in __init\n self.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 1036, in do_handshake\n self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 648, in do_handshake\n self._sslobj.do_handshake()\nssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in send\n timeout=timeout\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 727, in urlopen\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\n File \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 439, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='overcloud.bdxworld.com', port=13000): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)'),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, in _send_request\n resp = self.session.request(method, url, *kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 542, in request\n resp = self.send(prep, **send_kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 655, in send\n r = adapter.send(request, **kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='overcloud.bdxworld.com', port=13000): Max retries exceeded 
with url: / (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)'),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 138, in _do_create_plugin\n authenticated=False)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 610, in get_discovery\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, in __init_\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, in get\n return self.request(url, 'GET', **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in request\n resp = send(**kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, in _send_request\n raise exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL exception connecting to https://overcloud.bdxworld.com:13000: HTTPSConnectionPool(host='overcloud.bdxworld.com', port=13000): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)'),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 102, in \n File \"\", line 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 185, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 181, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 407, in __call_\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, in get\n return self.request(url, 'GET', **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in request\n resp = send(**kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, in _send_request\n raise exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL exception connecting to https://overcloud.bdxworld.com:13000: 
HTTPSConnectionPool(host='overcloud.bdxworld.com', port=13000): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)'),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 102, in \n File \"\", line 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 185, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 181, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 407, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 141, in run\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 517, in search_services\n services = self.list_services()\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 492, in list_services\n if self._is_client_version('identity', 2):\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", line 460, in _is_client_version\n client = getattr(self, client_name)\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 32, in _identity_client\n 'identity', min_version=2, max_version='3.latest')\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in get_endpoint\n return self.session.get_endpoint(auth or self.auth, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 380, in get_endpoint\n allow_version_hack=allow_version_hack, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 271, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 206, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 161, in _do_create_plugin\n 'auth_url is correct. %s' % e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. 
Please check that your auth_url is correct. SSL exception connecting to https://overcloud.bdxworld.com:13000: HTTPSConnectionPool(host=' overcloud.bdxworld.com', port=13000): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)'),))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} *Catalog list:* +-----------+----------------+--------------------------------------------------------------------------------------+ | Name | Type | Endpoints | +-----------+----------------+--------------------------------------------------------------------------------------+ | aodh | alarming | regionOne | | | | internal: http://172.25.201.250:8042 | | | | regionOne | | | | public: http://172.25.201.150:8042 | | | | regionOne | | | | admin: http://172.25.201.250:8042 | | | | | | placement | placement | regionOne | | | | admin: http://172.25.201.250:8778/placement | | | | regionOne | | | | internal: http://172.25.201.250:8778/placement | | | | regionOne | | | | public: http://172.25.201.150:8778/placement | | | | | | gnocchi | metric | regionOne | | | | public: http://172.25.201.150:8041 | | | | regionOne | | | | internal: http://172.25.201.250:8041 | | | | regionOne | | | | admin: http://172.25.201.250:8041 | | | | | | glance | image | regionOne | | | | internal: http://172.25.201.250:9292 | | | | regionOne | | | | admin: http://172.25.201.250:9292 | | | | regionOne | | | | public: http://172.25.201.150:9292 | | | | | | keystone | identity | regionOne | | | | internal: http://172.25.201.250:5000 | | | | regionOne | | | | admin: http://172.25.201.250:35357 | | | | regionOne | | | | public: https://overcloud.bdxworld.com:13000 | | | | | | heat-cfn | cloudformation | regionOne | | | | internal: http://172.25.201.250:8000/v1 | | | | regionOne | | | | public: http://172.25.201.150:8000/v1 | | | | regionOne | | | | admin: http://172.25.201.250:8000/v1 | | | | | | neutron | network | regionOne | | | | admin: http://172.25.201.250:9696 | | | | regionOne | | | | public: http://172.25.201.150:9696 | | | | regionOne | | | | internal: http://172.25.201.250:9696 | | | | | | heat | orchestration | regionOne | | | | internal: http://172.25.201.250:8004/v1/5d922243077045c48fe4b075e386551b | | | | regionOne | | | | public: http://172.25.201.150:8004/v1/5d922243077045c48fe4b075e386551b | | | | regionOne | | | | admin: http://172.25.201.250:8004/v1/5d922243077045c48fe4b075e386551b | | | | | | octavia | load-balancer | regionOne | | | | public: http://172.25.201.150:9876 | | | | regionOne | | | | admin: http://172.25.201.250:9876 | | | | regionOne | | | | internal: http://172.25.201.250:9876 | | | | | | cinderv3 | volumev3 | regionOne | | | | internal: http://172.25.201.250:8776/v3/5d922243077045c48fe4b075e386551b | | | | regionOne | | | | public: http://172.25.201.150:8776/v3/5d922243077045c48fe4b075e386551b | | | | regionOne | | | | admin: http://172.25.201.250:8776/v3/5d922243077045c48fe4b075e386551b | | | | | | swift | object-store | regionOne | | | | public: http://172.25.201.150:8080/swift/v1/AUTH_5d922243077045c48fe4b075e386551b | | | | regionOne | | | | admin: http://172.25.202.50:8080/swift/v1/AUTH_5d922243077045c48fe4b075e386551b | | | | regionOne | | | | internal: http://172.25.202.50:8080/swift/v1/AUTH_5d922243077045c48fe4b075e386551b | | | | | | nova | compute | regionOne | | | | admin: http://172.25.201.250:8774/v2.1 | | | | regionOne | | | | internal: http://172.25.201.250:8774/v2.1 
| | | | regionOne | | | | public: http://172.25.201.150:8774/v2.1 | | | | *Host entries:* [stack at hkg2director ~]$ cat /etc/hosts # START_HOST_ENTRIES_FOR_STACK: overcloud 172.25.201.68 hkg2director.ctlplane.bdxworld.com hkg2director.ctlplane hkg2director.ctlplane.bdxcloud.bdxworld.com 172.25.201.91 overcloud.ctlplane.bdxworld.com 172.25.202.50 overcloud.storage.bdxworld.com 172.25.202.90 overcloud.storagemgmt.bdxworld.com 172.25.201.250 overcloud.internalapi.bdxworld.com 172.25.201.150 overcloud.bdxworld.com 172.25.201.212 overcloud-controller-0.bdxworld.com overcloud-controller-0 172.25.202.13 overcloud-controller-0.storage.bdxworld.com overcloud-controller-0.storage 172.25.202.75 overcloud-controller-0.storagemgmt.bdxworld.com overcloud-controller-0.storagemgmt 172.25.201.212 overcloud-controller-0.internalapi.bdxworld.com overcloud-controller-0.internalapi 172.25.202.143 overcloud-controller-0.tenant.bdxworld.com overcloud-controller-0.tenant 172.25.201.144 overcloud-controller-0.external.bdxworld.com overcloud-controller-0.external 172.25.201.106 overcloud-controller-0.ctlplane.bdxworld.com overcloud-controller-0.ctlplane 172.25.201.205 overcloud-controller-1.bdxworld.com overcloud-controller-1 172.25.202.18 overcloud-controller-1.storage.bdxworld.com overcloud-controller-1.storage 172.25.202.76 overcloud-controller-1.storagemgmt.bdxworld.com overcloud-controller-1.storagemgmt 172.25.201.205 overcloud-controller-1.internalapi.bdxworld.com overcloud-controller-1.internalapi 172.25.202.142 overcloud-controller-1.tenant.bdxworld.com overcloud-controller-1.tenant 172.25.201.149 overcloud-controller-1.external.bdxworld.com overcloud-controller-1.external 172.25.201.105 overcloud-controller-1.ctlplane.bdxworld.com overcloud-controller-1.ctlplane 172.25.201.201 overcloud-controller-2.bdxworld.com overcloud-controller-2 172.25.202.12 overcloud-controller-2.storage.bdxworld.com overcloud-controller-2.storage 172.25.202.74 overcloud-controller-2.storagemgmt.bdxworld.com overcloud-controller-2.storagemgmt 172.25.201.201 overcloud-controller-2.internalapi.bdxworld.com overcloud-controller-2.internalapi 172.25.202.149 overcloud-controller-2.tenant.bdxworld.com overcloud-controller-2.tenant 172.25.201.139 overcloud-controller-2.external.bdxworld.com overcloud-controller-2.external 172.25.201.97 overcloud-controller-2.ctlplane.bdxworld.com overcloud-controller-2.ctlplane 172.25.201.209 overcloud-controller-no-ceph-3.bdxworld.com overcloud-controller-no-ceph-3 172.25.202.17 overcloud-controller-no-ceph-3.storage.bdxworld.com overcloud-controller-no-ceph-3.storage 172.25.202.79 overcloud-controller-no-ceph-3.storagemgmt.bdxworld.com overcloud-controller-no-ceph-3.storagemgmt 172.25.201.209 overcloud-controller-no-ceph-3.internalapi.bdxworld.com overcloud-controller-no-ceph-3.internalapi 172.25.202.135 overcloud-controller-no-ceph-3.tenant.bdxworld.com overcloud-controller-no-ceph-3.tenant 172.25.201.137 overcloud-controller-no-ceph-3.external.bdxworld.com overcloud-controller-no-ceph-3.external 172.25.201.103 overcloud-controller-no-ceph-3.ctlplane.bdxworld.com overcloud-controller-no-ceph-3.ctlplane 172.25.201.202 overcloud-novacompute-0.bdxworld.com overcloud-novacompute-0 172.25.202.19 overcloud-novacompute-0.storage.bdxworld.com overcloud-novacompute-0.storage 172.25.201.202 overcloud-novacompute-0.internalapi.bdxworld.com overcloud-novacompute-0.internalapi 172.25.202.140 overcloud-novacompute-0.tenant.bdxworld.com overcloud-novacompute-0.tenant 172.25.201.107 
overcloud-novacompute-0.ctlplane.bdxworld.com overcloud-novacompute-0.ctlplane 172.25.201.207 overcloud-novacompute-1.bdxworld.com overcloud-novacompute-1 172.25.202.15 overcloud-novacompute-1.storage.bdxworld.com overcloud-novacompute-1.storage 172.25.201.207 overcloud-novacompute-1.internalapi.bdxworld.com overcloud-novacompute-1.internalapi 172.25.202.144 overcloud-novacompute-1.tenant.bdxworld.com overcloud-novacompute-1.tenant 172.25.201.112 overcloud-novacompute-1.ctlplane.bdxworld.com overcloud-novacompute-1.ctlplane 172.25.201.200 overcloud-novacompute-2.bdxworld.com overcloud-novacompute-2 172.25.202.20 overcloud-novacompute-2.storage.bdxworld.com overcloud-novacompute-2.storage 172.25.201.200 overcloud-novacompute-2.internalapi.bdxworld.com overcloud-novacompute-2.internalapi 172.25.202.138 overcloud-novacompute-2.tenant.bdxworld.com overcloud-novacompute-2.tenant 172.25.201.100 overcloud-novacompute-2.ctlplane.bdxworld.com overcloud-novacompute-2.ctlplane 172.25.201.199 overcloud-novacompute-3.bdxworld.com overcloud-novacompute-3 172.25.202.9 overcloud-novacompute-3.storage.bdxworld.com overcloud-novacompute-3.storage 172.25.201.199 overcloud-novacompute-3.internalapi.bdxworld.com overcloud-novacompute-3.internalapi 172.25.202.141 overcloud-novacompute-3.tenant.bdxworld.com overcloud-novacompute-3.tenant 172.25.201.104 overcloud-novacompute-3.ctlplane.bdxworld.com overcloud-novacompute-3.ctlplane 172.25.201.198 overcloud-novacompute-4.bdxworld.com overcloud-novacompute-4 172.25.202.16 overcloud-novacompute-4.storage.bdxworld.com overcloud-novacompute-4.storage 172.25.201.198 overcloud-novacompute-4.internalapi.bdxworld.com overcloud-novacompute-4.internalapi 172.25.202.139 overcloud-novacompute-4.tenant.bdxworld.com overcloud-novacompute-4.tenant 172.25.201.109 overcloud-novacompute-4.ctlplane.bdxworld.com overcloud-novacompute-4.ctlplane 172.25.202.14 overcloud-cephstorage-0.bdxworld.com overcloud-cephstorage-0 172.25.202.14 overcloud-cephstorage-0.storage.bdxworld.com overcloud-cephstorage-0.storage 172.25.202.72 overcloud-cephstorage-0.storagemgmt.bdxworld.com overcloud-cephstorage-0.storagemgmt 172.25.201.101 overcloud-cephstorage-0.ctlplane.bdxworld.com overcloud-cephstorage-0.ctlplane 172.25.202.6 overcloud-cephstorage-1.bdxworld.com overcloud-cephstorage-1 172.25.202.6 overcloud-cephstorage-1.storage.bdxworld.com overcloud-cephstorage-1.storage 172.25.202.78 overcloud-cephstorage-1.storagemgmt.bdxworld.com overcloud-cephstorage-1.storagemgmt 172.25.201.92 overcloud-cephstorage-1.ctlplane.bdxworld.com overcloud-cephstorage-1.ctlplane 172.25.202.11 overcloud-cephstorage-2.bdxworld.com overcloud-cephstorage-2 172.25.202.11 overcloud-cephstorage-2.storage.bdxworld.com overcloud-cephstorage-2.storage 172.25.202.85 overcloud-cephstorage-2.storagemgmt.bdxworld.com overcloud-cephstorage-2.storagemgmt 172.25.201.111 overcloud-cephstorage-2.ctlplane.bdxworld.com overcloud-cephstorage-2.ctlplane 172.25.202.10 overcloud-cephstorage-3.bdxworld.com overcloud-cephstorage-3 172.25.202.10 overcloud-cephstorage-3.storage.bdxworld.com overcloud-cephstorage-3.storage 172.25.202.87 overcloud-cephstorage-3.storagemgmt.bdxworld.com overcloud-cephstorage-3.storagemgmt 172.25.201.96 overcloud-cephstorage-3.ctlplane.bdxworld.com overcloud-cephstorage-3.ctlplane 172.25.201.68 hkg2director.ctlplane.bdxworld.com hkg2director.ctlplane hkg2director.ctlplane.bdxcloud.bdxworld.com # END_HOST_ENTRIES_FOR_STACK: overcloud # START_HOST_ENTRIES_FOR_STACK: undercloud 172.25.201.68 hkg2director.bdxworld.com 
hkg2director 172.25.201.68 hkg2director.external.bdxworld.com hkg2director.external 172.25.201.68 hkg2director.ctlplane.bdxworld.com hkg2director.ctlplane 172.25.201.68 bdxworld.com 172.25.201.68 bdxworld.com # END_HOST_ENTRIES_FOR_STACK: undercloud 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.25.201.68 hkg2director.bdxworld.com hkg2director bdxworld.com NOTE: my overcloud nodes are in the DMZ. Any help would be appreciated. With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Mon May 23 06:44:42 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Mon, 23 May 2022 12:14:42 +0530 Subject: SSL verify failed | Overcloud deploy step 4 | Wallaby | tripleo Message-ID: Hi, I am currently stuck in deploy step 4 with the following error: FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv3', 'service_type': 'volume'} | error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover available identity versions when contacting https://overcloud.bdxworld.com:13000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 677, in urlopen\n chunked=chunked,\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 381, in make_request\n self._validate_conn(conn)\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 978, in _validate_conn\n conn.connect()\n File \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 371, in connect\n ssl_context=context,\n File \"/usr/lib/python3.6/site-packages/urllib3/util/ssl.py\", line 384, in ssl_wrap_socket\n return context.wrap_socket(sock, server_hostname=server_hostname)\n File \"/usr/lib64/python3.6/ssl.py\", line 365, in wrap_socket\n context=self, _session=session)\n File \"/usr/lib64/python3.6/ssl.py\", line 776, in __init\n self.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 1036, in do_handshake\n self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 648, in do_handshake\n self._sslobj.do_handshake()\nssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897) Attached details: With regrads, Swogat pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv3', 'service_type': 'volume'} | error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover available identity versions when contacting https://overcloud.bdxworld.com:13000. 
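One way to narrow down a CERTIFICATE_VERIFY_FAILED error like the one above is to check which CA issued the overcloud public endpoint certificate and whether the undercloud trusts it. A minimal sketch, run on the undercloud (the CA file name is a placeholder; the FQDN and port are taken from the error message):

$ echo | openssl s_client -connect overcloud.bdxworld.com:13000 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -dates
$ sudo cp overcloud-public-ca.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract

If the public endpoint was signed by a local or self-signed CA, that CA has to be trusted on the undercloud so that the deployment's Ansible tasks can reach https://overcloud.bdxworld.com:13000; tripleo-heat-templates also ships a trust-anchor injection environment for this (inject-trust-anchor-hiera.yaml with a CAMap parameter, if I recall the file name correctly).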
overcloud-novacompute-0.ctlplane 172.25.201.207 overcloud-novacompute-1.bdxworld.com overcloud-novacompute-1 172.25.202.15 overcloud-novacompute-1.storage.bdxworld.com overcloud-novacompute-1.storage 172.25.201.207 overcloud-novacompute-1.internalapi.bdxworld.com overcloud-novacompute-1.internalapi 172.25.202.144 overcloud-novacompute-1.tenant.bdxworld.com overcloud-novacompute-1.tenant 172.25.201.112 overcloud-novacompute-1.ctlplane.bdxworld.com overcloud-novacompute-1.ctlplane 172.25.201.200 overcloud-novacompute-2.bdxworld.com overcloud-novacompute-2 172.25.202.20 overcloud-novacompute-2.storage.bdxworld.com overcloud-novacompute-2.storage 172.25.201.200 overcloud-novacompute-2.internalapi.bdxworld.com overcloud-novacompute-2.internalapi 172.25.202.138 overcloud-novacompute-2.tenant.bdxworld.com overcloud-novacompute-2.tenant 172.25.201.100 overcloud-novacompute-2.ctlplane.bdxworld.com overcloud-novacompute-2.ctlplane 172.25.201.199 overcloud-novacompute-3.bdxworld.com overcloud-novacompute-3 172.25.202.9 overcloud-novacompute-3.storage.bdxworld.com overcloud-novacompute-3.storage 172.25.201.199 overcloud-novacompute-3.internalapi.bdxworld.com overcloud-novacompute-3.internalapi 172.25.202.141 overcloud-novacompute-3.tenant.bdxworld.com overcloud-novacompute-3.tenant 172.25.201.104 overcloud-novacompute-3.ctlplane.bdxworld.com overcloud-novacompute-3.ctlplane 172.25.201.198 overcloud-novacompute-4.bdxworld.com overcloud-novacompute-4 172.25.202.16 overcloud-novacompute-4.storage.bdxworld.com overcloud-novacompute-4.storage 172.25.201.198 overcloud-novacompute-4.internalapi.bdxworld.com overcloud-novacompute-4.internalapi 172.25.202.139 overcloud-novacompute-4.tenant.bdxworld.com overcloud-novacompute-4.tenant 172.25.201.109 overcloud-novacompute-4.ctlplane.bdxworld.com overcloud-novacompute-4.ctlplane 172.25.202.14 overcloud-cephstorage-0.bdxworld.com overcloud-cephstorage-0 172.25.202.14 overcloud-cephstorage-0.storage.bdxworld.com overcloud-cephstorage-0.storage 172.25.202.72 overcloud-cephstorage-0.storagemgmt.bdxworld.com overcloud-cephstorage-0.storagemgmt 172.25.201.101 overcloud-cephstorage-0.ctlplane.bdxworld.com overcloud-cephstorage-0.ctlplane 172.25.202.6 overcloud-cephstorage-1.bdxworld.com overcloud-cephstorage-1 172.25.202.6 overcloud-cephstorage-1.storage.bdxworld.com overcloud-cephstorage-1.storage 172.25.202.78 overcloud-cephstorage-1.storagemgmt.bdxworld.com overcloud-cephstorage-1.storagemgmt 172.25.201.92 overcloud-cephstorage-1.ctlplane.bdxworld.com overcloud-cephstorage-1.ctlplane 172.25.202.11 overcloud-cephstorage-2.bdxworld.com overcloud-cephstorage-2 172.25.202.11 overcloud-cephstorage-2.storage.bdxworld.com overcloud-cephstorage-2.storage 172.25.202.85 overcloud-cephstorage-2.storagemgmt.bdxworld.com overcloud-cephstorage-2.storagemgmt 172.25.201.111 overcloud-cephstorage-2.ctlplane.bdxworld.com overcloud-cephstorage-2.ctlplane 172.25.202.10 overcloud-cephstorage-3.bdxworld.com overcloud-cephstorage-3 172.25.202.10 overcloud-cephstorage-3.storage.bdxworld.com overcloud-cephstorage-3.storage 172.25.202.87 overcloud-cephstorage-3.storagemgmt.bdxworld.com overcloud-cephstorage-3.storagemgmt 172.25.201.96 overcloud-cephstorage-3.ctlplane.bdxworld.com overcloud-cephstorage-3.ctlplane 172.25.201.68 hkg2director.ctlplane.bdxworld.com hkg2director.ctlplane hkg2director.ctlplane.bdxcloud.bdxworld.com # END_HOST_ENTRIES_FOR_STACK: overcloud # START_HOST_ENTRIES_FOR_STACK: undercloud 172.25.201.68 hkg2director.bdxworld.com hkg2director 172.25.201.68 
hkg2director.external.bdxworld.com hkg2director.external 172.25.201.68 hkg2director.ctlplane.bdxworld.com hkg2director.ctlplane 172.25.201.68 bdxworld.com 172.25.201.68 bdxworld.com # END_HOST_ENTRIES_FOR_STACK: undercloud 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.25.201.68 hkg2director.bdxworld.com hkg2director bdxworld.com From swogatpradhan22 at gmail.com Mon May 23 13:06:19 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Mon, 23 May 2022 18:36:19 +0530 Subject: SSL verify failed | Overcloud deploy step 4 | Wallaby | tripleo Message-ID: Hi, I am facing the below issue in step4 ' TASK | Clean up legacy Cinder keystone catalog entries': FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv2', 'service_type': 'volumev2'} | error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": "item", "changed": false, "cinder_api_service": 0, "item": {"service_name": "cinderv2", "service_type": "volumev2"}, "module_stderr": "Failed to discover available identity versions when contacting https://overcloud.bdxworld.com:13000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 677, in urlopen\n chunked=chunked,\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 381, in _make_request\n self._validate_conn(conn)\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 978, in _validate_conn\n conn.connect()\n File \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 371, in connect\n ssl_context=context,\n File \"/usr/lib/python3.6/site-packages/urllib3/util/ssl_.py\", line 384, in ssl_wrap_socket\n return context.wrap_socket(sock, server_hostname=server_hostname)\n File \"/usr/lib64/python3.6/ssl.py\", line 365, in wrap_socket\n _context=self, _session=session)\n File \"/usr/lib64/python3.6/ssl.py\", line 776, in __init__\n self.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 1036, in do_handshake\n self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 648, in do_handshake\n self._sslobj.do_handshake()\nssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897) Can you please suggest a workaround? With regards, Swogat pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv3', 'service_type': 'volume'} | error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover available identity versions when contacting https://overcloud.bdxworld.com:13000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 677, in urlopen\n chunked=chunked,\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 381, in make_request\n self._validate_conn(conn)\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 978, in _validate_conn\n conn.connect()\n File \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 371, in connect\n ssl_context=context,\n File \"/usr/lib/python3.6/site-packages/urllib3/util/ssl.py\", line 384, in ssl_wrap_socket\n return context.wrap_socket(sock, server_hostname=server_hostname)\n File \"/usr/lib64/python3.6/ssl.py\", line 365, in wrap_socket\n context=self, _session=session)\n File \"/usr/lib64/python3.6/ssl.py\", line 776, in __init\n self.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 1036, in do_handshake\n self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 648, in do_handshake\n self._sslobj.do_handshake()\nssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in send\n timeout=timeout\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 727, in urlopen\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\n File \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 439, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='overcloud.bdxworld.com', port=13000): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)'),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, in _send_request\n resp = self.session.request(method, url, *kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 542, in request\n resp = self.send(prep, **send_kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 655, in send\n r = adapter.send(request, **kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='overcloud.bdxworld.com', port=13000): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)'),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 138, in _do_create_plugin\n authenticated=False)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 610, in get_discovery\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, in __init_\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, in 
get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, in get\n return self.request(url, 'GET', **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in request\n resp = send(**kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, in _send_request\n raise exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL exception connecting to https://overcloud.bdxworld.com:13000: HTTPSConnectionPool(host='overcloud.bdxworld.com', port=13000): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)'),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 102, in \n File \"\", line 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 185, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 181, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 407, in __call_\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, in get\n return self.request(url, 'GET', **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in request\n resp = send(**kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, in _send_request\n raise exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL exception connecting to https://overcloud.bdxworld.com:13000: HTTPSConnectionPool(host='overcloud.bdxworld.com', port=13000): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)'),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 102, in \n File \"\", line 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 185, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 181, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 407, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_yfxm9irl/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 141, in run\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 517, in search_services\n services = self.list_services()\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 492, in list_services\n if self._is_client_version('identity', 2):\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", line 460, in _is_client_version\n client = getattr(self, client_name)\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 32, in _identity_client\n 'identity', min_version=2, max_version='3.latest')\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in get_endpoint\n return self.session.get_endpoint(auth or self.auth, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 380, in get_endpoint\n allow_version_hack=allow_version_hack, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 271, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 206, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 161, in _do_create_plugin\n 'auth_url is correct. %s' % e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
SSL exception connecting to https://overcloud.bdxworld.com:13000: HTTPSConnectionPool(host='overcloud.bdxworld.com', port=13000): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)'),))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} Catalog list: +-----------+----------------+--------------------------------------------------------------------------------------+ | Name | Type | Endpoints | +-----------+----------------+--------------------------------------------------------------------------------------+ | aodh | alarming | regionOne | | | | internal: http://172.25.201.250:8042 | | | | regionOne | | | | public: http://172.25.201.150:8042 | | | | regionOne | | | | admin: http://172.25.201.250:8042 | | | | | | placement | placement | regionOne | | | | admin: http://172.25.201.250:8778/placement | | | | regionOne | | | | internal: http://172.25.201.250:8778/placement | | | | regionOne | | | | public: http://172.25.201.150:8778/placement | | | | | | gnocchi | metric | regionOne | | | | public: http://172.25.201.150:8041 | | | | regionOne | | | | internal: http://172.25.201.250:8041 | | | | regionOne | | | | admin: http://172.25.201.250:8041 | | | | | | glance | image | regionOne | | | | internal: http://172.25.201.250:9292 | | | | regionOne | | | | admin: http://172.25.201.250:9292 | | | | regionOne | | | | public: http://172.25.201.150:9292 | | | | | | keystone | identity | regionOne | | | | internal: http://172.25.201.250:5000 | | | | regionOne | | | | admin: http://172.25.201.250:35357 | | | | regionOne | | | | public: https://overcloud.bdxworld.com:13000 | | | | | | heat-cfn | cloudformation | regionOne | | | | internal: http://172.25.201.250:8000/v1 | | | | regionOne | | | | public: http://172.25.201.150:8000/v1 | | | | regionOne | | | | admin: http://172.25.201.250:8000/v1 | | | | | | neutron | network | regionOne | | | | admin: http://172.25.201.250:9696 | | | | regionOne | | | | public: http://172.25.201.150:9696 | | | | regionOne | | | | internal: http://172.25.201.250:9696 | | | | | | heat | orchestration | regionOne | | | | internal: http://172.25.201.250:8004/v1/5d922243077045c48fe4b075e386551b | | | | regionOne | | | | public: http://172.25.201.150:8004/v1/5d922243077045c48fe4b075e386551b | | | | regionOne | | | | admin: http://172.25.201.250:8004/v1/5d922243077045c48fe4b075e386551b | | | | | | octavia | load-balancer | regionOne | | | | public: http://172.25.201.150:9876 | | | | regionOne | | | | admin: http://172.25.201.250:9876 | | | | regionOne | | | | internal: http://172.25.201.250:9876 | | | | | | cinderv3 | volumev3 | regionOne | | | | internal: http://172.25.201.250:8776/v3/5d922243077045c48fe4b075e386551b | | | | regionOne | | | | public: http://172.25.201.150:8776/v3/5d922243077045c48fe4b075e386551b | | | | regionOne | | | | admin: http://172.25.201.250:8776/v3/5d922243077045c48fe4b075e386551b | | | | | | swift | object-store | regionOne | | | | public: http://172.25.201.150:8080/swift/v1/AUTH_5d922243077045c48fe4b075e386551b | | | | regionOne | | | | admin: http://172.25.202.50:8080/swift/v1/AUTH_5d922243077045c48fe4b075e386551b | | | | regionOne | | | | internal: http://172.25.202.50:8080/swift/v1/AUTH_5d922243077045c48fe4b075e386551b | | | | | | nova | compute | regionOne | | | | admin: http://172.25.201.250:8774/v2.1 | | | | regionOne | | | | internal: http://172.25.201.250:8774/v2.1 | | | | regionOne | | | | public: 
http://172.25.201.150:8774/v2.1 | | | | Host entries: [stack at hkg2director ~]$ cat /etc/hosts # START_HOST_ENTRIES_FOR_STACK: overcloud 172.25.201.68 hkg2director.ctlplane.bdxworld.com hkg2director.ctlplane hkg2director.ctlplane.bdxcloud.bdxworld.com 172.25.201.91 overcloud.ctlplane.bdxworld.com 172.25.202.50 overcloud.storage.bdxworld.com 172.25.202.90 overcloud.storagemgmt.bdxworld.com 172.25.201.250 overcloud.internalapi.bdxworld.com 172.25.201.150 overcloud.bdxworld.com 172.25.201.212 overcloud-controller-0.bdxworld.com overcloud-controller-0 172.25.202.13 overcloud-controller-0.storage.bdxworld.com overcloud-controller-0.storage 172.25.202.75 overcloud-controller-0.storagemgmt.bdxworld.com overcloud-controller-0.storagemgmt 172.25.201.212 overcloud-controller-0.internalapi.bdxworld.com overcloud-controller-0.internalapi 172.25.202.143 overcloud-controller-0.tenant.bdxworld.com overcloud-controller-0.tenant 172.25.201.144 overcloud-controller-0.external.bdxworld.com overcloud-controller-0.external 172.25.201.106 overcloud-controller-0.ctlplane.bdxworld.com overcloud-controller-0.ctlplane 172.25.201.205 overcloud-controller-1.bdxworld.com overcloud-controller-1 172.25.202.18 overcloud-controller-1.storage.bdxworld.com overcloud-controller-1.storage 172.25.202.76 overcloud-controller-1.storagemgmt.bdxworld.com overcloud-controller-1.storagemgmt 172.25.201.205 overcloud-controller-1.internalapi.bdxworld.com overcloud-controller-1.internalapi 172.25.202.142 overcloud-controller-1.tenant.bdxworld.com overcloud-controller-1.tenant 172.25.201.149 overcloud-controller-1.external.bdxworld.com overcloud-controller-1.external 172.25.201.105 overcloud-controller-1.ctlplane.bdxworld.com overcloud-controller-1.ctlplane 172.25.201.201 overcloud-controller-2.bdxworld.com overcloud-controller-2 172.25.202.12 overcloud-controller-2.storage.bdxworld.com overcloud-controller-2.storage 172.25.202.74 overcloud-controller-2.storagemgmt.bdxworld.com overcloud-controller-2.storagemgmt 172.25.201.201 overcloud-controller-2.internalapi.bdxworld.com overcloud-controller-2.internalapi 172.25.202.149 overcloud-controller-2.tenant.bdxworld.com overcloud-controller-2.tenant 172.25.201.139 overcloud-controller-2.external.bdxworld.com overcloud-controller-2.external 172.25.201.97 overcloud-controller-2.ctlplane.bdxworld.com overcloud-controller-2.ctlplane 172.25.201.209 overcloud-controller-no-ceph-3.bdxworld.com overcloud-controller-no-ceph-3 172.25.202.17 overcloud-controller-no-ceph-3.storage.bdxworld.com overcloud-controller-no-ceph-3.storage 172.25.202.79 overcloud-controller-no-ceph-3.storagemgmt.bdxworld.com overcloud-controller-no-ceph-3.storagemgmt 172.25.201.209 overcloud-controller-no-ceph-3.internalapi.bdxworld.com overcloud-controller-no-ceph-3.internalapi 172.25.202.135 overcloud-controller-no-ceph-3.tenant.bdxworld.com overcloud-controller-no-ceph-3.tenant 172.25.201.137 overcloud-controller-no-ceph-3.external.bdxworld.com overcloud-controller-no-ceph-3.external 172.25.201.103 overcloud-controller-no-ceph-3.ctlplane.bdxworld.com overcloud-controller-no-ceph-3.ctlplane 172.25.201.202 overcloud-novacompute-0.bdxworld.com overcloud-novacompute-0 172.25.202.19 overcloud-novacompute-0.storage.bdxworld.com overcloud-novacompute-0.storage 172.25.201.202 overcloud-novacompute-0.internalapi.bdxworld.com overcloud-novacompute-0.internalapi 172.25.202.140 overcloud-novacompute-0.tenant.bdxworld.com overcloud-novacompute-0.tenant 172.25.201.107 overcloud-novacompute-0.ctlplane.bdxworld.com 
overcloud-novacompute-0.ctlplane 172.25.201.207 overcloud-novacompute-1.bdxworld.com overcloud-novacompute-1 172.25.202.15 overcloud-novacompute-1.storage.bdxworld.com overcloud-novacompute-1.storage 172.25.201.207 overcloud-novacompute-1.internalapi.bdxworld.com overcloud-novacompute-1.internalapi 172.25.202.144 overcloud-novacompute-1.tenant.bdxworld.com overcloud-novacompute-1.tenant 172.25.201.112 overcloud-novacompute-1.ctlplane.bdxworld.com overcloud-novacompute-1.ctlplane 172.25.201.200 overcloud-novacompute-2.bdxworld.com overcloud-novacompute-2 172.25.202.20 overcloud-novacompute-2.storage.bdxworld.com overcloud-novacompute-2.storage 172.25.201.200 overcloud-novacompute-2.internalapi.bdxworld.com overcloud-novacompute-2.internalapi 172.25.202.138 overcloud-novacompute-2.tenant.bdxworld.com overcloud-novacompute-2.tenant 172.25.201.100 overcloud-novacompute-2.ctlplane.bdxworld.com overcloud-novacompute-2.ctlplane 172.25.201.199 overcloud-novacompute-3.bdxworld.com overcloud-novacompute-3 172.25.202.9 overcloud-novacompute-3.storage.bdxworld.com overcloud-novacompute-3.storage 172.25.201.199 overcloud-novacompute-3.internalapi.bdxworld.com overcloud-novacompute-3.internalapi 172.25.202.141 overcloud-novacompute-3.tenant.bdxworld.com overcloud-novacompute-3.tenant 172.25.201.104 overcloud-novacompute-3.ctlplane.bdxworld.com overcloud-novacompute-3.ctlplane 172.25.201.198 overcloud-novacompute-4.bdxworld.com overcloud-novacompute-4 172.25.202.16 overcloud-novacompute-4.storage.bdxworld.com overcloud-novacompute-4.storage 172.25.201.198 overcloud-novacompute-4.internalapi.bdxworld.com overcloud-novacompute-4.internalapi 172.25.202.139 overcloud-novacompute-4.tenant.bdxworld.com overcloud-novacompute-4.tenant 172.25.201.109 overcloud-novacompute-4.ctlplane.bdxworld.com overcloud-novacompute-4.ctlplane 172.25.202.14 overcloud-cephstorage-0.bdxworld.com overcloud-cephstorage-0 172.25.202.14 overcloud-cephstorage-0.storage.bdxworld.com overcloud-cephstorage-0.storage 172.25.202.72 overcloud-cephstorage-0.storagemgmt.bdxworld.com overcloud-cephstorage-0.storagemgmt 172.25.201.101 overcloud-cephstorage-0.ctlplane.bdxworld.com overcloud-cephstorage-0.ctlplane 172.25.202.6 overcloud-cephstorage-1.bdxworld.com overcloud-cephstorage-1 172.25.202.6 overcloud-cephstorage-1.storage.bdxworld.com overcloud-cephstorage-1.storage 172.25.202.78 overcloud-cephstorage-1.storagemgmt.bdxworld.com overcloud-cephstorage-1.storagemgmt 172.25.201.92 overcloud-cephstorage-1.ctlplane.bdxworld.com overcloud-cephstorage-1.ctlplane 172.25.202.11 overcloud-cephstorage-2.bdxworld.com overcloud-cephstorage-2 172.25.202.11 overcloud-cephstorage-2.storage.bdxworld.com overcloud-cephstorage-2.storage 172.25.202.85 overcloud-cephstorage-2.storagemgmt.bdxworld.com overcloud-cephstorage-2.storagemgmt 172.25.201.111 overcloud-cephstorage-2.ctlplane.bdxworld.com overcloud-cephstorage-2.ctlplane 172.25.202.10 overcloud-cephstorage-3.bdxworld.com overcloud-cephstorage-3 172.25.202.10 overcloud-cephstorage-3.storage.bdxworld.com overcloud-cephstorage-3.storage 172.25.202.87 overcloud-cephstorage-3.storagemgmt.bdxworld.com overcloud-cephstorage-3.storagemgmt 172.25.201.96 overcloud-cephstorage-3.ctlplane.bdxworld.com overcloud-cephstorage-3.ctlplane 172.25.201.68 hkg2director.ctlplane.bdxworld.com hkg2director.ctlplane hkg2director.ctlplane.bdxcloud.bdxworld.com # END_HOST_ENTRIES_FOR_STACK: overcloud # START_HOST_ENTRIES_FOR_STACK: undercloud 172.25.201.68 hkg2director.bdxworld.com hkg2director 172.25.201.68 
hkg2director.external.bdxworld.com hkg2director.external 172.25.201.68 hkg2director.ctlplane.bdxworld.com hkg2director.ctlplane 172.25.201.68 bdxworld.com 172.25.201.68 bdxworld.com # END_HOST_ENTRIES_FOR_STACK: undercloud 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.25.201.68 hkg2director.bdxworld.com hkg2director bdxworld.com From gmann at ghanshyammann.com Mon May 23 15:45:11 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 23 May 2022 10:45:11 -0500 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <8642f998298488fea7a490bcd3345dde07e524ca.camel@redhat.com> References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com> <94458541-bccb-4fe8-7ad4-93bdc14065b5@debian.org> <8642f998298488fea7a490bcd3345dde07e524ca.camel@redhat.com> Message-ID: <180f197fed2.b69f7f0d122307.5086605122825317883@ghanshyammann.com> ---- On Mon, 23 May 2022 04:54:23 -0500 Sean Mooney wrote ---- > On Sat, 2022-05-21 at 17:33 +0200, Thomas Goirand wrote: > > On 5/21/22 06:07, Ghanshyam Mann wrote: > > > ---- On Fri, 20 May 2022 20:56:27 -0500 Erik McCormick wrote ---- > > > > > > > > On Fri, May 20, 2022, 7:03 PM Thomas Goirand wrote: > > > > On 5/20/22 19:42, Ghanshyam Mann wrote: > > > > > * Joining info Join with Google Meet = meet.google.com/gwi-cphb-rya > > > > > > > > What would it take to make everyone switch to free software? We're > > > > moving from one non-free platform (zoom) to another (google meet), even > > > > if we have Jitsi that works very well... :/ > > > > > > > > Jitsi degrades beyond 16 or so interactive users. I love it and wish it well, but it's just not there yet. If the situation has changed in recent months, great then let's take a look. > > > > > > Yeah, we try to use that as our first preference but it did not work as expected. In a recent usage in RBAC discussion a few weeks ago, > > > many attendees complained about audio, joining issues or so. That is why we are using non-free tooling. > > > > > > @zigo, If you have tried any free and stable platforms for video calls, please suggest and we will love > > > to use that. > > > > > > -gmann > > > > As I understand, the point isn't to make more than 16 persons talk in > > the meeting. Let me know if I'm wrong. > > > > In such case, a setup similar to Debconf (which used Jitsi for Debconf > > 2020 and 2021) can be used. You get just a few people in the meeting, > > and everyone else just listens to the broacast (ie: a regular online > > video in your browser or VLC). > in general whe we have video meeting for thinkgs like rbac discussions we do want to allow most or all attendes to be able to talk so > using vlc to have everyone else just listent would fail that requireemtn in general. > > if it was a presentation sure but for a meeting that would not really be useful Yeah, these are more interactive meetings than presentation style. If anyone attending the the meeting I expect they will participate in the discussion sometimes or at least can/should be able to make comments when things are related to their projects/area or so. 
-gmann > > > > The full setup is well documented [1], and we have free software ansible > > scripts [2]. This allows thousands of attendees, plus recording and > > reviewing of the videos. > > > > I very much would love if there was some efforts put in this direction > > (or some similar setup, as long as it's fully free). > > > > Hoping this helps, > > Cheers, > > > > Thomas Goirand (zigo) > > > > [1] https://video.debconf.org > > [2] https://salsa.debian.org/debconf-video-team > > > > > From cboylan at sapwetik.org Mon May 23 15:53:39 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 23 May 2022 08:53:39 -0700 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <180f197fed2.b69f7f0d122307.5086605122825317883@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com> <94458541-bccb-4fe8-7ad4-93bdc14065b5@debian.org> <8642f998298488fea7a490bcd3345dde07e524ca.camel@redhat.com> <180f197fed2.b69f7f0d122307.5086605122825317883@ghanshyammann.com> Message-ID: <4cc70187-07f9-421e-b45a-3f5150c48428@www.fastmail.com> On Mon, May 23, 2022, at 8:45 AM, Ghanshyam Mann wrote: > ---- On Mon, 23 May 2022 04:54:23 -0500 Sean Mooney > wrote ---- > > On Sat, 2022-05-21 at 17:33 +0200, Thomas Goirand wrote: > > > On 5/21/22 06:07, Ghanshyam Mann wrote: > > > > ---- On Fri, 20 May 2022 20:56:27 -0500 Erik McCormick > wrote ---- > > > > > > > > > > On Fri, May 20, 2022, 7:03 PM Thomas Goirand > wrote: > > > > > On 5/20/22 19:42, Ghanshyam Mann wrote: > > > > > > * Joining info Join with Google Meet = > meet.google.com/gwi-cphb-rya > > > > > > > > > > What would it take to make everyone switch to free software? > We're > > > > > moving from one non-free platform (zoom) to another (google > meet), even > > > > > if we have Jitsi that works very well... :/ > > > > > > > > > > Jitsi degrades beyond 16 or so interactive users. I love it > and wish it well, but it's just not there yet. If the situation has > changed in recent months, great then let's take a look. > > > > > > > > Yeah, we try to use that as our first preference but it did not > work as expected. In a recent usage in RBAC discussion a few weeks ago, > > > > many attendees complained about audio, joining issues or so. > That is why we are using non-free tooling. > > > > > > > > @zigo, If you have tried any free and stable platforms for video > calls, please suggest and we will love > > > > to use that. > > > > > > > > -gmann > > > > > > As I understand, the point isn't to make more than 16 persons talk > in > > > the meeting. Let me know if I'm wrong. > > > > > > In such case, a setup similar to Debconf (which used Jitsi for > Debconf > > > 2020 and 2021) can be used. You get just a few people in the > meeting, > > > and everyone else just listens to the broacast (ie: a regular > online > > > video in your browser or VLC). > > in general whe we have video meeting for thinkgs like rbac > discussions we do want to allow most or all attendes to be able to talk > so > > using vlc to have everyone else just listent would fail that > requireemtn in general. 
> > > > if it was a presentation sure but for a meeting that would not > really be useful > > Yeah, these are more interactive meetings than presentation style. If > anyone attending the > the meeting I expect they will participate in the discussion sometimes > or at least can/should > be able to make comments when things are related to their projects/area > or so. Jitsi meet seems to scale on the client side and video seems to impact that quite a bit. What this means is the more people you have in a call and the more people streaming webcams the worse the experience. That said it does seem to work fairly well with 30-40 people if video is turned off and you rely on the paired etherpad system. It isn't perfect, but the OpenDev team uses it when we need to do calls. That said it has been a while since I was in a call that large. As for debugging why sound doesn't work as mentioned prior to the PTG one known issue is that Firefox seems to treat jitsi's webrtc as autoplayed audio which means you have to enable autoplay audio for the jitsi server or enable it globally. Other than that we are not currently aware of any issues preventing audio (or video) from working. It is always a great help if the people who have problems with tools like this can reach out so that we can help debug them. Instead we're just told it doesn't work and get no details that are actionable. The mobile clients do seem to work quite well if you need a fallback too. > > -gmann > > > > > > > The full setup is well documented [1], and we have free software ansible > > > scripts [2]. This allows thousands of attendees, plus recording and > > > reviewing of the videos. > > > > > > I very much would love if there was some efforts put in this direction > > > (or some similar setup, as long as it's fully free). > > > > > > Hoping this helps, > > > Cheers, > > > > > > Thomas Goirand (zigo) > > > > > > [1] https://video.debconf.org > > > [2] https://salsa.debian.org/debconf-video-team > > > > > > > > > From mnaser at vexxhost.com Mon May 23 21:42:29 2022 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 23 May 2022 17:42:29 -0400 Subject: [neutron][neutron-vpnaas] neutron-tempest-plugin coverage Message-ID: Hiya, As part of cleaning up VPNaaS, I decided to explore fixing the longstanding failures of the `neutron-tempest-plugin-vpnaas-libreswan-centos` job that was always failing and non-voting. Upon trying to flip the switch on it in a test change[1], I noticed that much of the test results seem to be simply exercising the API[2], essentially making the fact that we run that job on CentOS a bit redundant. The tests that actually exercise more than the API and validate connectivity are the functional tests, which currently run for "StrongSwan" only on Ubuntu. My question to the Neutron team is: 1. Should we get rid of the functional jobs and move them into neutron-tempest-plugin, and run that for different platforms? 2. Should we get rid of the neutron-tempest-plugin job under CentOS and add functional jobs for CentOS? My inclination seems to be #2, since that is what other stadium projects seem to be doing (no one seems to run platform-specific neutron-tempest-plugin jobs). It also keeps maintainability simpler since it remains in the hands of the VPNaaS team. 
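To make option #2 a bit more concrete, a platform variant of the existing functional job could be sketched along the following lines in .zuul.yaml. This is only a hedged illustration: the job, parent and nodeset names below are placeholders I am assuming for the sake of the example, not the actual definitions in neutron-vpnaas or devstack.

    - job:
        name: neutron-vpnaas-functional-centos-9-stream   # placeholder name
        parent: neutron-vpnaas-functional                 # assumed name of the existing Ubuntu-based functional job
        nodeset: devstack-single-node-centos-9-stream     # assumed CentOS nodeset; use whichever one is actually defined
        description: Run the VPNaaS functional/scenario tests on CentOS.

    - project:
        check:
          jobs:
            - neutron-vpnaas-functional-centos-9-stream:
                voting: false

The point of the sketch is only that option #2 amounts to reusing the existing functional job definition with a different nodeset, rather than maintaining a separate platform-specific neutron-tempest-plugin job.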
Thanks Mohammed [1]: https://review.opendev.org/c/openstack/neutron-vpnaas/+/843005 [2]: https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_cd1/843005/2/check/neutron-tempest-plugin-vpnaas-libreswan-centos/cd1ac04/testr_results.html -- Mohammed Naser VEXXHOST, Inc. From gmann at ghanshyammann.com Mon May 23 23:52:12 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 23 May 2022 18:52:12 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 26, 2022 at 1500 UTC Message-ID: <180f355dc94.ff1fccf6136934.1285061492651779103@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for May 26, 2022 at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, May 25, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From tkajinam at redhat.com Tue May 24 00:13:52 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 24 May 2022 09:13:52 +0900 Subject: [puppet][tripleo][neutron][networking-ansible] Deprecating support for networking-ansible In-Reply-To: References: Message-ID: // adding [networking-ansible] tag in case it can attract attention of the team Because we have not heard any objections for a while, I've proposed deprecation of networking-ansible support in puppet-neutron[1] and TripleO[2]. [1] https://review.opendev.org/c/openstack/puppet-neutron/+/842996 [2] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/842997 If anybody has concerns about this, please let us know in the above reviews. Thank you, Takashi On Wed, May 11, 2022 at 12:10 PM Takashi Kajinami wrote: > Hello > > > Currently puppet-neutron supports deploying networking-ansible[1]. > > However I noticed the project looks unmaintained. > - No release was made since stable/victoria > - Its job template has not been updated still points > openstack-python3-wallaby-jobs > (likely because of no release activity) > - setup.cfg still indicates support for Python 2 while it does not > support 3.6/7/9 > > Although the repo got a few patches merged 6 months ago, I've not seen any > interest > in maintaining the repository. > > > Based on the above observations, I'd propose deprecating the support in > Zed and removing it > in the post-Zed release. TripleO also supports networking-ansible but that > will be also deprecated > and removed because the dependent implementation will be removed. > (That's why I added the tripleo tag) > > If you have any concerns then please let me know. If we don't hear any > concerns in a few > weeks then I'll propose the deprecation. > > I'm adding the [neutron] tag in case anybody in the team is interested in > fixing the maintenance status. > IIUC networking-ansible is not part of the stadium but I'm not aware of > any good alternatives... > > [1] https://opendev.org/x/networking-ansible > > Thank you, > Takashi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From seyeong.kim at canonical.com Tue May 24 03:28:35 2022 From: seyeong.kim at canonical.com (Seyeong Kim) Date: Tue, 24 May 2022 12:28:35 +0900 Subject: Question about lp 1950186 Message-ID: Hello, I have a question about lp https://bugs.launchpad.net/nova/+bug/1950186 if this can be fixed in near future or if there is any plan? Thanks in advance. 
From p.aminian.server at gmail.com Tue May 24 04:58:13 2022 From: p.aminian.server at gmail.com (Parsa Aminian) Date: Tue, 24 May 2022 09:28:13 +0430 Subject: skyline dashboard filter issue Message-ID: hello On Skyline dashboard of openstack there is no way to search instances base on fixed ip address . is there anyway that I can add fixed ip filter to this dashboard ? thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From bxzhu_5355 at 163.com Tue May 24 05:29:32 2022 From: bxzhu_5355 at 163.com (=?utf-8?B?5pyx5Y2a56Wl?=) Date: Tue, 24 May 2022 13:29:32 +0800 Subject: [skyline] skyline dashboard filter issue In-Reply-To: References: Message-ID: <87645A95-BE73-490D-A7D5-2D9A72AB08EB@163.com> Hi, Parsa Aminian If you want to add fixed ip filter for instances list, I think you need to add the filter param into '/extension/servers? API for skyline-apiserver project[0]. And then add this feature for skyline-console project[1]. [0]: https://opendev.org/openstack/skyline-apiserver [1]: https://opendev.org/openstack/skyline-console Thanks, Boxiang > 2022?5?24? ??12:58?Parsa Aminian ??? > > hello > On Skyline dashboard of openstack there is no way to search instances base on fixed ip address . is there anyway that I can add fixed ip filter to this dashboard ? > thanks From skaplons at redhat.com Tue May 24 06:06:54 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 24 May 2022 08:06:54 +0200 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <4cc70187-07f9-421e-b45a-3f5150c48428@www.fastmail.com> References: <2175937.irdbgypaU6@p1> <180f197fed2.b69f7f0d122307.5086605122825317883@ghanshyammann.com> <4cc70187-07f9-421e-b45a-3f5150c48428@www.fastmail.com> Message-ID: <1863524.taCxCBeP46@p1> Hi, Dnia poniedzia?ek, 23 maja 2022 17:53:39 CEST Clark Boylan pisze: > On Mon, May 23, 2022, at 8:45 AM, Ghanshyam Mann wrote: > > ---- On Mon, 23 May 2022 04:54:23 -0500 Sean Mooney > > wrote ---- > > > On Sat, 2022-05-21 at 17:33 +0200, Thomas Goirand wrote: > > > > On 5/21/22 06:07, Ghanshyam Mann wrote: > > > > > ---- On Fri, 20 May 2022 20:56:27 -0500 Erik McCormick > > wrote ---- > > > > > > > > > > > > On Fri, May 20, 2022, 7:03 PM Thomas Goirand > > wrote: > > > > > > On 5/20/22 19:42, Ghanshyam Mann wrote: > > > > > > > * Joining info Join with Google Meet = > > meet.google.com/gwi-cphb-rya > > > > > > > > > > > > What would it take to make everyone switch to free software? > > We're > > > > > > moving from one non-free platform (zoom) to another (google > > meet), even > > > > > > if we have Jitsi that works very well... :/ > > > > > > > > > > > > Jitsi degrades beyond 16 or so interactive users. I love it > > and wish it well, but it's just not there yet. If the situation has > > changed in recent months, great then let's take a look. > > > > > > > > > > Yeah, we try to use that as our first preference but it did not > > work as expected. In a recent usage in RBAC discussion a few weeks ago, > > > > > many attendees complained about audio, joining issues or so. > > That is why we are using non-free tooling. > > > > > > > > > > @zigo, If you have tried any free and stable platforms for video > > calls, please suggest and we will love > > > > > to use that. > > > > > > > > > > -gmann > > > > > > > > As I understand, the point isn't to make more than 16 persons talk > > in > > > > the meeting. Let me know if I'm wrong. 
> > > > > > > > In such case, a setup similar to Debconf (which used Jitsi for > > Debconf > > > > 2020 and 2021) can be used. You get just a few people in the > > meeting, > > > > and everyone else just listens to the broacast (ie: a regular > > online > > > > video in your browser or VLC). > > > in general whe we have video meeting for thinkgs like rbac > > discussions we do want to allow most or all attendes to be able to talk > > so > > > using vlc to have everyone else just listent would fail that > > requireemtn in general. > > > > > > if it was a presentation sure but for a meeting that would not > > really be useful > > > > Yeah, these are more interactive meetings than presentation style. If > > anyone attending the > > the meeting I expect they will participate in the discussion sometimes > > or at least can/should > > be able to make comments when things are related to their projects/area > > or so. > > Jitsi meet seems to scale on the client side and video seems to impact that quite a bit. What this means is the more people you have in a call and the more people streaming webcams the worse the experience. That said it does seem to work fairly well with 30-40 people if video is turned off and you rely on the paired etherpad system. It isn't perfect, but the OpenDev team uses it when we need to do calls. That said it has been a while since I was in a call that large. > > As for debugging why sound doesn't work as mentioned prior to the PTG one known issue is that Firefox seems to treat jitsi's webrtc as autoplayed audio which means you have to enable autoplay audio for the jitsi server or enable it globally. Other than that we are not currently aware of any issues preventing audio (or video) from working. It is always a great help if the people who have problems with tools like this can reach out so that we can help debug them. Instead we're just told it doesn't work and get no details that are actionable. FWIW We are using Jitsi during the Neutron CI meetings every other week. I installed Jitsi app from flathub recently (https://flathub.org/apps/details/org.jitsi.jitsi-meet[1]) and for me it works pretty good now. But I know that some people have problems with sound and they need to reconnect couple of times. It's not always problem that sound is not working at all. Sometimes people can hear part of the other people on call but not all of them. I don't have any more data about it but if we will experience such issues next time during CI meeting, We will try to reach out to You with hopefully more details. > > The mobile clients do seem to work quite well if you need a fallback too. > > > > > -gmann > > > > > > > > > > The full setup is well documented [1], and we have free software ansible > > > > scripts [2]. This allows thousands of attendees, plus recording and > > > > reviewing of the videos. > > > > > > > > I very much would love if there was some efforts put in this direction > > > > (or some similar setup, as long as it's fully free). > > > > > > > > Hoping this helps, > > > > Cheers, > > > > > > > > Thomas Goirand (zigo) > > > > > > > > [1] https://video.debconf.org > > > > [2] https://salsa.debian.org/debconf-video-team > > > > > > > > > > > > > > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://flathub.org/apps/details/org.jitsi.jitsi-meet -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 

From skaplons at redhat.com Tue May 24 06:47:08 2022
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Tue, 24 May 2022 08:47:08 +0200
Subject: [Neutron] CI meeting - 24.05.2022
Message-ID: <110228701.nniJfEyVGO@p1>

Hi,

Just a quick reminder that today at 1500 UTC we will have the Neutron CI meeting. It will be on Jitsi [1] this week. The agenda can be found at [2].

See you there :)

[1] https://meetpad.opendev.org/neutron-ci-meetings
[2] https://etherpad.opendev.org/p/neutron-ci-meetings

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat
--------
[1] https://meetpad.opendev.org/neutron-ci-meetings
[2] https://etherpad.opendev.org/p/neutron-ci-meetings
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 

From rdhasman at redhat.com Tue May 24 07:29:10 2022
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Tue, 24 May 2022 12:59:10 +0530
Subject: [cinder] This week's meeting will be in video+IRC
Message-ID: 

Hello Argonauts,

This week's meeting will be held in video + IRC mode with details as follows:

Date: 25th May, 2022
Time: 1400 UTC
Meeting link: https://bluejeans.com/556681290
IRC Channel: #openstack-meeting-alt

Make sure you're connected to both the BlueJeans meeting and IRC, since we do the roll call and also summarize the discussion points on IRC.

Note: Every last meeting of the month is held in video + IRC mode to give everyone the flexibility to communicate their thoughts in writing or verbally. It also acts as a medium to meet the team (virtually) every month.

Thanks and regards
Rajat Dhasmana
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From katonalala at gmail.com Tue May 24 07:36:41 2022
From: katonalala at gmail.com (Lajos Katona)
Date: Tue, 24 May 2022 09:36:41 +0200
Subject: [neutron] Propose Yatin Karel for Neutron core team
Message-ID: 

Hi Neutrinos,

I would like to propose Yatin Karel (IRC: ykarel) as a member of the Neutron core team.
Yatin is a long-time OpenStack developer, and has concentrated on the Networking projects in the last few cycles.

His experience, proactivity and sharp eye have proved to be very helpful, both in reviews and in fixing issues.
His reviews are really valuable and reflect his experience and helpfulness.

I am sure that he will be a great addition to the core team.

Please vote for his nomination by answering this mail until next Tuesday.

Lajos Katona (lajoskatona)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mkopec at redhat.com Tue May 24 07:56:33 2022
From: mkopec at redhat.com (Martin Kopec)
Date: Tue, 24 May 2022 09:56:33 +0200
Subject: [qa][openeuler] QA wants to drop openeuler job
In-Reply-To: 
References: 
Message-ID: 

https://review.opendev.org/c/openstack/devstack/+/842534

On Thu, 19 May 2022 at 10:57, Martin Kopec wrote:

> Hi everyone,
>
> we're considering dropping the devstack-platform-openEuler-20.03-SP2 job.
> It's been failing for a long time and based on the fact that no one has
> complained about it, we assume that there isn't much interest in having the
> job around.
>
> If that's not the case, please, reach out and let's discuss updating the
> job.
> > Thanks, > -- > Martin Kopec > Senior Software Quality Engineer > Red Hat EMEA > IM: kopecmartin > > > > -- Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From obondarev at mirantis.com Tue May 24 08:07:16 2022 From: obondarev at mirantis.com (Oleg Bondarev) Date: Tue, 24 May 2022 12:07:16 +0400 Subject: [neutron] Propose Yatin Karel for Neutron core team In-Reply-To: References: Message-ID: +1 well deserved! Thanks, Oleg On Tue, May 24, 2022 at 11:43 AM Lajos Katona wrote: > Hi Neutrinos, > > I would like to propose Yatin Karel (IRC: ykarel) as a member of the > Neutron core team. > Yatin is a long time Openstack developer, and started concentrating to > Networking projects in the last few cycles. > > His experience, proactivity and sharp eye proved to be very helpful both > in reviews and fixing issues. > His reviews are really valuable and reflect his experience and helpfulness. > > I am sure that he will be a great addition to the core team. > > Please vote for his nomination by answering this mail till next Tuesday. > > Lajos Katona (lajoskatona) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Tue May 24 08:14:36 2022 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 24 May 2022 10:14:36 +0200 Subject: [neutron][neutron-vpnaas] neutron-tempest-plugin coverage In-Reply-To: References: Message-ID: Hi, Basically I am fine with it, I will bring this topic to the meeting today. Lajos Mohammed Naser ezt ?rta (id?pont: 2022. m?j. 23., H, 23:49): > Hiya, > > As part of cleaning up VPNaaS, I decided to explore fixing the > longstanding failures of the > `neutron-tempest-plugin-vpnaas-libreswan-centos` job that was always > failing and non-voting. > > Upon trying to flip the switch on it in a test change[1], I noticed > that much of the test results seem to be simply exercising the API[2], > essentially making the fact that we run that job on CentOS a bit > redundant. > > The tests that actually exercise more than the API and validate > connectivity are the functional tests, which currently run for > "StrongSwan" only on Ubuntu. My question to the Neutron team is: > > 1. Should we get rid of the functional jobs and move them into > neutron-tempest-plugin, and run that for different platforms? > 2. Should we get rid of the neutron-tempest-plugin job under CentOS > and add functional jobs for CentOS? > > My inclination seems to be #2, since that is what other stadium > projects seem to be doing (no one seems to run platform-specific > neutron-tempest-plugin jobs). It also keeps maintainability simpler > since it remains in the hands of the VPNaaS team. > > Thanks > Mohammed > > > [1]: https://review.opendev.org/c/openstack/neutron-vpnaas/+/843005 > [2]: > https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_cd1/843005/2/check/neutron-tempest-plugin-vpnaas-libreswan-centos/cd1ac04/testr_results.html > > > -- > Mohammed Naser > VEXXHOST, Inc. > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zigo at debian.org Tue May 24 09:22:43 2022 From: zigo at debian.org (Thomas Goirand) Date: Tue, 24 May 2022 11:22:43 +0200 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <180f197fed2.b69f7f0d122307.5086605122825317883@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com> <94458541-bccb-4fe8-7ad4-93bdc14065b5@debian.org> <8642f998298488fea7a490bcd3345dde07e524ca.camel@redhat.com> <180f197fed2.b69f7f0d122307.5086605122825317883@ghanshyammann.com> Message-ID: <7a510767-e3d2-aa6d-eb68-a5a6c7c87550@debian.org> On 5/23/22 17:45, Ghanshyam Mann wrote: > Yeah, these are more interactive meetings than presentation style. If anyone attending the > the meeting I expect they will participate in the discussion sometimes or at least can/should > be able to make comments when things are related to their projects/area or so. > > -gmann In a meeting with 30 participants or more, it is impossible that everyone talks. More over, at least one of them will have his/her mic on with some disturbing noise (and this is platform agnostic...). In such a setup, asking questions on the chat (for example, on IRC) is a way more efficient, and usually enough. If not enough, you can always ask the person to join the meeting, if more interactive (unexpected) discussion is needed. Though it'd be surprising if this happens often. Cheers, Thomas Goirand (zigo) From smooney at redhat.com Tue May 24 09:36:47 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 24 May 2022 10:36:47 +0100 Subject: Question about lp 1950186 In-Reply-To: References: Message-ID: <6f0904cca7968e730480ff843862cbfc36d19ded.camel@redhat.com> On Tue, 2022-05-24 at 12:28 +0900, Seyeong Kim wrote: > Hello, > > I have a question about lp > https://bugs.launchpad.net/nova/+bug/1950186 if this can be fixed in > near future or if there is any plan? no this is not a bug nova is not configure correctly. we do not supprot mixing numa and non numa instance on the same host and we do not support mempage aware memory tracking in placment at this time. i have closed the bug as wont fix as this is really a feature reqeust and is not intended to work at this time unless you use hw:mem_page_size=small and the numa toplogy filter in combination with configuring the host reserved 4k pages as i noted in https://bugs.launchpad.net/nova/+bug/1950186/comments/8 > > Thanks in advance. 
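For anyone else who lands on this bug, a rough sketch of the workaround referenced above (the flavor name, NUMA node and page counts are only illustrative, not taken from the bug report):

# make instances NUMA-aware and account their RAM against small (4k) pages
openstack flavor set my.flavor --property hw:mem_page_size=small

# nova.conf on the compute node: reserve some 4k pages for the host itself,
# normally expressed with the reserved_huge_pages option
[DEFAULT]
reserved_huge_pages = node:0,size:4,count:131072

# nova.conf on the scheduler: make sure the NUMA topology filter is enabled
[filter_scheduler]
enabled_filters = ...,NUMATopologyFilter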
> From smooney at redhat.com Tue May 24 09:44:55 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 24 May 2022 10:44:55 +0100 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <1863524.taCxCBeP46@p1> References: <2175937.irdbgypaU6@p1> <180f197fed2.b69f7f0d122307.5086605122825317883@ghanshyammann.com> <4cc70187-07f9-421e-b45a-3f5150c48428@www.fastmail.com> <1863524.taCxCBeP46@p1> Message-ID: <0bbce3c3a79574eb1d2cc1d02243bb9b3834a659.camel@redhat.com> On Tue, 2022-05-24 at 08:06 +0200, Slawek Kaplonski wrote: > Hi, > > Dnia poniedzia?ek, 23 maja 2022 17:53:39 CEST Clark Boylan pisze: > > On Mon, May 23, 2022, at 8:45 AM, Ghanshyam Mann wrote: > > > ---- On Mon, 23 May 2022 04:54:23 -0500 Sean Mooney > > > wrote ---- > > > > On Sat, 2022-05-21 at 17:33 +0200, Thomas Goirand wrote: > > > > > On 5/21/22 06:07, Ghanshyam Mann wrote: > > > > > > ---- On Fri, 20 May 2022 20:56:27 -0500 Erik McCormick > > > wrote ---- > > > > > > > > > > > > > > On Fri, May 20, 2022, 7:03 PM Thomas Goirand > > > wrote: > > > > > > > On 5/20/22 19:42, Ghanshyam Mann wrote: > > > > > > > > * Joining info Join with Google Meet = > > > meet.google.com/gwi-cphb-rya > > > > > > > > > > > > > > What would it take to make everyone switch to free software? > > > We're > > > > > > > moving from one non-free platform (zoom) to another (google > > > meet), even > > > > > > > if we have Jitsi that works very well... :/ > > > > > > > > > > > > > > Jitsi degrades beyond 16 or so interactive users. I love it > > > and wish it well, but it's just not there yet. If the situation has > > > changed in recent months, great then let's take a look. > > > > > > > > > > > > Yeah, we try to use that as our first preference but it did not > > > work as expected. In a recent usage in RBAC discussion a few weeks ago, > > > > > > many attendees complained about audio, joining issues or so. > > > That is why we are using non-free tooling. > > > > > > > > > > > > @zigo, If you have tried any free and stable platforms for video > > > calls, please suggest and we will love > > > > > > to use that. > > > > > > > > > > > > -gmann > > > > > > > > > > As I understand, the point isn't to make more than 16 persons talk > > > in > > > > > the meeting. Let me know if I'm wrong. > > > > > > > > > > In such case, a setup similar to Debconf (which used Jitsi for > > > Debconf > > > > > 2020 and 2021) can be used. You get just a few people in the > > > meeting, > > > > > and everyone else just listens to the broacast (ie: a regular > > > online > > > > > video in your browser or VLC). > > > > in general whe we have video meeting for thinkgs like rbac > > > discussions we do want to allow most or all attendes to be able to talk > > > so > > > > using vlc to have everyone else just listent would fail that > > > requireemtn in general. > > > > > > > > if it was a presentation sure but for a meeting that would not > > > really be useful > > > > > > Yeah, these are more interactive meetings than presentation style. If > > > anyone attending the > > > the meeting I expect they will participate in the discussion sometimes > > > or at least can/should > > > be able to make comments when things are related to their projects/area > > > or so. > > > > Jitsi meet seems to scale on the client side and video seems to impact that quite a bit. What this means is the more people you have in a call and the more people streaming webcams the worse the experience. 
That said it does seem to work fairly well with 30-40 people if video is turned off and you rely on the paired etherpad system. It isn't perfect, but the OpenDev team uses it when we need to do calls. That said it has been a while since I was in a call that large. > > > > As for debugging why sound doesn't work as mentioned prior to the PTG one known issue is that Firefox seems to treat jitsi's webrtc as autoplayed audio which means you have to enable autoplay audio for the jitsi server or enable it globally. Other than that we are not currently aware of any issues preventing audio (or video) from working. It is always a great help if the people who have problems with tools like this can reach out so that we can help debug them. Instead we're just told it doesn't work and get no details that are actionable. > > FWIW We are using Jitsi during the Neutron CI meetings every other week. I installed Jitsi app from flathub recently (https://flathub.org/apps/details/org.jitsi.jitsi-meet[1]) and for me it works pretty good now. But I know that some people have problems with sound and they need to reconnect couple of times. It's not always problem that sound is not working at all. Sometimes people can hear part of the other people on call but not all of them. > I don't have any more data about it but if we will experience such issues next time during CI meeting, We will try to reach out to You with hopefully more details. honestly i use chromium for video calls and firefox for everything else and i never have issue with any of the differnt video apps using that combination. granted im also still using xorg and pulseaudio not wayland and pipewire but for those that have issues with jitsi if you cant or dont want to use use the flathub app try chromium or chrome if you want the googale build and are having issue with firefox and audio. > > > > > The mobile clients do seem to work quite well if you need a fallback too. > > > > > > > > -gmann > > > > > > > > > > > > > The full setup is well documented [1], and we have free software ansible > > > > > scripts [2]. This allows thousands of attendees, plus recording and > > > > > reviewing of the videos. > > > > > > > > > > I very much would love if there was some efforts put in this direction > > > > > (or some similar setup, as long as it's fully free). 
> > > > > > > > > > Hoping this helps, > > > > > Cheers, > > > > > > > > > > Thomas Goirand (zigo) > > > > > > > > > > [1] https://video.debconf.org > > > > > [2] https://salsa.debian.org/debconf-video-team > > > > > > > > > > > > > > > > > > > > > > > From smooney at redhat.com Tue May 24 10:01:40 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 24 May 2022 11:01:40 +0100 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <7a510767-e3d2-aa6d-eb68-a5a6c7c87550@debian.org> References: <2175937.irdbgypaU6@p1> <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com> <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com> <94458541-bccb-4fe8-7ad4-93bdc14065b5@debian.org> <8642f998298488fea7a490bcd3345dde07e524ca.camel@redhat.com> <180f197fed2.b69f7f0d122307.5086605122825317883@ghanshyammann.com> <7a510767-e3d2-aa6d-eb68-a5a6c7c87550@debian.org> Message-ID: <58b460773db8648a5c28eeec4f78ed3d7f871de9.camel@redhat.com> On Tue, 2022-05-24 at 11:22 +0200, Thomas Goirand wrote: > On 5/23/22 17:45, Ghanshyam Mann wrote: > > Yeah, these are more interactive meetings than presentation style. If anyone attending the > > the meeting I expect they will participate in the discussion sometimes or at least can/should > > be able to make comments when things are related to their projects/area or so. > > > > -gmann > > In a meeting with 30 participants or more, it is impossible that > everyone talks. More over, at least one of them will have his/her mic on > with some disturbing noise (and this is platform agnostic...). that has not been my experince. for ptgs and other video confernce we can have 30+ and still discuss things. general most people will have there mic muted for alot of the call and only interject if they feel they can add to the converstion or its a topic of interest. > > In such a setup, asking questions on the chat (for example, on IRC) is a > way more efficient, and usually enough. If not enough, you can always > ask the person to join the meeting, if more interactive (unexpected) > discussion is needed. Though it'd be surprising if this happens often. there is really not advantage to having a video call if the majority of the interaction is done over irc or chat. we use video calls to enable higher bandwith comunication then irc and generally try to aovid using video calls as a comunity with a prefernce for irc meeting due to the better logging that enables. so the primary reason for video mettings is to enable multiple people discuss topics rapitly with the visiual queue of if someone want to interject ectra that a void only call would not facilate. so i would think that unless you expect a lot of interaction you should not be using a video call and shoudl be using our default of irc. the other option we could look into perhaps is using matix. it has video capablities and text but i have not personally used that facilaty yet. you can also join via the web with 0 install using say element.io and there are native applicaiton on many operating systmes inclouding ios/andriod. if we were to look at using something other then jitsi then haveing an openstack home server or using rooms on the matrix.org home server may be something we should look into. 
> > Cheers,
> >
> > Thomas Goirand (zigo)
>

From Istvan.Szabo at agoda.com Tue May 24 10:06:32 2022
From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda))
Date: Tue, 24 May 2022 10:06:32 +0000
Subject: Recreate virsh xml
Message-ID: 

Hi,

I have a node which has been live-migrated to another node and it is running on the new node BUT there isn't any xml definition file (/etc/libvirt/qemu/instance-*.xml) of it which is automatically generated.
How I solve this issue normally is to respawn that vm with the same disk and ip that it had before, but I wonder is there any easier way to just simply recreate the automatically generated definition file.

The file which I'm talking about is the auto generated one, /etc/libvirt/qemu/instance-*.xml, which starts like this, so I guess I can't create it by myself:

<domain ...>
  <name>instance-00006580</name>
  <uuid>ac8cf83e-6819-417a-9f83-de73c28567e9</uuid>
  ...

Thank you

________________________________
This message is confidential and is for the sole use of the intended recipient(s).
It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com Tue May 24 10:26:11 2022
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 24 May 2022 11:26:11 +0100
Subject: Recreate virsh xml
In-Reply-To: 
References: 
Message-ID: 

On Tue, 2022-05-24 at 10:06 +0000, Szabo, Istvan (Agoda) wrote:
> Hi,
>
> I have a node which has been live-migrated to another node and it is running on the new node BUT there isn't any xml definition file (/etc/libvirt/qemu/instance-*.xml) of it which is automatically generated.
> How I solve this issue normally is to respawn that vm with the same disk and ip that it had before, but I wonder is there any easier way to just simply recreate the automatically generated definition file.
>
you normally would just regenerate the xml by hard rebooting the instance, but the instance-id can change when you live migrate, so if you check with the admin client it might be present just under a different name.

if you want a stable xml path you can change the template we use to, say, use the instance uuid.
https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.instance_name_template
the default is "instance-%08x"; i prefer to use "instance-%(uuid)s". we probably should change that default at some point.
there is currently an upgrade impact to changing this, but with a minor code change we could actually work around that: we just need to store the current name in the instance_system_metadata table and only use the new default for new instances.

> The file which I'm talking about is the auto generated one, /etc/libvirt/qemu/instance-*.xml, which starts like this, so I guess I can't create it by myself:
>
> <domain ...>
>   <name>instance-00006580</name>
>   <uuid>ac8cf83e-6819-417a-9f83-de73c28567e9</uuid>
>   ...
>
> Thank you
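For anyone who finds this thread later, a rough sketch of the two options above (the server uuid and domain name are placeholders):

# regenerate the libvirt definition for an existing instance
openstack server reboot --hard <server-uuid>

# nova.conf on the compute nodes, if you want the libvirt domain name
# (and therefore the /etc/libvirt/qemu/*.xml file name) to follow the
# instance uuid; note the upgrade impact for existing instances
# mentioned above
[DEFAULT]
instance_name_template = instance-%(uuid)s

# once the domain is redefined you can inspect it with
virsh dumpxml <domain-name>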
> > Thanks > > [cid:3699cab3-ea6a-45eb-91c3-65e91056fd71] From fungi at yuggoth.org Tue May 24 12:00:45 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 24 May 2022 12:00:45 +0000 Subject: OpenStack still promoting non-free platforms (was: [all][tc] Change OpenStack release naming policy proposal) In-Reply-To: <58b460773db8648a5c28eeec4f78ed3d7f871de9.camel@redhat.com> References: <180c03c9614.10ad4d72e148482.5160564778639516267@ghanshyammann.com> <180e2904a63.ed97d27316807.1879483015684560107@ghanshyammann.com> <3bb78ec6-23f1-c2bd-2995-5f13e698ff3f@debian.org> <180e4cc4c99.f31b243d23594.4050007669560262314@ghanshyammann.com> <94458541-bccb-4fe8-7ad4-93bdc14065b5@debian.org> <8642f998298488fea7a490bcd3345dde07e524ca.camel@redhat.com> <180f197fed2.b69f7f0d122307.5086605122825317883@ghanshyammann.com> <7a510767-e3d2-aa6d-eb68-a5a6c7c87550@debian.org> <58b460773db8648a5c28eeec4f78ed3d7f871de9.camel@redhat.com> Message-ID: <20220524120044.xsogl33p3rsfs5ry@yuggoth.org> On 2022-05-24 11:01:40 +0100 (+0100), Sean Mooney wrote: [...] > haveing an openstack home server or using rooms on the matrix.org > home server may be something we should look into. The OpenDev Collaboratory already has an "opendev.org" Matrix homeserver which we use to host channels (notably #zuul) and the Matrix versions of some of our chatbots like GerritBot. I've never tried Matrix's videoconferencing features, but would be interested to know how well they work. That said, from what I can see it's also WebRTC based, like Jitsi-Meet, so any users experiencing WebRTC connectivity or support challenges will likely see the same behaviors for either one. https://zuul-ci.org/docs/zuul/latest/howtos/matrix.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From eblock at nde.ag Tue May 24 12:09:36 2022 From: eblock at nde.ag (Eugen Block) Date: Tue, 24 May 2022 12:09:36 +0000 Subject: Creating VM directly on external network. In-Reply-To: References: Message-ID: <20220524120936.Horde.5r4Lp2DgqElbzArmXnVDxxv@webmail.nde.ag> In addition to Seans comment I wanted to point out that you might need config-drive to ensure the VMs get their metadata since the neutron metadata service won't work on the external network. At least that's how we spin up instances in our external networks. The cli option would be 'openstack server create --image ... --config-drive true ' Zitat von Sean Mooney : > On Tue, 2022-05-24 at 10:50 +0000, Russell Stather wrote: >> Hi >> >> I have a charm based OpenStack setup. The network topology is as below. >> >> Creating vms on the internal network and adding floating ips works >> as expected. >> >> Adding vms on directly on to the ext_net does not. openstack allows >> me to do it, but I get host is unreachable when trying to connect. >> Is there some configuration I have missed? > > am off the top of my head have you ensured that dhcp is enabled on > the external network. > floating ips are implemtend using neutron routers so you cannot used > them with vms on an external network. > other then that you just need to make sure that the external network > subnet is routeable on your phsyical netowrk > have you tried connecting to the vm form a vm in your openstack > cloud to rule out datacenter routing issues. 
> >> >> Thanks >> >> [cid:3699cab3-ea6a-45eb-91c3-65e91056fd71] From smooney at redhat.com Tue May 24 12:35:57 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 24 May 2022 13:35:57 +0100 Subject: Creating VM directly on external network. In-Reply-To: <20220524120936.Horde.5r4Lp2DgqElbzArmXnVDxxv@webmail.nde.ag> References: <20220524120936.Horde.5r4Lp2DgqElbzArmXnVDxxv@webmail.nde.ag> Message-ID: <8426df96c77f497469e50230cf553b67b9ed0b0d.camel@redhat.com> On Tue, 2022-05-24 at 12:09 +0000, Eugen Block wrote: > In addition to Seans comment I wanted to point out that you might need > config-drive to ensure the VMs get their metadata since the neutron > metadata service won't work on the external network. At least that's > how we spin up instances in our external networks. > The cli option would be > 'openstack server create --image ... --config-drive true ' yes config drive is one way the ohter way is to configure neutron to server the metadta via the dhcp namespace. > > > Zitat von Sean Mooney : > > > On Tue, 2022-05-24 at 10:50 +0000, Russell Stather wrote: > > > Hi > > > > > > I have a charm based OpenStack setup. The network topology is as below. > > > > > > Creating vms on the internal network and adding floating ips works > > > as expected. > > > > > > Adding vms on directly on to the ext_net does not. openstack allows > > > me to do it, but I get host is unreachable when trying to connect. > > > Is there some configuration I have missed? > > > > am off the top of my head have you ensured that dhcp is enabled on > > the external network. > > floating ips are implemtend using neutron routers so you cannot used > > them with vms on an external network. > > other then that you just need to make sure that the external network > > subnet is routeable on your phsyical netowrk > > have you tried connecting to the vm form a vm in your openstack > > cloud to rule out datacenter routing issues. > > > > > > > > Thanks > > > > > > [cid:3699cab3-ea6a-45eb-91c3-65e91056fd71] > > > > From zigo at debian.org Tue May 24 13:03:31 2022 From: zigo at debian.org (Thomas Goirand) Date: Tue, 24 May 2022 15:03:31 +0200 Subject: Creating VM directly on external network. In-Reply-To: References: Message-ID: On 5/24/22 12:50, Russell Stather wrote: > Hi > > I have a charm based OpenStack setup.? The network topology is as below. > > Creating vms on the internal network and adding floating ips works as > expected. > > Adding vms on directly on to the ext_net does not. openstack allows me > to do it, but I get host is unreachable when trying to connect. > Is there some configuration I have missed? > > Thanks I believe what you're trying to do is an internal shared publicly reachable network (with public IPs). This can be done with OpenStack, with DHCP and all. The only annoyance with this kind of setup, is that it can't be mixed with a private network with floating IPs, because your VMs would end up with 2 default gateways. I hope this helps, Cheers, Thomas Goirand (zigo) From gael.therond at bitswalk.com Tue May 24 13:56:04 2022 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Tue, 24 May 2022 15:56:04 +0200 Subject: [GLANCE][VICTORIA] - Multi-backend support with ceph backed swift. Message-ID: Hi everyone, I'm currently enabling glance multi backend support for one of our platforms and we're facing a weird situation. This platform is a CentOS 8 Stream based Openstack VICTORIA and when we enable the multi-backends feature, the swift store isn't working properly. 
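For completeness, a minimal sketch of both metadata options discussed in this thread (image, flavor and server names are placeholders, and the agent options assume the neutron dhcp agent is in use):

# option 1: attach a config drive at boot time
openstack server create --image <image> --flavor <flavor> \
  --network ext_net --config-drive true myvm

# option 2: have the dhcp agent serve 169.254.169.254 from its namespace
# dhcp_agent.ini
[DEFAULT]
enable_isolated_metadata = true
# or, to serve metadata even on networks that have a gateway/router:
force_metadata = true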
When we try to send an image to this store we end up with the following error message: *"ERROR glance_store._drivers.swift.store - A value for swift_store_auth_address is required."* Here is the complete log trace: https://paste.opendev.org/show/bGeCJjSB1C5GwD6JYFvu/ I'm a bit surprised because according to the error message and the glance_store code it means that either my *default_swift_reference* or *auth_address *fromthe *default_swift_reference *isn't set. https://opendev.org/openstack/glance_store/src/branch/stable/victoria/glance_store/_drivers/swift/store.py#L1267 However as you can see on my glance-api.conf file I've (seems so) correctly set the glance_store directive and the store settings: glance-api.conf: https://paste.opendev.org/show/bjFAPCjlCG6MqINGsIlK/ glance-swift.conf: https://paste.opendev.org/show/bcCfBQDj32T3qgiJj3P7/ I'm only using a single tenant glance container within the admin project. What's really weird is this issue still arise and exist with any known (to me) combination of configuration. I've tried to declare the store right within the glance-api.conf file, but it's not working. I've tried to declare the store right within the glance-api.conf file and with the deprecated swift_store_auth_address directive instead of the auth_address one, it's not working. I've tried to declare the store right within the glance-api.conf file and with the deprecated swift_store_auth_address directive plus the auth_address one, it's not working. I've tried to replace auth_address directive with the deprecated swift_store_auth_address within the glance-swift.conf, same error and issue again. When I replace the default_backend value with performance, uploading an image works perfectly. I know that my swift CEPH RGW backed endpoint works as I use it already to store data. I've even tried to push an image on it both using CLI (openstack object) and cURL API call and it works. I'm a bit lost here, so if anyone is having thoughts on this I would be happy to hear you! Thanks a lot everyone! -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Tue May 24 14:11:30 2022 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 24 May 2022 09:11:30 -0500 Subject: [neutron] Propose Yatin Karel for Neutron core team In-Reply-To: References: Message-ID: +1, well deserved! On Tue, May 24, 2022 at 3:07 AM Oleg Bondarev wrote: > +1 well deserved! > > Thanks, > Oleg > > On Tue, May 24, 2022 at 11:43 AM Lajos Katona > wrote: > >> Hi Neutrinos, >> >> I would like to propose Yatin Karel (IRC: ykarel) as a member of the >> Neutron core team. >> Yatin is a long time Openstack developer, and started concentrating to >> Networking projects in the last few cycles. >> >> His experience, proactivity and sharp eye proved to be very helpful both >> in reviews and fixing issues. >> His reviews are really valuable and reflect his experience and >> helpfulness. >> >> I am sure that he will be a great addition to the core team. >> >> Please vote for his nomination by answering this mail till next Tuesday. >> >> Lajos Katona (lajoskatona) >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue May 24 14:13:17 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 24 May 2022 23:13:17 +0900 Subject: [ops][heat][rbac] Call for feedback: Splitting stack to "system" stack and "project" stack In-Reply-To: References: Message-ID: This is a reminder of the feedback request. 
We are trying to gather feedback until the next policy pop-up meeting (June 7th) so I'd appreciate your (public or private) inputs ! On Wed, May 11, 2022 at 12:13 AM Takashi Kajinami wrote: > Hello, > > tl;dr > We are looking for some feedback from anyone developing their tool/software > creating/managing heat stacks, about the new requirement we are > considering. > > > Recently we've been discussing the issue with Heat and Secure RBAC work[1]. > > The current target of SRBAC work requires the appropriate scope according > to the resources. > - Project resources like instance, volume or network can be created by > project-scoped token > - Project resources like flavor, user, project or role can be created by > system-scoped token > > This is causing a problem with heat stacks which have both project > resources > and system resources, because heat currently uses the single token provided > by the user in a single stack API call. > > > As part of discussions we have discussed the "split stack" concept, which > requires > creating separate stacks per scope. This means If you want to create > project resources > and system resources by Heat, you should create two separate heat stacks > and call > heat stack api separately using different credentials. > > While we still need to investigate the feasibility of this option (eg. how > smooth we can > make the migration), we'd like to hear any feedback about the impact of > the "split stack" concept > on any external toolings depending on Heat, because this would require > some workflow/architecture > change in the toolings. If we hear many negative feedback/concerns then we > would examine > different approaches. > > Thank you, > Takashi > > [1] > https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ivo.Palli at bytesnet.nl Tue May 24 13:38:40 2022 From: Ivo.Palli at bytesnet.nl (Ivo Palli) Date: Tue, 24 May 2022 13:38:40 +0000 Subject: [Monasca] correlating monasca-api metrics resource_id to instance id Message-ID: Hi all, I'm new to openstack and monasca. I've gotten so far as to retrieve instances with their ID's via the REST API, but I'm running into trouble with getting metrics. The issue is that metrics: - show to which hostname they belong - and you can limit results to a project id But if you have 2 or more servers with the same hostname, you can't link them 1:1 with an instance. Is there a way how I can get that 1:1 link? Regards, Ivo Palli -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue May 24 15:55:54 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 24 May 2022 17:55:54 +0200 Subject: [neutron] Propose Yatin Karel for Neutron core team In-Reply-To: References: Message-ID: <23291146.24xSg1XeQm@p1> Hi, Dnia wtorek, 24 maja 2022 09:36:41 CEST Lajos Katona pisze: > Hi Neutrinos, > > I would like to propose Yatin Karel (IRC: ykarel) as a member of the > Neutron core team. > Yatin is a long time Openstack developer, and started concentrating to > Networking projects in the last few cycles. > > His experience, proactivity and sharp eye proved to be very helpful both in > reviews and fixing issues. > His reviews are really valuable and reflect his experience and helpfulness. > > I am sure that he will be a great addition to the core team. > > Please vote for his nomination by answering this mail till next Tuesday. 
> > Lajos Katona (lajoskatona) > Yay! Definitely big +1 from me. Well deserved Yatin :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From gael.therond at bitswalk.com Tue May 24 16:04:26 2022 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Tue, 24 May 2022 18:04:26 +0200 Subject: [GLANCE][VICTORIA] - Multi-backend support with ceph backed swift. In-Reply-To: References: Message-ID: Sure! However it is specified two times within the official glance configuration, once in glance_store and once in glance.store.swift.store but that last section is really not that well documented. https://docs.openstack.org/glance/latest/configuration/glance_api.html#glance-store Le mar. 24 mai 2022 ? 17:54, Abhishek Kekane a ?crit : > Hi Ga?l, > > Can you try moving default_swift_reference from glance_store section to > particular swift store section, i.e. cold as per your configs. > > I guess that is the issue. > > Abhishek > > On Tue, 24 May 2022 at 7:30 PM, Ga?l THEROND > wrote: > >> Hi everyone, >> >> I'm currently enabling glance multi backend support for one of our >> platforms and we're facing a weird situation. >> >> This platform is a CentOS 8 Stream based Openstack VICTORIA and when we >> enable the multi-backends feature, the swift store isn't working properly. >> >> When we try to send an image to this store we end up with the following >> error message: >> *"ERROR glance_store._drivers.swift.store - A value for >> swift_store_auth_address is required."* >> >> Here is the complete log trace: >> https://paste.opendev.org/show/bGeCJjSB1C5GwD6JYFvu/ >> >> I'm a bit surprised because according to the error message and the >> glance_store code it means that either my *default_swift_reference* or >> *auth_address *fromthe *default_swift_reference *isn't set. >> >> >> https://opendev.org/openstack/glance_store/src/branch/stable/victoria/glance_store/_drivers/swift/store.py#L1267 >> >> However as you can see on my glance-api.conf file I've (seems so) >> correctly set the glance_store directive and the store settings: >> >> glance-api.conf: https://paste.opendev.org/show/bjFAPCjlCG6MqINGsIlK/ >> glance-swift.conf: https://paste.opendev.org/show/bcCfBQDj32T3qgiJj3P7/ >> >> I'm only using a single tenant glance container within the admin project. >> What's really weird is this issue still arise and exist with any known >> (to me) combination of configuration. >> >> I've tried to declare the store right within the glance-api.conf file, >> but it's not working. >> I've tried to declare the store right within the glance-api.conf file and >> with the deprecated swift_store_auth_address directive instead of the >> auth_address one, it's not working. >> I've tried to declare the store right within the glance-api.conf file and >> with the deprecated swift_store_auth_address directive plus the >> auth_address one, it's not working. >> I've tried to replace auth_address directive with the deprecated >> swift_store_auth_address within the glance-swift.conf, same error and issue >> again. >> >> When I replace the default_backend value with performance, uploading an >> image works perfectly. >> >> I know that my swift CEPH RGW backed endpoint works as I use it already >> to store data. I've even tried to push an image on it both using CLI >> (openstack object) and cURL API call and it works. 
>> I'm a bit lost here, so if anyone is having thoughts on this I would be >> happy to hear you! >> >> Thanks a lot everyone! >> > -- > Thanks & Best Regards, > > Abhishek Kekane > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Tue May 24 16:10:09 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 24 May 2022 21:40:09 +0530 Subject: [GLANCE][VICTORIA] - Multi-backend support with ceph backed swift. In-Reply-To: References: Message-ID: On Tue, 24 May 2022 at 9:34 PM, Ga?l THEROND wrote: > Sure! However it is specified two times within the official glance > configuration, once in glance_store and once in glance.store.swift.store > but that last section is really not that well documented. > > > https://docs.openstack.org/glance/latest/configuration/glance_api.html#glance-store > Ack, that needs to be fixed then, important thing is if you are using multiple stores then store specific parameters must go under specific store section otherwise they won?t get recognised. Abhishek > > Le mar. 24 mai 2022 ? 17:54, Abhishek Kekane a > ?crit : > >> Hi Ga?l, >> >> Can you try moving default_swift_reference from glance_store section to >> particular swift store section, i.e. cold as per your configs. >> >> I guess that is the issue. >> >> Abhishek >> >> On Tue, 24 May 2022 at 7:30 PM, Ga?l THEROND >> wrote: >> >>> Hi everyone, >>> >>> I'm currently enabling glance multi backend support for one of our >>> platforms and we're facing a weird situation. >>> >>> This platform is a CentOS 8 Stream based Openstack VICTORIA and when we >>> enable the multi-backends feature, the swift store isn't working properly. >>> >>> When we try to send an image to this store we end up with the following >>> error message: >>> *"ERROR glance_store._drivers.swift.store - A value for >>> swift_store_auth_address is required."* >>> >>> Here is the complete log trace: >>> https://paste.opendev.org/show/bGeCJjSB1C5GwD6JYFvu/ >>> >>> I'm a bit surprised because according to the error message and the >>> glance_store code it means that either my *default_swift_reference* or >>> *auth_address *fromthe *default_swift_reference *isn't set. >>> >>> >>> https://opendev.org/openstack/glance_store/src/branch/stable/victoria/glance_store/_drivers/swift/store.py#L1267 >>> >>> However as you can see on my glance-api.conf file I've (seems so) >>> correctly set the glance_store directive and the store settings: >>> >>> glance-api.conf: https://paste.opendev.org/show/bjFAPCjlCG6MqINGsIlK/ >>> glance-swift.conf: https://paste.opendev.org/show/bcCfBQDj32T3qgiJj3P7/ >>> >>> I'm only using a single tenant glance container within the admin project. >>> What's really weird is this issue still arise and exist with any known >>> (to me) combination of configuration. >>> >>> I've tried to declare the store right within the glance-api.conf file, >>> but it's not working. >>> I've tried to declare the store right within the glance-api.conf file >>> and with the deprecated swift_store_auth_address directive instead of the >>> auth_address one, it's not working. >>> I've tried to declare the store right within the glance-api.conf file >>> and with the deprecated swift_store_auth_address directive plus the >>> auth_address one, it's not working. >>> I've tried to replace auth_address directive with the deprecated >>> swift_store_auth_address within the glance-swift.conf, same error and issue >>> again. 
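To make the suggestion above concrete, a minimal sketch of a layout where the store specific options live under the store's own section (the backend names, reference name and endpoint are only examples, not taken from the real configs):

# glance-api.conf
[DEFAULT]
enabled_backends = performance:rbd, cold:swift

[glance_store]
default_backend = cold

[cold]
# store specific options, including default_swift_reference, go here
default_swift_reference = ref1
swift_store_config_file = /etc/glance/glance-swift.conf
swift_store_create_container_on_put = true
swift_store_container = glance

# glance-swift.conf
[ref1]
auth_version = 3
auth_address = https://keystone.example.com:5000/v3
user = services:glance
key = <password>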
>>> >>> When I replace the default_backend value with performance, uploading an >>> image works perfectly. >>> >>> I know that my swift CEPH RGW backed endpoint works as I use it already >>> to store data. I've even tried to push an image on it both using CLI >>> (openstack object) and cURL API call and it works. >>> I'm a bit lost here, so if anyone is having thoughts on this I would be >>> happy to hear you! >>> >>> Thanks a lot everyone! >>> >> -- >> Thanks & Best Regards, >> >> Abhishek Kekane >> > -- Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at bitswalk.com Tue May 24 16:12:31 2022 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Tue, 24 May 2022 18:12:31 +0200 Subject: [GLANCE][VICTORIA] - Multi-backend support with ceph backed swift. In-Reply-To: References: Message-ID: Ok ok, I was kinda assuming that but was a bit lost by both doc and code as doc specify it multiple time within config but also within the configuring page but this time with third alternative location ^^ Will test that and let you know. Thanks a lot for this first review. Le mar. 24 mai 2022 ? 18:10, Abhishek Kekane a ?crit : > > > On Tue, 24 May 2022 at 9:34 PM, Ga?l THEROND > wrote: > >> Sure! However it is specified two times within the official glance >> configuration, once in glance_store and once in glance.store.swift.store >> but that last section is really not that well documented. >> >> >> https://docs.openstack.org/glance/latest/configuration/glance_api.html#glance-store >> > > Ack, that needs to be fixed then, important thing is if you are using > multiple stores then store specific parameters must go under specific store > section otherwise they won?t get recognised. > > Abhishek > > > >> >> Le mar. 24 mai 2022 ? 17:54, Abhishek Kekane a >> ?crit : >> >>> Hi Ga?l, >>> >>> Can you try moving default_swift_reference from glance_store section to >>> particular swift store section, i.e. cold as per your configs. >>> >>> I guess that is the issue. >>> >>> Abhishek >>> >>> On Tue, 24 May 2022 at 7:30 PM, Ga?l THEROND >>> wrote: >>> >>>> Hi everyone, >>>> >>>> I'm currently enabling glance multi backend support for one of our >>>> platforms and we're facing a weird situation. >>>> >>>> This platform is a CentOS 8 Stream based Openstack VICTORIA and when we >>>> enable the multi-backends feature, the swift store isn't working properly. >>>> >>>> When we try to send an image to this store we end up with the following >>>> error message: >>>> *"ERROR glance_store._drivers.swift.store - A value for >>>> swift_store_auth_address is required."* >>>> >>>> Here is the complete log trace: >>>> https://paste.opendev.org/show/bGeCJjSB1C5GwD6JYFvu/ >>>> >>>> I'm a bit surprised because according to the error message and the >>>> glance_store code it means that either my *default_swift_reference* or >>>> *auth_address *fromthe *default_swift_reference *isn't set. >>>> >>>> >>>> https://opendev.org/openstack/glance_store/src/branch/stable/victoria/glance_store/_drivers/swift/store.py#L1267 >>>> >>>> However as you can see on my glance-api.conf file I've (seems so) >>>> correctly set the glance_store directive and the store settings: >>>> >>>> glance-api.conf: https://paste.opendev.org/show/bjFAPCjlCG6MqINGsIlK/ >>>> glance-swift.conf: https://paste.opendev.org/show/bcCfBQDj32T3qgiJj3P7/ >>>> >>>> I'm only using a single tenant glance container within the admin >>>> project. 
>>>> What's really weird is this issue still arise and exist with any known >>>> (to me) combination of configuration. >>>> >>>> I've tried to declare the store right within the glance-api.conf file, >>>> but it's not working. >>>> I've tried to declare the store right within the glance-api.conf file >>>> and with the deprecated swift_store_auth_address directive instead of the >>>> auth_address one, it's not working. >>>> I've tried to declare the store right within the glance-api.conf file >>>> and with the deprecated swift_store_auth_address directive plus the >>>> auth_address one, it's not working. >>>> I've tried to replace auth_address directive with the deprecated >>>> swift_store_auth_address within the glance-swift.conf, same error and issue >>>> again. >>>> >>>> When I replace the default_backend value with performance, uploading an >>>> image works perfectly. >>>> >>>> I know that my swift CEPH RGW backed endpoint works as I use it already >>>> to store data. I've even tried to push an image on it both using CLI >>>> (openstack object) and cURL API call and it works. >>>> I'm a bit lost here, so if anyone is having thoughts on this I would be >>>> happy to hear you! >>>> >>>> Thanks a lot everyone! >>>> >>> -- >>> Thanks & Best Regards, >>> >>> Abhishek Kekane >>> >> -- > Thanks & Best Regards, > > Abhishek Kekane > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue May 24 16:24:10 2022 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 24 May 2022 12:24:10 -0400 Subject: [neutron][neutron-vpnaas][release] dropping feature/lbaasv2 branch Message-ID: Hi there, It looks like we've managed to track along a feature/lbaasv2 branch into neutron-vpnaas all the way from 2014 :) I'd like to request for the release team to delete that branch if possible. :) Thanks, Mohammed -- Mohammed Naser VEXXHOST, Inc. From akekane at redhat.com Tue May 24 17:54:35 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 24 May 2022 23:24:35 +0530 Subject: [GLANCE][VICTORIA] - Multi-backend support with ceph backed swift. In-Reply-To: References: Message-ID: On Tue, 24 May 2022 at 9:42 PM, Ga?l THEROND wrote: > Ok ok, I was kinda assuming that but was a bit lost by both doc and code > as doc specify it multiple time within config but also within the > configuring page but this time with third alternative location ^^ > > Will test that and let you know. > > Thanks a lot for this first review. > No worries, please let me know if you have any questions. Abhishek > > Le mar. 24 mai 2022 ? 18:10, Abhishek Kekane a > ?crit : > >> >> >> On Tue, 24 May 2022 at 9:34 PM, Ga?l THEROND >> wrote: >> >>> Sure! However it is specified two times within the official glance >>> configuration, once in glance_store and once in glance.store.swift.store >>> but that last section is really not that well documented. >>> >>> >>> https://docs.openstack.org/glance/latest/configuration/glance_api.html#glance-store >>> >> >> Ack, that needs to be fixed then, important thing is if you are using >> multiple stores then store specific parameters must go under specific store >> section otherwise they won?t get recognised. >> >> Abhishek >> >> >> >>> >>> Le mar. 24 mai 2022 ? 17:54, Abhishek Kekane a >>> ?crit : >>> >>>> Hi Ga?l, >>>> >>>> Can you try moving default_swift_reference from glance_store section to >>>> particular swift store section, i.e. cold as per your configs. >>>> >>>> I guess that is the issue. 
>>>> >>>> Abhishek >>>> >>>> On Tue, 24 May 2022 at 7:30 PM, Ga?l THEROND >>>> wrote: >>>> >>>>> Hi everyone, >>>>> >>>>> I'm currently enabling glance multi backend support for one of our >>>>> platforms and we're facing a weird situation. >>>>> >>>>> This platform is a CentOS 8 Stream based Openstack VICTORIA and when >>>>> we enable the multi-backends feature, the swift store isn't working >>>>> properly. >>>>> >>>>> When we try to send an image to this store we end up with the >>>>> following error message: >>>>> *"ERROR glance_store._drivers.swift.store - A value for >>>>> swift_store_auth_address is required."* >>>>> >>>>> Here is the complete log trace: >>>>> https://paste.opendev.org/show/bGeCJjSB1C5GwD6JYFvu/ >>>>> >>>>> I'm a bit surprised because according to the error message and the >>>>> glance_store code it means that either my *default_swift_reference* or >>>>> *auth_address *fromthe *default_swift_reference *isn't set. >>>>> >>>>> >>>>> https://opendev.org/openstack/glance_store/src/branch/stable/victoria/glance_store/_drivers/swift/store.py#L1267 >>>>> >>>>> However as you can see on my glance-api.conf file I've (seems so) >>>>> correctly set the glance_store directive and the store settings: >>>>> >>>>> glance-api.conf: https://paste.opendev.org/show/bjFAPCjlCG6MqINGsIlK/ >>>>> glance-swift.conf: >>>>> https://paste.opendev.org/show/bcCfBQDj32T3qgiJj3P7/ >>>>> >>>>> I'm only using a single tenant glance container within the admin >>>>> project. >>>>> What's really weird is this issue still arise and exist with any known >>>>> (to me) combination of configuration. >>>>> >>>>> I've tried to declare the store right within the glance-api.conf file, >>>>> but it's not working. >>>>> I've tried to declare the store right within the glance-api.conf file >>>>> and with the deprecated swift_store_auth_address directive instead of the >>>>> auth_address one, it's not working. >>>>> I've tried to declare the store right within the glance-api.conf file >>>>> and with the deprecated swift_store_auth_address directive plus the >>>>> auth_address one, it's not working. >>>>> I've tried to replace auth_address directive with the deprecated >>>>> swift_store_auth_address within the glance-swift.conf, same error and issue >>>>> again. >>>>> >>>>> When I replace the default_backend value with performance, uploading >>>>> an image works perfectly. >>>>> >>>>> I know that my swift CEPH RGW backed endpoint works as I use it >>>>> already to store data. I've even tried to push an image on it both using >>>>> CLI (openstack object) and cURL API call and it works. >>>>> I'm a bit lost here, so if anyone is having thoughts on this I would >>>>> be happy to hear you! >>>>> >>>>> Thanks a lot everyone! >>>>> >>>> -- >>>> Thanks & Best Regards, >>>> >>>> Abhishek Kekane >>>> >>> -- >> Thanks & Best Regards, >> >> Abhishek Kekane >> > -- Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at bitswalk.com Tue May 24 18:02:59 2022 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Tue, 24 May 2022 20:02:59 +0200 Subject: [GLANCE][VICTORIA] - Multi-backend support with ceph backed swift. In-Reply-To: References: Message-ID: All right, just tested to move the directive and it now works like a charm! Thanks a lot! Le mar. 24 mai 2022 ? 
19:54, Abhishek Kekane a ?crit : > > > On Tue, 24 May 2022 at 9:42 PM, Ga?l THEROND > wrote: > >> Ok ok, I was kinda assuming that but was a bit lost by both doc and code >> as doc specify it multiple time within config but also within the >> configuring page but this time with third alternative location ^^ >> >> Will test that and let you know. >> >> Thanks a lot for this first review. >> > > > No worries, please let me know if you have any questions. > > Abhishek > >> >> Le mar. 24 mai 2022 ? 18:10, Abhishek Kekane a >> ?crit : >> >>> >>> >>> On Tue, 24 May 2022 at 9:34 PM, Ga?l THEROND >>> wrote: >>> >>>> Sure! However it is specified two times within the official glance >>>> configuration, once in glance_store and once in glance.store.swift.store >>>> but that last section is really not that well documented. >>>> >>>> >>>> https://docs.openstack.org/glance/latest/configuration/glance_api.html#glance-store >>>> >>> >>> Ack, that needs to be fixed then, important thing is if you are using >>> multiple stores then store specific parameters must go under specific store >>> section otherwise they won?t get recognised. >>> >>> Abhishek >>> >>> >>> >>>> >>>> Le mar. 24 mai 2022 ? 17:54, Abhishek Kekane a >>>> ?crit : >>>> >>>>> Hi Ga?l, >>>>> >>>>> Can you try moving default_swift_reference from glance_store section >>>>> to particular swift store section, i.e. cold as per your configs. >>>>> >>>>> I guess that is the issue. >>>>> >>>>> Abhishek >>>>> >>>>> On Tue, 24 May 2022 at 7:30 PM, Ga?l THEROND < >>>>> gael.therond at bitswalk.com> wrote: >>>>> >>>>>> Hi everyone, >>>>>> >>>>>> I'm currently enabling glance multi backend support for one of our >>>>>> platforms and we're facing a weird situation. >>>>>> >>>>>> This platform is a CentOS 8 Stream based Openstack VICTORIA and when >>>>>> we enable the multi-backends feature, the swift store isn't working >>>>>> properly. >>>>>> >>>>>> When we try to send an image to this store we end up with the >>>>>> following error message: >>>>>> *"ERROR glance_store._drivers.swift.store - A value for >>>>>> swift_store_auth_address is required."* >>>>>> >>>>>> Here is the complete log trace: >>>>>> https://paste.opendev.org/show/bGeCJjSB1C5GwD6JYFvu/ >>>>>> >>>>>> I'm a bit surprised because according to the error message and the >>>>>> glance_store code it means that either my *default_swift_reference* >>>>>> or *auth_address *fromthe *default_swift_reference *isn't set. >>>>>> >>>>>> >>>>>> https://opendev.org/openstack/glance_store/src/branch/stable/victoria/glance_store/_drivers/swift/store.py#L1267 >>>>>> >>>>>> However as you can see on my glance-api.conf file I've (seems so) >>>>>> correctly set the glance_store directive and the store settings: >>>>>> >>>>>> glance-api.conf: https://paste.opendev.org/show/bjFAPCjlCG6MqINGsIlK/ >>>>>> glance-swift.conf: >>>>>> https://paste.opendev.org/show/bcCfBQDj32T3qgiJj3P7/ >>>>>> >>>>>> I'm only using a single tenant glance container within the admin >>>>>> project. >>>>>> What's really weird is this issue still arise and exist with any >>>>>> known (to me) combination of configuration. >>>>>> >>>>>> I've tried to declare the store right within the glance-api.conf >>>>>> file, but it's not working. >>>>>> I've tried to declare the store right within the glance-api.conf file >>>>>> and with the deprecated swift_store_auth_address directive instead of the >>>>>> auth_address one, it's not working. 
>>>>>> I've tried to declare the store right within the glance-api.conf file >>>>>> and with the deprecated swift_store_auth_address directive plus the >>>>>> auth_address one, it's not working. >>>>>> I've tried to replace auth_address directive with the deprecated >>>>>> swift_store_auth_address within the glance-swift.conf, same error and issue >>>>>> again. >>>>>> >>>>>> When I replace the default_backend value with performance, uploading >>>>>> an image works perfectly. >>>>>> >>>>>> I know that my swift CEPH RGW backed endpoint works as I use it >>>>>> already to store data. I've even tried to push an image on it both using >>>>>> CLI (openstack object) and cURL API call and it works. >>>>>> I'm a bit lost here, so if anyone is having thoughts on this I would >>>>>> be happy to hear you! >>>>>> >>>>>> Thanks a lot everyone! >>>>>> >>>>> -- >>>>> Thanks & Best Regards, >>>>> >>>>> Abhishek Kekane >>>>> >>>> -- >>> Thanks & Best Regards, >>> >>> Abhishek Kekane >>> >> -- > Thanks & Best Regards, > > Abhishek Kekane > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.aminian.server at gmail.com Tue May 24 18:19:36 2022 From: p.aminian.server at gmail.com (Parsa Aminian) Date: Tue, 24 May 2022 22:49:36 +0430 Subject: live resize Message-ID: hello on openstack with ceph backend is it possible to live resize instances ? I want to change flavor without any down time . -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Tue May 24 22:48:50 2022 From: rlandy at redhat.com (Ronelle Landy) Date: Tue, 24 May 2022 18:48:50 -0400 Subject: [tripleo] gate blocker - openstacksdk library - please hold rechecks Message-ID: Hello All, We have a blocker on TripleO check/gate/integration jobs detailed in: https://bugs.launchpad.net/tripleo/+bug/1975646 - tripleo tests are failing os_tempest with "openstacksdk library MUST be >=0.99." There is a proposed fix in test now: https://review.opendev.org/c/openstack/tripleo-ci/+/843220. Please hold rechecks until we can either merge the fix or revert the breaking change. Thank you, TripleO CI. -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Wed May 25 02:08:03 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 25 May 2022 07:38:03 +0530 Subject: [GLANCE][VICTORIA] - Multi-backend support with ceph backed swift. In-Reply-To: References: Message-ID: Great, thanks for the update. I will try to improve the documentation for the same. Abhishek On Tue, 24 May 2022 at 11:33 PM, Ga?l THEROND wrote: > All right, just tested to move the directive and it now works like a charm! > > Thanks a lot! > > Le mar. 24 mai 2022 ? 19:54, Abhishek Kekane a > ?crit : > >> >> >> On Tue, 24 May 2022 at 9:42 PM, Ga?l THEROND >> wrote: >> >>> Ok ok, I was kinda assuming that but was a bit lost by both doc and code >>> as doc specify it multiple time within config but also within the >>> configuring page but this time with third alternative location ^^ >>> >>> Will test that and let you know. >>> >>> Thanks a lot for this first review. >>> >> >> >> No worries, please let me know if you have any questions. >> >> Abhishek >> >>> >>> Le mar. 24 mai 2022 ? 18:10, Abhishek Kekane a >>> ?crit : >>> >>>> >>>> >>>> On Tue, 24 May 2022 at 9:34 PM, Ga?l THEROND >>>> wrote: >>>> >>>>> Sure! 
However it is specified two times within the official glance >>>>> configuration, once in glance_store and once in glance.store.swift.store >>>>> but that last section is really not that well documented. >>>>> >>>>> >>>>> https://docs.openstack.org/glance/latest/configuration/glance_api.html#glance-store >>>>> >>>> >>>> Ack, that needs to be fixed then, important thing is if you are using >>>> multiple stores then store specific parameters must go under specific store >>>> section otherwise they won?t get recognised. >>>> >>>> Abhishek >>>> >>>> >>>> >>>>> >>>>> Le mar. 24 mai 2022 ? 17:54, Abhishek Kekane a >>>>> ?crit : >>>>> >>>>>> Hi Ga?l, >>>>>> >>>>>> Can you try moving default_swift_reference from glance_store section >>>>>> to particular swift store section, i.e. cold as per your configs. >>>>>> >>>>>> I guess that is the issue. >>>>>> >>>>>> Abhishek >>>>>> >>>>>> On Tue, 24 May 2022 at 7:30 PM, Ga?l THEROND < >>>>>> gael.therond at bitswalk.com> wrote: >>>>>> >>>>>>> Hi everyone, >>>>>>> >>>>>>> I'm currently enabling glance multi backend support for one of our >>>>>>> platforms and we're facing a weird situation. >>>>>>> >>>>>>> This platform is a CentOS 8 Stream based Openstack VICTORIA and when >>>>>>> we enable the multi-backends feature, the swift store isn't working >>>>>>> properly. >>>>>>> >>>>>>> When we try to send an image to this store we end up with the >>>>>>> following error message: >>>>>>> *"ERROR glance_store._drivers.swift.store - A value for >>>>>>> swift_store_auth_address is required."* >>>>>>> >>>>>>> Here is the complete log trace: >>>>>>> https://paste.opendev.org/show/bGeCJjSB1C5GwD6JYFvu/ >>>>>>> >>>>>>> I'm a bit surprised because according to the error message and the >>>>>>> glance_store code it means that either my *default_swift_reference* >>>>>>> or *auth_address *fromthe *default_swift_reference *isn't set. >>>>>>> >>>>>>> >>>>>>> https://opendev.org/openstack/glance_store/src/branch/stable/victoria/glance_store/_drivers/swift/store.py#L1267 >>>>>>> >>>>>>> However as you can see on my glance-api.conf file I've (seems so) >>>>>>> correctly set the glance_store directive and the store settings: >>>>>>> >>>>>>> glance-api.conf: >>>>>>> https://paste.opendev.org/show/bjFAPCjlCG6MqINGsIlK/ >>>>>>> glance-swift.conf: >>>>>>> https://paste.opendev.org/show/bcCfBQDj32T3qgiJj3P7/ >>>>>>> >>>>>>> I'm only using a single tenant glance container within the admin >>>>>>> project. >>>>>>> What's really weird is this issue still arise and exist with any >>>>>>> known (to me) combination of configuration. >>>>>>> >>>>>>> I've tried to declare the store right within the glance-api.conf >>>>>>> file, but it's not working. >>>>>>> I've tried to declare the store right within the glance-api.conf >>>>>>> file and with the deprecated swift_store_auth_address directive instead of >>>>>>> the auth_address one, it's not working. >>>>>>> I've tried to declare the store right within the glance-api.conf >>>>>>> file and with the deprecated swift_store_auth_address directive plus the >>>>>>> auth_address one, it's not working. >>>>>>> I've tried to replace auth_address directive with the deprecated >>>>>>> swift_store_auth_address within the glance-swift.conf, same error and issue >>>>>>> again. >>>>>>> >>>>>>> When I replace the default_backend value with performance, uploading >>>>>>> an image works perfectly. >>>>>>> >>>>>>> I know that my swift CEPH RGW backed endpoint works as I use it >>>>>>> already to store data. 
I've even tried to push an image on it both using >>>>>>> CLI (openstack object) and cURL API call and it works. >>>>>>> I'm a bit lost here, so if anyone is having thoughts on this I would >>>>>>> be happy to hear you! >>>>>>> >>>>>>> Thanks a lot everyone! >>>>>>> >>>>>> -- >>>>>> Thanks & Best Regards, >>>>>> >>>>>> Abhishek Kekane >>>>>> >>>>> -- >>>> Thanks & Best Regards, >>>> >>>> Abhishek Kekane >>>> >>> -- >> Thanks & Best Regards, >> >> Abhishek Kekane >> > -- Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From yoshito.itou.dr at hco.ntt.co.jp Wed May 25 03:30:43 2022 From: yoshito.itou.dr at hco.ntt.co.jp (Yoshito Ito) Date: Wed, 25 May 2022 12:30:43 +0900 Subject: [heat-translator][tacker] Propose Hiromu Asahina (h-asahina) for heat-translator core team Message-ID: Hi heat-translator team! I would like to propose Hiromu Asahina (h-asahina) as a member of the heat-translator core team. I'm now not able to actively contribute to heat-translator, and he can lead the team instead of me. He is mainly contributing to Tacker by fixing issues, refactoring existing codes, and a lot of helpful reviews, with his experience as a research engineer in NTT. Tacker highly depends on heat-translator in its main functionality, so I believe he can manage it with his knowledge of Tacker. Please vote for his nomination by answering this mail till June 1st. Best Regards, Yoshito Ito From yasufum.o at gmail.com Wed May 25 04:14:37 2022 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Wed, 25 May 2022 13:14:37 +0900 Subject: [heat-translator][tacker] Propose Hiromu Asahina (h-asahina) for heat-translator core team In-Reply-To: References: Message-ID: +1, thanks yoshito! Yasufumi On 2022/05/25 12:30, Yoshito Ito wrote: > Hi heat-translator team! > > I would like to propose Hiromu Asahina (h-asahina) as a member of the > heat-translator core team. I'm now not able to actively contribute to > heat-translator, and he can lead the team instead of me. > > He is mainly contributing to Tacker by fixing issues, refactoring > existing codes, and a lot of helpful reviews, with his experience as a > research engineer in NTT. Tacker highly depends on heat-translator in > its main functionality, so I believe he can manage it with his knowledge > of Tacker. > > Please vote for his nomination by answering this mail till June 1st. > > Best Regards, > > Yoshito Ito > > From ueha.ayumu at fujitsu.com Wed May 25 04:39:47 2022 From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com) Date: Wed, 25 May 2022 04:39:47 +0000 Subject: [heat-translator][tacker] Propose Hiromu Asahina (h-asahina) for heat-translator core team In-Reply-To: References: Message-ID: +1 Best Regards, Ueha -----Original Message----- From: Yoshito Ito Sent: Wednesday, May 25, 2022 12:31 PM To: openstack-discuss at lists.openstack.org Subject: [heat-translator][tacker] Propose Hiromu Asahina (h-asahina) for heat-translator core team Hi heat-translator team! I would like to propose Hiromu Asahina (h-asahina) as a member of the heat-translator core team. I'm now not able to actively contribute to heat-translator, and he can lead the team instead of me. He is mainly contributing to Tacker by fixing issues, refactoring existing codes, and a lot of helpful reviews, with his experience as a research engineer in NTT. Tacker highly depends on heat-translator in its main functionality, so I believe he can manage it with his knowledge of Tacker. 
Please vote for his nomination by answering this mail till June 1st. Best Regards, Yoshito Ito From yamakawa.keiich at fujitsu.com Wed May 25 07:27:18 2022 From: yamakawa.keiich at fujitsu.com (yamakawa.keiich at fujitsu.com) Date: Wed, 25 May 2022 07:27:18 +0000 Subject: [heat-translator][tacker] Propose Hiromu Asahina (h-asahina) for heat-translator core team In-Reply-To: References: Message-ID: +1 Best Regards, Keiichiro Yamakawa -----Original Message----- From: Yoshito Ito Sent: Wednesday, May 25, 2022 12:31 PM To: openstack-discuss at lists.openstack.org Subject: [heat-translator][tacker] Propose Hiromu Asahina (h-asahina) for heat-translator core team Hi heat-translator team! I would like to propose Hiromu Asahina (h-asahina) as a member of the heat-translator core team. I'm now not able to actively contribute to heat-translator, and he can lead the team instead of me. He is mainly contributing to Tacker by fixing issues, refactoring existing codes, and a lot of helpful reviews, with his experience as a research engineer in NTT. Tacker highly depends on heat-translator in its main functionality, so I believe he can manage it with his knowledge of Tacker. Please vote for his nomination by answering this mail till June 1st. Best Regards, Yoshito Ito From gael.therond at bitswalk.com Wed May 25 07:29:36 2022 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Wed, 25 May 2022 09:29:36 +0200 Subject: [GLANCE][VICTORIA] - Multi-backend support with ceph backed swift. In-Reply-To: References: Message-ID: Hi Abhishek, if it can help here are both documentation URLs that are mixin up things and so get me confused with what?s in code. The default glance documentation: https://docs.openstack.org/glance/latest/configuration/configuring.html#configuring-multiple-swift-accounts-stores Here the doc state that swift_store_config_file should be in [DEFAULT] section. If it?s true when not using multiple backend store we could probably add a disclaimer paragraph for multiple backend stores setup. The default glance-api.conf configuration file: https://docs.openstack.org/glance/latest/configuration/glance_api.html Here the doc state that the directive should be placed within either [glance_store] or [glance.store.swift.store] as for upper if it?s true for setup not using multiple backends we probably should add a disclaimer paragraph and add a more explicit paragraph about backend naming reference within the config file as well as a more specific exemple for the value of default_backend directive as with multiple backend it HAVE to relate to the backend key value you want to target. Anyway, thanks a lot for your kind help, let me know if you want me to fix this doc. Cheers! Le mer. 25 mai 2022 ? 04:08, Abhishek Kekane a ?crit : > > Great, thanks for the update. I will try to improve the documentation for > the same. > > Abhishek > > > On Tue, 24 May 2022 at 11:33 PM, Ga?l THEROND > wrote: > >> All right, just tested to move the directive and it now works like a >> charm! >> >> Thanks a lot! >> >> Le mar. 24 mai 2022 ? 19:54, Abhishek Kekane a >> ?crit : >> >>> >>> >>> On Tue, 24 May 2022 at 9:42 PM, Ga?l THEROND >>> wrote: >>> >>>> Ok ok, I was kinda assuming that but was a bit lost by both doc and >>>> code as doc specify it multiple time within config but also within the >>>> configuring page but this time with third alternative location ^^ >>>> >>>> Will test that and let you know. >>>> >>>> Thanks a lot for this first review. 
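For anyone hitting the same problem, the layout that ended up working here can be sketched roughly like this (backend names, paths and credentials are purely illustrative, and the rbd section is just an example of a second store):

glance-api.conf:

  [DEFAULT]
  enabled_backends = performance:rbd, cold:swift

  [glance_store]
  default_backend = performance

  [performance]
  rbd_store_pool = images
  rbd_store_user = glance
  rbd_store_ceph_conf = /etc/ceph/ceph.conf

  [cold]
  # swift-specific options, including default_swift_reference,
  # go under the named store section, not under [glance_store]
  swift_store_config_file = /etc/glance/glance-swift.conf
  default_swift_reference = ref1
  swift_store_container = glance
  swift_store_create_container_on_put = True

glance-swift.conf:

  [ref1]
  auth_version = 3
  auth_address = https://keystone.example.com:5000/v3
  user = service:glance
  key = secret

With multiple backends enabled, default_backend must match one of the names declared in enabled_backends, and only store-agnostic options stay in [glance_store].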
>>>> >>> >>> >>> No worries, please let me know if you have any questions. >>> >>> Abhishek >>> >>>> >>>> Le mar. 24 mai 2022 ? 18:10, Abhishek Kekane a >>>> ?crit : >>>> >>>>> >>>>> >>>>> On Tue, 24 May 2022 at 9:34 PM, Ga?l THEROND < >>>>> gael.therond at bitswalk.com> wrote: >>>>> >>>>>> Sure! However it is specified two times within the official glance >>>>>> configuration, once in glance_store and once in glance.store.swift.store >>>>>> but that last section is really not that well documented. >>>>>> >>>>>> >>>>>> https://docs.openstack.org/glance/latest/configuration/glance_api.html#glance-store >>>>>> >>>>> >>>>> Ack, that needs to be fixed then, important thing is if you are using >>>>> multiple stores then store specific parameters must go under specific store >>>>> section otherwise they won?t get recognised. >>>>> >>>>> Abhishek >>>>> >>>>> >>>>> >>>>>> >>>>>> Le mar. 24 mai 2022 ? 17:54, Abhishek Kekane a >>>>>> ?crit : >>>>>> >>>>>>> Hi Ga?l, >>>>>>> >>>>>>> Can you try moving default_swift_reference from glance_store section >>>>>>> to particular swift store section, i.e. cold as per your configs. >>>>>>> >>>>>>> I guess that is the issue. >>>>>>> >>>>>>> Abhishek >>>>>>> >>>>>>> On Tue, 24 May 2022 at 7:30 PM, Ga?l THEROND < >>>>>>> gael.therond at bitswalk.com> wrote: >>>>>>> >>>>>>>> Hi everyone, >>>>>>>> >>>>>>>> I'm currently enabling glance multi backend support for one of our >>>>>>>> platforms and we're facing a weird situation. >>>>>>>> >>>>>>>> This platform is a CentOS 8 Stream based Openstack VICTORIA and >>>>>>>> when we enable the multi-backends feature, the swift store isn't working >>>>>>>> properly. >>>>>>>> >>>>>>>> When we try to send an image to this store we end up with the >>>>>>>> following error message: >>>>>>>> *"ERROR glance_store._drivers.swift.store - A value for >>>>>>>> swift_store_auth_address is required."* >>>>>>>> >>>>>>>> Here is the complete log trace: >>>>>>>> https://paste.opendev.org/show/bGeCJjSB1C5GwD6JYFvu/ >>>>>>>> >>>>>>>> I'm a bit surprised because according to the error message and the >>>>>>>> glance_store code it means that either my *default_swift_reference* >>>>>>>> or *auth_address *fromthe *default_swift_reference *isn't set. >>>>>>>> >>>>>>>> >>>>>>>> https://opendev.org/openstack/glance_store/src/branch/stable/victoria/glance_store/_drivers/swift/store.py#L1267 >>>>>>>> >>>>>>>> However as you can see on my glance-api.conf file I've (seems so) >>>>>>>> correctly set the glance_store directive and the store settings: >>>>>>>> >>>>>>>> glance-api.conf: >>>>>>>> https://paste.opendev.org/show/bjFAPCjlCG6MqINGsIlK/ >>>>>>>> glance-swift.conf: >>>>>>>> https://paste.opendev.org/show/bcCfBQDj32T3qgiJj3P7/ >>>>>>>> >>>>>>>> I'm only using a single tenant glance container within the admin >>>>>>>> project. >>>>>>>> What's really weird is this issue still arise and exist with any >>>>>>>> known (to me) combination of configuration. >>>>>>>> >>>>>>>> I've tried to declare the store right within the glance-api.conf >>>>>>>> file, but it's not working. >>>>>>>> I've tried to declare the store right within the glance-api.conf >>>>>>>> file and with the deprecated swift_store_auth_address directive instead of >>>>>>>> the auth_address one, it's not working. >>>>>>>> I've tried to declare the store right within the glance-api.conf >>>>>>>> file and with the deprecated swift_store_auth_address directive plus the >>>>>>>> auth_address one, it's not working. 
>>>>>>>> I've tried to replace auth_address directive with the deprecated >>>>>>>> swift_store_auth_address within the glance-swift.conf, same error and issue >>>>>>>> again. >>>>>>>> >>>>>>>> When I replace the default_backend value with performance, >>>>>>>> uploading an image works perfectly. >>>>>>>> >>>>>>>> I know that my swift CEPH RGW backed endpoint works as I use it >>>>>>>> already to store data. I've even tried to push an image on it both using >>>>>>>> CLI (openstack object) and cURL API call and it works. >>>>>>>> I'm a bit lost here, so if anyone is having thoughts on this I >>>>>>>> would be happy to hear you! >>>>>>>> >>>>>>>> Thanks a lot everyone! >>>>>>>> >>>>>>> -- >>>>>>> Thanks & Best Regards, >>>>>>> >>>>>>> Abhishek Kekane >>>>>>> >>>>>> -- >>>>> Thanks & Best Regards, >>>>> >>>>> Abhishek Kekane >>>>> >>>> -- >>> Thanks & Best Regards, >>> >>> Abhishek Kekane >>> >> -- > Thanks & Best Regards, > > Abhishek Kekane > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Wed May 25 07:51:36 2022 From: eblock at nde.ag (Eugen Block) Date: Wed, 25 May 2022 07:51:36 +0000 Subject: live resize In-Reply-To: Message-ID: <20220525075136.Horde.WxauaYh9jwegZP1Uc_-vWRn@webmail.nde.ag> Hi, I don't know of any way to resize an instance without downtime. The docs [1] confirm this, they state: > You can change the size of an instance by changing its flavor. This > rebuilds the instance and therefore results in a restart. [1] https://docs.openstack.org/nova/xena/user/resize.html Zitat von Parsa Aminian : > hello > on openstack with ceph backend is it possible to live resize instances ? I > want to change flavor without any down time . From niimi.yusuke at fujitsu.com Wed May 25 08:40:26 2022 From: niimi.yusuke at fujitsu.com (niimi.yusuke at fujitsu.com) Date: Wed, 25 May 2022 08:40:26 +0000 Subject: [heat-translator][tacker] Propose Hiromu Asahina (h-asahina) for heat-translator core team In-Reply-To: References: Message-ID: +1 Best Regards, Yusuke Niimi > -----Original Message----- > From: Yoshito Ito > Sent: Wednesday, May 25, 2022 12:31 PM > To: openstack-discuss at lists.openstack.org > Subject: [heat-translator][tacker] Propose Hiromu Asahina (h-asahina) for > heat-translator core team > > Hi heat-translator team! > > I would like to propose Hiromu Asahina (h-asahina) as a member of the > heat-translator core team. I'm now not able to actively contribute to > heat-translator, and he can lead the team instead of me. > > He is mainly contributing to Tacker by fixing issues, refactoring existing codes, > and a lot of helpful reviews, with his experience as a research engineer in NTT. > Tacker highly depends on heat-translator in its main functionality, so I believe he > can manage it with his knowledge of Tacker. > > Please vote for his nomination by answering this mail till June 1st. > > Best Regards, > > Yoshito Ito > From stephenfin at redhat.com Wed May 25 09:32:54 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 25 May 2022 10:32:54 +0100 Subject: [dev][nova] How to add a column in table of Nova Database In-Reply-To: References: Message-ID: <151e69a269dff998b33c019e717bc71d6a3f2b3f.camel@redhat.com> On Sun, 2022-05-22 at 17:27 +0800, ??? wrote: > Hi, > > I'm a beginner in OpenStack development. > > I would like to try modifying by adding a property to the 'Instances' > database table. But I didn't find description in the documentation of > the data model mechanism and how to extend the database. 
> https://docs.openstack.org/nova/latest/ > > Now, I know that it involves versined object model and alembic. > > My question is: > What is the process of adding a column to a table in the database? > > Could someone show me the process of modifying a database table or > recommend me the relevant documentation > > Best wishes, > Han This is pretty well documented: https://docs.openstack.org/nova/latest/reference/database-migrations.html The tl;dr: is a) make your changes to 'nova/db/{main|api}/models.py' then (b) either auto-generate a schema using alembic's autogeneration functionality or write one yourself. We have tests in place that will ensure the migrations and models don't diverge. You should know that making changes to the database schema will generally require a spec. You can find information on the purpose of specs and how to write one here: https://specs.openstack.org/openstack/nova-specs/readme.html Be careful not to treat this as a downstream-only thing (i.e. by forking nova). If you do, you are likely to cause yourself a lot of pain in the future as the database schema of upstream nova will invariably diverge. If you have any questions, please ask here or (better) on #openstack-nova on OFTC IRC. Hope this helps, Stephen From stephenfin at redhat.com Wed May 25 09:42:21 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 25 May 2022 10:42:21 +0100 Subject: [nova] Why shouldn't we request the 'nova' AZ? (Was: Need help) In-Reply-To: References: Message-ID: <143548486fb69b01ec4ea16c2e10b2118f9768fa.camel@redhat.com> On Fri, 2022-05-20 at 12:21 +0530, Gk Gk wrote: > Hi All, > > In this link? https://docs.openstack.org/nova/queens/user/aggregates.html , > there is a warning section about availability zones: > > "That last rule can be very error-prone. Since the user can see the list of > availability zones, they have no way to know whether the default availability > zone name (currently nova) is provided because an host belongs to an aggregate > whose AZ metadata key is set to nova, or because there are at least one host > belonging to no aggregate. Consequently, it is highly recommended for users to > never ever ask for booting an instance by specifying an explicit AZ named nova > and for operators to never set the AZ metadata for an aggregate to nova. That > leads to some problems due to the fact that the instance AZ information is > explicitly attached tonova which could break further move operations when > either the host is moved to another aggregate or when the user would like to > migrate the instance." > > My questions are, > > 1. If a host belongs to an aggregate whose AZ metadata key is set to nova, why > is it not possible to know if a default AZ is provided or not ? Because the nova API with report the AZ as "nova" regardless. There's no way to say "is this value the default or is it user-configured". > > 2. Why is it a problem when the host is moved to another aggregate ? We can > move hosts from one aggregate to another. Right ? You can but if there are instances on that host then they can become stranded there because the instance would request AZ "foo" but the host would now be in AZ "bar". To be honest, I suspect (a) this isn't specific to using the "nova" AZ and applies to moving hosts in general and (b) it is likely no longer an issue since we now prevent (at the the API layer) moving hosts between aggregates if they have instances on them. > > 3. Why is it a problem when we migrate the instance ? See above. 
Hope this helps, Stephen PS: This is a nova-specific question and should be tagged as such. [1] describes some general best practices to follow when posting to the openstack mailing lists. Please take a look :) [1] https://wiki.openstack.org/wiki/MailingListEtiquette > > > Thanks > Kumar From syedammad83 at gmail.com Wed May 25 09:45:38 2022 From: syedammad83 at gmail.com (Ammad Syed) Date: Wed, 25 May 2022 14:45:38 +0500 Subject: [nova] Snapshot Image Format Message-ID: Hi, I am using nova 24.0. I have an image in raw format in glance store. I have created a volume backed instance from raw image. Then I have created the server image using nova api below with *createImage* . https://docs.openstack.org/api-ref/compute/?expanded=create-image-createimage-action-detail#:~:text=server_id%7D/action-,Create%20Image,-(createImage%20Action) OR via openstack server image create --name The created image I see in glance store is in qcow2 format. I have set this option snapshot_image_format in nova.conf but still it creates the images in qcow2 format. How config need to be done if i need to save server image in raw format ? Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Wed May 25 09:59:20 2022 From: tobias.urdin at binero.com (Tobias Urdin) Date: Wed, 25 May 2022 09:59:20 +0000 Subject: [neutron] Propose Yatin Karel for Neutron core team In-Reply-To: References: Message-ID: unofficial +1, Yatin is always super helpful! Well deserved! > On 24 May 2022, at 09:41, Lajos Katona wrote: > > ? > Hi Neutrinos, > > I would like to propose Yatin Karel (IRC: ykarel) as a member of the Neutron core team. > Yatin is a long time Openstack developer, and started concentrating to Networking projects in the last few cycles. > > His experience, proactivity and sharp eye proved to be very helpful both in reviews and fixing issues. > His reviews are really valuable and reflect his experience and helpfulness. > > I am sure that he will be a great addition to the core team. > > Please vote for his nomination by answering this mail till next Tuesday. > > Lajos Katona (lajoskatona) > From sandeepggn93 at gmail.com Wed May 25 10:27:42 2022 From: sandeepggn93 at gmail.com (Sandeep Yadav) Date: Wed, 25 May 2022 15:57:42 +0530 Subject: [tripleo] gate blocker - openstacksdk library - please hold rechecks In-Reply-To: References: Message-ID: Hello All, TripleO Check/Gate jobs are back to green after we reverted patch [1] which raise the minimum OpenStack SDK version to 0.99.0. [1] https://review.opendev.org/c/openstack/ansible-collections-openstack/+/843192 On Wed, May 25, 2022 at 4:29 AM Ronelle Landy wrote: > Hello All, > > We have a blocker on TripleO check/gate/integration jobs detailed in: > > https://bugs.launchpad.net/tripleo/+bug/1975646 - tripleo tests are > failing os_tempest with "openstacksdk library MUST be >=0.99." > > There is a proposed fix in test now: > https://review.opendev.org/c/openstack/tripleo-ci/+/843220. > > Please hold rechecks until we can either merge the fix or revert the > breaking change. > > Thank you, > > TripleO CI. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed May 25 11:00:00 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 25 May 2022 08:00:00 -0300 Subject: [cinder] Bug deputy report for week of 05-25-2022 Message-ID: This is a bug report from 05-18-2022 to 05-25-2022. 
Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- No meeting today! High - https://bugs.launchpad.net/cinder/+bug/1974078 "Scheduler doesn't consume space properly for create_volume." Unassigned. Incomplete - https://bugs.launchpad.net/cinder/+bug/1974092 "Cinder-backup tries to take incremental backups of failed previous backups." Unassigned. Cheers, -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 25 11:59:47 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 25 May 2022 11:59:47 +0000 Subject: [nova] Why shouldn't we request the 'nova' AZ? (Was: Need help) In-Reply-To: <143548486fb69b01ec4ea16c2e10b2118f9768fa.camel@redhat.com> References: <143548486fb69b01ec4ea16c2e10b2118f9768fa.camel@redhat.com> Message-ID: <20220525115946.mtdqz4o2x7mnafpo@yuggoth.org> On 2022-05-25 10:42:21 +0100 (+0100), Stephen Finucane wrote: [...] > PS: This is a nova-specific question and should be tagged as such. > [1] describes some general best practices to follow when posting > to the openstack mailing lists. Please take a look :) > > [1] https://wiki.openstack.org/wiki/MailingListEtiquette For subject tagging specifically, it's also helpful to point out https://docs.openstack.org/project-team-guide/open-community.html#mailing-lists (both links are included in the long description for this mailing list as well, for completeness). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From katada.yoshiyuk at fujitsu.com Wed May 25 12:09:37 2022 From: katada.yoshiyuk at fujitsu.com (katada.yoshiyuk at fujitsu.com) Date: Wed, 25 May 2022 12:09:37 +0000 Subject: [heat-translator][tacker] Propose Hiromu Asahina (h-asahina) for heat-translator core team In-Reply-To: References: Message-ID: +1 Best Regards, Yoshiyuki Katada -----Original Message----- From: Yoshito Ito Sent: Wednesday, May 25, 2022 12:31 PM To: openstack-discuss at lists.openstack.org Subject: [heat-translator][tacker] Propose Hiromu Asahina (h-asahina) for heat-translator core team Hi heat-translator team! I would like to propose Hiromu Asahina (h-asahina) as a member of the heat-translator core team. I'm now not able to actively contribute to heat-translator, and he can lead the team instead of me. He is mainly contributing to Tacker by fixing issues, refactoring existing codes, and a lot of helpful reviews, with his experience as a research engineer in NTT. Tacker highly depends on heat-translator in its main functionality, so I believe he can manage it with his knowledge of Tacker. Please vote for his nomination by answering this mail till June 1st. 
Best Regards, Yoshito Ito From lokendrarathour at gmail.com Wed May 25 12:12:01 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Wed, 25 May 2022 17:42:01 +0530 Subject: [Triple0] Undercloud Restore Failed Message-ID: Back Up and Restore the Director Undercloud Red Hat OpenStack Platform 16.2 | Red Hat Customer Portal Hi Team, Was trying to restore the Undercloud from the backup but restore is getting failed with the error message: "", "stdout: ", "stderr: + sudo -E kolla_set_configs", "sudo: unable to send audit message: Operation not permitted", "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying service configuration files", "ERROR:__main__:MissingRequiredSource: /var/lib/kolla/config_files/src/* file is not found", "", "Error executing ['podman', 'container', 'exists', 'rabbitmq']: returned 1", "Did not find container with \"['podman', 'ps', '-a', '--filter', 'label=container_name=rabbitmq', '--filter', 'label=config_id=tripleo_step1', '--format', '{{.Names}}']\" - retrying without config_id", "Did not find container with \"['podman', 'ps', '-a', '--filter', 'label=container_name=rabbitmq', '--format', '{{.Names}}']\"", "Created symlink /etc/systemd/system/multi-user.target.wants/tripleo_rabbitmq.service \u2192 /etc/systemd/system/tripleo_rabbitmq.service." ], "_ansible_no_log": false, "attempts": 4, "changed": false } ] ] Also, we checked for the path ("ERROR:__main__:MissingRequiredSource: /var/lib/kolla/config_files/src/* file is not found",) in the original server , there also it is not existing. Document followed: Back Up and Restore the Director Undercloud Red Hat OpenStack Platform 16.2 | Red Hat Customer Portal Openstack release: Train Any lead would be helpful. --lokendra skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Wed May 25 13:00:18 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 25 May 2022 15:00:18 +0200 Subject: [neutron] Propose Yatin Karel for Neutron core team In-Reply-To: References: Message-ID: For sure +1 from me (welcome Yatin in advance!). On Wed, May 25, 2022 at 12:07 PM Tobias Urdin wrote: > unofficial +1, Yatin is always super helpful! > > Well deserved! > > > On 24 May 2022, at 09:41, Lajos Katona wrote: > > > > ? > > Hi Neutrinos, > > > > I would like to propose Yatin Karel (IRC: ykarel) as a member of the > Neutron core team. > > Yatin is a long time Openstack developer, and started concentrating to > Networking projects in the last few cycles. > > > > His experience, proactivity and sharp eye proved to be very helpful both > in reviews and fixing issues. > > His reviews are really valuable and reflect his experience and > helpfulness. > > > > I am sure that he will be a great addition to the core team. > > > > Please vote for his nomination by answering this mail till next Tuesday. > > > > Lajos Katona (lajoskatona) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.rosser at rd.bbc.co.uk Wed May 25 13:32:46 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Wed, 25 May 2022 14:32:46 +0100 Subject: live resize In-Reply-To: References: Message-ID: Do you mean increase the size of the disk volume? It is possible to increase volume sizes live with ceph storage. 
On 24/05/2022 19:19, Parsa Aminian wrote:
> hello
> on openstack with ceph backend is it possible to live resize instances
> ? I want to change flavor without any down time .

From haleyb.dev at gmail.com Wed May 25 14:27:44 2022
From: haleyb.dev at gmail.com (Brian Haley)
Date: Wed, 25 May 2022 10:27:44 -0400
Subject: [neutron] Propose Yatin Karel for Neutron core team
In-Reply-To: 
References: 
Message-ID: <021d3964-8c36-a300-72c6-d1c5faed5d1f@gmail.com>

+1 from me, congrats Yatin!

On 5/24/22 03:36, Lajos Katona wrote:
> Hi Neutrinos,
> 
> I would like to propose Yatin Karel (IRC: ykarel) as a member of the
> Neutron core team.
> Yatin is a long time Openstack developer, and started concentrating to
> Networking projects in the last few cycles.
> 
> His experience, proactivity and sharp eye proved to be very helpful both
> in reviews and fixing issues.
> His reviews are really valuable and reflect his experience and helpfulness.
> 
> I am sure that he will be a great addition to the core team.
> 
> Please vote for his nomination by answering this mail till next Tuesday.
> 
> Lajos Katona (lajoskatona)
> 

From smooney at redhat.com Wed May 25 14:34:03 2022
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 25 May 2022 15:34:03 +0100
Subject: live resize
In-Reply-To: 
References: 
Message-ID: <88ea743e00ae6598691ba7e62c0a44ddcbceddea.camel@redhat.com>

On Wed, 2022-05-25 at 14:32 +0100, Jonathan Rosser wrote:
> Do you mean increase the size of the disk volume?
No, I think they mean changing the flavor to increase or decrease the CPU/RAM.
This type of virtual VM scaling has been requested in the past, but it is really
non-trivial to implement, so we have never approved a spec to add support for
this to nova.

If they are referring to volume resize, that should be possible live if you are
using Cinder and not nova's built-in RBD support.
> 
> It is possible to increase volume sizes live with ceph storage.
> 
> On 24/05/2022 19:19, Parsa Aminian wrote:
> > hello
> > on openstack with ceph backend is it possible to live resize instances
> > ? I want to change flavor without any down time .
> 

From thierry at openstack.org Wed May 25 15:24:44 2022
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 25 May 2022 17:24:44 +0200
Subject: [largescale-sig] Next meeting: May 25th, 15utc
In-Reply-To: 
References: 
Message-ID: <69e0b768-2576-9583-8f7c-bafdd9b855e3@openstack.org>

Hi everyone,

We held our traditional SIG meeting today. We discussed our upcoming Forum
session in Berlin: "Challenges & Lessons from Operating OpenStack at Scale",
which will happen early afternoon on Tuesday.

You can read the meeting logs at:
https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-05-25-15.01.html

Our next IRC meeting will be Jun 22, at 1500utc on #openstack-operators on OFTC.

Regards,

-- 
Thierry

From lokendrarathour at gmail.com Wed May 25 16:34:47 2022
From: lokendrarathour at gmail.com (Lokendra Rathour)
Date: Wed, 25 May 2022 22:04:47 +0530
Subject: [Triple0] Undercloud Restore Failed
In-Reply-To: 
References: 
Message-ID: 

Hi team,
Any support on it please.
On Wed, 25 May 2022, 17:42 Lokendra Rathour, wrote: > Back Up and Restore the Director Undercloud Red Hat OpenStack Platform > 16.2 | Red Hat Customer Portal > > Hi Team, > Was trying to restore the Undercloud from the backup but restore is > getting failed with the error message: > "", > "stdout: ", > "stderr: + sudo -E kolla_set_configs", > "sudo: unable to send audit message: Operation not permitted", > "INFO:__main__:Loading config file at > /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "ERROR:__main__:MissingRequiredSource: > /var/lib/kolla/config_files/src/* file is not found", > "", > "Error executing ['podman', 'container', 'exists', 'rabbitmq']: > returned 1", > "Did not find container with \"['podman', 'ps', '-a', '--filter', > 'label=container_name=rabbitmq', '--filter', > 'label=config_id=tripleo_step1', '--format', '{{.Names}}']\" - retrying > without config_id", > "Did not find container with \"['podman', 'ps', '-a', '--filter', > 'label=container_name=rabbitmq', '--format', '{{.Names}}']\"", > "Created symlink > /etc/systemd/system/multi-user.target.wants/tripleo_rabbitmq.service \u2192 > /etc/systemd/system/tripleo_rabbitmq.service." > ], > "_ansible_no_log": false, > "attempts": 4, > "changed": false > } > ] > ] > > Also, we checked for the path ("ERROR:__main__:MissingRequiredSource: > /var/lib/kolla/config_files/src/* file is not found",) in the original > server , there also it is not existing. > > Document followed: > Back Up and Restore the Director Undercloud Red Hat OpenStack Platform > 16.2 | Red Hat Customer Portal > > > Openstack release: Train > > Any lead would be helpful. > > > --lokendra > skype: lokendrarathour > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed May 25 22:18:15 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 25 May 2022 17:18:15 -0500 Subject: [qa][requirements] Tempest Gate blocked for py3.6|7 support In-Reply-To: <20220521162310.z5csf5gytmtkenaa@mthode.org> References: <180e42d8e9c.d7f7f37622699.5997519070259133028@ghanshyammann.com> <20220521162310.z5csf5gytmtkenaa@mthode.org> Message-ID: <180fd4c9068.128dd745829118.1196667679148952290@ghanshyammann.com> ---- On Sat, 21 May 2022 11:23:10 -0500 Matthew Thode wrote ---- > On 22-05-20 20:13:52, Ghanshyam Mann wrote: > > Hello Everyone, > > > > As we know, in Zed cycle we have dropped the support of py3.6|7 from OpenStack which is why > > projects, lib like oslo started dropping it and made new releases. For example, oslo.log 5.0.0 does not > > support py3.6|7. > > > > Tempest being branchless and still supporting stable/victoria onwards stable branches thought of > > keeping the py36|7 support. But with the oslo dropped py3.6|7 and upper constraint has been updated > > to the oslo lib latest version made Tempest unit py3.6|7 test jobs failed. > > > > - https://bugs.launchpad.net/tempest/+bug/1975036 > > > > We have two options here: > > > > 1. requirements repo maintain different constraints for py3.6|7 and py3.8|9 which fix the Tempest py3.6| jobs > > and we can keep the python3.6|7 support in Tempest. 
Example: oslo.log which fixed the gate[1] but we might > > need to do the same for Tempest's other deps > > - https://review.opendev.org/c/openstack/requirements/+/842820 > > > > If we go this route, I think I'd like to have a separate file per python > version. At that point 'unmaintained' versions of python/constraints > could have their maintanece migrated to another team if needed. Also, > the targets that are not for the current development cycle should have a > comment stating such at the top of the file. > > A problem with this is the sprawl of tests that could be needed. > > > 2. Drop the support of py3.6|7 from Tempest > > If the requirement team is not ok with the above solution then we can think of dropping the py3.6|7 support > > from Tempest too. This will stop Tempest to run on py3.6|7 but it will not block Tempest to test the OpenStack > > running on py3.6|7 as that can be done by running the Tempest in virtual env. > > > > One option is to generate th 36/37 constraints and putting the file in > the tempest repo. Yeah, even with this extra maintenance it will not be good as Tempest master supporting py36 will not be able to consume any of the new features from dependencies who already dropped py36. We discussed it in QA meeting and agree to go with option 2 means dropping the py36|7 support in tempest too. I have started the work and most of the things are working. - https://review.opendev.org/q/topic:bug%252F1975036 -gmann > > > Opinion? > > > > NOTE: Until we figure this out, I have proposed to temporarily stop running py3.6|7 on tempest gate, (especially > > to get c9s volume detach failure fix to merged otherwise that will block other projects gate too) > > > > - https://review.opendev.org/c/openstack/tempest/+/842821 > > > > [1] https://review.opendev.org/c/openstack/tempest/+/842819 > > > > -gmann > > > > -- > Matthew Thode > From gmann at ghanshyammann.com Wed May 25 22:28:11 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 25 May 2022 17:28:11 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 26, 2022 at 1500 UTC In-Reply-To: <180f355dc94.ff1fccf6136934.1285061492651779103@ghanshyammann.com> References: <180f355dc94.ff1fccf6136934.1285061492651779103@ghanshyammann.com> Message-ID: <180fd55a87f.12833c70329261.6480966315440706407@ghanshyammann.com> Hello Everyone, Below is the agenda for Tomorrow's TC IRC meeting schedule at 1500 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check ** Fixing Zuul config error in OpenStack *** https://etherpad.opendev.org/p/zuul-config-error-openstack * New ELK service dashboard: e-r service ** https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global ** https://review.opendev.org/c/openstack/governance-sigs/+/835838 * 'SLURP' as release cadence terminology ** Document release notes approach * Release identification schema in development process/cycle ** https://review.opendev.org/c/openstack/governance/+/843214/ ** https://review.opendev.org/c/openstack/governance/+/841800 * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 23 May 2022 18:52:12 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for May 26, 2022 at 1500 UTC. 
> > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, May 25, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > > From katonalala at gmail.com Thu May 26 06:33:39 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 26 May 2022 08:33:39 +0200 Subject: [neutron][neutron-vpnaas][release] dropping feature/lbaasv2 branch In-Reply-To: References: Message-ID: Hi, Thanks Mohammed, I agree with it, as I see feature/lbaasv2 is long left abandoned. I think the usual way, delete the branch but add a tag on top of it to keep it if somebody appears for it can work for it. Lajos Mohammed Naser ezt ?rta (id?pont: 2022. m?j. 24., K, 18:34): > Hi there, > > It looks like we've managed to track along a feature/lbaasv2 branch > into neutron-vpnaas all the way from 2014 :) > > I'd like to request for the release team to delete that branch if > possible. :) > > Thanks, > Mohammed > > -- > Mohammed Naser > VEXXHOST, Inc. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.aminian.server at gmail.com Thu May 26 07:50:13 2022 From: p.aminian.server at gmail.com (Parsa Aminian) Date: Thu, 26 May 2022 12:20:13 +0430 Subject: horizon security group page bug Message-ID: On the horizon user setting section there is an option called 'Items per page' that changes the number of items per page . It seems it does not work on security_groups page and on that page all of the security groups will be loaded and that causes a bad gateway error timeout on the browser . -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Thu May 26 08:44:40 2022 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 26 May 2022 17:44:40 +0900 Subject: [neutron][neutron-vpnaas][release] dropping feature/lbaasv2 branch In-Reply-To: References: Message-ID: Hi, I agree that feature/lbaasv2 branch in vpnaas is unnecessary and we can drop it. neutron-vpnaas was originally developed in the main neutron repository and advanced services like vpnaas were split out into separate repositories around Dec 2014. When splitting out advanced services, we use git filter-branch or similar command to keep all commit history. feature/lbaasv2 branch was used to develop LBaaSv2 before splitting out advanced services and we forgot to clean up unrelated branches. This is the history of feature/lbaasv2 branch. Considering this, feature/lbaasv2 branch is completely unnecessary for neutron-vpnaas. I don't think we need a tag before deleting the branch. Thanks, Akihiro Motoki (amotoki) On Thu, May 26, 2022 at 3:47 PM Lajos Katona wrote: > > Hi, > > Thanks Mohammed, I agree with it, as I see feature/lbaasv2 is long left abandoned. > I think the usual way, delete the branch but add a tag on top of it to keep it if somebody appears for it can work for it. > > Lajos > > Mohammed Naser ezt ?rta (id?pont: 2022. m?j. 24., K, 18:34): >> >> Hi there, >> >> It looks like we've managed to track along a feature/lbaasv2 branch >> into neutron-vpnaas all the way from 2014 :) >> >> I'd like to request for the release team to delete that branch if possible. :) >> >> Thanks, >> Mohammed >> >> -- >> Mohammed Naser >> VEXXHOST, Inc. 
>> From ignaziocassano at gmail.com Thu May 26 09:01:26 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 26 May 2022 11:01:26 +0200 Subject: openstack queens how increase dashboard threads Message-ID: Hello, I'd like to increase on centos queens the threads for openstack dashboard. At this time my /etc/httpd/conf.d/openstack-dashboard.conf is the following: WSGIDaemonProcess dashboard WSGIProcessGroup dashboard WSGISocketPrefix run/wsgi WSGIApplicationGroup %{GLOBAL} WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi Alias /dashboard/static /usr/share/openstack-dashboard/static Options All AllowOverride All Require all granted Options All AllowOverride All Require all granted I tired to modify the first line like this: WSGIDaemonProcess dashboard user=apache group=apache processes=3 threads=10 I restartded httpd but now I am not able to log on the the dashboard. I am sure keystone is working because I am able do execute all commands with command line. Any help, please ? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Thu May 26 12:03:57 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 26 May 2022 14:03:57 +0200 Subject: [neutron][neutron-vpnaas][release] dropping feature/lbaasv2 branch In-Reply-To: References: Message-ID: <8cb0e610-2d19-17b0-8f43-d75afae33437@est.tech> Hi, @Akihiro: does that mean that the branch content is not relevant or it means that the branch content/history can be found in an other repository? Anyway, let me know what is the team's preference (to tag it or not before we delete it) and I'll do accordingly. Thanks, El?d (irc: elodilles) On 2022. 05. 26. 10:44, Akihiro Motoki wrote: > Hi, > > I agree that feature/lbaasv2 branch in vpnaas is unnecessary and we can drop it. > > neutron-vpnaas was originally developed in the main neutron repository > and advanced services like vpnaas were split out into separate > repositories around Dec 2014. > When splitting out advanced services, we use git filter-branch or > similar command to keep all commit history. > feature/lbaasv2 branch was used to develop LBaaSv2 before splitting > out advanced services > and we forgot to clean up unrelated branches. This is the history of > feature/lbaasv2 branch. > > Considering this, feature/lbaasv2 branch is completely unnecessary > for neutron-vpnaas. > I don't think we need a tag before deleting the branch. > > Thanks, > Akihiro Motoki (amotoki) > > On Thu, May 26, 2022 at 3:47 PM Lajos Katona wrote: >> Hi, >> >> Thanks Mohammed, I agree with it, as I see feature/lbaasv2 is long left abandoned. >> I think the usual way, delete the branch but add a tag on top of it to keep it if somebody appears for it can work for it. >> >> Lajos >> >> Mohammed Naser ezt ?rta (id?pont: 2022. m?j. 24., K, 18:34): >>> Hi there, >>> >>> It looks like we've managed to track along a feature/lbaasv2 branch >>> into neutron-vpnaas all the way from 2014 :) >>> >>> I'd like to request for the release team to delete that branch if possible. :) >>> >>> Thanks, >>> Mohammed >>> >>> -- >>> Mohammed Naser >>> VEXXHOST, Inc. 
>>> From fungi at yuggoth.org Thu May 26 13:58:14 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 26 May 2022 13:58:14 +0000 Subject: [neutron][neutron-vpnaas][release] dropping feature/lbaasv2 branch In-Reply-To: <8cb0e610-2d19-17b0-8f43-d75afae33437@est.tech> References: <8cb0e610-2d19-17b0-8f43-d75afae33437@est.tech> Message-ID: <20220526135813.jm45bzrh7kk66sfs@yuggoth.org> On 2022-05-26 14:03:57 +0200 (+0200), El?d Ill?s wrote: > @Akihiro: does that mean that the branch content is not relevant > or it means that the branch content/history can be found in an > other repository? [...] The latter. If you look at the history of the feature/lbaasv2 branch in openstack/neutron-vpnaas you'll see that the most recent commit (other than the bulk OpenDev migration from 2019) has Change-Id Iba59aa20adc6b369b4b9d250afee406159287ba1. This same commit also appears in the history of the openstack/neutron repository's feature/lbaasv2 branch with an identical commit (c089154). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From katonalala at gmail.com Thu May 26 16:41:22 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 26 May 2022 18:41:22 +0200 Subject: [neutron] Drivers meeting agenda - 27.05.2022. Message-ID: Hi Neutron Drivers, The agenda for tomorrow's drivers meeting is at [1]. We have one RFE to discuss tomorrow: * [RFE] Allow setting --dst-port for all port based protocols at once (#link https://bugs.launchpad.net/neutron/+bug/1973487) On Demand agenda: * (lajoskatona): meaning of option "router_auto_schedule" is ambiguous (#link https://bugs.launchpad.net/neutron/+bug/1973656 ) [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda See you at the meeting tomorrow. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu May 26 21:57:27 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 26 May 2022 16:57:27 -0500 Subject: [qa][requirements] Tempest Gate blocked for py3.6|7 support In-Reply-To: <180fd4c9068.128dd745829118.1196667679148952290@ghanshyammann.com> References: <180e42d8e9c.d7f7f37622699.5997519070259133028@ghanshyammann.com> <20220521162310.z5csf5gytmtkenaa@mthode.org> <180fd4c9068.128dd745829118.1196667679148952290@ghanshyammann.com> Message-ID: <181025fe164.1047e7d0491076.3669615819444453316@ghanshyammann.com> Just a heads up. py36|7 jobs are stopped to run on tempest gate and the gate is unblocked, you can recheck your tempest patch. - https://review.opendev.org/c/openstack/tempest/+/842821 To drop py36|7 support in the tempest, we need to pin stable/ussuri with old tempest and work is in progress - https://review.opendev.org/q/topic:bug%252F1975036 - https://review.opendev.org/q/topic:ussuri-pin-tempest -gmann ---- On Wed, 25 May 2022 17:18:15 -0500 Ghanshyam Mann wrote ---- > ---- On Sat, 21 May 2022 11:23:10 -0500 Matthew Thode wrote ---- > > On 22-05-20 20:13:52, Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > As we know, in Zed cycle we have dropped the support of py3.6|7 from OpenStack which is why > > > projects, lib like oslo started dropping it and made new releases. For example, oslo.log 5.0.0 does not > > > support py3.6|7. > > > > > > Tempest being branchless and still supporting stable/victoria onwards stable branches thought of > > > keeping the py36|7 support. 
But with the oslo dropped py3.6|7 and upper constraint has been updated > > > to the oslo lib latest version made Tempest unit py3.6|7 test jobs failed. > > > > > > - https://bugs.launchpad.net/tempest/+bug/1975036 > > > > > > We have two options here: > > > > > > 1. requirements repo maintain different constraints for py3.6|7 and py3.8|9 which fix the Tempest py3.6| jobs > > > and we can keep the python3.6|7 support in Tempest. Example: oslo.log which fixed the gate[1] but we might > > > need to do the same for Tempest's other deps > > > - https://review.opendev.org/c/openstack/requirements/+/842820 > > > > > > > If we go this route, I think I'd like to have a separate file per python > > version. At that point 'unmaintained' versions of python/constraints > > could have their maintanece migrated to another team if needed. Also, > > the targets that are not for the current development cycle should have a > > comment stating such at the top of the file. > > > > A problem with this is the sprawl of tests that could be needed. > > > > > 2. Drop the support of py3.6|7 from Tempest > > > If the requirement team is not ok with the above solution then we can think of dropping the py3.6|7 support > > > from Tempest too. This will stop Tempest to run on py3.6|7 but it will not block Tempest to test the OpenStack > > > running on py3.6|7 as that can be done by running the Tempest in virtual env. > > > > > > > One option is to generate th 36/37 constraints and putting the file in > > the tempest repo. > > Yeah, even with this extra maintenance it will not be good as Tempest master supporting > py36 will not be able to consume any of the new features from dependencies who already > dropped py36. > > We discussed it in QA meeting and agree to go with option 2 means dropping the py36|7 support > in tempest too. I have started the work and most of the things are working. > > - https://review.opendev.org/q/topic:bug%252F1975036 > > -gmann > > > > > > Opinion? > > > > > > NOTE: Until we figure this out, I have proposed to temporarily stop running py3.6|7 on tempest gate, (especially > > > to get c9s volume detach failure fix to merged otherwise that will block other projects gate too) > > > > > > - https://review.opendev.org/c/openstack/tempest/+/842821 > > > > > > [1] https://review.opendev.org/c/openstack/tempest/+/842819 > > > > > > -gmann > > > > > > > -- > > Matthew Thode > > > > From kdhall at binghamton.edu Fri May 27 01:27:06 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Thu, 26 May 2022 21:27:06 -0400 Subject: Openstack-Ansible - Test Cluster on VirtualBox Hosts Message-ID: Hello, Sorry for a beginner question. I am trying to deploy a 3-host minimal test cluster on VirtualBox hosts via OpenStack-Ansible. My purpose is to first build this test cluster and then to deploy a production cluster on physical hardware. The test cluster will be used first to learn, and then to validate changes prior to deploying them on the production cluster. I started with Vagrant-Openstack, but it is still based on Stein and not easily adaptable to Xena. Right now I'm working with Xena and OpenStack-Ansible 24.x on Debian 10 with a deployment host running Debian 11. All 4 systems are native VirtualBox VMs I've managed to deploy a couple of nearly functional clusters this way, but never fully functional and stable. Any guidelines or recommendations for this kind of deployment would be sincerely appreciated. Thanks. -Dave -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noonedeadpunk at gmail.com Fri May 27 06:13:54 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 27 May 2022 08:13:54 +0200 Subject: Openstack-Ansible - Test Cluster on VirtualBox Hosts In-Reply-To: References: Message-ID: Hi, I'm not sure if you have seen mnaio guide or not, but it might give you some insights: https://opendev.org/openstack/openstack-ansible-ops/src/branch/master/multi-node-aio It's for the same purpose, but describes installation using virt-manager (KVM) and only for Ubuntu Focal at the moment. However it might give some ideas if needed. Also I'm not sure if that makes any sense to deploy Debian 10 now as in Yoga it's support will be dropped. Other then that it's hard to say anything as you never wrote what issues you had. You can also feel free to join us in #openstack-ansible IRC on OFTC network. ??, 27 ??? 2022 ?., 3:29 Dave Hall : > Hello, > > Sorry for a beginner question. I am trying to deploy a 3-host minimal > test cluster on VirtualBox hosts via OpenStack-Ansible. My purpose is to > first build this test cluster and then to deploy a production cluster on > physical hardware. The test cluster will be used first to learn, and then > to validate changes prior to deploying them on the production cluster. > > I started with Vagrant-Openstack, but it is still based on Stein and not > easily adaptable to Xena. > > Right now I'm working with Xena and OpenStack-Ansible 24.x on Debian 10 > with a deployment host running Debian 11. All 4 systems are native > VirtualBox VMs I've managed to deploy a couple of nearly functional > clusters this way, but never fully functional and stable. > > Any guidelines or recommendations for this kind of deployment would be > sincerely appreciated. > > Thanks. > > -Dave > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Fri May 27 08:08:31 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 27 May 2022 10:08:31 +0200 Subject: [ops][nova] Problems with AggregateMultiTenancyIsolation/AggregateInstanceExtraSpecsFilter after Train --> Yoga update Message-ID: Dear all We have the following use case: - reserve 3 hypervisors for VMs with "big" flavors (whatever the users of these instances are) - partition the rest of the hypervisors according to the project (so projects A1,A2..,An can use only subset S1 of hypervisors, project B1,B2,..,Bm can use only subset S2 of hypervisors) We implemented this: 1- by setting an aggregate_instance_extra_spec 'size' property (with value 'normal' or 'big') for each flavor [*] 2- by creating a BigVMs HostAggregate for size=big [**] 3- by creating an HostAggregate for size=normal for each project, such as this one [***] This used to work. A few days ago we updated our infrastructure from Train to Yoga This was an offline Fast Forward Update: we went through the intermediate releases just to do the dbsyncs. Since this update the instantiation of VMs with flavors with the size=big property doesn't work anymore This is what I see in nova-scheduler log: 2022-05-27 08:38:02.058 5273 INFO nova.filters [req-f92c0e38-262a-4d22-a7fd-8874c4265401 e237e43716fb490db5bda4b777835669 32b5d42c02b0411b8ebf2c33079eeecf - default default] Filtering removed all hosts for the request with instance ID '2412e188-9d5f-4812-ad21-195769a3c220'. 
Filter results: ['AggregateMultiTenancyIsolation: (start: 59, end: 10)', 'AggregateInstanceExtraSpecsFilter: (start: 10, end: 0)'] Only modifying the property of the BigVMs HA using the filter_tenant_id adding the relevant project: [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs | grep prop | properties | filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', size='big' | the scheduling works Specifying each project and keeping the list up-to-date would be a problem. Moreover if I am not wrong there is a maximum length for the property field. Any hints ? I didn't find anything related to this issue in the nova release notes for Openstack releases > Train These are the filters that we enabled: [filter_scheduler] enabled_filters = AggregateMultiTenancyIsolation,AggregateInstanceExtraSpecsFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGro\ upAffinityFilter,PciPassthroughFilter,NUMATopologyFilter Thanks a lot, Massimo [*] E.g. [root at cld-ctrl-01 ~]# openstack flavor show cldareapd.medium | grep prope | properties | aggregate_instance_extra_specs:size='normal' |[root at cld-ctrl-01 ~]# openstack flavor show cloudvenetocloudveneto.40cores128GB25-bigunipd | grep prop | properties | aggregate_instance_extra_specs:size='big' | [**] [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs +-------------------+---------------------------------------------------------------------------------------+ | Field | Value | +-------------------+---------------------------------------------------------------------------------------+ | availability_zone | nova | | created_at | 2018-06-20T06:54:51.000000 | | deleted_at | None | | hosts | cld-blu-08.cloud.pd.infn.it, cld-blu-09.cloud.pd.infn.it, cld-blu-10.cloud.pd.infn.it | | id | 135 | | is_deleted | False | | name | BigVMs | | properties |size='big' | | updated_at | None | | uuid | 4b593395-1c76-441c-9022-d421f4ea2dfb | +-------------------+---------------------------------------------------------------------------------------+ [***] [root at cld-ctrl-01 ~]# openstack aggregate show Unipd-AdminTesting-Unipd +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | availability_zone | nova | | created_at | 2018-04-12T07:31:01.000000 | | deleted_at | None | | hosts | cld-blu-01.cloud.pd.infn.it, cld-blu-02.cloud.pd.infn.it, cld-blu-05.cloud.pd.infn.it, cld-blu-06.cloud.pd.infn.it, cld-blu-07.cloud.pd.infn.it, cld-blu-11.cloud.pd.infn.it, cld-blu-12.cloud.pd.infn.it, cld-blu-13.cloud.pd.infn.it, cld-blu-14.cloud.pd.infn.it, cld-blu-15.cloud.pd.infn.it, cld-blu-16.cloud.pd.infn.it | | id | 126 | | is_deleted | False | | name | Unipd-AdminTesting-Unipd | | properties | filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', size='normal' | | updated_at | 2018-06-08T09:06:20.000000 | | uuid | 38f6a0d4-77ab-42e0-abeb-57e06ba13cca | 
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ [root at cld-ctrl-01 ~]# -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Fri May 27 08:27:42 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Fri, 27 May 2022 13:57:42 +0530 Subject: [cinder] festival of XS reviews 27th May 2022 Message-ID: Hello Argonauts, We will be having our monthly festival of XS reviews today i.e. 27th May (Friday) from 1400-1600 UTC. Following are some additional details: Date: 27th May, 2022 Time: 1400-1600 UTC Meeting link: https://meetpad.opendev.org/cinder-festival-of-reviews etherpad: https://etherpad.opendev.org/p/cinder-festival-of-reviews Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim+openstack.org at coote.org Fri May 27 11:08:03 2022 From: tim+openstack.org at coote.org (tim+openstack.org at coote.org) Date: Fri, 27 May 2022 12:08:03 +0100 Subject: Novice question Message-ID: Hullo Na?ve question follows. Sorry. I?m trying a minimal OS install on a Virtualbox VM on a mac. I?d like to get to the point where I can launch a compute node. I?ve failed with `packstack` and I?m trying `devstack`. Should this work out of the box: ie Spin up a vm with vagrant: I?m using Centos Stream 9 to ensure that I get a current(ish) version of Python. It has 9GB RAM Ensure that SELinux and firewalls are turned off Clone devstack, cd to the directory and run `stack.sh` as user `vagrant` (this fails 1/3 of the time as some repo or other resets a connection. `stack.sh` doesn?t seem to be idempotent as reinvoking it may or may not install and run the OS environment Upload a ssh keypair through the web interface Use the web interface to launch the m1.nano flavor with Cirros image (I think that this flavor is quite new as some of the documentation refers to creating such a flavor with 64MB, whereas this one has 128MB. I did try the 64MB route [with `packstack`] and concluded that at least 96MB was needed and the documentation was wrong. I couldn?t log into launchpad.net to report this ? At this point the launch process fails with the error message: ?Build of instance 157bfa1d-7f8c-4a6c-ba3a-b02fb4f4b6a9 aborted: Failed to allocate the network(s), not rescheduling.? In the web ui Afaict, the vm has enough memory (just: it?s using a bit of swap, but more cache, so it could reclaim that). I?d expected the instance to launch, and I can well believe that I?ve missed something, but the documentation seems to point all over the place for various logs. Should this approach work? Is there an alternative that?s better (e.g. use Ubuntu: I?m not keen on apt/dpkg/.deb based distros as I?ve been tripped up in the past over the dependency handling and systemd integration, so I?ve avoided this, but I can see that Canonical is spending money on OS. But so is IBM/Redhat)? Where can I find info on how to trouble shoot the failing process? tia Tim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre at stackhpc.com Fri May 27 11:39:52 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 27 May 2022 13:39:52 +0200 Subject: [all][oslo] Generation of sample policy files broken since oslo.policy 3.12.0 Message-ID: Hello, While updating sample policy files (those with all rules commented out and including comments) in CloudKitty, I discovered that the functionality was broken: oslo.policy generates an actual policy file instead [1]. This issue was first introduced in 3.12.0. I submitted a fix [2] which I assume will ship in the next release. I thought this was worth sharing since any project using output from oslopolicy-sample-generator in their docs or repository could be impacted. A workaround is to stick to oslo.policy 3.11.0 in your genpolicy tox environment. Cheers, Pierre Riteau (priteau) [1] https://bugs.launchpad.net/oslo.policy/+bug/1975682 [2] https://review.opendev.org/c/openstack/oslo.policy/+/843250 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmeng at redhat.com Fri May 27 12:10:21 2022 From: jmeng at redhat.com (Jakob Meng) Date: Fri, 27 May 2022 14:10:21 +0200 Subject: Ansible OpenStack Collection 2.0.0 and OpenStack SDK 1.0.0/0.99.0 Message-ID: <79624a41-e280-7134-7c1c-37ecaa526c7d@redhat.com> Hello contributors and users of the Ansible OpenStack collection [1]! This week a release candidate of the upcoming first major release of OpenStack SDK has been released [2],[3]. It streamlined and improved large parts of its codebase. For example, its Connection interface now consistently uses the Resource interfaces under the hood. This required breaking changes from older SDK releases though. The Ansible OpenStack collection is heavily based on OpenStack SDK. With OpenStack SDK becoming backward incompatible (for the better), so does our Ansible OpenStack collection. We simply lack the devpower to maintain a backward compatible interface in Ansible OpenStack collection across several SDK releases. We already split our codebase into two separate git branches: master and stable/1.0.0. The former will track the upcoming 2.x.x releases of Ansible OpenStack collection which will be compatible with OpenStack SDK 1.x.x (and its rcs 0.99.x) *only*. Our stable/1.0.0 branch will track the current 1.x.x releases of Ansible OpenStack collection which is compatible with OpenStack SDK prior to 0.99.0 *only*. Both branches will be developed in parallel for the time being. Our 2.0.0 release is currently under development and we still have a long way to go. "We" mainly are a couple of Red Hat employees working part-time on the collection. If you use modules of Ansible OpenStack collection and want to help us with porting them to the new SDK, please contact us! If you want to help, please reach out to us (e.g. [7],[8]) and we can give you a quick introduction into everything. We have extensive documentation on why, what and how we are adopting and reviewing the new modules [4], how to set up a working DevStack environment for hacking on the collection [5] and, most importantly, a list of modules where we are coordinating our porting efforts [6]. We are also hanging around on irc.oftc.net/#openstack-ansible-sig and #oooq ? 
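To make the compatibility split concrete, here is a minimal sketch of how matching versions could be pinned today (the Galaxy collection name "openstack.cloud" and the exact version bounds are assumptions here, please double-check them against the collection README before relying on them):

  # requirements.yml - keep the 1.x collection together with the pre-0.99 SDK
  collections:
    - name: openstack.cloud
      version: ">=1.0.0,<2.0.0"

  # install it and the matching SDK
  ansible-galaxy collection install -r requirements.yml
  pip install 'openstacksdk<0.99.0'

Once you move to the 2.x collection, the SDK pin flips to 'openstacksdk>=0.99.0' accordingly.
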
[1] https://opendev.org/openstack/ansible-collections-openstack [2] https://github.com/openstack/openstacksdk/releases/tag/0.99.0 [3] https://pypi.org/project/openstacksdk/0.99.0/ [4] https://hackmd.io/szgyWa5qSUOWw3JJBXLmOQ?view [5] https://hackmd.io/PI10x-iCTBuO09duvpeWgQ?view [6] https://hackmd.io/7NtovjRkRn-tKraBXfz9jw?view [7] Rafael Castillo (rcastillo) [8] Jakob Meng , (jm1) Best, Jakob From mnaser at vexxhost.com Fri May 27 15:44:22 2022 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 27 May 2022 17:44:22 +0200 Subject: [neutron][neutron-vpnaas] retiring train/ussuri/victoria branches Message-ID: Hi everyone, At the moment, the VPNaaS team is a bit low on people and in order to be able to handle the workload, I'd like to opt to only keeping maintained releases active. I propose retiring those branches unless someone steps up to fix and clean them up: https://review.opendev.org/c/openstack/releases/+/843167 Please respond to this email if you are running those releases and you are willing to maintain those older branches. Thanks Mohammed -- Mohammed Naser VEXXHOST, Inc. From noonedeadpunk at gmail.com Fri May 27 16:10:49 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 27 May 2022 18:10:49 +0200 Subject: [neutron][neutron-vpnaas] retiring train/ussuri/victoria branches In-Reply-To: References: Message-ID: Hey, I'd love to help fixing CI at least for Victoria, but will have time for that only after summit ??, 27 ??? 2022 ?., 17:47 Mohammed Naser : > Hi everyone, > > At the moment, the VPNaaS team is a bit low on people and in order to > be able to handle the workload, I'd like to opt to only keeping > maintained releases active. > > I propose retiring those branches unless someone steps up to fix and > clean them up: > > https://review.opendev.org/c/openstack/releases/+/843167 > > Please respond to this email if you are running those releases and you > are willing to maintain those older branches. > > Thanks > Mohammed > > -- > Mohammed Naser > VEXXHOST, Inc. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri May 27 18:03:57 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 27 May 2022 13:03:57 -0500 Subject: [all][tc] What's happening in Technical Committee: summary May 27, 2022: Reading: 5 min Message-ID: <18106b077d1.125684257152065.2924377406472941934@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on May 26. Most of the meeting discussions are summarized in this email. Meeting summary logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-05-26-15.00.log.html * Next TC weekly meeting will be on June 3 Thursday at 15:00 UTC, feel free to add the topic on the agenda[1] by June 2. 2. What we completed this week: ========================= * Update governance documentation to try the DPL model for leaderless projects [2]. After election, TC used to search for PTL for leaderless projects but from now onwards we will try if such projects can be moved to the DPL model. 3. Activities In progress: ================== TC Tracker for Zed cycle ------------------------------ * Zed tracker etherpad includes the TC working items[3], we have started the many items. Open Reviews ----------------- * Seven open reviews for ongoing activities[4]. 
Change OpenStack release naming policy proposal ----------------------------------------------------------- We had a call on Tuesday and figured out how to use the number/name in our development cycle/tooling. I am adding the agreement in resolution[5] and also the documentation about using the release number and names [6]. New release cadence "SLURP" open items -------------------------------------------------- 1. release notes strategy: Brian proposal for ""SLURP release notes approach is up for review[7]. Improve project governance --------------------------------- Slawek has the proposal the framework up and it is under review[8]. New ELK service dashboard: e-r service ----------------------------------------------- A small summary from dpawlik: "the opensearch cluster space seems to be stable right now (there is enough space). For the elastic search recheck, I'm working on ansible role for deploying the e-r and push it to the ci-log-processing project." TripleO team and dpawlik are working on the e-r repo merge. Consistent and Secure Default RBAC ------------------------------------------- We had call on May 24 and discussed the new defaults approach clarification for designate and octavia[9]. Takashi sent a reminder for the feedback about the 'split stack' approach[10]. We will also plan to bring it to the ops meetup in Berlin (Most of us not travelling there but we will find someone who can discuss it). I will schedule the next call and will update the timing on email and wiki[11] 2021 User Survey TC Question Analysis ----------------------------------------------- No update on this. The survey summary is up for review[12]. Feel free to check and provide feedback. Zed cycle Leaderless projects ---------------------------------- No updates on this. Only Adjutant project is leaderless/maintainer-less. We will check Adjutant's the situation again on ML and hope Braden will be ready with their company side permission[13]. Fixing Zuul config error ---------------------------- Requesting projects with zuul config error to look into those and fix them which should not take much time[14][15]. Project updates ------------------- * Add a repository for the Large Scale SIG[16] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[17]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [18] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/840363 [3] https://etherpad.opendev.org/p/tc-zed-tracker [4] https://review.opendev.org/q/projects:openstack/governance+status:open [5] https://review.opendev.org/c/openstack/governance/+/843214 [6] https://review.opendev.org/c/openstack/governance/+/841800/ [7] https://review.opendev.org/c/openstack/project-team-guide/+/843457 [8] https://review.opendev.org/c/openstack/governance/+/839880 [9] https://etherpad.opendev.org/p/rbac-zed-ptg#L97 [10] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028677.html [11] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting [12] https://review.opendev.org/c/openstack/governance/+/836888 [13] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html [14] https://etherpad.opendev.org/p/zuul-config-error-openstack [15] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028603.html [16] https://review.opendev.org/c/openstack/project-config/+/843534 [17] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [18] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From johnsomor at gmail.com Fri May 27 19:11:46 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 27 May 2022 12:11:46 -0700 Subject: Novice question In-Reply-To: References: Message-ID: Hi Tim, This should work fine. You will want a localrc/local.conf file to configure devstack. I didn't see that mentioned in your steps. See this section in the docs: https://docs.openstack.org/devstack/latest/#create-a-local-conf The only caveat I would mention is the VM instances in Nova will run super slow on virtualbox as it lacks the required "nested virtualization" support and will run them all in software emulation. To find the root cause of the issue in nova, I would look through the devstack at n-cpu log file (journal -u devstack at n-cpui) and the devstack at n-sch logs. Also, you might have a look at one of the nova test localrc file as an example: https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_da7/843127/6/check/tempest-integrated-compute-centos-9-stream/da7bebc/controller/logs/local_conf.txt Michael On Fri, May 27, 2022 at 4:16 AM wrote: > > Hullo > > Na?ve question follows. Sorry. > > I?m trying a minimal OS install on a Virtualbox VM on a mac. I?d like to get to the point where I can launch a compute node. I?ve failed with `packstack` and I?m trying `devstack`. > > Should this work out of the box: ie > > Spin up a vm with vagrant: I?m using Centos Stream 9 to ensure that I get a current(ish) version of Python. It has 9GB RAM > Ensure that SELinux and firewalls are turned off > Clone devstack, cd to the directory and run `stack.sh` as user `vagrant` (this fails 1/3 of the time as some repo or other resets a connection. `stack.sh` doesn?t seem to be idempotent as reinvoking it may or may not install and run the OS environment > Upload a ssh keypair through the web interface > Use the web interface to launch the m1.nano flavor with Cirros image (I think that this flavor is quite new as some of the documentation refers to creating such a flavor with 64MB, whereas this one has 128MB. I did try the 64MB route [with `packstack`] and concluded that at least 96MB was needed and the documentation was wrong. I couldn?t log into launchpad.net to report this ? 
> At this point the launch process fails with the error message: ?Build of instance 157bfa1d-7f8c-4a6c-ba3a-b02fb4f4b6a9 aborted: Failed to allocate the network(s), not rescheduling.? In the web ui > > > Afaict, the vm has enough memory (just: it?s using a bit of swap, but more cache, so it could reclaim that). I?d expected the instance to launch, and I can well believe that I?ve missed something, but the documentation seems to point all over the place for various logs. > > Should this approach work? Is there an alternative that?s better (e.g. use Ubuntu: I?m not keen on apt/dpkg/.deb based distros as I?ve been tripped up in the past over the dependency handling and systemd integration, so I?ve avoided this, but I can see that Canonical is spending money on OS. But so is IBM/Redhat)? Where can I find info on how to trouble shoot the failing process? > > tia > Tim From mtomaska at redhat.com Sat May 28 02:24:32 2022 From: mtomaska at redhat.com (Miro Tomaska) Date: Fri, 27 May 2022 21:24:32 -0500 Subject: [all][tc] Lets talk about Flake8 E501 Message-ID: Hello All, This is probably going to be a hot topic but I was wondering if the community ever considered raising the default 79 characters line limit. I have seen some places where even a very innocent line of code needs to be split into two lines. I have also seen some code where I feel like variable names were abbreviated on purpose to squeeze everything into one line. How does the community feel about raising the E501 limit to 119 characters? The 119 character limit is the second most popular limit besides the default one. It's long enough to give a developer enough room for descriptive variables without being forced to break lines too much. And it is short enough for a diff between two files to look OK. The only downside I can see right now is that it's not easy to convert an existing code. So we will end up with files where the new code is 79+ characters and the "old" code is <=79. I can also see an argument where someone might have trouble reviewing a patch on a laptop screen (assuming standard 14" screen) ? Here is an example of one extreme, a diff of two files maxing out at 119 characters https://review.opendev.org/c/opendev/sandbox/+/843697/1..2 Thank you for your time and I am looking forward to this conversation :) -- Miro Tomaska irc: mtomaska Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat May 28 12:22:08 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 28 May 2022 12:22:08 +0000 Subject: [all][tc] Lets talk about Flake8 E501 In-Reply-To: References: Message-ID: <20220528122207.cyokyj2i7yvg5pyo@yuggoth.org> On 2022-05-27 21:24:32 -0500 (-0500), Miro Tomaska wrote: > This is probably going to be a hot topic but I was wondering if > the community ever considered raising the default 79 characters > line limit. [...] This is a choice individual projects can make; the TC hasn't demanded all projects follow a specific coding style. That said, I personally do all my coding and code review in 80-column text terminals, so any lines longer that that end up wrapping and making indentation harder to follow. There have been numerous studies to suggest that shorter lines are easier for people to read and comprehend, whether it's prose or source code, and the ideal lengths actually end up being around 50-75 characters. 
There's a reason wide-format print media almost always breaks text into multiple columns: The eyes tend to get lost as you scan across longer lines of content. (As an aside, you'll probably notice I've set my MUA to wrap all lines at 68 characters.) Just my opinion, but I do really appreciate when projects keep source code files wrapped at 79 columns. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Sat May 28 16:04:04 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 28 May 2022 11:04:04 -0500 Subject: [all][tc] Lets talk about Flake8 E501 In-Reply-To: References: Message-ID: <1810b691354.b34ae5fb169369.5612306160139015283@ghanshyammann.com> ---- On Fri, 27 May 2022 21:24:32 -0500 Miro Tomaska wrote ---- > Hello All, > This is probably going to be a hot topic but I was wondering if the community ever considered raising the default 79 characters line limit. I have seen some places where even a very innocent line of code needs to be split into two lines. I have also seen some code where I feel like variable names were abbreviated on purpose to squeeze everything into one line. > How does the community feel about raising the E501 limit to 119 characters? The 119 character limit is the second most popular limit besides the default one. It's long enough to give a developer enough room for descriptive variables without being forced to break lines too much. And it is short enough for a diff between two files to look OK. > The only downside I can see right now is that it's not easy to convert an existing code. So we will end up with files where the new code is 79+ characters and the "old" code is <=79. I can also see an argument where someone might have trouble reviewing a patch on a laptop screen (assuming standard 14" screen) ? This is a good point and having such in-consistency will be more problems than having it more than 79 char for the reason you mentioned above. And I do not think we will be able to convert all the existing code to the new limit. I feel more comfortable with 79 char when I review on a small or large screen. If we end up scrolling horizontally then it is definitely not good. Log files are good examples of it. For long line/var name, we do split line due to the 79 char limit but I do not think that is more cases in our code, maybe ~1% of our existing code? I think keeping consistency in code is important which is what flake8/pep8 checks are all about. -gmann > > Here is an example of one extreme, a diff of two files maxing out at 119 characters > https://review.opendev.org/c/opendev/sandbox/+/843697/1..2 > Thank you for your time and I am looking forward to this conversation :) > -- > Miro Tomaskairc: mtomaskaRed Hat From gmann at ghanshyammann.com Sat May 28 23:41:11 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 28 May 2022 18:41:11 -0500 Subject: [qa][stable][gate] Gate fix for stable/ussuri and Tempest pin for stable/victoria Message-ID: <1810d0b9544.1158f226d173555.1038431972596585840@ghanshyammann.com> Hello Everyone, You might know the stable/ussuri gate is broken because we dropped the py36 support in zed cycle and Tempest master is used to test it which uses master constraints. To fix this, I am pinning Tempest 26.1.0 in stable/ussuri and tested it with compatible stable/ussuri constraints and tempest plugins. 
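For anyone who wants to try the same thing locally, the "Tempest in a virtualenv" approach boils down to something like this minimal sketch (paths and the workspace name are illustrative only, and if you also apply the stable upper-constraints file you may need to drop its own tempest entry first):

  python3 -m venv ~/tempest-venv
  ~/tempest-venv/bin/pip install 'tempest==26.1.0'
  ~/tempest-venv/bin/tempest init ~/tempest-workspace
  cd ~/tempest-workspace && ~/tempest-venv/bin/tempest run --smoke

(the workspace still needs a tempest.conf pointing at the cloud under test). The gerrit topics with the actual gate changes are below: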
- https://review.opendev.org/q/topic:ussuri-pin-tempest at the same time, I am also pinning tempest in stable/victoria too as it is also in EM state - https://review.opendev.org/q/topic:victoria-pin-tempest Both stable branches pining is working fine and ready to merge now. I have proposed the fixes for a few tempest plugins and stable branches, and request them to merge because we are going to merge the tempest and devstack changes. Below is the complete set of changes we need: QA patches need to go first: - Tempest - https://review.opendev.org/c/openstack/tempest/+/843045 - https://review.opendev.org/c/openstack/tempest/+/843293/2 - Devstack: - https://review.opendev.org/c/openstack/devstack/+/838051 - https://review.opendev.org/c/openstack/devstack/+/843295 - Devstack-plugin-ceph - https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/843354 - https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/843355 (it is +A and waiting for depends-on devstack patch to merge) - Devstack-gate - https://review.opendev.org/c/openstack/devstack-gate/+/843689 Cinder: - https://review.opendev.org/c/openstack/cinder/+/843305 - https://review.opendev.org/c/openstack/cinder/+/843092 (it is +A and waiting for depends-on devstack patch to merge) - https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/843319 (merged) Cyborg: no action is needed - https://review.opendev.org/c/openstack/cyborg-tempest-plugin/+/843329 (it is +A and waiting for depends-on devstack patch to merge) Neutron: need review - https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/838053 Designate: need review - https://review.opendev.org/c/openstack/designate/+/843709 - https://review.opendev.org/c/openstack/designate/+/843710 -gmann From nsitlani03 at gmail.com Mon May 30 05:56:51 2022 From: nsitlani03 at gmail.com (Namrata Sitlani) Date: Mon, 30 May 2022 11:26:51 +0530 Subject: [magnum] [xena] [IPv6] What is needed to make dual-stack IPv4/6 work in Magnum-managed Kubernetes 1.21+? In-Reply-To: <218cc6fc19116ebafd69c824217cc4e175e4ef23.camel@etc.gen.nz> References: <218cc6fc19116ebafd69c824217cc4e175e4ef23.camel@etc.gen.nz> Message-ID: Hi Andrew, Thanks for the response. The patch you mentioned will make dual-stack IPv4/6 work in Magnum-managed. I will look into this further. Thanks, Namrata On Fri, May 20, 2022 at 7:01 PM Andrew Ruthven wrote: > Hi Nimrata, > > On Fri, 2022-05-20 at 17:58 +0530, Namrata Sitlani wrote: > > Can someone please help us with the information, what changes are required > to be made at the magnum client-side to have an IPv6 supported Kubernetes > Cluster? > > > Purely because I happened to be looking at the patches currently pending > for Magnum just now, I spotted this one which will be of interest to you: > > https://review.opendev.org/c/openstack/magnum/+/802235 > > It is to add IPv6 support in Magnum. So it appears it isn't currently > supported. > > Cheers, > Andrew > > -- > > Andrew Ruthven, Wellington, New Zealandandrew at etc.gen.nz | > Catalyst Cloud: | This space intentionally left blank > https://catalystcloud.nz | > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From massimo.sgaravatto at gmail.com Mon May 30 07:31:41 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Mon, 30 May 2022 09:31:41 +0200 Subject: [ops][nova] Problems with AggregateMultiTenancyIsolation/AggregateInstanceExtraSpecsFilter after Train --> Yoga update In-Reply-To: References: Message-ID: It looks like I need now to create a HostAggregate for "size=big" for each project, specifying as properties: filter_tenant_id=, size='big' Is this the expected behaviour ? Till Train a single HostAggregate with the property: size='big' was enough Thanks, Massimo On Fri, May 27, 2022 at 10:08 AM Massimo Sgaravatto < massimo.sgaravatto at gmail.com> wrote: > Dear all > > We have the following use case: > > - reserve 3 hypervisors for VMs with "big" flavors (whatever the users of > these instances are) > - partition the rest of the hypervisors according to the project (so > projects A1,A2..,An can use only subset S1 of hypervisors, > project B1,B2,..,Bm can use only subset S2 of hypervisors) > > We implemented this: > > 1- by setting an aggregate_instance_extra_spec 'size' property (with > value 'normal' or 'big') for each flavor [*] > 2- by creating a BigVMs HostAggregate for size=big [**] > 3- by creating an HostAggregate for size=normal for each project, such as > this one [***] > > > This used to work. > > A few days ago we updated our infrastructure from Train to Yoga > This was an offline Fast Forward Update: we went through the intermediate > releases just to do the dbsyncs. > Since this update the instantiation of VMs with flavors with the size=big > property doesn't work anymore > > > This is what I see in nova-scheduler log: > 2022-05-27 08:38:02.058 5273 INFO nova.filters > [req-f92c0e38-262a-4d22-a7fd-8874c4265401 e237e43716fb490db5bda4b777835669 > 32b5d42c02b0411b8ebf2c33079eeecf - default default] Filtering removed all > hosts for the request with instance ID > '2412e188-9d5f-4812-ad21-195769a3c220'. Filter results: > ['AggregateMultiTenancyIsolation: (start: 59, end: 10)', > 'AggregateInstanceExtraSpecsFilter: (start: 10, end: 0)'] > > > Only modifying the property of the BigVMs HA using the filter_tenant_id > adding the relevant project: > > [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs | grep prop > | properties | filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', > size='big' | > > the scheduling works > > Specifying each project and keeping the list up-to-date would be a > problem. Moreover if I am not wrong there is a maximum length for the > property field. > > Any hints ? > I didn't find anything related to this issue in the nova release notes for > Openstack releases > Train > > > These are the filters that we enabled: > > [filter_scheduler] > enabled_filters = > AggregateMultiTenancyIsolation,AggregateInstanceExtraSpecsFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGro\ > upAffinityFilter,PciPassthroughFilter,NUMATopologyFilter > > > Thanks a lot, Massimo > > > [*] > E.g. 
> [root at cld-ctrl-01 ~]# openstack flavor show cldareapd.medium | grep prope > | properties | > aggregate_instance_extra_specs:size='normal' |[root at cld-ctrl-01 ~]# > openstack flavor show cloudvenetocloudveneto.40cores128GB25-bigunipd | grep > prop > | properties | aggregate_instance_extra_specs:size='big' > | > > > [**] > > [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs > > +-------------------+---------------------------------------------------------------------------------------+ > | Field | Value > | > > +-------------------+---------------------------------------------------------------------------------------+ > | availability_zone | nova > | > | created_at | 2018-06-20T06:54:51.000000 > | > | deleted_at | None > | > | hosts | cld-blu-08.cloud.pd.infn.it, > cld-blu-09.cloud.pd.infn.it, cld-blu-10.cloud.pd.infn.it | > | id | 135 > | > | is_deleted | False > | > | name | BigVMs > | > | properties |size='big' | > | updated_at | None > | > | uuid | 4b593395-1c76-441c-9022-d421f4ea2dfb > | > > +-------------------+---------------------------------------------------------------------------------------+ > > > [***] > [root at cld-ctrl-01 ~]# openstack aggregate show Unipd-AdminTesting-Unipd > > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > > > | > > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | availability_zone | nova > > > > | > | created_at | 2018-04-12T07:31:01.000000 > > > > | > | deleted_at | None > > > > | > | hosts | cld-blu-01.cloud.pd.infn.it, > cld-blu-02.cloud.pd.infn.it, cld-blu-05.cloud.pd.infn.it, > cld-blu-06.cloud.pd.infn.it, cld-blu-07.cloud.pd.infn.it, > cld-blu-11.cloud.pd.infn.it, cld-blu-12.cloud.pd.infn.it, > cld-blu-13.cloud.pd.infn.it, cld-blu-14.cloud.pd.infn.it, > cld-blu-15.cloud.pd.infn.it, cld-blu-16.cloud.pd.infn.it | > | id | 126 > > > > | > | is_deleted | False > > > > | > | name | Unipd-AdminTesting-Unipd > > > > | > | properties | filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', > size='normal' > > > | > | updated_at | 2018-06-08T09:06:20.000000 > > > > | > | uuid | 38f6a0d4-77ab-42e0-abeb-57e06ba13cca > > > > | > > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > [root at cld-ctrl-01 ~]# > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcafarel at redhat.com Mon May 30 07:45:40 2022 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 30 May 2022 09:45:40 +0200 Subject: [neutron] Bug deputy report (week starting on 2022-05-23) Message-ID: Hey neutrinos, I was the bug deputy last week, here are the reported bugs. 
All high/critical bugs have patches merged (and backports in progress for some), most of the other bugs have patches up for review or merged Critical * Compute tests are failing with failed to reach ACTIVE status and task state "None" within the required time. - https://bugs.launchpad.net/neutron/+bug/1964940 Revert pushed by ykarel https://review.opendev.org/c/openstack/neutron/+/843426 merged High * Open vSwitch agent - does not report to the segment plugin - https://bugs.launchpad.net/neutron/+bug/1975542 DHCP agent issue with routed provider networks, once an agent is running, new segment mappings are not added for new networks Fix by ralonsoh https://review.opendev.org/c/openstack/neutron/+/843294 merged * [sqlalchemy-20] Missing DB context in L2pop methods - https://bugs.launchpad.net/neutron/+bug/1975797 Fix by ralonsoh https://review.opendev.org/c/openstack/neutron/+/843413 merged * ``ovn_revision_bumbers_db._ensure_revision_row_exist`` is mixing DB contexts - https://bugs.launchpad.net/neutron/+bug/1975837 Also fix by ralonsoh https://review.opendev.org/c/openstack/neutron/+/843478 also merged Medium * Neutron RBAC not sharing subnet - https://bugs.launchpad.net/neutron/+bug/1975603 Reported around last week's meeting time, lajos looking into it? * Neutron agent blocks during VM deletion when a remote security group is involved - https://bugs.launchpad.net/neutron/+bug/1975674 Happening with OVS firewall and large number of VMS, suggested fix for deferred https://review.opendev.org/c/openstack/neutron/+/843253 * OVN migration failed due to unhandled error in neutron_ovn_db_sync_util - https://bugs.launchpad.net/neutron/+bug/1975692 Patch by ralonsoh https://review.opendev.org/c/openstack/neutron/+/843263 merged * difference in execution time between admin/non-admin call - https://bugs.launchpad.net/neutron/+bug/1975828 Follow-up to https://bugs.launchpad.net/neutron/+bug/1973349 (that added an index to ports.network_id) Unassigned * ML2 OVN - Creating an instance with hardware offloaded port is broken - https://bugs.launchpad.net/neutron/+bug/1975743 Suggested fix by fnordahl - https://review.opendev.org/c/openstack/neutron/+/843601 Have a nice week! -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon May 30 07:50:54 2022 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 30 May 2022 09:50:54 +0200 Subject: [ops][nova] Problems with AggregateMultiTenancyIsolation/AggregateInstanceExtraSpecsFilter after Train --> Yoga update In-Reply-To: References: Message-ID: Hi there, Are you sure you haven't done any other changes to your environment? Both those filters haven't been changed for years (~2017): https://github.com/openstack/nova/commits/master/nova/scheduler/filters/aggregate_multitenancy_isolation.py https://github.com/openstack/nova/commits/master/nova/scheduler/filters/aggregate_instance_extra_specs.py I think you've got something else here, I'd suggest enabling debug and checking why these nodes are being filtered out. Mohammed On Mon, May 30, 2022 at 9:39 AM Massimo Sgaravatto wrote: > > It looks like I need now to create a HostAggregate for "size=big" for each project, specifying as properties: > filter_tenant_id=, size='big' > > Is this the expected behaviour ? 
> > Till Train a single HostAggregate with the property: size='big' was enough > > Thanks, Massimo > > > > On Fri, May 27, 2022 at 10:08 AM Massimo Sgaravatto wrote: >> >> Dear all >> >> We have the following use case: >> >> - reserve 3 hypervisors for VMs with "big" flavors (whatever the users of these instances are) >> - partition the rest of the hypervisors according to the project (so projects A1,A2..,An can use only subset S1 of hypervisors, project B1,B2,..,Bm can use only subset S2 of hypervisors) >> >> We implemented this: >> >> 1- by setting an aggregate_instance_extra_spec 'size' property (with value 'normal' or 'big') for each flavor [*] >> 2- by creating a BigVMs HostAggregate for size=big [**] >> 3- by creating an HostAggregate for size=normal for each project, such as this one [***] >> >> >> This used to work. >> >> A few days ago we updated our infrastructure from Train to Yoga >> This was an offline Fast Forward Update: we went through the intermediate releases just to do the dbsyncs. >> Since this update the instantiation of VMs with flavors with the size=big property doesn't work anymore >> >> >> This is what I see in nova-scheduler log: >> 2022-05-27 08:38:02.058 5273 INFO nova.filters [req-f92c0e38-262a-4d22-a7fd-8874c4265401 e237e43716fb490db5bda4b777835669 32b5d42c02b0411b8ebf2c33079eeecf - default default] Filtering removed all hosts for the request with instance ID '2412e188-9d5f-4812-ad21-195769a3c220'. Filter results: ['AggregateMultiTenancyIsolation: (start: 59, end: 10)', 'AggregateInstanceExtraSpecsFilter: (start: 10, end: 0)'] >> >> >> Only modifying the property of the BigVMs HA using the filter_tenant_id adding the relevant project: >> >> [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs | grep prop >> | properties | filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', size='big' | >> >> the scheduling works >> >> Specifying each project and keeping the list up-to-date would be a problem. Moreover if I am not wrong there is a maximum length for the property field. >> >> Any hints ? >> I didn't find anything related to this issue in the nova release notes for Openstack releases > Train >> >> >> These are the filters that we enabled: >> >> [filter_scheduler] >> enabled_filters = AggregateMultiTenancyIsolation,AggregateInstanceExtraSpecsFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGro\ >> upAffinityFilter,PciPassthroughFilter,NUMATopologyFilter >> >> >> Thanks a lot, Massimo >> >> >> [*] >> E.g. 
>> [root at cld-ctrl-01 ~]# openstack flavor show cldareapd.medium | grep prope >> | properties | aggregate_instance_extra_specs:size='normal' |[root at cld-ctrl-01 ~]# openstack flavor show cloudvenetocloudveneto.40cores128GB25-bigunipd | grep prop >> | properties | aggregate_instance_extra_specs:size='big' | >> >> >> [**] >> >> [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs >> +-------------------+---------------------------------------------------------------------------------------+ >> | Field | Value | >> +-------------------+---------------------------------------------------------------------------------------+ >> | availability_zone | nova | >> | created_at | 2018-06-20T06:54:51.000000 | >> | deleted_at | None | >> | hosts | cld-blu-08.cloud.pd.infn.it, cld-blu-09.cloud.pd.infn.it, cld-blu-10.cloud.pd.infn.it | >> | id | 135 | >> | is_deleted | False | >> | name | BigVMs | >> | properties |size='big' | >> | updated_at | None | >> | uuid | 4b593395-1c76-441c-9022-d421f4ea2dfb | >> +-------------------+---------------------------------------------------------------------------------------+ >> >> >> [***] >> [root at cld-ctrl-01 ~]# openstack aggregate show Unipd-AdminTesting-Unipd >> +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value | >> +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | availability_zone | nova | >> | created_at | 2018-04-12T07:31:01.000000 | >> | deleted_at | None | >> | hosts | cld-blu-01.cloud.pd.infn.it, cld-blu-02.cloud.pd.infn.it, cld-blu-05.cloud.pd.infn.it, cld-blu-06.cloud.pd.infn.it, cld-blu-07.cloud.pd.infn.it, cld-blu-11.cloud.pd.infn.it, cld-blu-12.cloud.pd.infn.it, cld-blu-13.cloud.pd.infn.it, cld-blu-14.cloud.pd.infn.it, cld-blu-15.cloud.pd.infn.it, cld-blu-16.cloud.pd.infn.it | >> | id | 126 | >> | is_deleted | False | >> | name | Unipd-AdminTesting-Unipd | >> | properties | filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', size='normal' | >> | updated_at | 2018-06-08T09:06:20.000000 | >> | uuid | 38f6a0d4-77ab-42e0-abeb-57e06ba13cca | >> +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> [root at cld-ctrl-01 ~]# >> -- Mohammed Naser VEXXHOST, Inc. From massimo.sgaravatto at gmail.com Mon May 30 08:44:07 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Mon, 30 May 2022 10:44:07 +0200 Subject: [ops][nova] Problems with AggregateMultiTenancyIsolation/AggregateInstanceExtraSpecsFilter after Train --> Yoga update In-Reply-To: References: Message-ID: Thanks Mohammed for your feedback This [*] is what I see in the log file. The 3 relevant nodes are filtered out by the AggregateMultiTenancyIsolation (i.e. 
they are not among the 10 returned hosts) In the Train --> Yoga updated we also removed the AvailabilityZoneFilter (which has been deprecated): I tried to re-add it but this didn't help I can't remember other changes that could be relevant with this issue Cheers, Massimo [*] 2022-05-30 10:08:32.620 3168092 DEBUG nova.filters [req-f698ce3b-18c9-4f3f-9b73-49496e19c237 e237e43716fb490db5bda4b777835669 32b5d42c02b0411b8ebf2c3\ 3079eeecf - default default] Filtering removed all hosts for the request with instance ID 'fd6d978a-3739-45c6-b1cc-28fc9e57d381'. Filter results: [('\ AggregateMultiTenancyIsolation', [('cld-blu-12.cloud.pd.infn.it', ' cld-blu-12.cloud.pd.infn.it'), ('cld-blu-02.cloud.pd.infn.it', 'cld-blu-02.cloud.p\ d.infn.it'), ('cld-blu-15.cloud.pd.infn.it', 'cld-blu-15.cloud.pd.infn.it'), ('cld-blu-11.cloud.pd.infn.it', 'cld-blu-11.cloud.pd.infn.it'), ('cld-bl\ u-14.cloud.pd.infn.it', 'cld-blu-14.cloud.pd.infn.it'), (' cld-blu-13.cloud.pd.infn.it', 'cld-blu-13.cloud.pd.infn.it'), (' cld-blu-07.cloud.pd.infn.it\ ', 'cld-blu-07.cloud.pd.infn.it'), ('cld-blu-06.cloud.pd.infn.it', ' cld-blu-06.cloud.pd.infn.it'), ('cld-blu-01.cloud.pd.infn.it', 'cld-blu-01.cloud.\ pd.infn.it'), ('cld-blu-16.cloud.pd.infn.it', 'cld-blu-16.cloud.pd.infn.it')]), ('AggregateInstanceExtraSpecsFilter', None)] get_filtered_objects /us\ r/lib/python3.6/site-packages/nova/filters.py:114 2022-05-30 10:08:32.620 3168092 INFO nova.filters [req-f698ce3b-18c9-4f3f-9b73-49496e19c237 e237e43716fb490db5bda4b777835669 32b5d42c02b0411b8ebf2c33\ 079eeecf - default default] Filtering removed all hosts for the request with instance ID 'fd6d978a-3739-45c6-b1cc-28fc9e57d381'. Filter results: ['Ag\ gregateMultiTenancyIsolation: (start: 58, end: 10)', 'AggregateInstanceExtraSpecsFilter: (start: 10, end: 0)'] On Mon, May 30, 2022 at 9:51 AM Mohammed Naser wrote: > Hi there, > > Are you sure you haven't done any other changes to your environment? > Both those filters haven't been changed for years (~2017): > > > https://github.com/openstack/nova/commits/master/nova/scheduler/filters/aggregate_multitenancy_isolation.py > > https://github.com/openstack/nova/commits/master/nova/scheduler/filters/aggregate_instance_extra_specs.py > > I think you've got something else here, I'd suggest enabling debug and > checking why these nodes are being filtered out. > > Mohammed > > On Mon, May 30, 2022 at 9:39 AM Massimo Sgaravatto > wrote: > > > > It looks like I need now to create a HostAggregate for "size=big" for > each project, specifying as properties: > > filter_tenant_id=, size='big' > > > > Is this the expected behaviour ? > > > > Till Train a single HostAggregate with the property: size='big' was > enough > > > > Thanks, Massimo > > > > > > > > On Fri, May 27, 2022 at 10:08 AM Massimo Sgaravatto < > massimo.sgaravatto at gmail.com> wrote: > >> > >> Dear all > >> > >> We have the following use case: > >> > >> - reserve 3 hypervisors for VMs with "big" flavors (whatever the users > of these instances are) > >> - partition the rest of the hypervisors according to the project (so > projects A1,A2..,An can use only subset S1 of hypervisors, project > B1,B2,..,Bm can use only subset S2 of hypervisors) > >> > >> We implemented this: > >> > >> 1- by setting an aggregate_instance_extra_spec 'size' property (with > value 'normal' or 'big') for each flavor [*] > >> 2- by creating a BigVMs HostAggregate for size=big [**] > >> 3- by creating an HostAggregate for size=normal for each project, such > as this one [***] > >> > >> > >> This used to work. 
> >> > >> A few days ago we updated our infrastructure from Train to Yoga > >> This was an offline Fast Forward Update: we went through the > intermediate releases just to do the dbsyncs. > >> Since this update the instantiation of VMs with flavors with the > size=big property doesn't work anymore > >> > >> > >> This is what I see in nova-scheduler log: > >> 2022-05-27 08:38:02.058 5273 INFO nova.filters > [req-f92c0e38-262a-4d22-a7fd-8874c4265401 e237e43716fb490db5bda4b777835669 > 32b5d42c02b0411b8ebf2c33079eeecf - default default] Filtering removed all > hosts for the request with instance ID > '2412e188-9d5f-4812-ad21-195769a3c220'. Filter results: > ['AggregateMultiTenancyIsolation: (start: 59, end: 10)', > 'AggregateInstanceExtraSpecsFilter: (start: 10, end: 0)'] > >> > >> > >> Only modifying the property of the BigVMs HA using the > filter_tenant_id adding the relevant project: > >> > >> [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs | grep prop > >> | properties | > filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', size='big' > | > >> > >> the scheduling works > >> > >> Specifying each project and keeping the list up-to-date would be a > problem. Moreover if I am not wrong there is a maximum length for the > property field. > >> > >> Any hints ? > >> I didn't find anything related to this issue in the nova release notes > for Openstack releases > Train > >> > >> > >> These are the filters that we enabled: > >> > >> [filter_scheduler] > >> enabled_filters = > AggregateMultiTenancyIsolation,AggregateInstanceExtraSpecsFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGro\ > >> upAffinityFilter,PciPassthroughFilter,NUMATopologyFilter > >> > >> > >> Thanks a lot, Massimo > >> > >> > >> [*] > >> E.g. 
> >> [root at cld-ctrl-01 ~]# openstack flavor show cldareapd.medium | grep > prope > >> | properties | > aggregate_instance_extra_specs:size='normal' |[root at cld-ctrl-01 ~]# > openstack flavor show cloudvenetocloudveneto.40cores128GB25-bigunipd | grep > prop > >> | properties | > aggregate_instance_extra_specs:size='big' | > >> > >> > >> [**] > >> > >> [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs > >> > +-------------------+---------------------------------------------------------------------------------------+ > >> | Field | Value > | > >> > +-------------------+---------------------------------------------------------------------------------------+ > >> | availability_zone | nova > | > >> | created_at | 2018-06-20T06:54:51.000000 > | > >> | deleted_at | None > | > >> | hosts | cld-blu-08.cloud.pd.infn.it, > cld-blu-09.cloud.pd.infn.it, cld-blu-10.cloud.pd.infn.it | > >> | id | 135 > | > >> | is_deleted | False > | > >> | name | BigVMs > | > >> | properties |size='big' | > >> | updated_at | None > | > >> | uuid | 4b593395-1c76-441c-9022-d421f4ea2dfb > | > >> > +-------------------+---------------------------------------------------------------------------------------+ > >> > >> > >> [***] > >> [root at cld-ctrl-01 ~]# openstack aggregate show Unipd-AdminTesting-Unipd > >> > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >> | Field | Value > > > > | > >> > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >> | availability_zone | nova > > > > | > >> | created_at | 2018-04-12T07:31:01.000000 > > > > | > >> | deleted_at | None > > > > | > >> | hosts | cld-blu-01.cloud.pd.infn.it, > cld-blu-02.cloud.pd.infn.it, cld-blu-05.cloud.pd.infn.it, > cld-blu-06.cloud.pd.infn.it, cld-blu-07.cloud.pd.infn.it, > cld-blu-11.cloud.pd.infn.it, cld-blu-12.cloud.pd.infn.it, > cld-blu-13.cloud.pd.infn.it, cld-blu-14.cloud.pd.infn.it, > cld-blu-15.cloud.pd.infn.it, cld-blu-16.cloud.pd.infn.it | > >> | id | 126 > > > > | > >> | is_deleted | False > > > > | > >> | name | Unipd-AdminTesting-Unipd > > > > | > >> | properties | > filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', size='normal' > > > > | > >> | updated_at | 2018-06-08T09:06:20.000000 > > > > | > >> | uuid | 38f6a0d4-77ab-42e0-abeb-57e06ba13cca > > > > | > >> > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >> [root at cld-ctrl-01 ~]# > >> > > > -- > Mohammed Naser > VEXXHOST, Inc. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Mon May 30 10:57:37 2022 From: smooney at redhat.com (Sean Mooney) Date: Mon, 30 May 2022 11:57:37 +0100 Subject: [all][tc] Lets talk about Flake8 E501 In-Reply-To: <1810b691354.b34ae5fb169369.5612306160139015283@ghanshyammann.com> References: <1810b691354.b34ae5fb169369.5612306160139015283@ghanshyammann.com> Message-ID: <82dc2a68b1e49f78525b31eeabd988a050d282aa.camel@redhat.com> On Sat, 2022-05-28 at 11:04 -0500, Ghanshyam Mann wrote: > ---- On Fri, 27 May 2022 21:24:32 -0500 Miro Tomaska wrote ---- > > Hello All, > > This is probably going to be a hot topic but I was wondering if the community ever considered raising the default 79 characters line limit. I have seen some places where even a very innocent line of code needs to be split into two lines. I have also seen some code where I feel like variable names were abbreviated on purpose to squeeze everything into one line. > > How does the community feel about raising the E501 limit to 119 characters? The 119 character limit is the second most popular limit besides the default one. It's long enough to give a developer enough room for descriptive variables without being forced to break lines too much. And it is short enough for a diff between two files to look OK. > > The only downside I can see right now is that it's not easy to convert an existing code. So we will end up with files where the new code is 79+ characters and the "old" code is <=79. I can also see an argument where someone might have trouble reviewing a patch on a laptop screen (assuming standard 14" screen) ? > > This is a good point and having such in-consistency will be more problems than having it more than > 79 char for the reason you mentioned above. And I do not think we will be able to convert all the > existing code to the new limit. we would not convert it that woudl break git blame. but i dont see the consonsitnce being an issue we would just update teh older code whenever it was next touched if needed. its not as if all lines are 79 chars today the code with varies anyway > > I feel more comfortable with 79 char when I review on a small or large screen. If we end up scrolling horizontally > then it is definitely not good. Log files are good examples of it. even on my phone which i alsmost never use use 80 dose not really help much as i would generally put my phone in landscape in that case and i would have more then enough space for 2 120 char colums side by side. when i used ot code on my 13 inch dell xps i also used ot have multiple terminal/sbuffer side by side. granted it was a 4k dispaly but even on my work laptop which is only 1080p or my ipad generally have room for 2 150 ish cahrater terminals > > For long line/var name, we do split line due to the 79 char limit but I do not think that is more cases in our code, > maybe ~1% of our existing code? it deffinetly more common then that. i know i at least often end up rewriging my code locally befor ei push it to fit in the currently limit on most patches i write. i stongly belive our current file limits are reduceing readblity not helping it. i however dont want to break git blames usfully ness so any cahnge we woudl make in the fucutre would have to not touch the exsitng code and be gradual. like for example swaping to black i would be strongly against since it fundemtaly chagnes how code is formated and breaks git blame but i was infavor of adding autopep8 which had minimal change to nova when we added it since it only fixes pep8 issue and does not reformat all the code. 
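for what it is worth, the knob being debated is a one-line setting; purely as a sketch (119 is the value proposed at the top of the thread, and whether a project keeps this in tox.ini, setup.cfg or a .flake8 file is a per-project choice):

  [flake8]
  max-line-length = 119
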
if we were to adopt a new longer line limit i would hope we ensure that we only use it for new code an let the code organcialy convert. im not actully saying we shoudl change our line limits on a per project or comunity wide basis just that outside fo readbilyt there is a maintainablity aspect to consider. with that said i really prefer when tools enforce style not humans so any change would have to be enforcable by tooling/ci too or its a non starter in my book. flake8 does htave teh ablity to set teh line lenght https://flake8.pycqa.org/en/latest/user/options.html#cmdoption-flake8-max-line-length and tools like autopep8 can also consume that but not all ides or editors will. > > I think keeping consistency in code is important which is what flake8/pep8 checks are all about. it is but i dont think applies ot lines that are shorter the the new line lenght. i.e. removeign lines that were split and replacing them with unsplit lines that now fit. that is something that should only be done if you are modifying that line for a different reason. > > -gmann > > > > > Here is an example of one extreme, a diff of two files maxing out at 119 characters > > https://review.opendev.org/c/opendev/sandbox/+/843697/1..2 > > Thank you for your time and I am looking forward to this conversation :) > > -- > > Miro Tomaskairc: mtomaskaRed Hat > From tim+openstack.org at coote.org Mon May 30 11:06:48 2022 From: tim+openstack.org at coote.org (tim+openstack.org at coote.org) Date: Mon, 30 May 2022 12:06:48 +0100 Subject: Novice question In-Reply-To: References: Message-ID: <83EE0BBE-7FE2-4815-912F-A6383A0FFFA8@coote.org> Thanks, Michael. Very reassuring. I?ll have a look and comment back. > On 27 May 2022, at 20:11, Michael Johnson wrote: > > Hi Tim, > > This should work fine. You will want a localrc/local.conf file to > configure devstack. I didn't see that mentioned in your steps. > See this section in the docs: > https://docs.openstack.org/devstack/latest/#create-a-local-conf > > The only caveat I would mention is the VM instances in Nova will run > super slow on virtualbox as it lacks the required "nested > virtualization" support and will run them all in software emulation. > > To find the root cause of the issue in nova, I would look through the > devstack at n-cpu log file (journal -u devstack at n-cpui) and the > devstack at n-sch logs. > > Also, you might have a look at one of the nova test localrc file as an example: > https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_da7/843127/6/check/tempest-integrated-compute-centos-9-stream/da7bebc/controller/logs/local_conf.txt > > Michael > > > On Fri, May 27, 2022 at 4:16 AM wrote: >> >> Hullo >> >> Na?ve question follows. Sorry. >> >> I?m trying a minimal OS install on a Virtualbox VM on a mac. I?d like to get to the point where I can launch a compute node. I?ve failed with `packstack` and I?m trying `devstack`. >> >> Should this work out of the box: ie >> >> Spin up a vm with vagrant: I?m using Centos Stream 9 to ensure that I get a current(ish) version of Python. It has 9GB RAM >> Ensure that SELinux and firewalls are turned off >> Clone devstack, cd to the directory and run `stack.sh` as user `vagrant` (this fails 1/3 of the time as some repo or other resets a connection. 
`stack.sh` doesn?t seem to be idempotent as reinvoking it may or may not install and run the OS environment >> Upload a ssh keypair through the web interface >> Use the web interface to launch the m1.nano flavor with Cirros image (I think that this flavor is quite new as some of the documentation refers to creating such a flavor with 64MB, whereas this one has 128MB. I did try the 64MB route [with `packstack`] and concluded that at least 96MB was needed and the documentation was wrong. I couldn?t log into launchpad.net to report this ? >> At this point the launch process fails with the error message: ?Build of instance 157bfa1d-7f8c-4a6c-ba3a-b02fb4f4b6a9 aborted: Failed to allocate the network(s), not rescheduling.? In the web ui >> >> >> Afaict, the vm has enough memory (just: it?s using a bit of swap, but more cache, so it could reclaim that). I?d expected the instance to launch, and I can well believe that I?ve missed something, but the documentation seems to point all over the place for various logs. >> >> Should this approach work? Is there an alternative that?s better (e.g. use Ubuntu: I?m not keen on apt/dpkg/.deb based distros as I?ve been tripped up in the past over the dependency handling and systemd integration, so I?ve avoided this, but I can see that Canonical is spending money on OS. But so is IBM/Redhat)? Where can I find info on how to trouble shoot the failing process? >> >> tia >> Tim From stephenfin at redhat.com Mon May 30 11:23:17 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Mon, 30 May 2022 12:23:17 +0100 Subject: [all][tc] Lets talk about Flake8 E501 In-Reply-To: <82dc2a68b1e49f78525b31eeabd988a050d282aa.camel@redhat.com> References: <1810b691354.b34ae5fb169369.5612306160139015283@ghanshyammann.com> <82dc2a68b1e49f78525b31eeabd988a050d282aa.camel@redhat.com> Message-ID: <25bc62986cc9590322d879a749a57bd62fd1cee2.camel@redhat.com> On Mon, 2022-05-30 at 11:57 +0100, Sean Mooney wrote: > On Sat, 2022-05-28 at 11:04 -0500, Ghanshyam Mann wrote: > > ---- On Fri, 27 May 2022 21:24:32 -0500 Miro Tomaska wrote ---- > > > Hello All, > > > This is probably going to be a hot topic but I was wondering if the community ever considered raising the default 79 characters line limit. I have seen some places where even a very innocent line of code needs to be split into two lines. I have also seen some code where I feel like variable names were abbreviated on purpose to squeeze everything into one line. > > > How does the community feel about raising the E501 limit to 119 characters? The 119 character limit is the second most popular limit besides the default one. It's long enough to give a developer enough room for descriptive variables without being forced to break lines too much. And it is short enough for a diff between two files to look OK. > > > The only downside I can see right now is that it's not easy to convert an existing code. So we will end up with files where the new code is 79+ characters and the "old" code is <=79. I can also see an argument where someone might have trouble reviewing a patch on a laptop screen (assuming standard 14" screen) ? > > > > This is a good point and having such in-consistency will be more problems than having it more than > > 79 char for the reason you mentioned above. And I do not think we will be able to convert all the > > existing code to the new limit. > > we would not convert it that woudl break git blame. This isn't true. 
You can configure a '.git-blame-ignore-revs' file that will cause git-blame to ignore particular revision(s). The Django community recently used this feature when they ran black over the Django codebase [2] and I added one such file to sqlalchemy a few weeks back for the same reason [3] (they did their black'ifying a few years ago). GitHub will now parse these files also [4] though I don't know if gitea has this functionality yet. In any case, my point is there are certainly valid arguments against larger wrapping width but breaking git-blame isn't really one of them anymore. Stephen [1] https://stackoverflow.com/questions/34957237/ [2] https://news.ycombinator.com/item?id=30258530 [3] https://github.com/sqlalchemy/sqlalchemy/commit/27828d668dd [4] https://github.blog/changelog/2022-03-24-ignore-commits-in-the-blame-view-beta/ > but i dont see the consonsitnce being an issue we would just update teh older code whenever it was next touched if needed. > its not as if all lines are 79 chars today the code with varies anyway > > > > > I feel more comfortable with 79 char when I review on a small or large screen. If we end up scrolling horizontally > > then it is definitely not good. Log files are good examples of it. > even on my phone which i alsmost never use use 80 dose not really help much as i would generally put my phone in landscape in that case > and i would have more then enough space for 2 120 char colums side by side. > when i used ot code on my 13 inch dell xps i also used ot have multiple terminal/sbuffer side by side. > granted it was a 4k dispaly but even on my work laptop which is only 1080p or my ipad generally have room for 2 150 ish cahrater terminals > > > > > For long line/var name, we do split line due to the 79 char limit but I do not think that is more cases in our code, > > maybe ~1% of our existing code? > it deffinetly more common then that. > i know i at least often end up rewriging my code locally befor ei push it to fit in the currently limit on most patches > i write. i stongly belive our current file limits are reduceing readblity not helping it. i however dont want to break git > blames usfully ness so any cahnge we woudl make in the fucutre would have to not touch the exsitng code and be gradual. > > like for example swaping to black i would be strongly against since it fundemtaly chagnes how code is formated and breaks git blame > but i was infavor of adding autopep8 which had minimal change to nova when we added it since it only fixes pep8 issue and does not reformat all the > code. > > if we were to adopt a new longer line limit i would hope we ensure that we only use it for new code an let the code organcialy convert. > im not actully saying we shoudl change our line limits on a per project or comunity wide basis just that outside fo readbilyt there > is a maintainablity aspect to consider. > > with that said i really prefer when tools enforce style not humans so any change would have to be enforcable by tooling/ci too or its a non > starter in my book. flake8 does htave teh ablity to set teh line lenght > https://flake8.pycqa.org/en/latest/user/options.html#cmdoption-flake8-max-line-length > and tools like autopep8 can also consume that but not all ides or editors will. > > > > > I think keeping consistency in code is important which is what flake8/pep8 checks are all about. > it is but i dont think applies ot lines that are shorter the the new line lenght. > i.e. removeign lines that were split and replacing them with unsplit lines that now fit. 
> that is something that should only be done if you are modifying that line for a different reason. > > > > -gmann > > > > > > > > Here is an example of one extreme, a diff of two files maxing out at 119 characters > > > > https://review.opendev.org/c/opendev/sandbox/+/843697/1..2 > > > > Thank you for your time and I am looking forward to this conversation :) > > > > -- > > > > Miro Tomaska irc: mtomaska Red Hat > > > > > >
From smooney at redhat.com Mon May 30 11:26:32 2022 From: smooney at redhat.com (Sean Mooney) Date: Mon, 30 May 2022 12:26:32 +0100 Subject: [ops][nova] Problems with AggregateMultiTenancyIsolation/AggregateInstanceExtraSpecsFilter after Train --> Yoga update In-Reply-To: References: Message-ID: <2d0e8fb7b5cf1e5a98908f2543e3c3868c261d7d.camel@redhat.com>
On Mon, 2022-05-30 at 10:44 +0200, Massimo Sgaravatto wrote: > Thanks Mohammed for your feedback > > This [*] is what I see in the log file. The 3 relevant nodes are filtered > out by the AggregateMultiTenancyIsolation (i.e. they are not among the 10 > returned hosts)
We haven't actually changed the code that I am aware of, but one thing to note: it is considered bad practice to use un-namespaced flavor extra specs with the AggregateInstanceExtraSpecsFilter or ComputeCapabilitiesFilter.
Both have legacy support for un-namespaced extra specs, but you cannot enable both filters if you rely on that. If you have both the AggregateInstanceExtraSpecsFilter and ComputeCapabilitiesFilter enabled, you must use namespaced custom extra specs, i.e. the aggregate_instance_extra_specs: and capabilities: prefixes. You have both enabled below, so "size=big" is not valid as a flavor extra spec in this configuration.
By the way, in Yoga the AggregateMultiTenancyIsolation filter is not needed anymore; that can be done with placement https://github.com/openstack/nova/blob/stable/yoga/nova/scheduler/request_filter.py#L94-L135 by defining [scheduler] limit_tenants_to_placement_aggregate=True placement_aggregate_required_for_tenants=True That was also possible in Train.
We literally meant to deprecate and remove the AggregateMultiTenancyIsolation filter a few years ago and never got around to it. It was the first filter implemented as a placement prefilter in Rocky https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-req-filter.html so we probably should formally deprecate it this cycle and drop it in AA.
https://docs.openstack.org/nova/latest/admin/aggregates.html#tenant-isolation-with-placement explains in more relevant detail how to use placement for this. It is basically a drop-in replacement: you don't need to update the aggregate metadata, but there are some subtleties with regard to what happens to hosts that are not mapped to any tenant, which is why we have two config options, so you can decide what you want to happen. I don't think there is any reason for an operator to ever use the AggregateMultiTenancyIsolation filter after Rocky, as the placement version is much more efficient.
You should also probably read https://docs.openstack.org/nova/latest/reference/isolate-aggregates.html While that is mainly intended to replace the AggregateImagePropertiesIsolation filter, it can also replace some of the use cases enabled by AggregateInstanceExtraSpecsFilter. Similarly, due to bug 1677217 https://bugs.launchpad.net/nova/+bug/1677217 you really should not use the AggregateImagePropertiesIsolation filter any more, and any environment that can use isolated aggregates should. This, however, is not a drop-in replacement, as it requires changes to the image properties/flavor extra specs to request the traits.
this was intoduced in Train. > > In the Train --> Yoga updated we also removed the AvailabilityZoneFilter > (which has been deprecated): I tried to re-add it but this didn't help > I can't remember other changes that could be relevant with this issue > > Cheers, Massimo > > > [*] > > 2022-05-30 10:08:32.620 3168092 DEBUG nova.filters > [req-f698ce3b-18c9-4f3f-9b73-49496e19c237 e237e43716fb490db5bda4b777835669 > 32b5d42c02b0411b8ebf2c3\ > 3079eeecf - default default] Filtering removed all hosts for the request > with instance ID 'fd6d978a-3739-45c6-b1cc-28fc9e57d381'. Filter results: > [('\ > AggregateMultiTenancyIsolation', [('cld-blu-12.cloud.pd.infn.it', ' > cld-blu-12.cloud.pd.infn.it'), ('cld-blu-02.cloud.pd.infn.it', > 'cld-blu-02.cloud.p\ > d.infn.it'), ('cld-blu-15.cloud.pd.infn.it', 'cld-blu-15.cloud.pd.infn.it'), > ('cld-blu-11.cloud.pd.infn.it', 'cld-blu-11.cloud.pd.infn.it'), ('cld-bl\ > u-14.cloud.pd.infn.it', 'cld-blu-14.cloud.pd.infn.it'), (' > cld-blu-13.cloud.pd.infn.it', 'cld-blu-13.cloud.pd.infn.it'), (' > cld-blu-07.cloud.pd.infn.it\ > ', 'cld-blu-07.cloud.pd.infn.it'), ('cld-blu-06.cloud.pd.infn.it', ' > cld-blu-06.cloud.pd.infn.it'), ('cld-blu-01.cloud.pd.infn.it', > 'cld-blu-01.cloud.\ > pd.infn.it'), ('cld-blu-16.cloud.pd.infn.it', 'cld-blu-16.cloud.pd.infn.it')]), > ('AggregateInstanceExtraSpecsFilter', None)] get_filtered_objects /us\ > r/lib/python3.6/site-packages/nova/filters.py:114 > 2022-05-30 10:08:32.620 3168092 INFO nova.filters > [req-f698ce3b-18c9-4f3f-9b73-49496e19c237 e237e43716fb490db5bda4b777835669 > 32b5d42c02b0411b8ebf2c33\ > 079eeecf - default default] Filtering removed all hosts for the request > with instance ID 'fd6d978a-3739-45c6-b1cc-28fc9e57d381'. Filter results: > ['Ag\ > gregateMultiTenancyIsolation: (start: 58, end: 10)', > 'AggregateInstanceExtraSpecsFilter: (start: 10, end: 0)'] > > On Mon, May 30, 2022 at 9:51 AM Mohammed Naser wrote: > > > Hi there, > > > > Are you sure you haven't done any other changes to your environment? > > Both those filters haven't been changed for years (~2017): > > > > > > https://github.com/openstack/nova/commits/master/nova/scheduler/filters/aggregate_multitenancy_isolation.py > > > > https://github.com/openstack/nova/commits/master/nova/scheduler/filters/aggregate_instance_extra_specs.py > > > > I think you've got something else here, I'd suggest enabling debug and > > checking why these nodes are being filtered out. > > > > Mohammed > > > > On Mon, May 30, 2022 at 9:39 AM Massimo Sgaravatto > > wrote: > > > > > > It looks like I need now to create a HostAggregate for "size=big" for > > each project, specifying as properties: > > > filter_tenant_id=, size='big' > > > > > > Is this the expected behaviour ? 
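For what it's worth, a rough sketch of what the placement-based approach described above tends to look like in practice; the option names are the ones quoted from nova earlier in this message, and the aggregate name and project ID are just the values already shown in this thread, so treat it as illustrative rather than a tested recipe:

# nova.conf on the scheduler hosts
[scheduler]
limit_tenants_to_placement_aggregate = True
placement_aggregate_required_for_tenants = True

# map a project to the host aggregate it is allowed to land on
openstack aggregate set --property filter_tenant_id=32b5d42c02b0411b8ebf2c33079eeecf Unipd-AdminTesting-Unipd

Roughly speaking, with that in place the scheduler asks placement only for hosts in aggregates whose filter_tenant_id matches the requesting project (the linked admin guide covers how host aggregates are mirrored into placement), so AggregateMultiTenancyIsolation can eventually be dropped from enabled_filters.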
> > > > > > Till Train a single HostAggregate with the property: size='big' was > > enough > > > > > > Thanks, Massimo > > > > > > > > > > > > On Fri, May 27, 2022 at 10:08 AM Massimo Sgaravatto < > > massimo.sgaravatto at gmail.com> wrote: > > > > > > > > Dear all > > > > > > > > We have the following use case: > > > > > > > > - reserve 3 hypervisors for VMs with "big" flavors (whatever the users > > of these instances are) > > > > - partition the rest of the hypervisors according to the project (so > > projects A1,A2..,An can use only subset S1 of hypervisors, project > > B1,B2,..,Bm can use only subset S2 of hypervisors) > > > > > > > > We implemented this: > > > > > > > > 1- by setting an aggregate_instance_extra_spec 'size' property (with > > value 'normal' or 'big') for each flavor [*] > > > > 2- by creating a BigVMs HostAggregate for size=big [**] > > > > 3- by creating an HostAggregate for size=normal for each project, such > > as this one [***] > > > > > > > > > > > > This used to work. > > > > > > > > A few days ago we updated our infrastructure from Train to Yoga > > > > This was an offline Fast Forward Update: we went through the > > intermediate releases just to do the dbsyncs. > > > > Since this update the instantiation of VMs with flavors with the > > size=big property doesn't work anymore > > > > > > > > > > > > This is what I see in nova-scheduler log: > > > > 2022-05-27 08:38:02.058 5273 INFO nova.filters > > [req-f92c0e38-262a-4d22-a7fd-8874c4265401 e237e43716fb490db5bda4b777835669 > > 32b5d42c02b0411b8ebf2c33079eeecf - default default] Filtering removed all > > hosts for the request with instance ID > > '2412e188-9d5f-4812-ad21-195769a3c220'. Filter results: > > ['AggregateMultiTenancyIsolation: (start: 59, end: 10)', > > 'AggregateInstanceExtraSpecsFilter: (start: 10, end: 0)'] > > > > > > > > > > > > Only modifying the property of the BigVMs HA using the > > filter_tenant_id adding the relevant project: > > > > > > > > [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs | grep prop > > > > > properties | > > filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', size='big' > > | > > > > > > > > the scheduling works > > > > > > > > Specifying each project and keeping the list up-to-date would be a > > problem. Moreover if I am not wrong there is a maximum length for the > > property field. > > > > > > > > Any hints ? > > > > I didn't find anything related to this issue in the nova release notes > > for Openstack releases > Train > > > > > > > > > > > > These are the filters that we enabled: > > > > > > > > [filter_scheduler] > > > > enabled_filters = > > AggregateMultiTenancyIsolation,AggregateInstanceExtraSpecsFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGro\ > > > > upAffinityFilter,PciPassthroughFilter,NUMATopologyFilter > > > > > > > > > > > > Thanks a lot, Massimo > > > > > > > > > > > > [*] > > > > E.g. 
> > > > [root at cld-ctrl-01 ~]# openstack flavor show cldareapd.medium | grep > > prope > > > > > properties | > > aggregate_instance_extra_specs:size='normal' |[root at cld-ctrl-01 ~]# > > openstack flavor show cloudvenetocloudveneto.40cores128GB25-bigunipd | grep > > prop > > > > > properties | > > aggregate_instance_extra_specs:size='big' | > > > > > > > > > > > > [**] > > > > > > > > [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs > > > > > > +-------------------+---------------------------------------------------------------------------------------+ > > > > > Field | Value > > | > > > > > > +-------------------+---------------------------------------------------------------------------------------+ > > > > > availability_zone | nova > > | > > > > > created_at | 2018-06-20T06:54:51.000000 > > | > > > > > deleted_at | None > > | > > > > > hosts | cld-blu-08.cloud.pd.infn.it, > > cld-blu-09.cloud.pd.infn.it, cld-blu-10.cloud.pd.infn.it | > > > > > id | 135 > > | > > > > > is_deleted | False > > | > > > > > name | BigVMs > > | > > > > > properties |size='big' | > > > > > updated_at | None > > | > > > > > uuid | 4b593395-1c76-441c-9022-d421f4ea2dfb > > | > > > > > > +-------------------+---------------------------------------------------------------------------------------+ > > > > > > > > > > > > [***] > > > > [root at cld-ctrl-01 ~]# openstack aggregate show Unipd-AdminTesting-Unipd > > > > > > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > > Field | Value > > > > > > > > | > > > > > > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > > availability_zone | nova > > > > > > > > | > > > > > created_at | 2018-04-12T07:31:01.000000 > > > > > > > > | > > > > > deleted_at | None > > > > > > > > | > > > > > hosts | cld-blu-01.cloud.pd.infn.it, > > cld-blu-02.cloud.pd.infn.it, cld-blu-05.cloud.pd.infn.it, > > cld-blu-06.cloud.pd.infn.it, cld-blu-07.cloud.pd.infn.it, > > cld-blu-11.cloud.pd.infn.it, cld-blu-12.cloud.pd.infn.it, > > cld-blu-13.cloud.pd.infn.it, cld-blu-14.cloud.pd.infn.it, > > cld-blu-15.cloud.pd.infn.it, cld-blu-16.cloud.pd.infn.it | > > > > > id | 126 > > > > > > > > | > > > > > is_deleted | False > > > > > > > > | > > > > > name | Unipd-AdminTesting-Unipd > > > > > > > > | > > > > > properties | > > filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', size='normal' > > > > > > > > | > > > > > updated_at | 2018-06-08T09:06:20.000000 > > > > > > > > | > > > > > uuid | 38f6a0d4-77ab-42e0-abeb-57e06ba13cca > > > > > > > > | > > > > > > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > [root at cld-ctrl-01 ~]# > > > > > > > > > > -- > > Mohammed Naser > > VEXXHOST, Inc. 
> > From smooney at redhat.com Mon May 30 11:37:19 2022 From: smooney at redhat.com (Sean Mooney) Date: Mon, 30 May 2022 12:37:19 +0100 Subject: [all][tc] Lets talk about Flake8 E501 In-Reply-To: <25bc62986cc9590322d879a749a57bd62fd1cee2.camel@redhat.com> References: <1810b691354.b34ae5fb169369.5612306160139015283@ghanshyammann.com> <82dc2a68b1e49f78525b31eeabd988a050d282aa.camel@redhat.com> <25bc62986cc9590322d879a749a57bd62fd1cee2.camel@redhat.com> Message-ID: <6140205bdc2c5dc03bcf0d7bed7b1ce9a653479a.camel@redhat.com> On Mon, 2022-05-30 at 12:23 +0100, Stephen Finucane wrote: > On Mon, 2022-05-30 at 11:57 +0100, Sean Mooney wrote: > > On Sat, 2022-05-28 at 11:04 -0500, Ghanshyam Mann wrote: > > > ---- On Fri, 27 May 2022 21:24:32 -0500 Miro Tomaska wrote ---- > > > > Hello All, > > > > This is probably going to be a hot topic but I was wondering if the community ever considered raising the default 79 characters line limit. I have seen some places where even a very innocent line of code needs to be split into two lines. I have also seen some code where I feel like variable names were abbreviated on purpose to squeeze everything into one line. > > > > How does the community feel about raising the E501 limit to 119 characters? The 119 character limit is the second most popular limit besides the default one. It's long enough to give a developer enough room for descriptive variables without being forced to break lines too much. And it is short enough for a diff between two files to look OK. > > > > The only downside I can see right now is that it's not easy to convert an existing code. So we will end up with files where the new code is 79+ characters and the "old" code is <=79. I can also see an argument where someone might have trouble reviewing a patch on a laptop screen (assuming standard 14" screen) ? > > > > > > This is a good point and having such in-consistency will be more problems than having it more than > > > 79 char for the reason you mentioned above. And I do not think we will be able to convert all the > > > existing code to the new limit. > > > > we would not convert it that woudl break git blame. > > This isn't true. You can configure a '.git-blame-ignore-revs' file that will > cause git-blame to ignore particular revision(s). The Django community recently > used this feature when they ran black over the Django codebase [2] and I added > one such file to sqlalchemy a few weeks back for the same reason [3] (they did > their black'ifying a few years ago). GitHub will now parse these files also [4] > though I don't know if gitea has this functionality yet. In any case, my point > is there are certainly valid arguments against larger wrapping width but > breaking git-blame isn't really one of them anymore. is this new as that has been a sticking point to me making nova's automatic code formating stricter/better i have some patches locally to have pre-commit fix more things automtically but i was holding off on them as one of the fixes was normalising your use of single and double quotes. git blame was why it took like 3 years to finally get even basic auto code formating to land. i think there are are more valid argument in favor of longer lines then against but git blame had always been the main sticking point that kills any efforts to move to code formating been doen exclivivly via tooling in the past. backports is the other but i can live with fixing the formatign for older brnahces wehn we backport once when going to yoga if it means we can automate formating on master. 
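To make the mechanics concrete, the workflow Stephen describes is roughly the following; the commit hash and the file path are only placeholders for whatever style-only commits a project actually lands:

# .git-blame-ignore-revs at the repo root, one full commit SHA per line
# hypothetical autopep8-only reformatting commit
0123456789abcdef0123456789abcdef01234567

# ask blame to skip those commits explicitly
git blame --ignore-revs-file .git-blame-ignore-revs nova/compute/manager.py

# or set it once per clone so plain "git blame" always skips them
git config blame.ignoreRevsFile .git-blame-ignore-revs

The reformatting commit still exists in history; it just no longer shows up as the last author of every line it rewrapped.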
this is a tangent to the can we use 120 lines or not question. > Stephen > > [1] https://stackoverflow.com/questions/34957237/ > [2] https://news.ycombinator.com/item?id=30258530 > [3] https://github.com/sqlalchemy/sqlalchemy/commit/27828d668dd > [4] https://github.blog/changelog/2022-03-24-ignore-commits-in-the-blame-view-beta/ > > > but i dont see the consonsitnce being an issue we would just update teh older code whenever it was next touched if needed. > > its not as if all lines are 79 chars today the code with varies anyway > > > > > > > > I feel more comfortable with 79 char when I review on a small or large screen. If we end up scrolling horizontally > > > then it is definitely not good. Log files are good examples of it. > > even on my phone which i alsmost never use use 80 dose not really help much as i would generally put my phone in landscape in that case > > and i would have more then enough space for 2 120 char colums side by side. > > when i used ot code on my 13 inch dell xps i also used ot have multiple terminal/sbuffer side by side. > > granted it was a 4k dispaly but even on my work laptop which is only 1080p or my ipad generally have room for 2 150 ish cahrater terminals > > > > > > > > For long line/var name, we do split line due to the 79 char limit but I do not think that is more cases in our code, > > > maybe ~1% of our existing code? > > it deffinetly more common then that. > > i know i at least often end up rewriging my code locally befor ei push it to fit in the currently limit on most patches > > i write. i stongly belive our current file limits are reduceing readblity not helping it. i however dont want to break git > > blames usfully ness so any cahnge we woudl make in the fucutre would have to not touch the exsitng code and be gradual. > > > > like for example swaping to black i would be strongly against since it fundemtaly chagnes how code is formated and breaks git blame > > but i was infavor of adding autopep8 which had minimal change to nova when we added it since it only fixes pep8 issue and does not reformat all the > > code. > > > > if we were to adopt a new longer line limit i would hope we ensure that we only use it for new code an let the code organcialy convert. > > im not actully saying we shoudl change our line limits on a per project or comunity wide basis just that outside fo readbilyt there > > is a maintainablity aspect to consider. > > > > with that said i really prefer when tools enforce style not humans so any change would have to be enforcable by tooling/ci too or its a non > > starter in my book. flake8 does htave teh ablity to set teh line lenght > > https://flake8.pycqa.org/en/latest/user/options.html#cmdoption-flake8-max-line-length > > and tools like autopep8 can also consume that but not all ides or editors will. > > > > > > > > I think keeping consistency in code is important which is what flake8/pep8 checks are all about. > > it is but i dont think applies ot lines that are shorter the the new line lenght. > > i.e. removeign lines that were split and replacing them with unsplit lines that now fit. > > that is something that should only be done if you are modifying that line for a different reason. 
> > > > > > -gmann > > > > > > > > > > > Here is an example of one extreme, a diff of two files maxing out at 119 characters > > > > https://review.opendev.org/c/opendev/sandbox/+/843697/1..2 > > > > Thank you for your time and I am looking forward to this conversation :) > > > > -- > > > > Miro Tomaskairc: mtomaskaRed Hat > > > > > > > > From smooney at redhat.com Mon May 30 11:53:09 2022 From: smooney at redhat.com (Sean Mooney) Date: Mon, 30 May 2022 12:53:09 +0100 Subject: Novice question In-Reply-To: <83EE0BBE-7FE2-4815-912F-A6383A0FFFA8@coote.org> References: <83EE0BBE-7FE2-4815-912F-A6383A0FFFA8@coote.org> Message-ID: On Mon, 2022-05-30 at 12:06 +0100, tim+openstack.org at coote.org wrote: > Thanks, Michael. Very reassuring. I?ll have a look and comment back. if you have an m1 mac i woudl suggest using utm with the ubuntu 20.04 image https://mac.getutm.app/gallery/ubuntu-20-04 to create a qemu vm whihc will use macos's hypervirio api to hardware acclearte the l1 vm. the l2 vms will still use qemu but by using arm based images for the host os you can get pretty good perforamce in teh vm and spin up arm based cirrous iamge with nova and get ok performance. it will be slower then nested virt but the apple silicon macs dont support that a the hardware level. i have mostly for my own use being developing https://github.com/SeanMooney/ansible_role_devstack im probably goign to rename that to "ard" by the way in the future if that link does nto work later. this repo currently has ansible playbooks that will use the upstream devstack roles we use in ci to deploy multi node devstack it can create vms using molecule/vagrant and then run the devstack install. so on a linux laptop you can do bootstrap-repo.sh molecule create molecule converge and that will create a 2 node openstack based on centos 9 stream with master devstack installed and deployed. currently the molecule role creates two 8gb vms but i have an example of using it to deploy onto externally provisioned host as a pr https://github.com/SeanMooney/ansible_role_devstack/pull/4/files if you continue to have troble deploying devstack by hand perhaps that will be of use to you. it not really ready for primetime/general use but if people find it intersting ye are welcome to fork and use as ye see fit. the molecule template seamed to set the licence to "BSD" by default. i was plaaing ot make it apache but i guess bsd is just as good so i shoudl fix that. anyway i just wanted to point out that UTM is proably beter the virtual box form a performace point of view. > > > On 27 May 2022, at 20:11, Michael Johnson wrote: > > > > Hi Tim, > > > > This should work fine. You will want a localrc/local.conf file to > > configure devstack. I didn't see that mentioned in your steps. > > See this section in the docs: > > https://docs.openstack.org/devstack/latest/#create-a-local-conf > > > > The only caveat I would mention is the VM instances in Nova will run > > super slow on virtualbox as it lacks the required "nested > > virtualization" support and will run them all in software emulation. > > > > To find the root cause of the issue in nova, I would look through the > > devstack at n-cpu log file (journal -u devstack at n-cpui) and the > > devstack at n-sch logs. 
> > > > Also, you might have a look at one of the nova test localrc file as an example: > > https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_da7/843127/6/check/tempest-integrated-compute-centos-9-stream/da7bebc/controller/logs/local_conf.txt > > > > Michael > > > > > > On Fri, May 27, 2022 at 4:16 AM wrote: > > > > > > Hullo > > > > > > Na?ve question follows. Sorry. > > > > > > I?m trying a minimal OS install on a Virtualbox VM on a mac. I?d like to get to the point where I can launch a compute node. I?ve failed with `packstack` and I?m trying `devstack`. > > > > > > Should this work out of the box: ie > > > > > > Spin up a vm with vagrant: I?m using Centos Stream 9 to ensure that I get a current(ish) version of Python. It has 9GB RAM > > > Ensure that SELinux and firewalls are turned off > > > Clone devstack, cd to the directory and run `stack.sh` as user `vagrant` (this fails 1/3 of the time as some repo or other resets a connection. `stack.sh` doesn?t seem to be idempotent as reinvoking it may or may not install and run the OS environment > > > Upload a ssh keypair through the web interface > > > Use the web interface to launch the m1.nano flavor with Cirros image (I think that this flavor is quite new as some of the documentation refers to creating such a flavor with 64MB, whereas this one has 128MB. I did try the 64MB route [with `packstack`] and concluded that at least 96MB was needed and the documentation was wrong. I couldn?t log into launchpad.net to report this ? > > > At this point the launch process fails with the error message: ?Build of instance 157bfa1d-7f8c-4a6c-ba3a-b02fb4f4b6a9 aborted: Failed to allocate the network(s), not rescheduling.? In the web ui > > > > > > > > > Afaict, the vm has enough memory (just: it?s using a bit of swap, but more cache, so it could reclaim that). I?d expected the instance to launch, and I can well believe that I?ve missed something, but the documentation seems to point all over the place for various logs. > > > > > > Should this approach work? Is there an alternative that?s better (e.g. use Ubuntu: I?m not keen on apt/dpkg/.deb based distros as I?ve been tripped up in the past over the dependency handling and systemd integration, so I?ve avoided this, but I can see that Canonical is spending money on OS. But so is IBM/Redhat)? Where can I find info on how to trouble shoot the failing process? > > > > > > tia > > > Tim > > From dpawlik at redhat.com Mon May 30 12:02:15 2022 From: dpawlik at redhat.com (Daniel Pawlik) Date: Mon, 30 May 2022 14:02:15 +0200 Subject: OpenStack logs on Opensearch - storage problem Message-ID: Hello, In the last few days there is increase amount of CI jobs that are running on Zuul CI gates, where later logs are pushed into the Opensearch cluster [1]. The current cluster storage is almost full, which causes one of the actions to be performed: temporarily changing log retention (currently it is set to 14 days) or removing temporary non-essential logs [2]. I recommend removing not important logs, so if projects leaders could check whether entries related to the project are needed, it would be really helpful. Otherwise, we will temporarily change log retention to 12 days. Dan [1] https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global [2] https://opendev.org/openstack/ci-log-processing/src/branch/master/logscraper/config.yaml.sample -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Mon May 30 12:29:12 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 30 May 2022 12:29:12 +0000 Subject: OpenStack logs on Opensearch - storage problem In-Reply-To: References: Message-ID: <20220530122912.dffadzvt3kga72y2@yuggoth.org> On 2022-05-30 14:02:15 +0200 (+0200), Daniel Pawlik wrote: > In the last few days there is increase amount of CI jobs that are > running on Zuul CI gates, where later logs are pushed into the > Opensearch cluster [1]. The current cluster storage is almost > full, which causes one of the actions to be performed: temporarily > changing log retention (currently it is set to 14 days) or > removing temporary non-essential logs [2]. I recommend removing > not important logs, so if projects leaders could check whether > entries related to the project are needed, it would be really > helpful. [...] For the old system, we filtered out all debug level log lines, only importing lines at info level or greater. If you're not doing that yet on the new system, it would probably help free up a lot of space. We tried to maintain around 7-10 days of retention, so lowering the retention from 14 days isn't really a regression over what we used to provide. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From josephine.seifert at secustack.com Mon May 30 12:57:11 2022 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Mon, 30 May 2022 14:57:11 +0200 Subject: [Image Encryption] No meeting today Message-ID: <617e462c-f4c6-8ab1-9602-d73d401c95fd@secustack.com> Hi, I have a time conflict, so there will be no meeting today. greetings Josephine (Luzi) -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 236 bytes Desc: OpenPGP digital signature URL: From marios at redhat.com Mon May 30 13:16:13 2022 From: marios at redhat.com (Marios Andreou) Date: Mon, 30 May 2022 16:16:13 +0300 Subject: [tripleo] stable/wallaby gate blocker for centos-8 standalone - please hold recheck Message-ID: Hi folks fyi there is a wallaby gate blocker at https://bugs.launchpad.net/tripleo/+bug/1976247 Seems like ansible-role-python_venv_build is trying to install master upper constraints and that is causing a conflict for oslo-db===11.3.0 (comfing via stackviz installation) we are trying to find a workaround but please hold recheck until we do grateful for any pointers or suggestions here or on the launchpad o/ thanks, marios From massimo.sgaravatto at gmail.com Mon May 30 13:35:04 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Mon, 30 May 2022 15:35:04 +0200 Subject: [ops][nova] Problems with AggregateMultiTenancyIsolation/AggregateInstanceExtraSpecsFilter after Train --> Yoga update In-Reply-To: <2d0e8fb7b5cf1e5a98908f2543e3c3868c261d7d.camel@redhat.com> References: <2d0e8fb7b5cf1e5a98908f2543e3c3868c261d7d.camel@redhat.com> Message-ID: It was my fault: there was a new Host Aggregate using filter_tenant_id which includes the 3 hosts Apologies for the noise Now I am going to study what Sean wrote :-) Thanks, Massimo On Mon, May 30, 2022 at 1:27 PM Sean Mooney wrote: > On Mon, 2022-05-30 at 10:44 +0200, Massimo Sgaravatto wrote: > > Thanks Mohammed for your feedback > > > > This [*] is what I see in the log file. The 3 relevant nodes are filtered > > out by the AggregateMultiTenancyIsolation (i.e. 
they are not among the 10 > > returned hosts) > > we havent actully chage the code that im aware of but > one thing to note. > it is considered bad pratice to use un namespaced flaovr extra specs with > the > AggregateInstanceExtraSpecsFilter or ComputeCapabilitiesFilter > > both have legacy support for unnamespced extra specs but you cannot enable > both filters if you use that > > if you have both the AggregateInstanceExtraSpecsFilter and > ComputeCapabilitiesFilter enabled > you must use namespace custome extra specs. > > so aggregate_instance_extra_specs: and capabilities: > > you have both enabled below so "size=big" is not vaild a a flaovr extra > spec in this configuration. > > yoga by the way the AggregateMultiTenancyIsolation filter is not needed > anymore that can be done with placemnet > > > https://github.com/openstack/nova/blob/stable/yoga/nova/scheduler/request_filter.py#L94-L135= > > by defiening > [scheduler] > limit_tenants_to_placement_aggregate=True > placement_aggregate_required_for_tenants=True > > that was also possible in train > > we litrally ment to deprecate and remove teh AggregateMultiTenancyIsolatio > a few years ago and never got around to it. > it was the first filter implemetn as a placement prefilter in rocky > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-req-filter.html > so we problay should formally deprecate it this cycle and drop it in AA > > > https://docs.openstack.org/nova/latest/admin/aggregates.html#tenant-isolation-with-placement > explains in more relevent detail > how to use placment for this. its basically a drop in replacement. you > dont neeed to update teh aggreate metadta but there a > re some subtlies with regards too what happens to host that are not mapped > to any tenant which is why we have 2 config > options so you can decide what you want to happen. i dont think there is > any reason for an operator to ever use the > AggregateMultiTenancyIsolation after rocky as the placment verions is much > more effienct. > > > you should alos proably read > https://docs.openstack.org/nova/latest/reference/isolate-aggregates.html > while that is mainly inteded to replace the > AggregateImagePropertiesIsolation it can also replace > some of the usecasue enabeld by AggregateInstanceExtraSpecsFilter. > simiarly due to bug 1677217 > https://bugs.launchpad.net/nova/+bug/1677217 you really shoudl not use > the AggregateImagePropertiesIsolation > any more and any enve that can use isolated aggreates should this however > is not a drop in repalcemtn as it requires > changes to the images properties/flavor extra specs to request the triats. > this was intoduced in Train. > > > > > > In the Train --> Yoga updated we also removed the AvailabilityZoneFilter > > (which has been deprecated): I tried to re-add it but this didn't help > > I can't remember other changes that could be relevant with this issue > > > > Cheers, Massimo > > > > > > [*] > > > > 2022-05-30 10:08:32.620 3168092 DEBUG nova.filters > > [req-f698ce3b-18c9-4f3f-9b73-49496e19c237 > e237e43716fb490db5bda4b777835669 > > 32b5d42c02b0411b8ebf2c3\ > > 3079eeecf - default default] Filtering removed all hosts for the request > > with instance ID 'fd6d978a-3739-45c6-b1cc-28fc9e57d381'. 
Filter results: > > [('\ > > AggregateMultiTenancyIsolation', [('cld-blu-12.cloud.pd.infn.it', ' > > cld-blu-12.cloud.pd.infn.it'), ('cld-blu-02.cloud.pd.infn.it', > > 'cld-blu-02.cloud.p\ > > d.infn.it'), ('cld-blu-15.cloud.pd.infn.it', ' > cld-blu-15.cloud.pd.infn.it'), > > ('cld-blu-11.cloud.pd.infn.it', 'cld-blu-11.cloud.pd.infn.it'), > ('cld-bl\ > > u-14.cloud.pd.infn.it', 'cld-blu-14.cloud.pd.infn.it'), (' > > cld-blu-13.cloud.pd.infn.it', 'cld-blu-13.cloud.pd.infn.it'), (' > > cld-blu-07.cloud.pd.infn.it\ > > ', 'cld-blu-07.cloud.pd.infn.it'), ('cld-blu-06.cloud.pd.infn.it', ' > > cld-blu-06.cloud.pd.infn.it'), ('cld-blu-01.cloud.pd.infn.it', > > 'cld-blu-01.cloud.\ > > pd.infn.it'), ('cld-blu-16.cloud.pd.infn.it', ' > cld-blu-16.cloud.pd.infn.it')]), > > ('AggregateInstanceExtraSpecsFilter', None)] get_filtered_objects /us\ > > r/lib/python3.6/site-packages/nova/filters.py:114 > > 2022-05-30 10:08:32.620 3168092 INFO nova.filters > > [req-f698ce3b-18c9-4f3f-9b73-49496e19c237 > e237e43716fb490db5bda4b777835669 > > 32b5d42c02b0411b8ebf2c33\ > > 079eeecf - default default] Filtering removed all hosts for the request > > with instance ID 'fd6d978a-3739-45c6-b1cc-28fc9e57d381'. Filter results: > > ['Ag\ > > gregateMultiTenancyIsolation: (start: 58, end: 10)', > > 'AggregateInstanceExtraSpecsFilter: (start: 10, end: 0)'] > > > > On Mon, May 30, 2022 at 9:51 AM Mohammed Naser > wrote: > > > > > Hi there, > > > > > > Are you sure you haven't done any other changes to your environment? > > > Both those filters haven't been changed for years (~2017): > > > > > > > > > > https://github.com/openstack/nova/commits/master/nova/scheduler/filters/aggregate_multitenancy_isolation.py > > > > > > > https://github.com/openstack/nova/commits/master/nova/scheduler/filters/aggregate_instance_extra_specs.py > > > > > > I think you've got something else here, I'd suggest enabling debug and > > > checking why these nodes are being filtered out. > > > > > > Mohammed > > > > > > On Mon, May 30, 2022 at 9:39 AM Massimo Sgaravatto > > > wrote: > > > > > > > > It looks like I need now to create a HostAggregate for "size=big" for > > > each project, specifying as properties: > > > > filter_tenant_id=, size='big' > > > > > > > > Is this the expected behaviour ? > > > > > > > > Till Train a single HostAggregate with the property: size='big' was > > > enough > > > > > > > > Thanks, Massimo > > > > > > > > > > > > > > > > On Fri, May 27, 2022 at 10:08 AM Massimo Sgaravatto < > > > massimo.sgaravatto at gmail.com> wrote: > > > > > > > > > > Dear all > > > > > > > > > > We have the following use case: > > > > > > > > > > - reserve 3 hypervisors for VMs with "big" flavors (whatever the > users > > > of these instances are) > > > > > - partition the rest of the hypervisors according to the project > (so > > > projects A1,A2..,An can use only subset S1 of hypervisors, project > > > B1,B2,..,Bm can use only subset S2 of hypervisors) > > > > > > > > > > We implemented this: > > > > > > > > > > 1- by setting an aggregate_instance_extra_spec 'size' property > (with > > > value 'normal' or 'big') for each flavor [*] > > > > > 2- by creating a BigVMs HostAggregate for size=big [**] > > > > > 3- by creating an HostAggregate for size=normal for each project, > such > > > as this one [***] > > > > > > > > > > > > > > > This used to work. 
> > > > > > > > > > A few days ago we updated our infrastructure from Train to Yoga > > > > > This was an offline Fast Forward Update: we went through the > > > intermediate releases just to do the dbsyncs. > > > > > Since this update the instantiation of VMs with flavors with the > > > size=big property doesn't work anymore > > > > > > > > > > > > > > > This is what I see in nova-scheduler log: > > > > > 2022-05-27 08:38:02.058 5273 INFO nova.filters > > > [req-f92c0e38-262a-4d22-a7fd-8874c4265401 > e237e43716fb490db5bda4b777835669 > > > 32b5d42c02b0411b8ebf2c33079eeecf - default default] Filtering removed > all > > > hosts for the request with instance ID > > > '2412e188-9d5f-4812-ad21-195769a3c220'. Filter results: > > > ['AggregateMultiTenancyIsolation: (start: 59, end: 10)', > > > 'AggregateInstanceExtraSpecsFilter: (start: 10, end: 0)'] > > > > > > > > > > > > > > > Only modifying the property of the BigVMs HA using the > > > filter_tenant_id adding the relevant project: > > > > > > > > > > [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs | grep prop > > > > > > properties | > > > filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', size='big' > > > | > > > > > > > > > > the scheduling works > > > > > > > > > > Specifying each project and keeping the list up-to-date would be a > > > problem. Moreover if I am not wrong there is a maximum length for the > > > property field. > > > > > > > > > > Any hints ? > > > > > I didn't find anything related to this issue in the nova release > notes > > > for Openstack releases > Train > > > > > > > > > > > > > > > These are the filters that we enabled: > > > > > > > > > > [filter_scheduler] > > > > > enabled_filters = > > > > AggregateMultiTenancyIsolation,AggregateInstanceExtraSpecsFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGro\ > > > > > upAffinityFilter,PciPassthroughFilter,NUMATopologyFilter > > > > > > > > > > > > > > > Thanks a lot, Massimo > > > > > > > > > > > > > > > [*] > > > > > E.g. 
> > > > > [root at cld-ctrl-01 ~]# openstack flavor show cldareapd.medium | > grep > > > prope > > > > > > properties | > > > aggregate_instance_extra_specs:size='normal' |[root at cld-ctrl-01 ~]# > > > openstack flavor show cloudvenetocloudveneto.40cores128GB25-bigunipd | > grep > > > prop > > > > > > properties | > > > aggregate_instance_extra_specs:size='big' | > > > > > > > > > > > > > > > [**] > > > > > > > > > > [root at cld-ctrl-01 ~]# openstack aggregate show BigVMs > > > > > > > > > +-------------------+---------------------------------------------------------------------------------------+ > > > > > > Field | Value > > > | > > > > > > > > > +-------------------+---------------------------------------------------------------------------------------+ > > > > > > availability_zone | nova > > > | > > > > > > created_at | 2018-06-20T06:54:51.000000 > > > | > > > > > > deleted_at | None > > > | > > > > > > hosts | cld-blu-08.cloud.pd.infn.it, > > > cld-blu-09.cloud.pd.infn.it, cld-blu-10.cloud.pd.infn.it | > > > > > > id | 135 > > > | > > > > > > is_deleted | False > > > | > > > > > > name | BigVMs > > > | > > > > > > properties |size='big' | > > > > > > updated_at | None > > > | > > > > > > uuid | 4b593395-1c76-441c-9022-d421f4ea2dfb > > > | > > > > > > > > > +-------------------+---------------------------------------------------------------------------------------+ > > > > > > > > > > > > > > > [***] > > > > > [root at cld-ctrl-01 ~]# openstack aggregate show > Unipd-AdminTesting-Unipd > > > > > > > > > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > > > Field | Value > > > > > > > > > > > > | > > > > > > > > > +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > > > availability_zone | nova > > > > > > > > > > > > | > > > > > > created_at | 2018-04-12T07:31:01.000000 > > > > > > > > > > > > | > > > > > > deleted_at | None > > > > > > > > > > > > | > > > > > > hosts | cld-blu-01.cloud.pd.infn.it, > > > cld-blu-02.cloud.pd.infn.it, cld-blu-05.cloud.pd.infn.it, > > > cld-blu-06.cloud.pd.infn.it, cld-blu-07.cloud.pd.infn.it, > > > cld-blu-11.cloud.pd.infn.it, cld-blu-12.cloud.pd.infn.it, > > > cld-blu-13.cloud.pd.infn.it, cld-blu-14.cloud.pd.infn.it, > > > cld-blu-15.cloud.pd.infn.it, cld-blu-16.cloud.pd.infn.it | > > > > > > id | 126 > > > > > > > > > > > > | > > > > > > is_deleted | False > > > > > > > > > > > > | > > > > > > name | Unipd-AdminTesting-Unipd > > > > > > > > > > > > | > > > > > > properties | > > > filter_tenant_id='32b5d42c02b0411b8ebf2c33079eeecf', size='normal' > > > > > > > > > > > > | > > > > > > updated_at | 2018-06-08T09:06:20.000000 > > > > > > > > > > > > | > > > > > > uuid | 38f6a0d4-77ab-42e0-abeb-57e06ba13cca > > > > > > > > > > > > | > > > > > > > > > 
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > > [root at cld-ctrl-01 ~]# > > > > > > > > > > > > > > -- > > > Mohammed Naser > > > VEXXHOST, Inc. > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon May 30 14:43:01 2022 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 30 May 2022 16:43:01 +0200 Subject: [all][tc] Lets talk about Flake8 E501 In-Reply-To: <20220528122207.cyokyj2i7yvg5pyo@yuggoth.org> References: <20220528122207.cyokyj2i7yvg5pyo@yuggoth.org> Message-ID: On Sat, May 28, 2022 at 2:29 PM Jeremy Stanley wrote: > On 2022-05-27 21:24:32 -0500 (-0500), Miro Tomaska wrote: > > This is probably going to be a hot topic but I was wondering if > > the community ever considered raising the default 79 characters > > line limit. > [...] > > This is a choice individual projects can make; the TC hasn't > demanded all projects follow a specific coding style. > > That said, I personally do all my coding and code review in > 80-column text terminals, so any lines longer that that end up > wrapping and making indentation harder to follow. Similarly, I usually split my vim in 2 halves, around 90 characters fit into each half. Dmitry > There have been > numerous studies to suggest that shorter lines are easier for people > to read and comprehend, whether it's prose or source code, and the > ideal lengths actually end up being around 50-75 characters. There's > a reason wide-format print media almost always breaks text into > multiple columns: The eyes tend to get lost as you scan across > longer lines of content. (As an aside, you'll probably notice I've > set my MUA to wrap all lines at 68 characters.) > > Just my opinion, but I do really appreciate when projects keep > source code files wrapped at 79 columns. > -- > Jeremy Stanley > -- Red Hat GmbH , Registered seat: Werner von Siemens Ring 14, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.kavanagh at canonical.com Mon May 30 14:48:48 2022 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Mon, 30 May 2022 15:48:48 +0100 Subject: [all][tc] Lets talk about Flake8 E501 In-Reply-To: References: <20220528122207.cyokyj2i7yvg5pyo@yuggoth.org> Message-ID: On Mon, May 30, 2022 at 3:46 PM Dmitry Tantsur wrote: > > > On Sat, May 28, 2022 at 2:29 PM Jeremy Stanley wrote: > >> On 2022-05-27 21:24:32 -0500 (-0500), Miro Tomaska wrote: >> > This is probably going to be a hot topic but I was wondering if >> > the community ever considered raising the default 79 characters >> > line limit. >> [...] >> >> This is a choice individual projects can make; the TC hasn't >> demanded all projects follow a specific coding style. >> >> That said, I personally do all my coding and code review in >> 80-column text terminals, so any lines longer that that end up >> wrapping and making indentation harder to follow. > > > Similarly, I usually split my vim in 2 halves, around 90 characters fit > into each half. 
> Likewise; I'm still a big fan of shorter line lengths, so I'd prefer 79 columns too. Alex. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Mon May 30 15:53:24 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 30 May 2022 17:53:24 +0200 Subject: Ansible OpenStack Collection 2.0.0 and OpenStack SDK 1.0.0/0.99.0 In-Reply-To: <79624a41-e280-7134-7c1c-37ecaa526c7d@redhat.com> References: <79624a41-e280-7134-7c1c-37ecaa526c7d@redhat.com> Message-ID: Thank you Jakob for describing the evolution of the collection. Are you planning to release an updated 1.x.x version soon, given that new installs of 1.8.0 will use openstacksdk 0.99.0 which breaks image upload? Thanks, Pierre Riteau (priteau) On Fri, 27 May 2022 at 14:14, Jakob Meng wrote: > Hello contributors and users of the Ansible OpenStack collection [1]! > > This week a release candidate of the upcoming first major release of > OpenStack SDK has been released [2],[3]. It streamlined and improved > large parts of its codebase. For example, its Connection interface now > consistently uses the Resource interfaces under the hood. This required > breaking changes from older SDK releases though. > > The Ansible OpenStack collection is heavily based on OpenStack SDK. With > OpenStack SDK becoming backward incompatible (for the better), so does > our Ansible OpenStack collection. We simply lack the devpower to > maintain a backward compatible interface in Ansible OpenStack collection > across several SDK releases. > > We already split our codebase into two separate git branches: master and > stable/1.0.0. The former will track the upcoming 2.x.x releases of > Ansible OpenStack collection which will be compatible with OpenStack SDK > 1.x.x (and its rcs 0.99.x) *only*. Our stable/1.0.0 branch will track > the current 1.x.x releases of Ansible OpenStack collection which is > compatible with OpenStack SDK prior to 0.99.0 *only*. Both branches will > be developed in parallel for the time being. > > Our 2.0.0 release is currently under development and we still have a > long way to go. "We" mainly are a couple of Red Hat employees working > part-time on the collection. If you use modules of Ansible OpenStack > collection and want to help us with porting them to the new SDK, please > contact us! > > If you want to help, please reach out to us (e.g. [7],[8]) and we can > give you a quick introduction into everything. We have extensive > documentation on why, what and how we are adopting and reviewing the new > modules [4], how to set up a working DevStack environment for hacking on > the collection [5] and, most importantly, a list of modules where we are > coordinating our porting efforts [6]. We are also hanging around on > irc.oftc.net/#openstack-ansible-sig and #oooq ? > > [1] https://opendev.org/openstack/ansible-collections-openstack > [2] https://github.com/openstack/openstacksdk/releases/tag/0.99.0 > [3] https://pypi.org/project/openstacksdk/0.99.0/ > [4] https://hackmd.io/szgyWa5qSUOWw3JJBXLmOQ?view > [5] https://hackmd.io/PI10x-iCTBuO09duvpeWgQ?view > [6] https://hackmd.io/7NtovjRkRn-tKraBXfz9jw?view > [7] Rafael Castillo (rcastillo) > [8] Jakob Meng , (jm1) > > Best, > Jakob > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbauza at redhat.com Mon May 30 16:02:48 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 30 May 2022 18:02:48 +0200 Subject: [nova][stable] nova-stable-core team changes Message-ID: Hi, After discussing with all our Nova stable cores, we agreed the below : - Welcome Sean Mooney as a new nova-stable-core team member - Welcome Balazs Gibizer as a new nova-stable-core team member - Welcome Stephen Finucane as a new nova-stable-core team member. I would also like to take the opportunity to thank Matt Riedemann and Lee Yarwood for their important and awesome contributions to the Nova Stable Team. Without them, we saw the impact of their misses, hence the above ^. Lee, Matt, you can anytime return to the nova-stable-core team if you express so, but in the meantime, I'll remove you from the members list. Changes to https://review.opendev.org/admin/groups/540,members will occur literally seconds after I send this email. -Sylvain (on behalf of the Nova Stable team) -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon May 30 16:22:12 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 30 May 2022 18:22:12 +0200 Subject: [nova][stable] nova-stable-core team changes In-Reply-To: References: Message-ID: Le lun. 30 mai 2022 ? 18:02, Sylvain Bauza a ?crit : > Hi, > > After discussing with all our Nova stable cores, we agreed the below : > - Welcome Sean Mooney as a new nova-stable-core team member > - Welcome Balazs Gibizer as a new nova-stable-core team member > - Welcome Stephen Finucane as a new nova-stable-core team member. > > I would also like to take the opportunity to thank Matt Riedemann and Lee > Yarwood for their important and awesome contributions to the Nova Stable > Team. Without them, we saw the impact of their misses, hence the above ^. > Lee, Matt, you can anytime return to the nova-stable-core team if you > express so, but in the meantime, I'll remove you from the members list. > > Changes to https://review.opendev.org/admin/groups/540,members will occur > literally seconds after I send this email. > Ergh, looks like I promised something I was unable to provide. Due to some ownership permissions, nova-stable-maint team members are unable to add themselves new members, only stable-maint-core (and Release Managers group) can. If someone from https://review.opendev.org/admin/groups/5c75219bf2ace95cdea009c82df26ca199e04d59,members or https://review.opendev.org/admin/groups/2267a5998d4224dd0acf1081eb2ee7b11573b7ea,members could modify nova-stable-maint accordingly, this would be greatly appreciated. -Sylvain -Sylvain (on behalf of the Nova Stable team) > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Mon May 30 16:23:12 2022 From: marios at redhat.com (Marios Andreou) Date: Mon, 30 May 2022 19:23:12 +0300 Subject: [tripleo] stable/wallaby gate blocker for centos-8 standalone - please hold recheck In-Reply-To: References: Message-ID: On Mon, May 30, 2022 at 4:16 PM Marios Andreou wrote: > > Hi folks > > fyi there is a wallaby gate blocker at > https://bugs.launchpad.net/tripleo/+bug/1976247 > Seems like ansible-role-python_venv_build is trying to install master > upper constraints and that is causing a conflict for oslo-db===11.3.0 > (comfing via stackviz installation) > > we are trying to find a workaround but please hold recheck until we do > FYI trending resolved (ish) - the tripleo wallaby gate should unblocked for now with https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/843856 (waiting for zuul to report and ci team will try get into the gate). However we will still need a better solution to re-enable stackviz thanks for your patience, thanks to various folks who helped especially ysandeep, bhagyashris, chkumar marios > grateful for any pointers or suggestions here or on the launchpad o/ > > thanks, marios From lokendrarathour at gmail.com Mon May 30 05:36:13 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Mon, 30 May 2022 11:06:13 +0530 Subject: ERROR openstack [-] Resource OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type OS::Neutron::Port and the Neutron service is not available when using ephemeral Heat.| Openstack tripleo wallaby version In-Reply-To: References: Message-ID: Hi Swogat, Thanks once again. with the files as shown below I am running the overcloud deploy for wallaby using this command: (undercloud) [stack at undercloud ~]$ cat deploy_overcloud_working_1.sh openstack overcloud deploy --templates \ -n /home/stack/templates/network_data.yaml \ -r /home/stack/templates/roles_data.yaml \ -e /home/stack/templates/environment.yaml \ -e /home/stack/templates/environments/network-isolation.yaml \ -e /home/stack/templates/environments/network-environment.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \ -e /home/stack/templates/ironic-config.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ -e /home/stack/containers-prepare-parameter.yaml (undercloud) [stack at undercloud ~]$ This deployment is on ipv6 using triple0 wallaby, templates, as mentioned below, are generated using rendering steps and the network_data.yaml the roles_data.yaml Steps used to render the templates: cd /usr/share/openstack-tripleo-heat-templates/ ./tools/process-templates.py -o ~/openstack-tripleo-heat-templates-rendered_at_wallaby -n /home/stack/templates/network_data.yaml -r /home/stack/templates/roles_data.yaml *Now if i try adding the related to VIP port I do get the error as:* 2022-05-30 10:37:12.792 979387 WARNING tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template to file: /home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/controller-role.yaml 2022-05-30 10:37:12.792 979387 WARNING tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template to file: 
/home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/compute-role.yaml 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured while running the command: ValueError: The environment is not a valid YAML mapping data type. 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent call last): 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, self).run(parsed_args) 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in run 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, self).run(parsed_args) 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = self.take_action(parsed_args) or 0 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 1189, in take_action 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud stack, parsed_args, new_tht_root, user_tht_root) 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 227, in create_env_files 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud created_env_files, parsed_args, new_tht_root, user_tht_root) 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 204, in build_image_params 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud cleanup=(not parsed_args.no_cleanup)) 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 1929, in process_multiple_environments 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud env_path=env_path, include_env_in_files=include_env_in_files) 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heatclient/common/template_utils.py", line 326, in process_environment_and_files 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud env = environment_format.parse(raw_env) 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/heatclient/common/environment_format.py", line 50, in parse 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud raise ValueError(_('The environment is not a valid ' 2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud ValueError: The environment is not a valid YAML mapping data type. 
2022-05-30 10:37:14.455 979387 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2022-05-30 10:37:14.457 979387 ERROR openstack [-] The environment is not a valid YAML mapping data type. 2022-05-30 10:37:14.457 979387 INFO osc_lib.shell [-] END return value: 1 (undercloud) [stack at undercloud ~]$ This is more of a syntax error where it is not able to understand the passed VIP data file: undercloud) [stack at undercloud ~]$ cat /home/stack/templates/vip-data-default-network-isolation.yaml - dns_name: overcloud network: internal_api - dns_name: overcloud network: external - dns_name: overcloud network: ctlplane - dns_name: overcloud network: oc_provisioning - dns_name: overcloud network: j3mgmt Please advise, also please note that similar templates generated in prior releases such as train/ussuri works perfectly. Please check the list of *templates *files: drwxr-xr-x. 2 stack stack 68 May 30 09:22 environments -rw-r--r--. 1 stack stack 265 May 27 13:47 environment.yaml -rw-rw-r--. 1 stack stack 297 May 27 13:47 init-repo.yaml -rw-r--r--. 1 stack stack 570 May 27 13:47 ironic-config.yaml drwxrwxr-x. 4 stack stack 4096 May 27 13:53 network -rw-r--r--. 1 stack stack 6370 May 27 14:26 network_data.yaml -rw-r--r--. 1 stack stack 11137 May 27 13:53 roles_data.yaml -rw-r--r--. 1 stack stack 234 May 30 09:23 vip-data-default-network-isolation.yaml (undercloud) [stack at undercloud templates]$ cat environment.yaml parameter_defaults: OvercloudControllerFlavor: control OvercloudComputeFlavor: compute ControllerCount: 3 ComputeCount: 1 TimeZone: 'Asia/Kolkata' NtpServer: ['30.30.30.3'] NeutronBridgeMappings: datacentre:br-tenant NeutronFlatNetworks: datacentre (undercloud) [stack at undercloud templates]$ (undercloud) [stack at undercloud templates]$ cat ironic-config.yaml parameter_defaults: NovaSchedulerDefaultFilters: - AggregateInstanceExtraSpecsFilter - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter IronicEnabledHardwareTypes: - ipmi - redfish IronicEnabledPowerInterfaces: - ipmitool - redfish IronicEnabledManagementInterfaces: - ipmitool - redfish IronicCleaningDiskErase: metadata IronicIPXEEnabled: true IronicInspectorSubnets: - ip_range: 172.23.3.100,172.23.3.150 (undercloud) [stack at undercloud templates]$ cat network_data.yaml - name: J3Mgmt name_lower: j3mgmt vip: true vlan: 400 ipv6: true ipv6_subnet: 'fd80:fd00:fd00:4000::/64' ipv6_allocation_pools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] mtu: 9000 - name: InternalApi name_lower: internal_api vip: true vlan: 418 ipv6: true ipv6_subnet: 'fd00:fd00:fd00:2000::/64' ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] mtu: 9000 - name: External vip: true name_lower: external vlan: 408 ipv6: true gateway_ipv6: 'fd00:fd00:fd00:9900::1' ipv6_subnet: 'fd00:fd00:fd00:9900::/64' ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] mtu: 9000 - name: OCProvisioning vip: true name_lower: oc_provisioning vlan: 412 ip_subnet: '172.23.3.0/24' allocation_pools: [{'start': '172.23.3.10', 'end': '172.23.3.50'}] mtu: 9000 (undercloud) [stack at undercloud templates]$ cat roles_data.yaml ############################################################################### # File generated by TripleO ############################################################################### 
############################################################################### # Role: Controller # ############################################################################### - name: Controller description: | Controller role that has all the controller services loaded and handles Database, Messaging, and Network functions. CountDefault: 1 tags: - primary - controller # Create external Neutron bridge for SNAT (and floating IPs when using # ML2/OVS without DVR) - external_bridge networks: External: subnet: external_subnet InternalApi: subnet: internal_api_subnet OCProvisioning: subnet: oc_provisioning_subnet J3Mgmt: subnet: j3mgmt_subnet # For systems with both IPv4 and IPv6, you may specify a gateway network for # each, such as ['ControlPlane', 'External'] default_route_networks: ['External'] HostnameFormatDefault: '%stackname%-controller-%index%' RoleParametersDefault: OVNCMSOptions: "enable-chassis-as-gw" # Deprecated & backward-compatible values (FIXME: Make parameters consistent) # Set uses_deprecated_params to True if any deprecated params are used. uses_deprecated_params: True deprecated_param_extraconfig: 'controllerExtraConfig' deprecated_param_flavor: 'OvercloudControlFlavor' deprecated_param_image: 'controllerImage' deprecated_nic_config_name: 'controller.yaml' update_serial: 1 ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AodhApi - OS::TripleO::Services::AodhEvaluator .. . ..############################################################################### # Role: Compute # ############################################################################### - name: Compute description: | Basic Compute Node role CountDefault: 1 # Create external Neutron bridge (unset if using ML2/OVS without DVR) tags: - compute - external_bridge networks: InternalApi: subnet: internal_api_subnet J3Mgmt: subnet: j3mgmt_subnet HostnameFormatDefault: '%stackname%-novacompute-%index%' RoleParametersDefault: FsAioMaxNumber: 1048576 TunedProfileName: "virtual-host" # Deprecated & backward-compatible values (FIXME: Make parameters consistent) # Set uses_deprecated_params to True if any deprecated params are used. # These deprecated_params only need to be used for existing roles and not for # composable roles. uses_deprecated_params: True deprecated_param_image: 'NovaImage' deprecated_param_extraconfig: 'NovaComputeExtraConfig' deprecated_param_metadata: 'NovaComputeServerMetadata' deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints' deprecated_param_ips: 'NovaComputeIPs' deprecated_server_resource_name: 'NovaCompute' deprecated_nic_config_name: 'compute.yaml' update_serial: 25 ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::BootParams (undercloud) [stack at undercloud templates]$ cat environments/network-environment.yaml #This file is an example of an environment file for defining the isolated #networks and related parameters. resource_registry: # Network Interface templates to use (these files must exist). You can # override these by including one of the net-*.yaml environment files, # such as net-bond-with-vlans.yaml, or modifying the list here. 
# Port assignments for the Controller OS::TripleO::Controller::Net::SoftwareConfig: OS::Heat::None # Port assignments for the Compute OS::TripleO::Compute::Net::SoftwareConfig: OS::Heat::None parameter_defaults: # This section is where deployment-specific configuration is done # ServiceNetMap: IronicApiNetwork: oc_provisioning IronicNetwork: oc_provisioning # This section is where deployment-specific configuration is done ControllerNetworkConfigTemplate: 'templates/bonds_vlans/bonds_vlans.j2' ComputeNetworkConfigTemplate: 'templates/bonds_vlans/bonds_vlans.j2' # Customize the IP subnet to match the local environment J3MgmtNetCidr: 'fd80:fd00:fd00:4000::/64' # Customize the IP range to use for static IPs and VIPs J3MgmtAllocationPools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] # Customize the VLAN ID to match the local environment J3MgmtNetworkVlanID: 400 # Customize the IP subnet to match the local environment InternalApiNetCidr: 'fd00:fd00:fd00:2000::/64' # Customize the IP range to use for static IPs and VIPs InternalApiAllocationPools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] # Customize the VLAN ID to match the local environment InternalApiNetworkVlanID: 418 # Customize the IP subnet to match the local environment ExternalNetCidr: 'fd00:fd00:fd00:9900::/64' # Customize the IP range to use for static IPs and VIPs # Leave room if the external network is also used for floating IPs ExternalAllocationPools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] # Gateway router for routable networks ExternalInterfaceDefaultRoute: 'fd00:fd00:fd00:9900::1' # Customize the VLAN ID to match the local environment ExternalNetworkVlanID: 408 # Customize the IP subnet to match the local environment OCProvisioningNetCidr: '172.23.3.0/24' # Customize the IP range to use for static IPs and VIPs OCProvisioningAllocationPools: [{'start': '172.23.3.10', 'end': '172.23.3.50'}] # Customize the VLAN ID to match the local environment OCProvisioningNetworkVlanID: 412 # List of Neutron network types for tenant networks (will be used in order) NeutronNetworkType: 'geneve,vlan' # Neutron VLAN ranges per network, for example 'datacentre:1:499,tenant:500:1000': NeutronNetworkVLANRanges: 'datacentre:1:1000' # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 miimon=100" # for Linux bonds w/LACP, or "bond_mode=active-backup" for OVS active/backup. BondInterfaceOvsOptions: "bond_mode=active-backup" (undercloud) [stack at undercloud templates]$ (undercloud) [stack at undercloud templates]$ cat environments/network-isolation.yaml # NOTE: This template is now deprecated, and is only included for compatibility # when upgrading a deployment where this template was originally used. For new # deployments, set "ipv6: true" on desired networks in network_data.yaml, and # include network-isolation.yaml. # # Enable the creation of Neutron networks for isolated Overcloud # traffic and configure each role to assign ports (related # to that role) on these networks. 
resource_registry: # networks as defined in network_data.yaml OS::TripleO::Network::J3Mgmt: ../network/j3mgmt_v6.yaml OS::TripleO::Network::InternalApi: ../network/internal_api_v6.yaml OS::TripleO::Network::External: ../network/external_v6.yaml OS::TripleO::Network::OCProvisioning: ../network/oc_provisioning.yaml # Port assignments for the VIPs OS::TripleO::Network::Ports::J3MgmtVipPort: ../network/ports/j3mgmt_v6.yaml OS::TripleO::Network::Ports::InternalApiVipPort: ../network/ports/internal_api_v6.yaml OS::TripleO::Network::Ports::ExternalVipPort: ../network/ports/external_v6.yaml OS::TripleO::Network::Ports::OCProvisioningVipPort: ../network/ports/oc_provisioning.yaml # Port assignments by role, edit role definition to assign networks to roles. # Port assignments for the Controller OS::TripleO::Controller::Ports::J3MgmtPort: ../network/ports/j3mgmt_v6.yaml OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api_v6.yaml OS::TripleO::Controller::Ports::ExternalPort: ../network/ports/external_v6.yaml OS::TripleO::Controller::Ports::OCProvisioningPort: ../network/ports/oc_provisioning.yaml # Port assignments for the Compute OS::TripleO::Compute::Ports::J3MgmtPort: ../network/ports/j3mgmt_v6.yaml OS::TripleO::Compute::Ports::InternalApiPort: ../network/ports/internal_api_v6.yaml parameter_defaults: # Enable IPv6 environment for Manila ManilaIPv6: True (undercloud) [stack at undercloud templates]$ On Tue, May 24, 2022 at 5:04 PM Lokendra Rathour wrote: > Thanks, I'll check them out. > will let you know in case it works out. > > On Tue, May 24, 2022 at 2:37 PM Swogat Pradhan > wrote: > >> Hi, >> Please find the below templates: >> These are for openstack wallaby release: >> >> (undercloud) [stack at hkg2director workplace]$ cat custom_network_data.yaml >> - name: Storage >> name_lower: storage >> vip: true >> mtu: 1500 >> subnets: >> storage_subnet: >> ip_subnet: 172.25.202.0/26 >> allocation_pools: >> - start: 172.25.202.6 >> end: 172.25.202.20 >> vlan: 1105 >> - name: StorageMgmt >> name_lower: storage_mgmt >> vip: true >> mtu: 1500 >> subnets: >> storage_mgmt_subnet: >> ip_subnet: 172.25.202.64/26 >> allocation_pools: >> - start: 172.25.202.72 >> end: 172.25.202.87 >> vlan: 1106 >> - name: InternalApi >> name_lower: internal_api >> vip: true >> mtu: 1500 >> subnets: >> internal_api_subnet: >> ip_subnet: 172.25.201.192/26 >> allocation_pools: >> - start: 172.25.201.198 >> end: 172.25.201.212 >> vlan: 1104 >> - name: Tenant >> vip: false # Tenant network does not use VIPs >> mtu: 1500 >> name_lower: tenant >> subnets: >> tenant_subnet: >> ip_subnet: 172.25.202.128/26 >> allocation_pools: >> - start: 172.25.202.135 >> end: 172.25.202.150 >> vlan: 1108 >> - name: External >> name_lower: external >> vip: true >> mtu: 1500 >> subnets: >> external_subnet: >> ip_subnet: 172.25.201.128/26 >> allocation_pools: >> - start: 172.25.201.135 >> end: 172.25.201.150 >> gateway_ip: 172.25.201.129 >> vlan: 1103 >> >> (undercloud) [stack at hkg2director workplace]$ cat custom_vip_data.yaml >> - network: ctlplane >> #dns_name: overcloud >> ip_address: 172.25.201.91 >> subnet: ctlplane-subnet >> - network: external >> #dns_name: overcloud >> ip_address: 172.25.201.150 >> subnet: external_subnet >> - network: internal_api >> #dns_name: overcloud >> ip_address: 172.25.201.250 >> subnet: internal_api_subnet >> - network: storage >> #dns_name: overcloud >> ip_address: 172.25.202.50 >> subnet: storage_subnet >> - network: storage_mgmt >> #dns_name: overcloud >> ip_address: 172.25.202.90 >> 
subnet: storage_mgmt_subnet >> >> (undercloud) [stack at hkg2director workplace]$ cat >> overcloud-baremetal-deploy.yaml >> - name: Controller >> count: 4 >> defaults: >> networks: >> - network: ctlplane >> vif: true >> - network: external >> subnet: external_subnet >> - network: internal_api >> subnet: internal_api_subnet >> - network: storage >> subnet: storage_subnet >> - network: storage_mgmt >> subnet: storage_mgmt_subnet >> - network: tenant >> subnet: tenant_subnet >> network_config: >> template: /home/stack/templates/controller.j2 >> default_route_network: >> - external >> instances: >> - hostname: overcloud-controller-0 >> name: dc1-controller2 >> #provisioned: false >> - hostname: overcloud-controller-1 >> name: dc2-controller2 >> #provisioned: false >> - hostname: overcloud-controller-2 >> name: dc1-controller1 >> #provisioned: false >> - hostname: overcloud-controller-no-ceph-3 >> name: dc2-ceph2 >> #provisioned: false >> #- hostname: overcloud-controller-3 >> #name: dc2-compute3 >> #provisioned: false >> >> - name: Compute >> count: 5 >> defaults: >> networks: >> - network: ctlplane >> vif: true >> - network: internal_api >> subnet: internal_api_subnet >> - network: tenant >> subnet: tenant_subnet >> - network: storage >> subnet: storage_subnet >> network_config: >> template: /home/stack/templates/compute.j2 >> instances: >> - hostname: overcloud-novacompute-0 >> name: dc2-compute1 >> #provisioned: false >> - hostname: overcloud-novacompute-1 >> name: dc2-ceph1 >> #provisioned: false >> - hostname: overcloud-novacompute-2 >> name: dc1-compute1 >> #provisioned: false >> - hostname: overcloud-novacompute-3 >> name: dc1-compute2 >> #provisioned: false >> - hostname: overcloud-novacompute-4 >> name: dc2-compute3 >> #provisioned: false >> >> - name: CephStorage >> count: 4 >> defaults: >> networks: >> - network: ctlplane >> vif: true >> - network: internal_api >> subnet: internal_api_subnet >> - network: storage >> subnet: storage_subnet >> - network: storage_mgmt >> subnet: storage_mgmt_subnet >> network_config: >> template: /home/stack/templates/ceph-storage.j2 >> instances: >> - hostname: overcloud-cephstorage-0 >> name: dc2-controller1 >> #provisioned: false >> # - hostname: overcloud-cephstorage-1 >> # name: dc2-ceph2 >> - hostname: overcloud-cephstorage-1 >> name: dc1-ceph1 >> # provisioned: false >> - hostname: overcloud-cephstorage-2 >> name: dc1-ceph2 >> #provisioned: false >> - hostname: overcloud-cephstorage-3 >> name: dc2-compute2 >> #provisioned: false >> >> >> You must use these templates to provision network, vip and nodes. >> You must use the output files generated during the provisioning step in >> openstack overcloud deploy command using -e parameter. >> >> With regards, >> Swogat Pradhan >> >> >> On Mon, May 23, 2022 at 8:33 PM Lokendra Rathour < >> lokendrarathour at gmail.com> wrote: >> >>> Hi Swogat, >>> I tried checking your solution and my templates but could not relate >>> much. >>> But issue seems the same >>> >>> http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028401.html >>> >>> I tried somemore ways but looks like some issue with templates. >>> Can you please share the templates used to deploy the overcloud. >>> >>> Mysetup have 3 controller and 1 compute. >>> >>> Thanks once again for reading my mail. >>> >>> Waiting for your reply. >>> >>> -Lokendra >>> >>> On Fri, 20 May 2022, 08:25 Swogat Pradhan, >>> wrote: >>> >>>> Hi, >>>> Yes I was able to find the issue and fix it. 
>>>> The issue was with the overcloud-baremetal-deployed.yaml file i was >>>> trying to provision controller-0, controller-1 and controller-3 and kept >>>> controller-2 aside for later, but the tripleo scripts are built in such a >>>> way that they were taking controller- 0, 1 and 2 inplace of controller-3, >>>> so the network ports and vip were created for controller 0,1 and 2 but not >>>> for 3 , so this error was popping off. Also i would request you to check >>>> the jinja nic templates and once the node provisioning is done check the >>>> /etc/os-net-config/config.json/yaml file for syntax if using bonded nic >>>> template. >>>> If you need any more infor please let me know. >>>> >>>> With regards, >>>> Swogat Pradhan >>>> >>>> >>>> >>>> On Fri, May 20, 2022 at 8:01 AM Lokendra Rathour < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> Hi Swogat, >>>>> Thanks for raising this issue. >>>>> Did you find any solution? to this problem ? >>>>> >>>>> Please let me know it might be helpful >>>>> >>>>> >>>>> On Tue, Apr 19, 2022 at 12:43 PM Swogat Pradhan < >>>>> swogatpradhan22 at gmail.com> wrote: >>>>> >>>>>> Hi, >>>>>> I am currently trying to deploy openstack wallaby using tripleo arch. >>>>>> I created the network jinja templates, ran the following commands >>>>>> also: >>>>>> >>>>>> #openstack overcloud network provision --stack overcloud --output >>>>>> networks-deployed-environment.yaml custom_network_data.yaml >>>>>> # openstack overcloud network vip provision --stack overcloud >>>>>> --output vip-deployed-environment.yaml custom_vip_data.yaml >>>>>> # openstack overcloud node provision --stack overcloud >>>>>> --overcloud-ssh-key /home/stack/sshkey/id_rsa >>>>>> overcloud-baremetal-deploy.yaml >>>>>> >>>>>> and used the environment files in the openstack overcloud deploy >>>>>> command: >>>>>> >>>>>> (undercloud) [stack at hkg2director ~]$ cat deploy.sh >>>>>> #!/bin/bash >>>>>> THT=/usr/share/openstack-tripleo-heat-templates/ >>>>>> CNF=/home/stack/ >>>>>> openstack overcloud deploy --templates $THT \ >>>>>> -r $CNF/templates/roles_data.yaml \ >>>>>> -n $CNF/workplace/custom_network_data.yaml \ >>>>>> -e ~/containers-prepare-parameter.yaml \ >>>>>> -e $CNF/templates/node-info.yaml \ >>>>>> -e $CNF/templates/scheduler-hints.yaml \ >>>>>> -e $CNF/workplace/networks-deployed-environment.yaml \ >>>>>> -e $CNF/workplace/vip-deployed-environment.yaml \ >>>>>> -e $CNF/workplace/overcloud-baremetal-deployed.yaml \ >>>>>> -e $CNF/workplace/custom-net-bond-with-vlans.yaml >>>>>> >>>>>> Now when i run the ./deploy.sh script i encounter an error stating: >>>>>> >>>>>> ERROR openstack [-] Resource >>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>> ephemeral Heat. The generated environments from 'openstack overcloud >>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>> included with the deployment command.: >>>>>> tripleoclient.exceptions.InvalidConfiguration: Resource >>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>> ephemeral Heat. The generated environments from 'openstack overcloud >>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>> included with the deployment command. >>>>>> 2022-04-19 13:47:16.582 735924 INFO osc_lib.shell [-] END return >>>>>> value: 1 >>>>>> >>>>>> Can someone tell me where the mistake is? 
>>>>>> >>>>>> With regards, >>>>>> Swogat Pradhan >>>>>> >>>>> >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon May 30 16:43:19 2022 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 30 May 2022 13:43:19 -0300 Subject: [ironic] Group Dinner in Berlin - OIS In-Reply-To: References: Message-ID: Hello everyone, We will have our dinner on Tuesday June 07 at 19:00 I've made a reservation for 10 people - Location: Hofbr?u Wirtshaus Berlin [1] The information is also in the etherpad [2] [1] https://www.hofbraeu-wirtshaus.de/en/berlin/ [2] https://etherpad.opendev.org/p/ironic-dinner-ois2022Berlin Em seg., 16 de mai. de 2022 ?s 10:00, Iury Gregory escreveu: > Hello Ironicers and fellow stackers! > > Since some of us will be at the Open Infrastructure Summit in Berlin, I > think this would be a great opportunity to have a group dinner in Berlin > with our friends (like the one we had during the Ironic Mid-Cycle at CERN). > > I've created an etherpad to track who would be interested in it [1] and > also choose the best day. > > [1] https://etherpad.opendev.org/p/ironic-dinner-ois2022Berlin > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Part of the ironic-core and puppet-manager-core team in OpenStack* > *Senior Software Engineer at Red Hat Brasil* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfidente at redhat.com Mon May 30 16:49:48 2022 From: gfidente at redhat.com (Giulio Fidente) Date: Mon, 30 May 2022 18:49:48 +0200 Subject: SSL verify failed | Overcloud deploy step 4 | Wallaby | tripleo In-Reply-To: References: Message-ID: On 5/23/22 15:06, Swogat Pradhan wrote: > Hi, > I am facing the below issue in step4 '?TASK | Clean up legacy Cinder > keystone catalog entries': [...] the issue you are hitting looks like this https://bugzilla.redhat.com/show_bug.cgi?id=2078898 it's currently still being investigated and I suggest monitoring the bugzilla page for updates; I don't think there is a Launchpad bug for this yet (unless I missed it) as important: in the future, please don't post multiple messages to the mailing list just to fix the formatting, it has possibly caused more harm than good -- Giulio Fidente GPG KEY: 08D733BA From tim+openstack.org at coote.org Mon May 30 17:26:58 2022 From: tim+openstack.org at coote.org (tim+openstack.org at coote.org) Date: Mon, 30 May 2022 18:26:58 +0100 Subject: Novice question In-Reply-To: References: Message-ID: Hullo again. > On 27 May 2022, at 20:11, Michael Johnson wrote: > > Hi Tim, > > This should work fine. You will want a localrc/local.conf file to > configure devstack. I didn't see that mentioned in your steps. > See this section in the docs: > https://docs.openstack.org/devstack/latest/#create-a-local-conf Sorry, I missed that step in the description. local.conf is being created. > > The only caveat I would mention is the VM instances in Nova will run > super slow on virtualbox as it lacks the required "nested > virtualization" support and will run them all in software emulation. I?m not worried about this atm. 
Once I?ve got something working, I can sort that out. > > To find the root cause of the issue in nova, I would look through the > devstack at n-cpu log file (journal -u devstack at n-cpui) and the > devstack at n-sch logs. Neither of these logs contain errors. However, if I look for ?ERROR? in `sudo journalctl`, I do get ~400 keystone errors, where users, domains, roles and services cannot be found. I?m not sure whether this is expected or not. They occur at the beginning of the log and may be significant, or just an artefact of the `stack.sh` script. Here?s a few (filtered by `sudo journalctl |grep keyston |grep exception|grep "Could not find"|grep -v "None admin") examples: May 30 15:57:39 node1.example.dd devstack at keystone.service[94392]: ERROR keystone.server.flask.application keystone.exception.ProjectNotFound: Could not find project: service. May 30 15:57:41 node1.example.dd devstack at keystone.service[94392]: ERROR keystone.server.flask.application keystone.exception.ServiceNotFound: Could not find service: glance. May 30 15:57:41 node1.example.dd devstack at keystone.service[94392]: ERROR keystone.server.flask.application keystone.exception.ServiceNotFound: Could not find service: glance. May 30 15:57:42 node1.example.dd devstack at keystone.service[94393]: ERROR keystone.server.flask.application keystone.exception.UserNotFound: Could not find user: cinder. Subsequently, this is what I suspect is the smoking gun, as it precedes the timeout: May 30 16:14:45 node1.example.dd nova-compute[133470]: ERROR os_brick.initiator.connectors.iscsi Command: iscsiadm -m discoverydb -o show -P 1 May 30 16:14:45 node1.example.dd nova-compute[133470]: ERROR os_brick.initiator.connectors.iscsi Exit code: 21 May 30 16:14:45 node1.example.dd nova-compute[133470]: ERROR os_brick.initiator.connectors.iscsi Stdout: 'SENDTARGETS:\nNo targets found.\niSNS:\nNo targets found.\nSTATIC:\nNo targets found.\nFIRMWARE:\nNo targets found.\n' I have no idea what that means. My guess is that there?s supposed to be a fake iscsi device and it?s not on a local network, but that?s really just a guess. > > Also, you might have a look at one of the nova test localrc file as an example: > https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_da7/843127/6/check/tempest-integrated-compute-centos-9-stream/da7bebc/controller/logs/local_conf.txt Presumably, this example would need things like the various IP addresses changing to be useful. What would one then do with it? > > Michael > Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon May 30 20:32:58 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 30 May 2022 15:32:58 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 2, 2022 at 1500 UTC Message-ID: <18116abf909.bcbc5c9a267396.2648997117806913882@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for June 2, 2022, at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, June 1, at 2100 UTC. 
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From gmann at ghanshyammann.com Tue May 31 00:13:27 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 30 May 2022 19:13:27 -0500 Subject: [sdk][stable][gate] openstacksdk-functional-devstack is broken in stable/ussuri branch Message-ID: <1811775d766.c7dd1989270604.2413644432969236480@ghanshyammann.com> Hello Everyone, With the recent update in 'coverage' constraints to 6.4[1], openstacksdk-functional-devstack is broken in stable/ussuri branch. openstacksdk-functional-devstack running on stable branches use the master SDK and master constraints and in stable/ussuri this job runs on bionic/py3.6 and now fail for 'coverage' 6.4 version conflict. - https://zuul.opendev.org/t/openstack/build/3d3c21d804ec498e957c1c64470c991b/log/job-output.txt#43332 - https://storyboard.openstack.org/#!/story/2010057 As we know requirements generate constraints for py3.8|9 so if we want this job to run with master constraints then it needs to run on compatible python and distro. This is why I am proposing to run this job on focal - https://review.opendev.org/c/openstack/openstacksdk/+/843968 But clark made a good point in IRC that we should test this job with all python versions supported by the stable branch which we can do by testing this job with stable branch constraints and see if still, master SDK runs on those python versions and stable constraints. This is something we can improve if SDK team agrees or have the bandwidth. Anyways please do not recheck on stable/ussuri until this job is fixed (843968 is merged). [1] https://review.opendev.org/c/openstack/requirements/+/843722/3/upper-constraints.txt#273 -gmann From gmann at ghanshyammann.com Tue May 31 01:28:01 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 30 May 2022 20:28:01 -0500 Subject: [sdk][stable][requirements][gate] openstacksdk-functional-devstack is broken in stable/ussuri branch In-Reply-To: <1811775d766.c7dd1989270604.2413644432969236480@ghanshyammann.com> References: <1811775d766.c7dd1989270604.2413644432969236480@ghanshyammann.com> Message-ID: <18117ba1b3f.f0278bde263314.8669840273606884050@ghanshyammann.com> ---- On Mon, 30 May 2022 19:13:27 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > With the recent update in 'coverage' constraints to 6.4[1], openstacksdk-functional-devstack is > broken in stable/ussuri branch. openstacksdk-functional-devstack running on stable branches > use the master SDK and master constraints and in stable/ussuri this job runs on bionic/py3.6 > and now fail for 'coverage' 6.4 version conflict. > > - https://zuul.opendev.org/t/openstack/build/3d3c21d804ec498e957c1c64470c991b/log/job-output.txt#43332 > - https://storyboard.openstack.org/#!/story/2010057 > > As we know requirements generate constraints for py3.8|9 so if we want this job to run with master > constraints then it needs to run on compatible python and distro. This is why I am proposing to > run this job on focal > > - https://review.opendev.org/c/openstack/openstacksdk/+/843968 It did not work as stable/ussuri devstack does not support the ubuntu focal. I am pinning this job to use stable/ussuri openstacksdk and constraints. I know that is what we wanted to test openstacksdk against stable branches but please propose other ideas if you have. 
- https://review.opendev.org/c/openstack/openstacksdk/+/843978 -gmann > > But clark made a good point in IRC that we should test this job with all python versions supported by > the stable branch which we can do by testing this job with stable branch constraints and see if > still, master SDK runs on those python versions and stable constraints. This is something we can > improve if SDK team agrees or have the bandwidth. > > Anyways please do not recheck on stable/ussuri until this job is fixed (843968 is merged). > > [1] https://review.opendev.org/c/openstack/requirements/+/843722/3/upper-constraints.txt#273 > > -gmann > > From dpawlik at redhat.com Tue May 31 06:24:34 2022 From: dpawlik at redhat.com (Daniel Pawlik) Date: Tue, 31 May 2022 08:24:34 +0200 Subject: OpenStack logs on Opensearch - storage problem In-Reply-To: <20220530122912.dffadzvt3kga72y2@yuggoth.org> References: <20220530122912.dffadzvt3kga72y2@yuggoth.org> Message-ID: Hi, thank you for your advice. Created a PS for logsender [1]. Hope it helps. Dan [1] https://review.opendev.org/c/openstack/ci-log-processing/+/843929 On Mon, May 30, 2022 at 2:39 PM Jeremy Stanley wrote: > On 2022-05-30 14:02:15 +0200 (+0200), Daniel Pawlik wrote: > > In the last few days there is increase amount of CI jobs that are > > running on Zuul CI gates, where later logs are pushed into the > > Opensearch cluster [1]. The current cluster storage is almost > > full, which causes one of the actions to be performed: temporarily > > changing log retention (currently it is set to 14 days) or > > removing temporary non-essential logs [2]. I recommend removing > > not important logs, so if projects leaders could check whether > > entries related to the project are needed, it would be really > > helpful. > [...] > > For the old system, we filtered out all debug level log lines, only > importing lines at info level or greater. If you're not doing that > yet on the new system, it would probably help free up a lot of > space. > > We tried to maintain around 7-10 days of retention, so lowering the > retention from 14 days isn't really a regression over what we used > to provide. > -- > Jeremy Stanley > -- Regards, Daniel Pawlik -------------- next part -------------- An HTML attachment was scrubbed... URL: From hanguangyu2 at gmail.com Tue May 31 06:27:49 2022 From: hanguangyu2 at gmail.com (=?UTF-8?B?6Z+p5YWJ5a6H?=) Date: Tue, 31 May 2022 14:27:49 +0800 Subject: [dev][nova] How to add a column in table of Nova Database In-Reply-To: <151e69a269dff998b33c019e717bc71d6a3f2b3f.camel@redhat.com> References: <151e69a269dff998b33c019e717bc71d6a3f2b3f.camel@redhat.com> Message-ID: Hi, Stephen Thank you so so much for your help. That's very helpful. I'm trying to do this following https://docs.openstack.org/nova/latest/reference/database-migrations.html. Best wishes, Han Stephen Finucane ?2022?5?25??? 17:32??? > > On Sun, 2022-05-22 at 17:27 +0800, ??? wrote: > > Hi, > > > > I'm a beginner in OpenStack development. > > > > I would like to try modifying by adding a property to the 'Instances' > > database table. But I didn't find description in the documentation of > > the data model mechanism and how to extend the database. > > https://docs.openstack.org/nova/latest/ > > > > Now, I know that it involves versined object model and alembic. > > > > My question is: > > What is the process of adding a column to a table in the database? 
> > > > Could someone show me the process of modifying a database table or > > recommend me the relevant documentation > > > > Best wishes, > > Han > > This is pretty well documented: > > https://docs.openstack.org/nova/latest/reference/database-migrations.html > > The tl;dr: is a) make your changes to 'nova/db/{main|api}/models.py' then (b) > either auto-generate a schema using alembic's autogeneration functionality or > write one yourself. We have tests in place that will ensure the migrations and > models don't diverge. > > You should know that making changes to the database schema will generally > require a spec. You can find information on the purpose of specs and how to > write one here: > > https://specs.openstack.org/openstack/nova-specs/readme.html > > Be careful not to treat this as a downstream-only thing (i.e. by forking nova). > If you do, you are likely to cause yourself a lot of pain in the future as the > database schema of upstream nova will invariably diverge. > > If you have any questions, please ask here or (better) on #openstack-nova on > OFTC IRC. > > Hope this helps, > Stephen > From jmeng at redhat.com Tue May 31 06:39:29 2022 From: jmeng at redhat.com (Jakob Meng) Date: Tue, 31 May 2022 08:39:29 +0200 Subject: Ansible OpenStack Collection 2.0.0 and OpenStack SDK 1.0.0/0.99.0 In-Reply-To: References: <79624a41-e280-7134-7c1c-37ecaa526c7d@redhat.com> Message-ID: <5883506f-7112-04e8-3292-ebda35c5f79a@redhat.com> 1.x.x releases of Ansible OpenStack collection are compatible to openstacksdk<0.99.0 only. The upcoming 1.9.0 release will declare its incompatibility and raise an error with SDK >=0.99.0 [1]. Our current 1.8.0 release on Ansible Galaxy is partially broken with the new SDK but some parts still work (without any warranty). Once we release 1.9.0 it will refuse to work with latest SDK and we might immediately break use cases which kinda-work-with-new-sdk. We had this situation with our master branch where this safety check broke TripleO [2] which is why we shy away from releasing 1.9.0 for now. This is open for discussion, feedback is appreciated! [1] https://opendev.org/openstack/ansible-collections-openstack/commit/75558c5c2e970d40133273432ac77bbb161ff4ed [2] https://opendev.org/openstack/ansible-collections-openstack/commit/1b59c19a24c55aa236d80552dcbf70c9c7b5088e Best, Jakob On 30.05.22 17:53, Pierre Riteau wrote: > Thank you Jakob for describing the evolution?of the collection. > > Are you planning to release an updated 1.x.x version soon, given that > new?installs of 1.8.0 will use openstacksdk 0.99.0 which breaks?image > upload? > > Thanks, > Pierre Riteau (priteau) > > On Fri, 27 May 2022 at 14:14, Jakob Meng wrote: > > Hello contributors and users of the Ansible OpenStack collection [1]! > > This week a release candidate of the upcoming first major release of > OpenStack SDK has been released [2],[3]. It streamlined and improved > large parts of its codebase. For example, its Connection interface > now > consistently uses the Resource interfaces under the hood. This > required > breaking changes from older SDK releases though. > > The Ansible OpenStack collection is heavily based on OpenStack > SDK. With > OpenStack SDK becoming backward incompatible (for the better), so > does > our Ansible OpenStack collection. We simply lack the devpower to > maintain a backward compatible interface in Ansible OpenStack > collection > across several SDK releases. > > We already split our codebase into two separate git branches: > master and > stable/1.0.0. 
The former will track the upcoming > 2.x.x releases of > Ansible OpenStack collection which will be compatible with > OpenStack SDK > 1.x.x (and its rcs 0.99.x) *only*. Our stable/1.0.0 branch will track > the current 1.x.x releases of Ansible OpenStack collection which is > compatible with OpenStack SDK prior to 0.99.0 *only*. Both > branches will > be developed in parallel for the time being. > > Our 2.0.0 release is currently under development and we still have a > long way to go. "We" mainly are a couple of Red Hat employees working > part-time on the collection. If you use modules of Ansible OpenStack > collection and want to help us with porting them to the new SDK, > please > contact us! > > If you want to help, please reach out to us (e.g. [7],[8]) and we can > give you a quick introduction into everything. We have extensive > documentation on why, what and how we are adopting and reviewing > the new > modules [4], how to set up a working DevStack environment for > hacking on > the collection [5] and, most importantly, a list of modules where > we are > coordinating our porting efforts [6]. We are also hanging around on > irc.oftc.net/#openstack-ansible-sig > and #oooq ? > > [1] https://opendev.org/openstack/ansible-collections-openstack > [2] https://github.com/openstack/openstacksdk/releases/tag/0.99.0 > [3] https://pypi.org/project/openstacksdk/0.99.0/ > [4] https://hackmd.io/szgyWa5qSUOWw3JJBXLmOQ?view > [5] https://hackmd.io/PI10x-iCTBuO09duvpeWgQ?view > [6] https://hackmd.io/7NtovjRkRn-tKraBXfz9jw?view > [7] Rafael Castillo (rcastillo) > [8] Jakob Meng , (jm1) > > Best, > Jakob > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Tue May 31 06:49:23 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 31 May 2022 12:19:23 +0530 Subject: SSL verify failed | Overcloud deploy step 4 | Wallaby | tripleo In-Reply-To: References: Message-ID: Hi, I was able to find a solution for this issue. Please check https://bugzilla.redhat.com/show_bug.cgi?id=2089442 With regards, Swogat Pradhan On Mon, May 30, 2022 at 10:19 PM Giulio Fidente wrote: > On 5/23/22 15:06, Swogat Pradhan wrote: > > Hi, > > I am facing the below issue in step4 ' TASK | Clean up legacy Cinder > > keystone catalog entries': > > [...] > > the issue you are hitting looks like this > https://bugzilla.redhat.com/show_bug.cgi?id=2078898 > > it's currently still being investigated and I suggest monitoring the > bugzilla page for updates; I don't think there is a Launchpad bug for > this yet (unless I missed it) > > as important: in the future, please don't post multiple messages to the > mailing list just to fix the formatting, it has possibly caused more > harm than good > -- > Giulio Fidente > GPG KEY: 08D733BA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Tue May 31 11:50:06 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Tue, 31 May 2022 17:20:06 +0530 Subject: [cinder] Zed R-18 virtual mid cycle on 1st June Message-ID: Hello Argonauts, The first Zed mid cycle R-18 will be held on 1st June with the following details: Date: 01st June 2022 Time: 1400-1600 UTC Meeting link: https://bluejeans.com/556681290 Etherpad: https://etherpad.opendev.org/p/cinder-zed-midcycles Please add topics to the etherpad and also mark the date and time on your calendar. Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lokendrarathour at gmail.com Tue May 31 03:29:12 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 31 May 2022 08:59:12 +0530 Subject: ERROR openstack [-] Resource OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type OS::Neutron::Port and the Neutron service is not available when using ephemeral Heat.| Openstack tripleo wallaby version In-Reply-To: References: Message-ID: Hi Swogat, I tried generating the scripts as used by you in your deployments using the #openstack overcloud network provision --stack overcloud --output networks-deployed-environment.yaml custom_network_data.yaml # openstack overcloud network vip provision --stack overcloud --output vip-deployed-environment.yaml custom_vip_data.yaml # openstack overcloud node provision --stack overcloud --overcloud-ssh-key /home/stack/sshkey/id_rsa overcloud-baremetal-deploy.yaml and used the first two in the final deployment script, but it gives the error: heatclient.exc.HTTPInternalServerError: ERROR: Internal Error 2022-05-30 14:14:39.772 479668 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent call last):\n', ' File "/usr/lib/python3.6/ted_stack\n nested_stack.validate()\n', ' File "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in wrapper\n result = f(*args, ine 969, in validate\n result = res.validate()\n', ' File "/usr/lib/python3.6/site-packages/heat/engine/resources/openstack/neutron/port.py", line 454site-packages/heat/engine/resources/openstack/neutron/neutron.py", line 43, in validate\n res = super(NeutronResource, self).validate()\n', ' File "/un return self.validate_template()\n', ' File "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in validate_template\n self.t.rpy", line 200, in _validate_service_availability\n raise ex\n', 'heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron is not avaieutron network endpoint is not in service catalog.\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recens/stack_resource.py", line 75, in validate_nested_stack\n nested_stack.validate()\n', ' File "/usr/lib/python3.6/site-packages/osprofiler/profiler.py"thon3.6/site-packages/heat/engine/stack.py", line 969, in validate\n result = res.validate()\n', ' File "/usr/lib/python3.6/site-packages/heat/engine/ateResource, self).validate()\n', ' File "/usr/lib/python3.6/site-packages/heat/engine/resources/stack_resource.py", line 65, in validate\n self.validources/stack_resource.py", line 81, in validate_nested_stack\n ex, path=[self.stack.t.RESOURCES, path])\n', 'heat.common.exception.StackValidationFaileeploy/overcloud/tripleo-heat-templates/deployed-server/deployed-server.yaml>: HEAT-E99001 Service neutron is not available for resource type OS::TripleO::vice catalog.\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', ' File "/usr/lib/pline 320, in validate_nested_stack\n nested_stack.validate()\n', ' File "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in wrappe/heat/engine/stack.py", line 969, in validate\n result = res.validate()\n', ' File "/usr/lib/python3.6/site-packages/heat/engine/resources/template_relidate()\n', ' File "/usr/lib/python3.6/site-packages/heat/engine/resources/stack_resource.py", line 65, in validate\n self.validate_nested_stack()\n'.py", line 81, in validate_nested_stack\n ex, path=[self.stack.t.RESOURCES, path])\n', 
'heat.common.exception.StackValidationFailed: ResourceTypeUnavaimplates/puppet/compute-role.yaml>.resources.NovaCompute.resources.0.resources.NovaCompute: HEAT-E9900::ControlPlanePort, reason: neutron network endpoint is not in service catalog.\n']. Request you to check once, please. On Mon, May 30, 2022 at 11:06 AM Lokendra Rathour wrote: > Hi Swogat, > Thanks once again. > > with the files as shown below I am running the overcloud deploy for > wallaby using this command: > > (undercloud) [stack at undercloud ~]$ cat deploy_overcloud_working_1.sh > openstack overcloud deploy --templates \ > -n /home/stack/templates/network_data.yaml \ > -r /home/stack/templates/roles_data.yaml \ > -e /home/stack/templates/environment.yaml \ > -e /home/stack/templates/environments/network-isolation.yaml \ > -e /home/stack/templates/environments/network-environment.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml > \ > -e /home/stack/templates/ironic-config.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ > -e /home/stack/containers-prepare-parameter.yaml > (undercloud) [stack at undercloud ~]$ > > > This deployment is on ipv6 using triple0 wallaby, templates, as mentioned > below, are generated using rendering steps and the network_data.yaml the > roles_data.yaml > Steps used to render the templates: > cd /usr/share/openstack-tripleo-heat-templates/ > ./tools/process-templates.py -o > ~/openstack-tripleo-heat-templates-rendered_at_wallaby -n > /home/stack/templates/network_data.yaml -r > /home/stack/templates/roles_data.yaml > > *Now if i try adding the related to VIP port I do get the error as:* > > 2022-05-30 10:37:12.792 979387 WARNING > tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template > to file: > /home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/controller-role.yaml > 2022-05-30 10:37:12.792 979387 WARNING > tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template > to file: > /home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/compute-role.yaml > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured > while running the command: ValueError: The environment is not a valid YAML > mapping data type. 
> 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent > call last): > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, > self).run(parsed_args) > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in > run > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, > self).run(parsed_args) > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = > self.take_action(parsed_args) or 0 > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 1189, in take_action > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud stack, parsed_args, > new_tht_root, user_tht_root) > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 227, in create_env_files > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud created_env_files, > parsed_args, new_tht_root, user_tht_root) > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 204, in build_image_params > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud cleanup=(not > parsed_args.no_cleanup)) > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 1929, in > process_multiple_environments > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud env_path=env_path, > include_env_in_files=include_env_in_files) > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heatclient/common/template_utils.py", > line 326, in process_environment_and_files > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud env = > environment_format.parse(raw_env) > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/heatclient/common/environment_format.py", > line 50, in parse > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud raise > ValueError(_('The environment is not a valid ' > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud ValueError: The > environment is not a valid YAML mapping data type. > 2022-05-30 10:37:14.455 979387 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2022-05-30 10:37:14.457 979387 ERROR openstack [-] The environment is not > a valid YAML mapping data type. 
> 2022-05-30 10:37:14.457 979387 INFO osc_lib.shell [-] END return value: 1 > (undercloud) [stack at undercloud ~]$ > > This is more of a syntax error where it is not able to understand the > passed VIP data file: > > undercloud) [stack at undercloud ~]$ cat > /home/stack/templates/vip-data-default-network-isolation.yaml > - > dns_name: overcloud > network: internal_api > - > dns_name: overcloud > network: external > - > dns_name: overcloud > network: ctlplane > - > dns_name: overcloud > network: oc_provisioning > - > dns_name: overcloud > network: j3mgmt > > > Please advise, also please note that similar templates generated in prior > releases such as train/ussuri works perfectly. > > > > Please check the list of *templates *files: > > drwxr-xr-x. 2 stack stack 68 May 30 09:22 environments > -rw-r--r--. 1 stack stack 265 May 27 13:47 environment.yaml > -rw-rw-r--. 1 stack stack 297 May 27 13:47 init-repo.yaml > -rw-r--r--. 1 stack stack 570 May 27 13:47 ironic-config.yaml > drwxrwxr-x. 4 stack stack 4096 May 27 13:53 network > -rw-r--r--. 1 stack stack 6370 May 27 14:26 network_data.yaml > -rw-r--r--. 1 stack stack 11137 May 27 13:53 roles_data.yaml > -rw-r--r--. 1 stack stack 234 May 30 09:23 > vip-data-default-network-isolation.yaml > > > > (undercloud) [stack at undercloud templates]$ cat environment.yaml > > parameter_defaults: > OvercloudControllerFlavor: control > OvercloudComputeFlavor: compute > ControllerCount: 3 > ComputeCount: 1 > TimeZone: 'Asia/Kolkata' > NtpServer: ['30.30.30.3'] > NeutronBridgeMappings: datacentre:br-tenant > NeutronFlatNetworks: datacentre > (undercloud) [stack at undercloud templates]$ > > > > (undercloud) [stack at undercloud templates]$ cat ironic-config.yaml > > parameter_defaults: > NovaSchedulerDefaultFilters: > - AggregateInstanceExtraSpecsFilter > - AvailabilityZoneFilter > - ComputeFilter > - ComputeCapabilitiesFilter > - ImagePropertiesFilter > IronicEnabledHardwareTypes: > - ipmi > - redfish > IronicEnabledPowerInterfaces: > - ipmitool > - redfish > IronicEnabledManagementInterfaces: > - ipmitool > - redfish > IronicCleaningDiskErase: metadata > IronicIPXEEnabled: true > IronicInspectorSubnets: > - ip_range: 172.23.3.100,172.23.3.150 > > (undercloud) [stack at undercloud templates]$ cat network_data.yaml > > - name: J3Mgmt > name_lower: j3mgmt > vip: true > vlan: 400 > ipv6: true > ipv6_subnet: 'fd80:fd00:fd00:4000::/64' > ipv6_allocation_pools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': > 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] > mtu: 9000 > > > > - name: InternalApi > name_lower: internal_api > vip: true > vlan: 418 > ipv6: true > ipv6_subnet: 'fd00:fd00:fd00:2000::/64' > ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': > 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] > mtu: 9000 > > > - name: External > vip: true > name_lower: external > vlan: 408 > ipv6: true > gateway_ipv6: 'fd00:fd00:fd00:9900::1' > ipv6_subnet: 'fd00:fd00:fd00:9900::/64' > ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': > 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] > mtu: 9000 > > > - name: OCProvisioning > vip: true > name_lower: oc_provisioning > vlan: 412 > ip_subnet: '172.23.3.0/24' > allocation_pools: [{'start': '172.23.3.10', 'end': '172.23.3.50'}] > mtu: 9000 > > > > > (undercloud) [stack at undercloud templates]$ cat roles_data.yaml > > > ############################################################################### > # File generated by TripleO > > 
############################################################################### > > ############################################################################### > # Role: Controller > # > > ############################################################################### > - name: Controller > description: | > Controller role that has all the controller services loaded and handles > Database, Messaging, and Network functions. > CountDefault: 1 > tags: > - primary > - controller > # Create external Neutron bridge for SNAT (and floating IPs when using > # ML2/OVS without DVR) > - external_bridge > networks: > External: > subnet: external_subnet > InternalApi: > subnet: internal_api_subnet > OCProvisioning: > subnet: oc_provisioning_subnet > J3Mgmt: > subnet: j3mgmt_subnet > > > # For systems with both IPv4 and IPv6, you may specify a gateway network > for > # each, such as ['ControlPlane', 'External'] > default_route_networks: ['External'] > HostnameFormatDefault: '%stackname%-controller-%index%' > RoleParametersDefault: > OVNCMSOptions: "enable-chassis-as-gw" > # Deprecated & backward-compatible values (FIXME: Make parameters > consistent) > # Set uses_deprecated_params to True if any deprecated params are used. > uses_deprecated_params: True > deprecated_param_extraconfig: 'controllerExtraConfig' > deprecated_param_flavor: 'OvercloudControlFlavor' > deprecated_param_image: 'controllerImage' > deprecated_nic_config_name: 'controller.yaml' > update_serial: 1 > ServicesDefault: > - OS::TripleO::Services::Aide > - OS::TripleO::Services::AodhApi > - OS::TripleO::Services::AodhEvaluator > > .. > . > > > ..############################################################################### > # Role: Compute > # > > ############################################################################### > - name: Compute > description: | > Basic Compute Node role > CountDefault: 1 > # Create external Neutron bridge (unset if using ML2/OVS without DVR) > tags: > - compute > - external_bridge > networks: > InternalApi: > subnet: internal_api_subnet > J3Mgmt: > subnet: j3mgmt_subnet > HostnameFormatDefault: '%stackname%-novacompute-%index%' > RoleParametersDefault: > FsAioMaxNumber: 1048576 > TunedProfileName: "virtual-host" > # Deprecated & backward-compatible values (FIXME: Make parameters > consistent) > # Set uses_deprecated_params to True if any deprecated params are used. > # These deprecated_params only need to be used for existing roles and > not for > # composable roles. > uses_deprecated_params: True > deprecated_param_image: 'NovaImage' > deprecated_param_extraconfig: 'NovaComputeExtraConfig' > deprecated_param_metadata: 'NovaComputeServerMetadata' > deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints' > deprecated_param_ips: 'NovaComputeIPs' > deprecated_server_resource_name: 'NovaCompute' > > deprecated_nic_config_name: 'compute.yaml' > update_serial: 25 > ServicesDefault: > - OS::TripleO::Services::Aide > - OS::TripleO::Services::AuditD > - OS::TripleO::Services::BootParams > > > (undercloud) [stack at undercloud templates]$ cat > environments/network-environment.yaml > > #This file is an example of an environment file for defining the isolated > #networks and related parameters. > resource_registry: > # Network Interface templates to use (these files must exist). You can > # override these by including one of the net-*.yaml environment files, > # such as net-bond-with-vlans.yaml, or modifying the list here. 
> # Port assignments for the Controller > OS::TripleO::Controller::Net::SoftwareConfig: OS::Heat::None > # Port assignments for the Compute > OS::TripleO::Compute::Net::SoftwareConfig: OS::Heat::None > > > parameter_defaults: > # This section is where deployment-specific configuration is done > # > ServiceNetMap: > IronicApiNetwork: oc_provisioning > IronicNetwork: oc_provisioning > > > > # This section is where deployment-specific configuration is done > ControllerNetworkConfigTemplate: 'templates/bonds_vlans/bonds_vlans.j2' > ComputeNetworkConfigTemplate: 'templates/bonds_vlans/bonds_vlans.j2' > > > > # Customize the IP subnet to match the local environment > J3MgmtNetCidr: 'fd80:fd00:fd00:4000::/64' > # Customize the IP range to use for static IPs and VIPs > J3MgmtAllocationPools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': > 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] > # Customize the VLAN ID to match the local environment > J3MgmtNetworkVlanID: 400 > > > > # Customize the IP subnet to match the local environment > InternalApiNetCidr: 'fd00:fd00:fd00:2000::/64' > # Customize the IP range to use for static IPs and VIPs > InternalApiAllocationPools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': > 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] > # Customize the VLAN ID to match the local environment > InternalApiNetworkVlanID: 418 > > > > # Customize the IP subnet to match the local environment > ExternalNetCidr: 'fd00:fd00:fd00:9900::/64' > # Customize the IP range to use for static IPs and VIPs > # Leave room if the external network is also used for floating IPs > ExternalAllocationPools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': > 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] > # Gateway router for routable networks > ExternalInterfaceDefaultRoute: 'fd00:fd00:fd00:9900::1' > # Customize the VLAN ID to match the local environment > ExternalNetworkVlanID: 408 > > > > # Customize the IP subnet to match the local environment > OCProvisioningNetCidr: '172.23.3.0/24' > # Customize the IP range to use for static IPs and VIPs > OCProvisioningAllocationPools: [{'start': '172.23.3.10', 'end': > '172.23.3.50'}] > # Customize the VLAN ID to match the local environment > OCProvisioningNetworkVlanID: 412 > > > > # List of Neutron network types for tenant networks (will be used in > order) > NeutronNetworkType: 'geneve,vlan' > # Neutron VLAN ranges per network, for example > 'datacentre:1:499,tenant:500:1000': > NeutronNetworkVLANRanges: 'datacentre:1:1000' > # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 > miimon=100" > # for Linux bonds w/LACP, or "bond_mode=active-backup" for OVS > active/backup. > BondInterfaceOvsOptions: "bond_mode=active-backup" > > (undercloud) [stack at undercloud templates]$ > > > (undercloud) [stack at undercloud templates]$ cat > environments/network-isolation.yaml > > # NOTE: This template is now deprecated, and is only included for > compatibility > # when upgrading a deployment where this template was originally used. For > new > # deployments, set "ipv6: true" on desired networks in network_data.yaml, > and > # include network-isolation.yaml. > # > # Enable the creation of Neutron networks for isolated Overcloud > # traffic and configure each role to assign ports (related > # to that role) on these networks. 
> resource_registry: > # networks as defined in network_data.yaml > OS::TripleO::Network::J3Mgmt: ../network/j3mgmt_v6.yaml > OS::TripleO::Network::InternalApi: ../network/internal_api_v6.yaml > OS::TripleO::Network::External: ../network/external_v6.yaml > OS::TripleO::Network::OCProvisioning: ../network/oc_provisioning.yaml > > > # Port assignments for the VIPs > OS::TripleO::Network::Ports::J3MgmtVipPort: > ../network/ports/j3mgmt_v6.yaml > OS::TripleO::Network::Ports::InternalApiVipPort: > ../network/ports/internal_api_v6.yaml > OS::TripleO::Network::Ports::ExternalVipPort: > ../network/ports/external_v6.yaml > OS::TripleO::Network::Ports::OCProvisioningVipPort: > ../network/ports/oc_provisioning.yaml > > > > # Port assignments by role, edit role definition to assign networks to > roles. > # Port assignments for the Controller > OS::TripleO::Controller::Ports::J3MgmtPort: > ../network/ports/j3mgmt_v6.yaml > OS::TripleO::Controller::Ports::InternalApiPort: > ../network/ports/internal_api_v6.yaml > OS::TripleO::Controller::Ports::ExternalPort: > ../network/ports/external_v6.yaml > OS::TripleO::Controller::Ports::OCProvisioningPort: > ../network/ports/oc_provisioning.yaml > # Port assignments for the Compute > OS::TripleO::Compute::Ports::J3MgmtPort: ../network/ports/j3mgmt_v6.yaml > OS::TripleO::Compute::Ports::InternalApiPort: > ../network/ports/internal_api_v6.yaml > > > > parameter_defaults: > # Enable IPv6 environment for Manila > ManilaIPv6: True > > (undercloud) [stack at undercloud templates]$ > > > > > > > > > > On Tue, May 24, 2022 at 5:04 PM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Thanks, I'll check them out. >> will let you know in case it works out. >> >> On Tue, May 24, 2022 at 2:37 PM Swogat Pradhan >> wrote: >> >>> Hi, >>> Please find the below templates: >>> These are for openstack wallaby release: >>> >>> (undercloud) [stack at hkg2director workplace]$ cat >>> custom_network_data.yaml >>> - name: Storage >>> name_lower: storage >>> vip: true >>> mtu: 1500 >>> subnets: >>> storage_subnet: >>> ip_subnet: 172.25.202.0/26 >>> allocation_pools: >>> - start: 172.25.202.6 >>> end: 172.25.202.20 >>> vlan: 1105 >>> - name: StorageMgmt >>> name_lower: storage_mgmt >>> vip: true >>> mtu: 1500 >>> subnets: >>> storage_mgmt_subnet: >>> ip_subnet: 172.25.202.64/26 >>> allocation_pools: >>> - start: 172.25.202.72 >>> end: 172.25.202.87 >>> vlan: 1106 >>> - name: InternalApi >>> name_lower: internal_api >>> vip: true >>> mtu: 1500 >>> subnets: >>> internal_api_subnet: >>> ip_subnet: 172.25.201.192/26 >>> allocation_pools: >>> - start: 172.25.201.198 >>> end: 172.25.201.212 >>> vlan: 1104 >>> - name: Tenant >>> vip: false # Tenant network does not use VIPs >>> mtu: 1500 >>> name_lower: tenant >>> subnets: >>> tenant_subnet: >>> ip_subnet: 172.25.202.128/26 >>> allocation_pools: >>> - start: 172.25.202.135 >>> end: 172.25.202.150 >>> vlan: 1108 >>> - name: External >>> name_lower: external >>> vip: true >>> mtu: 1500 >>> subnets: >>> external_subnet: >>> ip_subnet: 172.25.201.128/26 >>> allocation_pools: >>> - start: 172.25.201.135 >>> end: 172.25.201.150 >>> gateway_ip: 172.25.201.129 >>> vlan: 1103 >>> >>> (undercloud) [stack at hkg2director workplace]$ cat custom_vip_data.yaml >>> - network: ctlplane >>> #dns_name: overcloud >>> ip_address: 172.25.201.91 >>> subnet: ctlplane-subnet >>> - network: external >>> #dns_name: overcloud >>> ip_address: 172.25.201.150 >>> subnet: external_subnet >>> - network: internal_api >>> #dns_name: overcloud >>> ip_address: 
172.25.201.250 >>> subnet: internal_api_subnet >>> - network: storage >>> #dns_name: overcloud >>> ip_address: 172.25.202.50 >>> subnet: storage_subnet >>> - network: storage_mgmt >>> #dns_name: overcloud >>> ip_address: 172.25.202.90 >>> subnet: storage_mgmt_subnet >>> >>> (undercloud) [stack at hkg2director workplace]$ cat >>> overcloud-baremetal-deploy.yaml >>> - name: Controller >>> count: 4 >>> defaults: >>> networks: >>> - network: ctlplane >>> vif: true >>> - network: external >>> subnet: external_subnet >>> - network: internal_api >>> subnet: internal_api_subnet >>> - network: storage >>> subnet: storage_subnet >>> - network: storage_mgmt >>> subnet: storage_mgmt_subnet >>> - network: tenant >>> subnet: tenant_subnet >>> network_config: >>> template: /home/stack/templates/controller.j2 >>> default_route_network: >>> - external >>> instances: >>> - hostname: overcloud-controller-0 >>> name: dc1-controller2 >>> #provisioned: false >>> - hostname: overcloud-controller-1 >>> name: dc2-controller2 >>> #provisioned: false >>> - hostname: overcloud-controller-2 >>> name: dc1-controller1 >>> #provisioned: false >>> - hostname: overcloud-controller-no-ceph-3 >>> name: dc2-ceph2 >>> #provisioned: false >>> #- hostname: overcloud-controller-3 >>> #name: dc2-compute3 >>> #provisioned: false >>> >>> - name: Compute >>> count: 5 >>> defaults: >>> networks: >>> - network: ctlplane >>> vif: true >>> - network: internal_api >>> subnet: internal_api_subnet >>> - network: tenant >>> subnet: tenant_subnet >>> - network: storage >>> subnet: storage_subnet >>> network_config: >>> template: /home/stack/templates/compute.j2 >>> instances: >>> - hostname: overcloud-novacompute-0 >>> name: dc2-compute1 >>> #provisioned: false >>> - hostname: overcloud-novacompute-1 >>> name: dc2-ceph1 >>> #provisioned: false >>> - hostname: overcloud-novacompute-2 >>> name: dc1-compute1 >>> #provisioned: false >>> - hostname: overcloud-novacompute-3 >>> name: dc1-compute2 >>> #provisioned: false >>> - hostname: overcloud-novacompute-4 >>> name: dc2-compute3 >>> #provisioned: false >>> >>> - name: CephStorage >>> count: 4 >>> defaults: >>> networks: >>> - network: ctlplane >>> vif: true >>> - network: internal_api >>> subnet: internal_api_subnet >>> - network: storage >>> subnet: storage_subnet >>> - network: storage_mgmt >>> subnet: storage_mgmt_subnet >>> network_config: >>> template: /home/stack/templates/ceph-storage.j2 >>> instances: >>> - hostname: overcloud-cephstorage-0 >>> name: dc2-controller1 >>> #provisioned: false >>> # - hostname: overcloud-cephstorage-1 >>> # name: dc2-ceph2 >>> - hostname: overcloud-cephstorage-1 >>> name: dc1-ceph1 >>> # provisioned: false >>> - hostname: overcloud-cephstorage-2 >>> name: dc1-ceph2 >>> #provisioned: false >>> - hostname: overcloud-cephstorage-3 >>> name: dc2-compute2 >>> #provisioned: false >>> >>> >>> You must use these templates to provision network, vip and nodes. >>> You must use the output files generated during the provisioning step in >>> openstack overcloud deploy command using -e parameter. >>> >>> With regards, >>> Swogat Pradhan >>> >>> >>> On Mon, May 23, 2022 at 8:33 PM Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Hi Swogat, >>>> I tried checking your solution and my templates but could not relate >>>> much. >>>> But issue seems the same >>>> >>>> http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028401.html >>>> >>>> I tried somemore ways but looks like some issue with templates. 
>>>> Can you please share the templates used to deploy the overcloud. >>>> >>>> Mysetup have 3 controller and 1 compute. >>>> >>>> Thanks once again for reading my mail. >>>> >>>> Waiting for your reply. >>>> >>>> -Lokendra >>>> >>>> On Fri, 20 May 2022, 08:25 Swogat Pradhan, >>>> wrote: >>>> >>>>> Hi, >>>>> Yes I was able to find the issue and fix it. >>>>> The issue was with the overcloud-baremetal-deployed.yaml file i was >>>>> trying to provision controller-0, controller-1 and controller-3 and kept >>>>> controller-2 aside for later, but the tripleo scripts are built in such a >>>>> way that they were taking controller- 0, 1 and 2 inplace of controller-3, >>>>> so the network ports and vip were created for controller 0,1 and 2 but not >>>>> for 3 , so this error was popping off. Also i would request you to check >>>>> the jinja nic templates and once the node provisioning is done check the >>>>> /etc/os-net-config/config.json/yaml file for syntax if using bonded nic >>>>> template. >>>>> If you need any more infor please let me know. >>>>> >>>>> With regards, >>>>> Swogat Pradhan >>>>> >>>>> >>>>> >>>>> On Fri, May 20, 2022 at 8:01 AM Lokendra Rathour < >>>>> lokendrarathour at gmail.com> wrote: >>>>> >>>>>> Hi Swogat, >>>>>> Thanks for raising this issue. >>>>>> Did you find any solution? to this problem ? >>>>>> >>>>>> Please let me know it might be helpful >>>>>> >>>>>> >>>>>> On Tue, Apr 19, 2022 at 12:43 PM Swogat Pradhan < >>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> I am currently trying to deploy openstack wallaby using tripleo arch. >>>>>>> I created the network jinja templates, ran the following commands >>>>>>> also: >>>>>>> >>>>>>> #openstack overcloud network provision --stack overcloud --output >>>>>>> networks-deployed-environment.yaml custom_network_data.yaml >>>>>>> # openstack overcloud network vip provision --stack overcloud >>>>>>> --output vip-deployed-environment.yaml custom_vip_data.yaml >>>>>>> # openstack overcloud node provision --stack overcloud >>>>>>> --overcloud-ssh-key /home/stack/sshkey/id_rsa >>>>>>> overcloud-baremetal-deploy.yaml >>>>>>> >>>>>>> and used the environment files in the openstack overcloud deploy >>>>>>> command: >>>>>>> >>>>>>> (undercloud) [stack at hkg2director ~]$ cat deploy.sh >>>>>>> #!/bin/bash >>>>>>> THT=/usr/share/openstack-tripleo-heat-templates/ >>>>>>> CNF=/home/stack/ >>>>>>> openstack overcloud deploy --templates $THT \ >>>>>>> -r $CNF/templates/roles_data.yaml \ >>>>>>> -n $CNF/workplace/custom_network_data.yaml \ >>>>>>> -e ~/containers-prepare-parameter.yaml \ >>>>>>> -e $CNF/templates/node-info.yaml \ >>>>>>> -e $CNF/templates/scheduler-hints.yaml \ >>>>>>> -e $CNF/workplace/networks-deployed-environment.yaml \ >>>>>>> -e $CNF/workplace/vip-deployed-environment.yaml \ >>>>>>> -e $CNF/workplace/overcloud-baremetal-deployed.yaml \ >>>>>>> -e $CNF/workplace/custom-net-bond-with-vlans.yaml >>>>>>> >>>>>>> Now when i run the ./deploy.sh script i encounter an error stating: >>>>>>> >>>>>>> ERROR openstack [-] Resource >>>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>>> ephemeral Heat. 
The generated environments from 'openstack overcloud >>>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>>> included with the deployment command.: >>>>>>> tripleoclient.exceptions.InvalidConfiguration: Resource >>>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>>> ephemeral Heat. The generated environments from 'openstack overcloud >>>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>>> included with the deployment command. >>>>>>> 2022-04-19 13:47:16.582 735924 INFO osc_lib.shell [-] END return >>>>>>> value: 1 >>>>>>> >>>>>>> Can someone tell me where the mistake is? >>>>>>> >>>>>>> With regards, >>>>>>> Swogat Pradhan >>>>>>> >>>>>> >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Tue May 31 06:10:17 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 31 May 2022 11:40:17 +0530 Subject: ERROR openstack [-] Resource OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type OS::Neutron::Port and the Neutron service is not available when using ephemeral Heat.| Openstack tripleo wallaby version In-Reply-To: References: Message-ID: Hi Lokendra, You need to generate another file also in the following step: openstack overcloud node provision --stack overcloud --overcloud-ssh-key /home/stack/sshkey/id_rsa overcloud-baremetal-deploy.yaml also you need to pass another parameter --network-config. example: openstack overcloud node provision --stack overcloud --overcloud-ssh-key /home/stack/sshkey/id_rsa *--network-config* *--output overcloud-baremetal-deployed.yaml* overcloud-baremetal-deploy.yaml And then all these output files will be passed on to the openstack overcloud deploy command. NOTE: when passing --network-config parameter in node provision step, it creates a directory in /etc/os-net-config and in it creates a file config.yaml, do check the indentation of that file. 
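Putting the pieces together, the full sequence would look roughly like the below (file names reused from the examples earlier in this thread, not a canonical reference; adjust the paths and data files to your own setup):

openstack overcloud network provision --stack overcloud \
    --output networks-deployed-environment.yaml custom_network_data.yaml

openstack overcloud network vip provision --stack overcloud \
    --output vip-deployed-environment.yaml custom_vip_data.yaml

openstack overcloud node provision --stack overcloud \
    --overcloud-ssh-key /home/stack/sshkey/id_rsa \
    --network-config \
    --output overcloud-baremetal-deployed.yaml \
    overcloud-baremetal-deploy.yaml

All three generated files (networks-deployed-environment.yaml, vip-deployed-environment.yaml and overcloud-baremetal-deployed.yaml) are then passed to openstack overcloud deploy with -e, as in the deploy.sh quoted further down in this thread.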
(in my case the indentation was wrong when i was using bondind everytime, so i had to manually change the script and run a while loop and then my node provision step was successful) On Tue, May 31, 2022 at 8:59 AM Lokendra Rathour wrote: > Hi Swogat, > I tried generating the scripts as used by you in your deployments using the > > > #openstack overcloud network provision --stack overcloud --output > networks-deployed-environment.yaml custom_network_data.yaml > # openstack overcloud network vip provision --stack overcloud --output > vip-deployed-environment.yaml custom_vip_data.yaml > # openstack overcloud node provision --stack overcloud > --overcloud-ssh-key /home/stack/sshkey/id_rsa > overcloud-baremetal-deploy.yaml > > and used the first two in the final deployment script, but it gives the > error: > > heatclient.exc.HTTPInternalServerError: ERROR: Internal Error > 2022-05-30 14:14:39.772 479668 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent > call last):\n', ' File "/usr/lib/python3.6/ted_stack\n > nested_stack.validate()\n', ' File > "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in > wrapper\n result = f(*args, ine 969, in validate\n result = > res.validate()\n', ' File > "/usr/lib/python3.6/site-packages/heat/engine/resources/openstack/neutron/port.py", > line 454site-packages/heat/engine/resources/openstack/neutron/neutron.py", > line 43, in validate\n res = super(NeutronResource, self).validate()\n', > ' File "/un return self.validate_template()\n', ' File > "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in > validate_template\n self.t.rpy", line 200, in > _validate_service_availability\n raise ex\n', > 'heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron > is not avaieutron network endpoint is not in service catalog.\n', '\nDuring > handling of the above exception, another exception occurred:\n\n', > 'Traceback (most recens/stack_resource.py", line 75, in > validate_nested_stack\n nested_stack.validate()\n', ' File > "/usr/lib/python3.6/site-packages/osprofiler/profiler.py"thon3.6/site-packages/heat/engine/stack.py", > line 969, in validate\n result = res.validate()\n', ' File > "/usr/lib/python3.6/site-packages/heat/engine/ateResource, > self).validate()\n', ' File > "/usr/lib/python3.6/site-packages/heat/engine/resources/stack_resource.py", > line 65, in validate\n self.validources/stack_resource.py", line 81, in > validate_nested_stack\n ex, path=[self.stack.t.RESOURCES, path])\n', > 'heat.common.exception.StackValidationFaileeploy/overcloud/tripleo-heat-templates/deployed-server/deployed-server.yaml>: > HEAT-E99001 Service neutron is not available for resource type > OS::TripleO::vice catalog.\n', '\nDuring handling of the above exception, > another exception occurred:\n\n', 'Traceback (most recent call last):\n', ' > File "/usr/lib/pline 320, in validate_nested_stack\n > nested_stack.validate()\n', ' File > "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in > wrappe/heat/engine/stack.py", line 969, in validate\n result = > res.validate()\n', ' File > "/usr/lib/python3.6/site-packages/heat/engine/resources/template_relidate()\n', > ' File > "/usr/lib/python3.6/site-packages/heat/engine/resources/stack_resource.py", > line 65, in validate\n self.validate_nested_stack()\n'.py", line 81, in > validate_nested_stack\n ex, path=[self.stack.t.RESOURCES, path])\n', > 'heat.common.exception.StackValidationFailed: > 
ResourceTypeUnavaimplates/puppet/compute-role.yaml>.resources.NovaCompute reason: neutron network endpoint is not in service catalog.\n', '\nDuring > handling of the above exception, > another/lib/python3.6/site-packages/heat/common/context.py", line 416, in > wrapped\n return func(self, ctx, *args, **kwargs)\n', ' File > "/usr/lib/python3.6/sirce_name, template_id)\n', ' File > "/usr/lib/python3.6/site-packages/heat/engine/service.py", line 756, in > _parse_template_and_validate_stack\n stack.v line 160, in wrapper\n > result = f(*args, **kwargs)\n', ' File > "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 969, in > validate\n resesources/stack_resource.py", line 65, in validate\n > self.validate_nested_stack()\n', ' File > "/usr/lib/python3.6/site-packages/heat/engine/resources/oph=[self.stack.t.RESOURCES, > path])\n', 'heat.common.exception.StackValidationFailed: > ResourceTypeUnavailable: > resources.Compute.resources.0.resources.NovaCompute: > HEAT-E9900::ControlPlanePort, reason: neutron network endpoint is not in > service catalog.\n']. > > Request you to check once, please. > > > > > On Mon, May 30, 2022 at 11:06 AM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Hi Swogat, >> Thanks once again. >> >> with the files as shown below I am running the overcloud deploy for >> wallaby using this command: >> >> (undercloud) [stack at undercloud ~]$ cat deploy_overcloud_working_1.sh >> openstack overcloud deploy --templates \ >> -n /home/stack/templates/network_data.yaml \ >> -r /home/stack/templates/roles_data.yaml \ >> -e /home/stack/templates/environment.yaml \ >> -e /home/stack/templates/environments/network-isolation.yaml \ >> -e /home/stack/templates/environments/network-environment.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >> \ >> -e /home/stack/templates/ironic-config.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >> -e /home/stack/containers-prepare-parameter.yaml >> (undercloud) [stack at undercloud ~]$ >> >> >> This deployment is on ipv6 using triple0 wallaby, templates, as mentioned >> below, are generated using rendering steps and the network_data.yaml the >> roles_data.yaml >> Steps used to render the templates: >> cd /usr/share/openstack-tripleo-heat-templates/ >> ./tools/process-templates.py -o >> ~/openstack-tripleo-heat-templates-rendered_at_wallaby -n >> /home/stack/templates/network_data.yaml -r >> /home/stack/templates/roles_data.yaml >> >> *Now if i try adding the related to VIP port I do get the error as:* >> >> 2022-05-30 10:37:12.792 979387 WARNING >> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template >> to file: >> /home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/controller-role.yaml >> 2022-05-30 10:37:12.792 979387 WARNING >> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template >> to file: >> /home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/compute-role.yaml >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured >> while running the command: ValueError: The environment is not a valid YAML >> mapping data 
type. >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent >> call last): >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >> "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, >> self).run(parsed_args) >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >> "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in >> run >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, >> self).run(parsed_args) >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >> "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = >> self.take_action(parsed_args) or 0 >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >> line 1189, in take_action >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud stack, parsed_args, >> new_tht_root, user_tht_root) >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >> line 227, in create_env_files >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud created_env_files, >> parsed_args, new_tht_root, user_tht_root) >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >> line 204, in build_image_params >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud cleanup=(not >> parsed_args.no_cleanup)) >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >> "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 1929, in >> process_multiple_environments >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud env_path=env_path, >> include_env_in_files=include_env_in_files) >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >> "/usr/lib/python3.6/site-packages/heatclient/common/template_utils.py", >> line 326, in process_environment_and_files >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud env = >> environment_format.parse(raw_env) >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >> "/usr/lib/python3.6/site-packages/heatclient/common/environment_format.py", >> line 50, in parse >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud raise >> ValueError(_('The environment is not a valid ' >> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud ValueError: The >> environment is not a valid YAML mapping data type. 
>> 2022-05-30 10:37:14.455 979387 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud >> 2022-05-30 10:37:14.457 979387 ERROR openstack [-] The environment is not >> a valid YAML mapping data type. >> 2022-05-30 10:37:14.457 979387 INFO osc_lib.shell [-] END return value: 1 >> (undercloud) [stack at undercloud ~]$ >> >> This is more of a syntax error where it is not able to understand the >> passed VIP data file: >> >> undercloud) [stack at undercloud ~]$ cat >> /home/stack/templates/vip-data-default-network-isolation.yaml >> - >> dns_name: overcloud >> network: internal_api >> - >> dns_name: overcloud >> network: external >> - >> dns_name: overcloud >> network: ctlplane >> - >> dns_name: overcloud >> network: oc_provisioning >> - >> dns_name: overcloud >> network: j3mgmt >> >> >> Please advise, also please note that similar templates generated in prior >> releases such as train/ussuri works perfectly. >> >> >> >> Please check the list of *templates *files: >> >> drwxr-xr-x. 2 stack stack 68 May 30 09:22 environments >> -rw-r--r--. 1 stack stack 265 May 27 13:47 environment.yaml >> -rw-rw-r--. 1 stack stack 297 May 27 13:47 init-repo.yaml >> -rw-r--r--. 1 stack stack 570 May 27 13:47 ironic-config.yaml >> drwxrwxr-x. 4 stack stack 4096 May 27 13:53 network >> -rw-r--r--. 1 stack stack 6370 May 27 14:26 network_data.yaml >> -rw-r--r--. 1 stack stack 11137 May 27 13:53 roles_data.yaml >> -rw-r--r--. 1 stack stack 234 May 30 09:23 >> vip-data-default-network-isolation.yaml >> >> >> >> (undercloud) [stack at undercloud templates]$ cat environment.yaml >> >> parameter_defaults: >> OvercloudControllerFlavor: control >> OvercloudComputeFlavor: compute >> ControllerCount: 3 >> ComputeCount: 1 >> TimeZone: 'Asia/Kolkata' >> NtpServer: ['30.30.30.3'] >> NeutronBridgeMappings: datacentre:br-tenant >> NeutronFlatNetworks: datacentre >> (undercloud) [stack at undercloud templates]$ >> >> >> >> (undercloud) [stack at undercloud templates]$ cat ironic-config.yaml >> >> parameter_defaults: >> NovaSchedulerDefaultFilters: >> - AggregateInstanceExtraSpecsFilter >> - AvailabilityZoneFilter >> - ComputeFilter >> - ComputeCapabilitiesFilter >> - ImagePropertiesFilter >> IronicEnabledHardwareTypes: >> - ipmi >> - redfish >> IronicEnabledPowerInterfaces: >> - ipmitool >> - redfish >> IronicEnabledManagementInterfaces: >> - ipmitool >> - redfish >> IronicCleaningDiskErase: metadata >> IronicIPXEEnabled: true >> IronicInspectorSubnets: >> - ip_range: 172.23.3.100,172.23.3.150 >> >> (undercloud) [stack at undercloud templates]$ cat network_data.yaml >> >> - name: J3Mgmt >> name_lower: j3mgmt >> vip: true >> vlan: 400 >> ipv6: true >> ipv6_subnet: 'fd80:fd00:fd00:4000::/64' >> ipv6_allocation_pools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': >> 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] >> mtu: 9000 >> >> >> >> - name: InternalApi >> name_lower: internal_api >> vip: true >> vlan: 418 >> ipv6: true >> ipv6_subnet: 'fd00:fd00:fd00:2000::/64' >> ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': >> 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] >> mtu: 9000 >> >> >> - name: External >> vip: true >> name_lower: external >> vlan: 408 >> ipv6: true >> gateway_ipv6: 'fd00:fd00:fd00:9900::1' >> ipv6_subnet: 'fd00:fd00:fd00:9900::/64' >> ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': >> 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] >> mtu: 9000 >> >> >> - name: OCProvisioning >> vip: true >> name_lower: oc_provisioning >> vlan: 412 >> ip_subnet: '172.23.3.0/24' >> allocation_pools: 
[{'start': '172.23.3.10', 'end': '172.23.3.50'}] >> mtu: 9000 >> >> >> >> >> (undercloud) [stack at undercloud templates]$ cat roles_data.yaml >> >> >> ############################################################################### >> # File generated by TripleO >> >> ############################################################################### >> >> ############################################################################### >> # Role: Controller >> # >> >> ############################################################################### >> - name: Controller >> description: | >> Controller role that has all the controller services loaded and >> handles >> Database, Messaging, and Network functions. >> CountDefault: 1 >> tags: >> - primary >> - controller >> # Create external Neutron bridge for SNAT (and floating IPs when using >> # ML2/OVS without DVR) >> - external_bridge >> networks: >> External: >> subnet: external_subnet >> InternalApi: >> subnet: internal_api_subnet >> OCProvisioning: >> subnet: oc_provisioning_subnet >> J3Mgmt: >> subnet: j3mgmt_subnet >> >> >> # For systems with both IPv4 and IPv6, you may specify a gateway >> network for >> # each, such as ['ControlPlane', 'External'] >> default_route_networks: ['External'] >> HostnameFormatDefault: '%stackname%-controller-%index%' >> RoleParametersDefault: >> OVNCMSOptions: "enable-chassis-as-gw" >> # Deprecated & backward-compatible values (FIXME: Make parameters >> consistent) >> # Set uses_deprecated_params to True if any deprecated params are used. >> uses_deprecated_params: True >> deprecated_param_extraconfig: 'controllerExtraConfig' >> deprecated_param_flavor: 'OvercloudControlFlavor' >> deprecated_param_image: 'controllerImage' >> deprecated_nic_config_name: 'controller.yaml' >> update_serial: 1 >> ServicesDefault: >> - OS::TripleO::Services::Aide >> - OS::TripleO::Services::AodhApi >> - OS::TripleO::Services::AodhEvaluator >> >> .. >> . >> >> >> ..############################################################################### >> # Role: Compute >> # >> >> ############################################################################### >> - name: Compute >> description: | >> Basic Compute Node role >> CountDefault: 1 >> # Create external Neutron bridge (unset if using ML2/OVS without DVR) >> tags: >> - compute >> - external_bridge >> networks: >> InternalApi: >> subnet: internal_api_subnet >> J3Mgmt: >> subnet: j3mgmt_subnet >> HostnameFormatDefault: '%stackname%-novacompute-%index%' >> RoleParametersDefault: >> FsAioMaxNumber: 1048576 >> TunedProfileName: "virtual-host" >> # Deprecated & backward-compatible values (FIXME: Make parameters >> consistent) >> # Set uses_deprecated_params to True if any deprecated params are used. >> # These deprecated_params only need to be used for existing roles and >> not for >> # composable roles. 
>> uses_deprecated_params: True >> deprecated_param_image: 'NovaImage' >> deprecated_param_extraconfig: 'NovaComputeExtraConfig' >> deprecated_param_metadata: 'NovaComputeServerMetadata' >> deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints' >> deprecated_param_ips: 'NovaComputeIPs' >> deprecated_server_resource_name: 'NovaCompute' >> >> deprecated_nic_config_name: 'compute.yaml' >> update_serial: 25 >> ServicesDefault: >> - OS::TripleO::Services::Aide >> - OS::TripleO::Services::AuditD >> - OS::TripleO::Services::BootParams >> >> >> (undercloud) [stack at undercloud templates]$ cat >> environments/network-environment.yaml >> >> #This file is an example of an environment file for defining the isolated >> #networks and related parameters. >> resource_registry: >> # Network Interface templates to use (these files must exist). You can >> # override these by including one of the net-*.yaml environment files, >> # such as net-bond-with-vlans.yaml, or modifying the list here. >> # Port assignments for the Controller >> OS::TripleO::Controller::Net::SoftwareConfig: OS::Heat::None >> # Port assignments for the Compute >> OS::TripleO::Compute::Net::SoftwareConfig: OS::Heat::None >> >> >> parameter_defaults: >> # This section is where deployment-specific configuration is done >> # >> ServiceNetMap: >> IronicApiNetwork: oc_provisioning >> IronicNetwork: oc_provisioning >> >> >> >> # This section is where deployment-specific configuration is done >> ControllerNetworkConfigTemplate: 'templates/bonds_vlans/bonds_vlans.j2' >> ComputeNetworkConfigTemplate: 'templates/bonds_vlans/bonds_vlans.j2' >> >> >> >> # Customize the IP subnet to match the local environment >> J3MgmtNetCidr: 'fd80:fd00:fd00:4000::/64' >> # Customize the IP range to use for static IPs and VIPs >> J3MgmtAllocationPools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': >> 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] >> # Customize the VLAN ID to match the local environment >> J3MgmtNetworkVlanID: 400 >> >> >> >> # Customize the IP subnet to match the local environment >> InternalApiNetCidr: 'fd00:fd00:fd00:2000::/64' >> # Customize the IP range to use for static IPs and VIPs >> InternalApiAllocationPools: [{'start': 'fd00:fd00:fd00:2000::10', >> 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] >> # Customize the VLAN ID to match the local environment >> InternalApiNetworkVlanID: 418 >> >> >> >> # Customize the IP subnet to match the local environment >> ExternalNetCidr: 'fd00:fd00:fd00:9900::/64' >> # Customize the IP range to use for static IPs and VIPs >> # Leave room if the external network is also used for floating IPs >> ExternalAllocationPools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': >> 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] >> # Gateway router for routable networks >> ExternalInterfaceDefaultRoute: 'fd00:fd00:fd00:9900::1' >> # Customize the VLAN ID to match the local environment >> ExternalNetworkVlanID: 408 >> >> >> >> # Customize the IP subnet to match the local environment >> OCProvisioningNetCidr: '172.23.3.0/24' >> # Customize the IP range to use for static IPs and VIPs >> OCProvisioningAllocationPools: [{'start': '172.23.3.10', 'end': >> '172.23.3.50'}] >> # Customize the VLAN ID to match the local environment >> OCProvisioningNetworkVlanID: 412 >> >> >> >> # List of Neutron network types for tenant networks (will be used in >> order) >> NeutronNetworkType: 'geneve,vlan' >> # Neutron VLAN ranges per network, for example >> 'datacentre:1:499,tenant:500:1000': >> NeutronNetworkVLANRanges: 'datacentre:1:1000' 
>> # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 >> miimon=100" >> # for Linux bonds w/LACP, or "bond_mode=active-backup" for OVS >> active/backup. >> BondInterfaceOvsOptions: "bond_mode=active-backup" >> >> (undercloud) [stack at undercloud templates]$ >> >> >> (undercloud) [stack at undercloud templates]$ cat >> environments/network-isolation.yaml >> >> # NOTE: This template is now deprecated, and is only included for >> compatibility >> # when upgrading a deployment where this template was originally used. >> For new >> # deployments, set "ipv6: true" on desired networks in network_data.yaml, >> and >> # include network-isolation.yaml. >> # >> # Enable the creation of Neutron networks for isolated Overcloud >> # traffic and configure each role to assign ports (related >> # to that role) on these networks. >> resource_registry: >> # networks as defined in network_data.yaml >> OS::TripleO::Network::J3Mgmt: ../network/j3mgmt_v6.yaml >> OS::TripleO::Network::InternalApi: ../network/internal_api_v6.yaml >> OS::TripleO::Network::External: ../network/external_v6.yaml >> OS::TripleO::Network::OCProvisioning: ../network/oc_provisioning.yaml >> >> >> # Port assignments for the VIPs >> OS::TripleO::Network::Ports::J3MgmtVipPort: >> ../network/ports/j3mgmt_v6.yaml >> OS::TripleO::Network::Ports::InternalApiVipPort: >> ../network/ports/internal_api_v6.yaml >> OS::TripleO::Network::Ports::ExternalVipPort: >> ../network/ports/external_v6.yaml >> OS::TripleO::Network::Ports::OCProvisioningVipPort: >> ../network/ports/oc_provisioning.yaml >> >> >> >> # Port assignments by role, edit role definition to assign networks to >> roles. >> # Port assignments for the Controller >> OS::TripleO::Controller::Ports::J3MgmtPort: >> ../network/ports/j3mgmt_v6.yaml >> OS::TripleO::Controller::Ports::InternalApiPort: >> ../network/ports/internal_api_v6.yaml >> OS::TripleO::Controller::Ports::ExternalPort: >> ../network/ports/external_v6.yaml >> OS::TripleO::Controller::Ports::OCProvisioningPort: >> ../network/ports/oc_provisioning.yaml >> # Port assignments for the Compute >> OS::TripleO::Compute::Ports::J3MgmtPort: ../network/ports/j3mgmt_v6.yaml >> OS::TripleO::Compute::Ports::InternalApiPort: >> ../network/ports/internal_api_v6.yaml >> >> >> >> parameter_defaults: >> # Enable IPv6 environment for Manila >> ManilaIPv6: True >> >> (undercloud) [stack at undercloud templates]$ >> >> >> >> >> >> >> >> >> >> On Tue, May 24, 2022 at 5:04 PM Lokendra Rathour < >> lokendrarathour at gmail.com> wrote: >> >>> Thanks, I'll check them out. >>> will let you know in case it works out. 
>>> >>> On Tue, May 24, 2022 at 2:37 PM Swogat Pradhan < >>> swogatpradhan22 at gmail.com> wrote: >>> >>>> Hi, >>>> Please find the below templates: >>>> These are for openstack wallaby release: >>>> >>>> (undercloud) [stack at hkg2director workplace]$ cat >>>> custom_network_data.yaml >>>> - name: Storage >>>> name_lower: storage >>>> vip: true >>>> mtu: 1500 >>>> subnets: >>>> storage_subnet: >>>> ip_subnet: 172.25.202.0/26 >>>> allocation_pools: >>>> - start: 172.25.202.6 >>>> end: 172.25.202.20 >>>> vlan: 1105 >>>> - name: StorageMgmt >>>> name_lower: storage_mgmt >>>> vip: true >>>> mtu: 1500 >>>> subnets: >>>> storage_mgmt_subnet: >>>> ip_subnet: 172.25.202.64/26 >>>> allocation_pools: >>>> - start: 172.25.202.72 >>>> end: 172.25.202.87 >>>> vlan: 1106 >>>> - name: InternalApi >>>> name_lower: internal_api >>>> vip: true >>>> mtu: 1500 >>>> subnets: >>>> internal_api_subnet: >>>> ip_subnet: 172.25.201.192/26 >>>> allocation_pools: >>>> - start: 172.25.201.198 >>>> end: 172.25.201.212 >>>> vlan: 1104 >>>> - name: Tenant >>>> vip: false # Tenant network does not use VIPs >>>> mtu: 1500 >>>> name_lower: tenant >>>> subnets: >>>> tenant_subnet: >>>> ip_subnet: 172.25.202.128/26 >>>> allocation_pools: >>>> - start: 172.25.202.135 >>>> end: 172.25.202.150 >>>> vlan: 1108 >>>> - name: External >>>> name_lower: external >>>> vip: true >>>> mtu: 1500 >>>> subnets: >>>> external_subnet: >>>> ip_subnet: 172.25.201.128/26 >>>> allocation_pools: >>>> - start: 172.25.201.135 >>>> end: 172.25.201.150 >>>> gateway_ip: 172.25.201.129 >>>> vlan: 1103 >>>> >>>> (undercloud) [stack at hkg2director workplace]$ cat custom_vip_data.yaml >>>> - network: ctlplane >>>> #dns_name: overcloud >>>> ip_address: 172.25.201.91 >>>> subnet: ctlplane-subnet >>>> - network: external >>>> #dns_name: overcloud >>>> ip_address: 172.25.201.150 >>>> subnet: external_subnet >>>> - network: internal_api >>>> #dns_name: overcloud >>>> ip_address: 172.25.201.250 >>>> subnet: internal_api_subnet >>>> - network: storage >>>> #dns_name: overcloud >>>> ip_address: 172.25.202.50 >>>> subnet: storage_subnet >>>> - network: storage_mgmt >>>> #dns_name: overcloud >>>> ip_address: 172.25.202.90 >>>> subnet: storage_mgmt_subnet >>>> >>>> (undercloud) [stack at hkg2director workplace]$ cat >>>> overcloud-baremetal-deploy.yaml >>>> - name: Controller >>>> count: 4 >>>> defaults: >>>> networks: >>>> - network: ctlplane >>>> vif: true >>>> - network: external >>>> subnet: external_subnet >>>> - network: internal_api >>>> subnet: internal_api_subnet >>>> - network: storage >>>> subnet: storage_subnet >>>> - network: storage_mgmt >>>> subnet: storage_mgmt_subnet >>>> - network: tenant >>>> subnet: tenant_subnet >>>> network_config: >>>> template: /home/stack/templates/controller.j2 >>>> default_route_network: >>>> - external >>>> instances: >>>> - hostname: overcloud-controller-0 >>>> name: dc1-controller2 >>>> #provisioned: false >>>> - hostname: overcloud-controller-1 >>>> name: dc2-controller2 >>>> #provisioned: false >>>> - hostname: overcloud-controller-2 >>>> name: dc1-controller1 >>>> #provisioned: false >>>> - hostname: overcloud-controller-no-ceph-3 >>>> name: dc2-ceph2 >>>> #provisioned: false >>>> #- hostname: overcloud-controller-3 >>>> #name: dc2-compute3 >>>> #provisioned: false >>>> >>>> - name: Compute >>>> count: 5 >>>> defaults: >>>> networks: >>>> - network: ctlplane >>>> vif: true >>>> - network: internal_api >>>> subnet: internal_api_subnet >>>> - network: tenant >>>> subnet: tenant_subnet >>>> - network: storage >>>> 
subnet: storage_subnet >>>> network_config: >>>> template: /home/stack/templates/compute.j2 >>>> instances: >>>> - hostname: overcloud-novacompute-0 >>>> name: dc2-compute1 >>>> #provisioned: false >>>> - hostname: overcloud-novacompute-1 >>>> name: dc2-ceph1 >>>> #provisioned: false >>>> - hostname: overcloud-novacompute-2 >>>> name: dc1-compute1 >>>> #provisioned: false >>>> - hostname: overcloud-novacompute-3 >>>> name: dc1-compute2 >>>> #provisioned: false >>>> - hostname: overcloud-novacompute-4 >>>> name: dc2-compute3 >>>> #provisioned: false >>>> >>>> - name: CephStorage >>>> count: 4 >>>> defaults: >>>> networks: >>>> - network: ctlplane >>>> vif: true >>>> - network: internal_api >>>> subnet: internal_api_subnet >>>> - network: storage >>>> subnet: storage_subnet >>>> - network: storage_mgmt >>>> subnet: storage_mgmt_subnet >>>> network_config: >>>> template: /home/stack/templates/ceph-storage.j2 >>>> instances: >>>> - hostname: overcloud-cephstorage-0 >>>> name: dc2-controller1 >>>> #provisioned: false >>>> # - hostname: overcloud-cephstorage-1 >>>> # name: dc2-ceph2 >>>> - hostname: overcloud-cephstorage-1 >>>> name: dc1-ceph1 >>>> # provisioned: false >>>> - hostname: overcloud-cephstorage-2 >>>> name: dc1-ceph2 >>>> #provisioned: false >>>> - hostname: overcloud-cephstorage-3 >>>> name: dc2-compute2 >>>> #provisioned: false >>>> >>>> >>>> You must use these templates to provision network, vip and nodes. >>>> You must use the output files generated during the provisioning step in >>>> openstack overcloud deploy command using -e parameter. >>>> >>>> With regards, >>>> Swogat Pradhan >>>> >>>> >>>> On Mon, May 23, 2022 at 8:33 PM Lokendra Rathour < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> Hi Swogat, >>>>> I tried checking your solution and my templates but could not relate >>>>> much. >>>>> But issue seems the same >>>>> >>>>> http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028401.html >>>>> >>>>> I tried somemore ways but looks like some issue with templates. >>>>> Can you please share the templates used to deploy the overcloud. >>>>> >>>>> Mysetup have 3 controller and 1 compute. >>>>> >>>>> Thanks once again for reading my mail. >>>>> >>>>> Waiting for your reply. >>>>> >>>>> -Lokendra >>>>> >>>>> On Fri, 20 May 2022, 08:25 Swogat Pradhan, >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> Yes I was able to find the issue and fix it. >>>>>> The issue was with the overcloud-baremetal-deployed.yaml file i was >>>>>> trying to provision controller-0, controller-1 and controller-3 and kept >>>>>> controller-2 aside for later, but the tripleo scripts are built in such a >>>>>> way that they were taking controller- 0, 1 and 2 inplace of controller-3, >>>>>> so the network ports and vip were created for controller 0,1 and 2 but not >>>>>> for 3 , so this error was popping off. Also i would request you to check >>>>>> the jinja nic templates and once the node provisioning is done check the >>>>>> /etc/os-net-config/config.json/yaml file for syntax if using bonded nic >>>>>> template. >>>>>> If you need any more infor please let me know. >>>>>> >>>>>> With regards, >>>>>> Swogat Pradhan >>>>>> >>>>>> >>>>>> >>>>>> On Fri, May 20, 2022 at 8:01 AM Lokendra Rathour < >>>>>> lokendrarathour at gmail.com> wrote: >>>>>> >>>>>>> Hi Swogat, >>>>>>> Thanks for raising this issue. >>>>>>> Did you find any solution? to this problem ? 
>>>>>>> >>>>>>> Please let me know it might be helpful >>>>>>> >>>>>>> >>>>>>> On Tue, Apr 19, 2022 at 12:43 PM Swogat Pradhan < >>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> I am currently trying to deploy openstack wallaby using tripleo >>>>>>>> arch. >>>>>>>> I created the network jinja templates, ran the following commands >>>>>>>> also: >>>>>>>> >>>>>>>> #openstack overcloud network provision --stack overcloud --output >>>>>>>> networks-deployed-environment.yaml custom_network_data.yaml >>>>>>>> # openstack overcloud network vip provision --stack overcloud >>>>>>>> --output vip-deployed-environment.yaml custom_vip_data.yaml >>>>>>>> # openstack overcloud node provision --stack overcloud >>>>>>>> --overcloud-ssh-key /home/stack/sshkey/id_rsa >>>>>>>> overcloud-baremetal-deploy.yaml >>>>>>>> >>>>>>>> and used the environment files in the openstack overcloud deploy >>>>>>>> command: >>>>>>>> >>>>>>>> (undercloud) [stack at hkg2director ~]$ cat deploy.sh >>>>>>>> #!/bin/bash >>>>>>>> THT=/usr/share/openstack-tripleo-heat-templates/ >>>>>>>> CNF=/home/stack/ >>>>>>>> openstack overcloud deploy --templates $THT \ >>>>>>>> -r $CNF/templates/roles_data.yaml \ >>>>>>>> -n $CNF/workplace/custom_network_data.yaml \ >>>>>>>> -e ~/containers-prepare-parameter.yaml \ >>>>>>>> -e $CNF/templates/node-info.yaml \ >>>>>>>> -e $CNF/templates/scheduler-hints.yaml \ >>>>>>>> -e $CNF/workplace/networks-deployed-environment.yaml \ >>>>>>>> -e $CNF/workplace/vip-deployed-environment.yaml \ >>>>>>>> -e $CNF/workplace/overcloud-baremetal-deployed.yaml \ >>>>>>>> -e $CNF/workplace/custom-net-bond-with-vlans.yaml >>>>>>>> >>>>>>>> Now when i run the ./deploy.sh script i encounter an error stating: >>>>>>>> >>>>>>>> ERROR openstack [-] Resource >>>>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>>>> ephemeral Heat. The generated environments from 'openstack overcloud >>>>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>>>> included with the deployment command.: >>>>>>>> tripleoclient.exceptions.InvalidConfiguration: Resource >>>>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>>>> ephemeral Heat. The generated environments from 'openstack overcloud >>>>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>>>> included with the deployment command. >>>>>>>> 2022-04-19 13:47:16.582 735924 INFO osc_lib.shell [-] END return >>>>>>>> value: 1 >>>>>>>> >>>>>>>> Can someone tell me where the mistake is? >>>>>>>> >>>>>>>> With regards, >>>>>>>> Swogat Pradhan >>>>>>>> >>>>>>> >>>>>>> >>>>>>> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Russell.Stather at ignitiongroup.co.za Tue May 31 12:07:19 2022 From: Russell.Stather at ignitiongroup.co.za (Russell Stather) Date: Tue, 31 May 2022 12:07:19 +0000 Subject: error creating image from volume Message-ID: Hi Trying to create an image from a volume. Getting this very unhelpful error message. igadmin at ig-umh-maas:~$ openstack image create --volume 57bc3efa-6cdc-4e2d-a0e2-262340cc6180 commvaultimage upload_to_image() got an unexpected keyword argument 'visibility' Anyone seen this before? Thanks Russell -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbauza at redhat.com Tue May 31 13:21:50 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 31 May 2022 15:21:50 +0200 Subject: [all][tc] Lets talk about Flake8 E501 In-Reply-To: References: Message-ID: Le sam. 28 mai 2022 ? 04:36, Miro Tomaska a ?crit : > Hello All, > > This is probably going to be a hot topic but I was wondering if the > community ever considered raising the default 79 characters line limit. I > have seen some places where even a very innocent line of code needs to be > split into two lines. I have also seen some code where I feel like variable > names were abbreviated on purpose to squeeze everything into one line. > > How does the community feel about raising the E501 limit to 119 > characters? The 119 character limit is the second most popular limit > besides the default one. It's long enough to give a developer enough room > for descriptive variables without being forced to break lines too much. And > it is short enough for a diff between two files to look OK. > > The only downside I can see right now is that it's not easy to convert an > existing code. So we will end up with files where the new code is 79+ > characters and the "old" code is <=79. I can also see an argument where > someone might have trouble reviewing a patch on a laptop screen (assuming > standard 14" screen) ? > > Here is an example of one extreme, a diff of two files maxing out at 119 > characters > > https://review.opendev.org/c/opendev/sandbox/+/843697/1..2 > > Thank you for your time and I am looking forward to this conversation :) > > My personal opinion is while I understand the reasoning behind, my concern is about the community support. We don't have a lot of contributors at the moment, and if we would like to modify the default limit, I'm pretty sure it would take a lot of time for folks at least reviewing some changes that are not really priorities... Also, as said by some folks, changing the default would also create some problems for some of our contributors and eventually, if some projects would accept 119 characters while some others not, it would also be a new community difference between projects... (and yet again some new nit that new contributors need to understand) Sorry, -1 for those reasons. -- > Miro Tomaska > irc: mtomaska > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue May 31 15:19:18 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 31 May 2022 10:19:18 -0500 Subject: [qa][stable][gate] Gate fix for stable/ussuri and Tempest pin for stable/victoria In-Reply-To: <1810d0b9544.1158f226d173555.1038431972596585840@ghanshyammann.com> References: <1810d0b9544.1158f226d173555.1038431972596585840@ghanshyammann.com> Message-ID: <1811ab32c25.10933e2e6323890.2341991101744739097@ghanshyammann.com> ---- On Sat, 28 May 2022 18:41:11 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > You might know the stable/ussuri gate is broken because we dropped the py36 support in zed > cycle and Tempest master is used to test it which uses master constraints. > > To fix this, I am pinning Tempest 26.1.0 in stable/ussuri and tested it with compatible stable/ussuri > constraints and tempest plugins. > > - https://review.opendev.org/q/topic:ussuri-pin-tempest QA patches are merged and the gate should be fixed now, please recheck. There are a few plugin patches still in the gate for their respective jobs. 
-gmann > > at the same time, I am also pinning tempest in stable/victoria too as it is also in EM state > > - https://review.opendev.org/q/topic:victoria-pin-tempest > > Both stable branches pining is working fine and ready to merge now. I have proposed the fixes for a > few tempest plugins and stable branches, and request them to merge because we are going > to merge the tempest and devstack changes. Below is the complete set of changes we need: > > QA patches need to go first: > - Tempest > - https://review.opendev.org/c/openstack/tempest/+/843045 > - https://review.opendev.org/c/openstack/tempest/+/843293/2 > - Devstack: > - https://review.opendev.org/c/openstack/devstack/+/838051 > - https://review.opendev.org/c/openstack/devstack/+/843295 > - Devstack-plugin-ceph > - https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/843354 > - https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/843355 (it is +A and waiting for depends-on devstack patch to merge) > - Devstack-gate > - https://review.opendev.org/c/openstack/devstack-gate/+/843689 > > Cinder: > - https://review.opendev.org/c/openstack/cinder/+/843305 > - https://review.opendev.org/c/openstack/cinder/+/843092 (it is +A and waiting for depends-on devstack patch to merge) > - https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/843319 (merged) > > Cyborg: no action is needed > - https://review.opendev.org/c/openstack/cyborg-tempest-plugin/+/843329 (it is +A and waiting for depends-on devstack patch to merge) > > Neutron: need review > - https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/838053 > > Designate: need review > - https://review.opendev.org/c/openstack/designate/+/843709 > - https://review.opendev.org/c/openstack/designate/+/843710 > > -gmann > > From kennelson11 at gmail.com Tue May 31 15:53:35 2022 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 31 May 2022 10:53:35 -0500 Subject: Fwd: [Forum] Meet the Projects Session! In-Reply-To: References: Message-ID: Hello :) I wanted to take a moment to invite project maintainers, core contributors, PTLs, governance officials, etc to a meet the projects (projects being the OIF top level ones- Kata, Zuul, OpenStack, StarlingX, and Airship) session during the Forum at the upcoming Summit in Berlin. It will take place Tue, June 7, 11:20am - 12:30pm local in A - A06. The idea is to gather in a single place so that newer members of the community or people unfamiliar with the project can come and meet some of the active participants and ask questions. It's all really informal so there is no need to prepare anything ahead of time. I realize it is pretty late in everyone's planning their schedules for the summit, but if you have a few minutes to spare to come hang out and meet some people and network, please do! We would love to see you there. -Kendall -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishwanath.ne at gmail.com Tue May 31 16:59:28 2022 From: vishwanath.ne at gmail.com (Vishwanath) Date: Tue, 31 May 2022 09:59:28 -0700 Subject: [kolla][Yoga][ubuntu][kolla-toolbox] kolla-toolbox build image Failed with status: error Message-ID: Hello, I am trying to upgrade Xena to Yoga, as a per-requisite i have upgraded kolla to 14.0.0 and started kolla-build for 14.0.0. 
*kolla: 14.0.0* *Distributor ID: UbuntuDescription: Ubuntu 20.04.3 LTSRelease: 20.04Codename: focal* *upgrading to : Yoga* *upgrading from: xena* *kolla-build -t source kolla-toolbox -b ubuntu* following is the error message during the image build, issue occurs only on ubuntu whereas centos has no issues, In my environment we use ubuntu. How can this issue be resolved? INFO:kolla.common.utils.kolla-toolbox:ca-certificates is already the newest version (20210119~20.04.2). INFO:kolla.common.utils.kolla-toolbox:Some packages could not be installed. This may mean that you have INFO:kolla.common.utils.kolla-toolbox:requested an impossible situation or if you are using the unstable INFO:kolla.common.utils.kolla-toolbox:distribution that some required packages have not yet been created INFO:kolla.common.utils.kolla-toolbox:or been moved out of Incoming. INFO:kolla.common.utils.kolla-toolbox:The following information may help to resolve the situation: INFO:kolla.common.utils.kolla-toolbox:The following packages have unmet dependencies: INFO:kolla.common.utils.kolla-toolbox: rabbitmq-server : Depends: erlang-base (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: erlang-base-hipe (< 1:25.0) but it is not going to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-crypto (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-eldap (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-inets (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-mnesia (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-os-mon (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-parsetools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-public-key (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-runtime-tools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-ssl (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-syntax-tools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is 
not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-tools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-xmerl (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable INFO:kolla.common.utils.kolla-toolbox:E: Unable to correct problems, you have held broken packages. INFO:kolla.common.utils.kolla-toolbox: INFO:kolla.common.utils.kolla-toolbox:Removing intermediate container 7ea161d0dbbd ERROR:kolla.common.utils.kolla-toolbox:Error'd with the following message ERROR:kolla.common.utils.kolla-toolbox:The command '/bin/sh -c apt-get update && apt-get -y install --no-install-recommends build-essential ca-certificates crudini gdisk git jq libffi-dev libssl-dev libxslt1-dev openvswitch-switch python3-dev rabbitmq-server && apt-get clean && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100 INFO:kolla.common.utils:========================= INFO:kolla.common.utils:Successfully built images INFO:kolla.common.utils:========================= INFO:kolla.common.utils:base INFO:kolla.common.utils:=========================== INFO:kolla.common.utils:Images that failed to build INFO:kolla.common.utils:=========================== ERROR:kolla.common.utils:kolla-toolbox Failed with status: error INFO:kolla.common.utils:========================================= Thanks Vish -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue May 31 17:25:41 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 31 May 2022 19:25:41 +0200 Subject: [nova][placement] June 7th Meeting is CANCELLED Message-ID: Some of us will be traveling at the OIF Summit so we won't have a quorum for next week. The nova meeting will be back on June 15th. Thanks, -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue May 31 19:58:28 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 31 May 2022 21:58:28 +0200 Subject: [blazar] IRC meeting of June 2 is cancelled Message-ID: Hello, Due to unavailability of the usual participants, the Blazar IRC meeting of June 2 is cancelled. The next meeting will be on June 16. Thanks, Pierre Riteau (priteau) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue May 31 20:20:40 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 31 May 2022 22:20:40 +0200 Subject: [kolla][Yoga][ubuntu][kolla-toolbox] kolla-toolbox build image Failed with status: error In-Reply-To: References: Message-ID: Hello, I imagine it is fixed by https://review.opendev.org/c/openstack/kolla/+/843644 which merged a couple of days ago. In general, I advise to use the latest stable branch commit (e.g. stable/yoga) rather than a tagged version such as 14.0.0. Due to the many dependencies of Kolla container images, past releases are often broken. Best wishes, Pierre Riteau (priteau) On Tue, 31 May 2022 at 19:06, Vishwanath wrote: > Hello, > > I am trying to upgrade Xena to Yoga, as a per-requisite i have upgraded > kolla to 14.0.0 and started kolla-build for 14.0.0. 
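In practice, Pierre's suggestion might look roughly like the following (a sketch only, assuming kolla was installed from source into the build virtualenv; the clone location is arbitrary):

  # build from the tip of the stable branch instead of the 14.0.0 tag
  git clone -b stable/yoga https://opendev.org/openstack/kolla ~/kolla-src
  pip install ~/kolla-src
  # rebuild only the image that failed
  kolla-build -t source kolla-toolbox -b ubuntu

Building from the tip of stable/yoga picks up fixes like the one Pierre links above, which land after a tag such as 14.0.0 has been cut.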
> > *kolla: 14.0.0* > > > > *Distributor ID: UbuntuDescription: Ubuntu 20.04.3 LTSRelease: > 20.04Codename: focal* > *upgrading to : Yoga* > *upgrading from: xena* > > *kolla-build -t source kolla-toolbox -b ubuntu* > > following is the error message during the image build, issue occurs only > on ubuntu whereas centos has no issues, In my environment we use ubuntu. > How can this issue be resolved? > > INFO:kolla.common.utils.kolla-toolbox:ca-certificates is already the > newest version (20210119~20.04.2). > INFO:kolla.common.utils.kolla-toolbox:Some packages could not be > installed. This may mean that you have > INFO:kolla.common.utils.kolla-toolbox:requested an impossible situation or > if you are using the unstable > INFO:kolla.common.utils.kolla-toolbox:distribution that some required > packages have not yet been created > INFO:kolla.common.utils.kolla-toolbox:or been moved out of Incoming. > INFO:kolla.common.utils.kolla-toolbox:The following information may help > to resolve the situation: > INFO:kolla.common.utils.kolla-toolbox:The following packages have unmet > dependencies: > INFO:kolla.common.utils.kolla-toolbox: rabbitmq-server : Depends: > erlang-base (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > erlang-base-hipe (< 1:25.0) but it is not going to be installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-crypto (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-eldap (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-inets (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-mnesia (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-os-mon (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-parsetools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-public-key (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-runtime-tools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-ssl (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed > or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: 
Depends: > erlang-syntax-tools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-tools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox: Depends: > erlang-xmerl (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be > installed or > INFO:kolla.common.utils.kolla-toolbox: > esl-erlang (< 1:25.0) but it is not installable > INFO:kolla.common.utils.kolla-toolbox:E: Unable to correct problems, you > have held broken packages. > INFO:kolla.common.utils.kolla-toolbox: > INFO:kolla.common.utils.kolla-toolbox:Removing intermediate container > 7ea161d0dbbd > ERROR:kolla.common.utils.kolla-toolbox:Error'd with the following message > ERROR:kolla.common.utils.kolla-toolbox:The command '/bin/sh -c apt-get > update && apt-get -y install --no-install-recommends build-essential > ca-certificates crudini gdisk git jq libffi-dev libssl-dev libxslt1-dev > openvswitch-switch python3-dev rabbitmq-server && apt-get clean && rm -rf > /var/lib/apt/lists/*' returned a non-zero code: 100 > INFO:kolla.common.utils:========================= > INFO:kolla.common.utils:Successfully built images > INFO:kolla.common.utils:========================= > INFO:kolla.common.utils:base > INFO:kolla.common.utils:=========================== > INFO:kolla.common.utils:Images that failed to build > INFO:kolla.common.utils:=========================== > ERROR:kolla.common.utils:kolla-toolbox Failed with status: error > INFO:kolla.common.utils:========================================= > > > Thanks > Vish > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishwanath.ne at gmail.com Tue May 31 22:20:55 2022 From: vishwanath.ne at gmail.com (Vishwanath) Date: Tue, 31 May 2022 15:20:55 -0700 Subject: [kolla][Yoga][ubuntu][kolla-toolbox] kolla-toolbox build image Failed with status: error In-Reply-To: References: Message-ID: <58BCAB52-6E77-41E3-AC67-16B4F67D0A9C@gmail.com> Pierre Riteau, I have changed kolla to stable/yoga,all containers deployed without errors.Thank you for your guidance. Regards Vish > On May 31, 2022, at 1:21 PM, Pierre Riteau wrote: > > ? > Hello, > > I imagine it is fixed by https://review.opendev.org/c/openstack/kolla/+/843644 which merged a couple of days ago. > > In general, I advise to use the latest stable branch commit (e.g. stable/yoga) rather than a tagged version such as 14.0.0. > Due to the many dependencies of Kolla container images, past releases are often broken. > > Best wishes, > Pierre Riteau (priteau) > >> On Tue, 31 May 2022 at 19:06, Vishwanath wrote: >> Hello, >> >> I am trying to upgrade Xena to Yoga, as a per-requisite i have upgraded kolla to 14.0.0 and started kolla-build for 14.0.0. >> >> kolla: 14.0.0 >> Distributor ID: Ubuntu >> Description: Ubuntu 20.04.3 LTS >> Release: 20.04 >> Codename: focal >> upgrading to : Yoga >> upgrading from: xena >> >> kolla-build -t source kolla-toolbox -b ubuntu >> >> following is the error message during the image build, issue occurs only on ubuntu whereas centos has no issues, In my environment we use ubuntu. How can this issue be resolved? >> >> INFO:kolla.common.utils.kolla-toolbox:ca-certificates is already the newest version (20210119~20.04.2). 
>> INFO:kolla.common.utils.kolla-toolbox:Some packages could not be installed. This may mean that you have >> INFO:kolla.common.utils.kolla-toolbox:requested an impossible situation or if you are using the unstable >> INFO:kolla.common.utils.kolla-toolbox:distribution that some required packages have not yet been created >> INFO:kolla.common.utils.kolla-toolbox:or been moved out of Incoming. >> INFO:kolla.common.utils.kolla-toolbox:The following information may help to resolve the situation: >> INFO:kolla.common.utils.kolla-toolbox:The following packages have unmet dependencies: >> INFO:kolla.common.utils.kolla-toolbox: rabbitmq-server : Depends: erlang-base (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: erlang-base-hipe (< 1:25.0) but it is not going to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-crypto (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-eldap (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-inets (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-mnesia (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-os-mon (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-parsetools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-public-key (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-runtime-tools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-ssl (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-syntax-tools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-tools (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox: Depends: erlang-xmerl (< 1:25.0) but 1:25.0-1rmq1ppa1~ubuntu20.04.1 is to be installed or >> 
INFO:kolla.common.utils.kolla-toolbox: esl-erlang (< 1:25.0) but it is not installable >> INFO:kolla.common.utils.kolla-toolbox:E: Unable to correct problems, you have held broken packages. >> INFO:kolla.common.utils.kolla-toolbox: >> INFO:kolla.common.utils.kolla-toolbox:Removing intermediate container 7ea161d0dbbd >> ERROR:kolla.common.utils.kolla-toolbox:Error'd with the following message >> ERROR:kolla.common.utils.kolla-toolbox:The command '/bin/sh -c apt-get update && apt-get -y install --no-install-recommends build-essential ca-certificates crudini gdisk git jq libffi-dev libssl-dev libxslt1-dev openvswitch-switch python3-dev rabbitmq-server && apt-get clean && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100 >> INFO:kolla.common.utils:========================= >> INFO:kolla.common.utils:Successfully built images >> INFO:kolla.common.utils:========================= >> INFO:kolla.common.utils:base >> INFO:kolla.common.utils:=========================== >> INFO:kolla.common.utils:Images that failed to build >> INFO:kolla.common.utils:=========================== >> ERROR:kolla.common.utils:kolla-toolbox Failed with status: error >> INFO:kolla.common.utils:========================================= >> >> >> Thanks >> Vish -------------- next part -------------- An HTML attachment was scrubbed... URL: From lokendrarathour at gmail.com Tue May 31 18:25:44 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 31 May 2022 23:55:44 +0530 Subject: ERROR openstack [-] Resource OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type OS::Neutron::Port and the Neutron service is not available when using ephemeral Heat.| Openstack tripleo wallaby version In-Reply-To: References: Message-ID: Hi Swogat, Thanks once again for your input it really helped much. instead of running mentioned those three provisioning steps i used alternate method and passed directly in deploy command. now my current deploy command is: openstack overcloud deploy --templates \ --networks-file /home/stack/templates/custom_network_data.yaml \ --vip-file /home/stack/templates/custom_vip_data.yaml \ --baremetal-deployment /home/stack/templates/overcloud-baremetal-deploy.yaml \ --network-config \ -e /home/stack/templates/environment.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \ -e /home/stack/templates/ironic-config.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ -e /home/stack/containers-prepare-parameter.yaml The files as suggested by you are well created. But once we run the deployment I get this error for all nodes: 0:00:16.064781 | 1.16s 2022-05-31 19:13:00.276954 | 525400ef-b928-9ded-fecc-000000000094 | TASK | Run tripleo_os_net_config_module with network_config 2022-05-31 19:40:30.061582 | 525400ef-b928-9ded-fecc-000000000094 | FATAL | Run tripleo_os_net_config_module with network_config | overcloud-controller-1 | error={"msg": "Data could not be sent to remote host \"30.30.30.117\". Make sure this host can be reached over ssh: ssh: connect to host 30.30.30.117 port 22: No route to host\r\n"} 2022- Baremetal node list are showing in as active. 
(undercloud) [stack at undercloud ~]$ openstack baremetal node list /usr/lib64/python3.6/site-packages/_yaml/__init__.py:23: DeprecationWarning: The _yaml extension module is now located at yaml._yaml and its location is subject to change. To use the LibYAML-based parser and emitter, import from `yaml`: `from yaml import CLoader as Loader, CDumper as Dumper`. DeprecationWarning +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+ | 1a4d873c-f9f7-4504-a3af-92c11f954171 | node-a | 901453a1-183f-4de8-aaab-0f38be2be455 | power on | active | False | | d18610fc-9532-410c-918e-8efc326c89f8 | node-b | d059b94a-8357-4f8e-a0d8-15a24b0c1afe | power on | active | False | | b69f2d5a-5b18-4453-8843-15c6af79aca0 | node-c | f196ef3a-7950-47b9-a5ae-751f06b18f75 | power on | active | False | | 8a38c584-f812-4ebc-a0b1-4299f0917637 | node-d | 1636517c-2ab2-43d7-8205-9f02c5290207 | power on | active | False | +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+ Some config is missing it seems, please check once and advise. On Tue, May 31, 2022 at 11:40 AM Swogat Pradhan wrote: > Hi Lokendra, > You need to generate another file also in the following step: openstack > overcloud node provision --stack overcloud --overcloud-ssh-key > /home/stack/sshkey/id_rsa overcloud-baremetal-deploy.yaml also you need > to pass another parameter --network-config. > example: > openstack overcloud node provision --stack overcloud --overcloud-ssh-key > /home/stack/sshkey/id_rsa *--network-config* *--output > overcloud-baremetal-deployed.yaml* overcloud-baremetal-deploy.yaml > > And then all these output files will be passed on to the openstack > overcloud deploy command. > NOTE: when passing --network-config parameter in node provision step, it > creates a directory in /etc/os-net-config and in it creates a file > config.yaml, do check the indentation of that file. 
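One way to sanity-check that generated file on a provisioned node is sketched below (an illustration only; it assumes sudo access on the overcloud node):

  # confirm the generated network config is at least parseable
  sudo python3 -c "import yaml; yaml.safe_load(open('/etc/os-net-config/config.yaml')); print('parses as YAML')"
  # preview what os-net-config would apply, without touching the interfaces
  sudo os-net-config -c /etc/os-net-config/config.yaml --noop --debug

A clean parse only rules out outright syntax errors; mis-nested members can still be valid YAML, so the bond/interface nesting mentioned in the note above still needs to be checked by eye. The --noop run prints the changes os-net-config would make without applying them.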
(in my case the > indentation was wrong when i was using bondind everytime, so i had to > manually change the script and run a while loop and then my node provision > step was successful) > > On Tue, May 31, 2022 at 8:59 AM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Hi Swogat, >> I tried generating the scripts as used by you in your deployments using >> the >> >> >> #openstack overcloud network provision --stack overcloud --output >> networks-deployed-environment.yaml custom_network_data.yaml >> # openstack overcloud network vip provision --stack overcloud --output >> vip-deployed-environment.yaml custom_vip_data.yaml >> # openstack overcloud node provision --stack overcloud >> --overcloud-ssh-key /home/stack/sshkey/id_rsa >> overcloud-baremetal-deploy.yaml >> >> and used the first two in the final deployment script, but it gives the >> error: >> >> heatclient.exc.HTTPInternalServerError: ERROR: Internal Error >> 2022-05-30 14:14:39.772 479668 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent >> call last):\n', ' File "/usr/lib/python3.6/ted_stack\n >> nested_stack.validate()\n', ' File >> "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in >> wrapper\n result = f(*args, ine 969, in validate\n result = >> res.validate()\n', ' File >> "/usr/lib/python3.6/site-packages/heat/engine/resources/openstack/neutron/port.py", >> line 454site-packages/heat/engine/resources/openstack/neutron/neutron.py", >> line 43, in validate\n res = super(NeutronResource, self).validate()\n', >> ' File "/un return self.validate_template()\n', ' File >> "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in >> validate_template\n self.t.rpy", line 200, in >> _validate_service_availability\n raise ex\n', >> 'heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron >> is not avaieutron network endpoint is not in service catalog.\n', '\nDuring >> handling of the above exception, another exception occurred:\n\n', >> 'Traceback (most recens/stack_resource.py", line 75, in >> validate_nested_stack\n nested_stack.validate()\n', ' File >> "/usr/lib/python3.6/site-packages/osprofiler/profiler.py"thon3.6/site-packages/heat/engine/stack.py", >> line 969, in validate\n result = res.validate()\n', ' File >> "/usr/lib/python3.6/site-packages/heat/engine/ateResource, >> self).validate()\n', ' File >> "/usr/lib/python3.6/site-packages/heat/engine/resources/stack_resource.py", >> line 65, in validate\n self.validources/stack_resource.py", line 81, in >> validate_nested_stack\n ex, path=[self.stack.t.RESOURCES, path])\n', >> 'heat.common.exception.StackValidationFaileeploy/overcloud/tripleo-heat-templates/deployed-server/deployed-server.yaml>: >> HEAT-E99001 Service neutron is not available for resource type >> OS::TripleO::vice catalog.\n', '\nDuring handling of the above exception, >> another exception occurred:\n\n', 'Traceback (most recent call last):\n', ' >> File "/usr/lib/pline 320, in validate_nested_stack\n >> nested_stack.validate()\n', ' File >> "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in >> wrappe/heat/engine/stack.py", line 969, in validate\n result = >> res.validate()\n', ' File >> "/usr/lib/python3.6/site-packages/heat/engine/resources/template_relidate()\n', >> ' File >> "/usr/lib/python3.6/site-packages/heat/engine/resources/stack_resource.py", >> line 65, in validate\n self.validate_nested_stack()\n'.py", line 81, in >> validate_nested_stack\n ex, path=[self.stack.t.RESOURCES, 
path])\n', >> 'heat.common.exception.StackValidationFailed: >> ResourceTypeUnavaimplates/puppet/compute-role.yaml>.resources.NovaCompute> reason: neutron network endpoint is not in service catalog.\n', '\nDuring >> handling of the above exception, >> another/lib/python3.6/site-packages/heat/common/context.py", line 416, in >> wrapped\n return func(self, ctx, *args, **kwargs)\n', ' File >> "/usr/lib/python3.6/sirce_name, template_id)\n', ' File >> "/usr/lib/python3.6/site-packages/heat/engine/service.py", line 756, in >> _parse_template_and_validate_stack\n stack.v line 160, in wrapper\n >> result = f(*args, **kwargs)\n', ' File >> "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 969, in >> validate\n resesources/stack_resource.py", line 65, in validate\n >> self.validate_nested_stack()\n', ' File >> "/usr/lib/python3.6/site-packages/heat/engine/resources/oph=[self.stack.t.RESOURCES, >> path])\n', 'heat.common.exception.StackValidationFailed: >> ResourceTypeUnavailable: >> resources.Compute.resources.0.resources.NovaCompute: >> HEAT-E9900::ControlPlanePort, reason: neutron network endpoint is not in >> service catalog.\n']. >> >> Request you to check once, please. >> >> >> >> >> On Mon, May 30, 2022 at 11:06 AM Lokendra Rathour < >> lokendrarathour at gmail.com> wrote: >> >>> Hi Swogat, >>> Thanks once again. >>> >>> with the files as shown below I am running the overcloud deploy for >>> wallaby using this command: >>> >>> (undercloud) [stack at undercloud ~]$ cat deploy_overcloud_working_1.sh >>> openstack overcloud deploy --templates \ >>> -n /home/stack/templates/network_data.yaml \ >>> -r /home/stack/templates/roles_data.yaml \ >>> -e /home/stack/templates/environment.yaml \ >>> -e /home/stack/templates/environments/network-isolation.yaml \ >>> -e /home/stack/templates/environments/network-environment.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>> \ >>> -e /home/stack/templates/ironic-config.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>> -e /home/stack/containers-prepare-parameter.yaml >>> (undercloud) [stack at undercloud ~]$ >>> >>> >>> This deployment is on ipv6 using triple0 wallaby, templates, as >>> mentioned below, are generated using rendering steps and the >>> network_data.yaml the roles_data.yaml >>> Steps used to render the templates: >>> cd /usr/share/openstack-tripleo-heat-templates/ >>> ./tools/process-templates.py -o >>> ~/openstack-tripleo-heat-templates-rendered_at_wallaby -n >>> /home/stack/templates/network_data.yaml -r >>> /home/stack/templates/roles_data.yaml >>> >>> *Now if i try adding the related to VIP port I do get the error as:* >>> >>> 2022-05-30 10:37:12.792 979387 WARNING >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template >>> to file: >>> /home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/controller-role.yaml >>> 2022-05-30 10:37:12.792 979387 WARNING >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template >>> to file: >>> /home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/compute-role.yaml >>> 2022-05-30 10:37:14.455 979387 ERROR >>> 
tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured >>> while running the command: ValueError: The environment is not a valid YAML >>> mapping data type. >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent >>> call last): >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>> "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, >>> self).run(parsed_args) >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>> "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in >>> run >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, >>> self).run(parsed_args) >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>> "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = >>> self.take_action(parsed_args) or 0 >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >>> line 1189, in take_action >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud stack, parsed_args, >>> new_tht_root, user_tht_root) >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >>> line 227, in create_env_files >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud created_env_files, >>> parsed_args, new_tht_root, user_tht_root) >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >>> line 204, in build_image_params >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud cleanup=(not >>> parsed_args.no_cleanup)) >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>> "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 1929, in >>> process_multiple_environments >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud env_path=env_path, >>> include_env_in_files=include_env_in_files) >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>> "/usr/lib/python3.6/site-packages/heatclient/common/template_utils.py", >>> line 326, in process_environment_and_files >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud env = >>> environment_format.parse(raw_env) >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>> "/usr/lib/python3.6/site-packages/heatclient/common/environment_format.py", >>> line 50, in parse >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud raise >>> ValueError(_('The environment is not a valid ' >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud ValueError: The 
>>> environment is not a valid YAML mapping data type. >>> 2022-05-30 10:37:14.455 979387 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud >>> 2022-05-30 10:37:14.457 979387 ERROR openstack [-] The environment is >>> not a valid YAML mapping data type. >>> 2022-05-30 10:37:14.457 979387 INFO osc_lib.shell [-] END return value: 1 >>> (undercloud) [stack at undercloud ~]$ >>> >>> This is more of a syntax error where it is not able to understand the >>> passed VIP data file: >>> >>> undercloud) [stack at undercloud ~]$ cat >>> /home/stack/templates/vip-data-default-network-isolation.yaml >>> - >>> dns_name: overcloud >>> network: internal_api >>> - >>> dns_name: overcloud >>> network: external >>> - >>> dns_name: overcloud >>> network: ctlplane >>> - >>> dns_name: overcloud >>> network: oc_provisioning >>> - >>> dns_name: overcloud >>> network: j3mgmt >>> >>> >>> Please advise, also please note that similar templates generated in >>> prior releases such as train/ussuri works perfectly. >>> >>> >>> >>> Please check the list of *templates *files: >>> >>> drwxr-xr-x. 2 stack stack 68 May 30 09:22 environments >>> -rw-r--r--. 1 stack stack 265 May 27 13:47 environment.yaml >>> -rw-rw-r--. 1 stack stack 297 May 27 13:47 init-repo.yaml >>> -rw-r--r--. 1 stack stack 570 May 27 13:47 ironic-config.yaml >>> drwxrwxr-x. 4 stack stack 4096 May 27 13:53 network >>> -rw-r--r--. 1 stack stack 6370 May 27 14:26 network_data.yaml >>> -rw-r--r--. 1 stack stack 11137 May 27 13:53 roles_data.yaml >>> -rw-r--r--. 1 stack stack 234 May 30 09:23 >>> vip-data-default-network-isolation.yaml >>> >>> >>> >>> (undercloud) [stack at undercloud templates]$ cat environment.yaml >>> >>> parameter_defaults: >>> OvercloudControllerFlavor: control >>> OvercloudComputeFlavor: compute >>> ControllerCount: 3 >>> ComputeCount: 1 >>> TimeZone: 'Asia/Kolkata' >>> NtpServer: ['30.30.30.3'] >>> NeutronBridgeMappings: datacentre:br-tenant >>> NeutronFlatNetworks: datacentre >>> (undercloud) [stack at undercloud templates]$ >>> >>> >>> >>> (undercloud) [stack at undercloud templates]$ cat ironic-config.yaml >>> >>> parameter_defaults: >>> NovaSchedulerDefaultFilters: >>> - AggregateInstanceExtraSpecsFilter >>> - AvailabilityZoneFilter >>> - ComputeFilter >>> - ComputeCapabilitiesFilter >>> - ImagePropertiesFilter >>> IronicEnabledHardwareTypes: >>> - ipmi >>> - redfish >>> IronicEnabledPowerInterfaces: >>> - ipmitool >>> - redfish >>> IronicEnabledManagementInterfaces: >>> - ipmitool >>> - redfish >>> IronicCleaningDiskErase: metadata >>> IronicIPXEEnabled: true >>> IronicInspectorSubnets: >>> - ip_range: 172.23.3.100,172.23.3.150 >>> >>> (undercloud) [stack at undercloud templates]$ cat network_data.yaml >>> >>> - name: J3Mgmt >>> name_lower: j3mgmt >>> vip: true >>> vlan: 400 >>> ipv6: true >>> ipv6_subnet: 'fd80:fd00:fd00:4000::/64' >>> ipv6_allocation_pools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': >>> 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] >>> mtu: 9000 >>> >>> >>> >>> - name: InternalApi >>> name_lower: internal_api >>> vip: true >>> vlan: 418 >>> ipv6: true >>> ipv6_subnet: 'fd00:fd00:fd00:2000::/64' >>> ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': >>> 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] >>> mtu: 9000 >>> >>> >>> - name: External >>> vip: true >>> name_lower: external >>> vlan: 408 >>> ipv6: true >>> gateway_ipv6: 'fd00:fd00:fd00:9900::1' >>> ipv6_subnet: 'fd00:fd00:fd00:9900::/64' >>> ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': >>> 
'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] >>> mtu: 9000 >>> >>> >>> - name: OCProvisioning >>> vip: true >>> name_lower: oc_provisioning >>> vlan: 412 >>> ip_subnet: '172.23.3.0/24' >>> allocation_pools: [{'start': '172.23.3.10', 'end': '172.23.3.50'}] >>> mtu: 9000 >>> >>> >>> >>> >>> (undercloud) [stack at undercloud templates]$ cat roles_data.yaml >>> >>> >>> ############################################################################### >>> # File generated by TripleO >>> >>> ############################################################################### >>> >>> ############################################################################### >>> # Role: Controller >>> # >>> >>> ############################################################################### >>> - name: Controller >>> description: | >>> Controller role that has all the controller services loaded and >>> handles >>> Database, Messaging, and Network functions. >>> CountDefault: 1 >>> tags: >>> - primary >>> - controller >>> # Create external Neutron bridge for SNAT (and floating IPs when >>> using >>> # ML2/OVS without DVR) >>> - external_bridge >>> networks: >>> External: >>> subnet: external_subnet >>> InternalApi: >>> subnet: internal_api_subnet >>> OCProvisioning: >>> subnet: oc_provisioning_subnet >>> J3Mgmt: >>> subnet: j3mgmt_subnet >>> >>> >>> # For systems with both IPv4 and IPv6, you may specify a gateway >>> network for >>> # each, such as ['ControlPlane', 'External'] >>> default_route_networks: ['External'] >>> HostnameFormatDefault: '%stackname%-controller-%index%' >>> RoleParametersDefault: >>> OVNCMSOptions: "enable-chassis-as-gw" >>> # Deprecated & backward-compatible values (FIXME: Make parameters >>> consistent) >>> # Set uses_deprecated_params to True if any deprecated params are used. >>> uses_deprecated_params: True >>> deprecated_param_extraconfig: 'controllerExtraConfig' >>> deprecated_param_flavor: 'OvercloudControlFlavor' >>> deprecated_param_image: 'controllerImage' >>> deprecated_nic_config_name: 'controller.yaml' >>> update_serial: 1 >>> ServicesDefault: >>> - OS::TripleO::Services::Aide >>> - OS::TripleO::Services::AodhApi >>> - OS::TripleO::Services::AodhEvaluator >>> >>> .. >>> . >>> >>> >>> ..############################################################################### >>> # Role: Compute >>> # >>> >>> ############################################################################### >>> - name: Compute >>> description: | >>> Basic Compute Node role >>> CountDefault: 1 >>> # Create external Neutron bridge (unset if using ML2/OVS without DVR) >>> tags: >>> - compute >>> - external_bridge >>> networks: >>> InternalApi: >>> subnet: internal_api_subnet >>> J3Mgmt: >>> subnet: j3mgmt_subnet >>> HostnameFormatDefault: '%stackname%-novacompute-%index%' >>> RoleParametersDefault: >>> FsAioMaxNumber: 1048576 >>> TunedProfileName: "virtual-host" >>> # Deprecated & backward-compatible values (FIXME: Make parameters >>> consistent) >>> # Set uses_deprecated_params to True if any deprecated params are used. >>> # These deprecated_params only need to be used for existing roles and >>> not for >>> # composable roles. 
>>> uses_deprecated_params: True >>> deprecated_param_image: 'NovaImage' >>> deprecated_param_extraconfig: 'NovaComputeExtraConfig' >>> deprecated_param_metadata: 'NovaComputeServerMetadata' >>> deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints' >>> deprecated_param_ips: 'NovaComputeIPs' >>> deprecated_server_resource_name: 'NovaCompute' >>> >>> deprecated_nic_config_name: 'compute.yaml' >>> update_serial: 25 >>> ServicesDefault: >>> - OS::TripleO::Services::Aide >>> - OS::TripleO::Services::AuditD >>> - OS::TripleO::Services::BootParams >>> >>> >>> (undercloud) [stack at undercloud templates]$ cat >>> environments/network-environment.yaml >>> >>> #This file is an example of an environment file for defining the isolated >>> #networks and related parameters. >>> resource_registry: >>> # Network Interface templates to use (these files must exist). You can >>> # override these by including one of the net-*.yaml environment files, >>> # such as net-bond-with-vlans.yaml, or modifying the list here. >>> # Port assignments for the Controller >>> OS::TripleO::Controller::Net::SoftwareConfig: OS::Heat::None >>> # Port assignments for the Compute >>> OS::TripleO::Compute::Net::SoftwareConfig: OS::Heat::None >>> >>> >>> parameter_defaults: >>> # This section is where deployment-specific configuration is done >>> # >>> ServiceNetMap: >>> IronicApiNetwork: oc_provisioning >>> IronicNetwork: oc_provisioning >>> >>> >>> >>> # This section is where deployment-specific configuration is done >>> ControllerNetworkConfigTemplate: 'templates/bonds_vlans/bonds_vlans.j2' >>> ComputeNetworkConfigTemplate: 'templates/bonds_vlans/bonds_vlans.j2' >>> >>> >>> >>> # Customize the IP subnet to match the local environment >>> J3MgmtNetCidr: 'fd80:fd00:fd00:4000::/64' >>> # Customize the IP range to use for static IPs and VIPs >>> J3MgmtAllocationPools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': >>> 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] >>> # Customize the VLAN ID to match the local environment >>> J3MgmtNetworkVlanID: 400 >>> >>> >>> >>> # Customize the IP subnet to match the local environment >>> InternalApiNetCidr: 'fd00:fd00:fd00:2000::/64' >>> # Customize the IP range to use for static IPs and VIPs >>> InternalApiAllocationPools: [{'start': 'fd00:fd00:fd00:2000::10', >>> 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] >>> # Customize the VLAN ID to match the local environment >>> InternalApiNetworkVlanID: 418 >>> >>> >>> >>> # Customize the IP subnet to match the local environment >>> ExternalNetCidr: 'fd00:fd00:fd00:9900::/64' >>> # Customize the IP range to use for static IPs and VIPs >>> # Leave room if the external network is also used for floating IPs >>> ExternalAllocationPools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': >>> 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] >>> # Gateway router for routable networks >>> ExternalInterfaceDefaultRoute: 'fd00:fd00:fd00:9900::1' >>> # Customize the VLAN ID to match the local environment >>> ExternalNetworkVlanID: 408 >>> >>> >>> >>> # Customize the IP subnet to match the local environment >>> OCProvisioningNetCidr: '172.23.3.0/24' >>> # Customize the IP range to use for static IPs and VIPs >>> OCProvisioningAllocationPools: [{'start': '172.23.3.10', 'end': >>> '172.23.3.50'}] >>> # Customize the VLAN ID to match the local environment >>> OCProvisioningNetworkVlanID: 412 >>> >>> >>> >>> # List of Neutron network types for tenant networks (will be used in >>> order) >>> NeutronNetworkType: 'geneve,vlan' >>> # Neutron VLAN ranges per network, for 
example >>> 'datacentre:1:499,tenant:500:1000': >>> NeutronNetworkVLANRanges: 'datacentre:1:1000' >>> # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 >>> miimon=100" >>> # for Linux bonds w/LACP, or "bond_mode=active-backup" for OVS >>> active/backup. >>> BondInterfaceOvsOptions: "bond_mode=active-backup" >>> >>> (undercloud) [stack at undercloud templates]$ >>> >>> >>> (undercloud) [stack at undercloud templates]$ cat >>> environments/network-isolation.yaml >>> >>> # NOTE: This template is now deprecated, and is only included for >>> compatibility >>> # when upgrading a deployment where this template was originally used. >>> For new >>> # deployments, set "ipv6: true" on desired networks in >>> network_data.yaml, and >>> # include network-isolation.yaml. >>> # >>> # Enable the creation of Neutron networks for isolated Overcloud >>> # traffic and configure each role to assign ports (related >>> # to that role) on these networks. >>> resource_registry: >>> # networks as defined in network_data.yaml >>> OS::TripleO::Network::J3Mgmt: ../network/j3mgmt_v6.yaml >>> OS::TripleO::Network::InternalApi: ../network/internal_api_v6.yaml >>> OS::TripleO::Network::External: ../network/external_v6.yaml >>> OS::TripleO::Network::OCProvisioning: ../network/oc_provisioning.yaml >>> >>> >>> # Port assignments for the VIPs >>> OS::TripleO::Network::Ports::J3MgmtVipPort: >>> ../network/ports/j3mgmt_v6.yaml >>> OS::TripleO::Network::Ports::InternalApiVipPort: >>> ../network/ports/internal_api_v6.yaml >>> OS::TripleO::Network::Ports::ExternalVipPort: >>> ../network/ports/external_v6.yaml >>> OS::TripleO::Network::Ports::OCProvisioningVipPort: >>> ../network/ports/oc_provisioning.yaml >>> >>> >>> >>> # Port assignments by role, edit role definition to assign networks to >>> roles. >>> # Port assignments for the Controller >>> OS::TripleO::Controller::Ports::J3MgmtPort: >>> ../network/ports/j3mgmt_v6.yaml >>> OS::TripleO::Controller::Ports::InternalApiPort: >>> ../network/ports/internal_api_v6.yaml >>> OS::TripleO::Controller::Ports::ExternalPort: >>> ../network/ports/external_v6.yaml >>> OS::TripleO::Controller::Ports::OCProvisioningPort: >>> ../network/ports/oc_provisioning.yaml >>> # Port assignments for the Compute >>> OS::TripleO::Compute::Ports::J3MgmtPort: >>> ../network/ports/j3mgmt_v6.yaml >>> OS::TripleO::Compute::Ports::InternalApiPort: >>> ../network/ports/internal_api_v6.yaml >>> >>> >>> >>> parameter_defaults: >>> # Enable IPv6 environment for Manila >>> ManilaIPv6: True >>> >>> (undercloud) [stack at undercloud templates]$ >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On Tue, May 24, 2022 at 5:04 PM Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Thanks, I'll check them out. >>>> will let you know in case it works out. 
>>>> >>>> On Tue, May 24, 2022 at 2:37 PM Swogat Pradhan < >>>> swogatpradhan22 at gmail.com> wrote: >>>> >>>>> Hi, >>>>> Please find the below templates: >>>>> These are for openstack wallaby release: >>>>> >>>>> (undercloud) [stack at hkg2director workplace]$ cat >>>>> custom_network_data.yaml >>>>> - name: Storage >>>>> name_lower: storage >>>>> vip: true >>>>> mtu: 1500 >>>>> subnets: >>>>> storage_subnet: >>>>> ip_subnet: 172.25.202.0/26 >>>>> allocation_pools: >>>>> - start: 172.25.202.6 >>>>> end: 172.25.202.20 >>>>> vlan: 1105 >>>>> - name: StorageMgmt >>>>> name_lower: storage_mgmt >>>>> vip: true >>>>> mtu: 1500 >>>>> subnets: >>>>> storage_mgmt_subnet: >>>>> ip_subnet: 172.25.202.64/26 >>>>> allocation_pools: >>>>> - start: 172.25.202.72 >>>>> end: 172.25.202.87 >>>>> vlan: 1106 >>>>> - name: InternalApi >>>>> name_lower: internal_api >>>>> vip: true >>>>> mtu: 1500 >>>>> subnets: >>>>> internal_api_subnet: >>>>> ip_subnet: 172.25.201.192/26 >>>>> allocation_pools: >>>>> - start: 172.25.201.198 >>>>> end: 172.25.201.212 >>>>> vlan: 1104 >>>>> - name: Tenant >>>>> vip: false # Tenant network does not use VIPs >>>>> mtu: 1500 >>>>> name_lower: tenant >>>>> subnets: >>>>> tenant_subnet: >>>>> ip_subnet: 172.25.202.128/26 >>>>> allocation_pools: >>>>> - start: 172.25.202.135 >>>>> end: 172.25.202.150 >>>>> vlan: 1108 >>>>> - name: External >>>>> name_lower: external >>>>> vip: true >>>>> mtu: 1500 >>>>> subnets: >>>>> external_subnet: >>>>> ip_subnet: 172.25.201.128/26 >>>>> allocation_pools: >>>>> - start: 172.25.201.135 >>>>> end: 172.25.201.150 >>>>> gateway_ip: 172.25.201.129 >>>>> vlan: 1103 >>>>> >>>>> (undercloud) [stack at hkg2director workplace]$ cat custom_vip_data.yaml >>>>> - network: ctlplane >>>>> #dns_name: overcloud >>>>> ip_address: 172.25.201.91 >>>>> subnet: ctlplane-subnet >>>>> - network: external >>>>> #dns_name: overcloud >>>>> ip_address: 172.25.201.150 >>>>> subnet: external_subnet >>>>> - network: internal_api >>>>> #dns_name: overcloud >>>>> ip_address: 172.25.201.250 >>>>> subnet: internal_api_subnet >>>>> - network: storage >>>>> #dns_name: overcloud >>>>> ip_address: 172.25.202.50 >>>>> subnet: storage_subnet >>>>> - network: storage_mgmt >>>>> #dns_name: overcloud >>>>> ip_address: 172.25.202.90 >>>>> subnet: storage_mgmt_subnet >>>>> >>>>> (undercloud) [stack at hkg2director workplace]$ cat >>>>> overcloud-baremetal-deploy.yaml >>>>> - name: Controller >>>>> count: 4 >>>>> defaults: >>>>> networks: >>>>> - network: ctlplane >>>>> vif: true >>>>> - network: external >>>>> subnet: external_subnet >>>>> - network: internal_api >>>>> subnet: internal_api_subnet >>>>> - network: storage >>>>> subnet: storage_subnet >>>>> - network: storage_mgmt >>>>> subnet: storage_mgmt_subnet >>>>> - network: tenant >>>>> subnet: tenant_subnet >>>>> network_config: >>>>> template: /home/stack/templates/controller.j2 >>>>> default_route_network: >>>>> - external >>>>> instances: >>>>> - hostname: overcloud-controller-0 >>>>> name: dc1-controller2 >>>>> #provisioned: false >>>>> - hostname: overcloud-controller-1 >>>>> name: dc2-controller2 >>>>> #provisioned: false >>>>> - hostname: overcloud-controller-2 >>>>> name: dc1-controller1 >>>>> #provisioned: false >>>>> - hostname: overcloud-controller-no-ceph-3 >>>>> name: dc2-ceph2 >>>>> #provisioned: false >>>>> #- hostname: overcloud-controller-3 >>>>> #name: dc2-compute3 >>>>> #provisioned: false >>>>> >>>>> - name: Compute >>>>> count: 5 >>>>> defaults: >>>>> networks: >>>>> - network: ctlplane >>>>> vif: true >>>>> - 
network: internal_api >>>>> subnet: internal_api_subnet >>>>> - network: tenant >>>>> subnet: tenant_subnet >>>>> - network: storage >>>>> subnet: storage_subnet >>>>> network_config: >>>>> template: /home/stack/templates/compute.j2 >>>>> instances: >>>>> - hostname: overcloud-novacompute-0 >>>>> name: dc2-compute1 >>>>> #provisioned: false >>>>> - hostname: overcloud-novacompute-1 >>>>> name: dc2-ceph1 >>>>> #provisioned: false >>>>> - hostname: overcloud-novacompute-2 >>>>> name: dc1-compute1 >>>>> #provisioned: false >>>>> - hostname: overcloud-novacompute-3 >>>>> name: dc1-compute2 >>>>> #provisioned: false >>>>> - hostname: overcloud-novacompute-4 >>>>> name: dc2-compute3 >>>>> #provisioned: false >>>>> >>>>> - name: CephStorage >>>>> count: 4 >>>>> defaults: >>>>> networks: >>>>> - network: ctlplane >>>>> vif: true >>>>> - network: internal_api >>>>> subnet: internal_api_subnet >>>>> - network: storage >>>>> subnet: storage_subnet >>>>> - network: storage_mgmt >>>>> subnet: storage_mgmt_subnet >>>>> network_config: >>>>> template: /home/stack/templates/ceph-storage.j2 >>>>> instances: >>>>> - hostname: overcloud-cephstorage-0 >>>>> name: dc2-controller1 >>>>> #provisioned: false >>>>> # - hostname: overcloud-cephstorage-1 >>>>> # name: dc2-ceph2 >>>>> - hostname: overcloud-cephstorage-1 >>>>> name: dc1-ceph1 >>>>> # provisioned: false >>>>> - hostname: overcloud-cephstorage-2 >>>>> name: dc1-ceph2 >>>>> #provisioned: false >>>>> - hostname: overcloud-cephstorage-3 >>>>> name: dc2-compute2 >>>>> #provisioned: false >>>>> >>>>> >>>>> You must use these templates to provision network, vip and nodes. >>>>> You must use the output files generated during the provisioning step >>>>> in openstack overcloud deploy command using -e parameter. >>>>> >>>>> With regards, >>>>> Swogat Pradhan >>>>> >>>>> >>>>> On Mon, May 23, 2022 at 8:33 PM Lokendra Rathour < >>>>> lokendrarathour at gmail.com> wrote: >>>>> >>>>>> Hi Swogat, >>>>>> I tried checking your solution and my templates but could not relate >>>>>> much. >>>>>> But issue seems the same >>>>>> >>>>>> http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028401.html >>>>>> >>>>>> I tried somemore ways but looks like some issue with templates. >>>>>> Can you please share the templates used to deploy the overcloud. >>>>>> >>>>>> Mysetup have 3 controller and 1 compute. >>>>>> >>>>>> Thanks once again for reading my mail. >>>>>> >>>>>> Waiting for your reply. >>>>>> >>>>>> -Lokendra >>>>>> >>>>>> On Fri, 20 May 2022, 08:25 Swogat Pradhan, >>>>>> wrote: >>>>>> >>>>>>> Hi, >>>>>>> Yes I was able to find the issue and fix it. >>>>>>> The issue was with the overcloud-baremetal-deployed.yaml file i was >>>>>>> trying to provision controller-0, controller-1 and controller-3 and kept >>>>>>> controller-2 aside for later, but the tripleo scripts are built in such a >>>>>>> way that they were taking controller- 0, 1 and 2 inplace of controller-3, >>>>>>> so the network ports and vip were created for controller 0,1 and 2 but not >>>>>>> for 3 , so this error was popping off. Also i would request you to check >>>>>>> the jinja nic templates and once the node provisioning is done check the >>>>>>> /etc/os-net-config/config.json/yaml file for syntax if using bonded nic >>>>>>> template. >>>>>>> If you need any more infor please let me know. 
>>>>>>> >>>>>>> With regards, >>>>>>> Swogat Pradhan >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Fri, May 20, 2022 at 8:01 AM Lokendra Rathour < >>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>> >>>>>>>> Hi Swogat, >>>>>>>> Thanks for raising this issue. >>>>>>>> Did you find any solution? to this problem ? >>>>>>>> >>>>>>>> Please let me know it might be helpful >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Apr 19, 2022 at 12:43 PM Swogat Pradhan < >>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> I am currently trying to deploy openstack wallaby using tripleo >>>>>>>>> arch. >>>>>>>>> I created the network jinja templates, ran the following commands >>>>>>>>> also: >>>>>>>>> >>>>>>>>> #openstack overcloud network provision --stack overcloud --output >>>>>>>>> networks-deployed-environment.yaml custom_network_data.yaml >>>>>>>>> # openstack overcloud network vip provision --stack overcloud >>>>>>>>> --output vip-deployed-environment.yaml custom_vip_data.yaml >>>>>>>>> # openstack overcloud node provision --stack overcloud >>>>>>>>> --overcloud-ssh-key /home/stack/sshkey/id_rsa >>>>>>>>> overcloud-baremetal-deploy.yaml >>>>>>>>> >>>>>>>>> and used the environment files in the openstack overcloud deploy >>>>>>>>> command: >>>>>>>>> >>>>>>>>> (undercloud) [stack at hkg2director ~]$ cat deploy.sh >>>>>>>>> #!/bin/bash >>>>>>>>> THT=/usr/share/openstack-tripleo-heat-templates/ >>>>>>>>> CNF=/home/stack/ >>>>>>>>> openstack overcloud deploy --templates $THT \ >>>>>>>>> -r $CNF/templates/roles_data.yaml \ >>>>>>>>> -n $CNF/workplace/custom_network_data.yaml \ >>>>>>>>> -e ~/containers-prepare-parameter.yaml \ >>>>>>>>> -e $CNF/templates/node-info.yaml \ >>>>>>>>> -e $CNF/templates/scheduler-hints.yaml \ >>>>>>>>> -e $CNF/workplace/networks-deployed-environment.yaml \ >>>>>>>>> -e $CNF/workplace/vip-deployed-environment.yaml \ >>>>>>>>> -e $CNF/workplace/overcloud-baremetal-deployed.yaml \ >>>>>>>>> -e $CNF/workplace/custom-net-bond-with-vlans.yaml >>>>>>>>> >>>>>>>>> Now when i run the ./deploy.sh script i encounter an error stating: >>>>>>>>> >>>>>>>>> ERROR openstack [-] Resource >>>>>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>>>>> ephemeral Heat. The generated environments from 'openstack overcloud >>>>>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>>>>> included with the deployment command.: >>>>>>>>> tripleoclient.exceptions.InvalidConfiguration: Resource >>>>>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>>>>> ephemeral Heat. The generated environments from 'openstack overcloud >>>>>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>>>>> included with the deployment command. >>>>>>>>> 2022-04-19 13:47:16.582 735924 INFO osc_lib.shell [-] END return >>>>>>>>> value: 1 >>>>>>>>> >>>>>>>>> Can someone tell me where the mistake is? >>>>>>>>> >>>>>>>>> With regards, >>>>>>>>> Swogat Pradhan >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >> >> -- ~ Lokendra www.inertiaspeaks.com www.inertiagroups.com skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From swogatpradhan22 at gmail.com Tue May 31 18:34:19 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Wed, 1 Jun 2022 00:04:19 +0530 Subject: ERROR openstack [-] Resource OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type OS::Neutron::Port and the Neutron service is not available when using ephemeral Heat.| Openstack tripleo wallaby version In-Reply-To: References: Message-ID: Hi Lokendra, Like i said, > NOTE: when passing --network-config parameter in node provision step, it creates a directory in /etc/os-net-config and in it creates a file > config.yaml, do check the indentation of that file. (in my case the indentation was wrong when i was using bonding everytime, so i had to > manually change the script and run a while loop and then my node provision step was successful) this parameter --network-config creates a config.yaml(network config) file in /etc/os-net-config directory in the overcloud nodes and then ansible tries to apply the network config from the generated config file. And in wallaby version if you are using bonding then the syntax in the generated conf is wrong (atleast was in my case) and then the ansible tries to apply the network config (with wrong syntax) so your overcloud nodes become unavailable. Please run node provision separately, and keep on monitoring "metalsmith list" command. Once IP is assigned to the overcloud nodes, ssh to the nodes and assign a password to heat-admin user so that even if the network becomes unavailable you still will be able to access the nodes via console access, then you can visit /etc/os-net-config directory and verify the config.yaml file. With regards, Swogat pradhan. On Tue, May 31, 2022 at 11:56 PM Lokendra Rathour wrote: > Hi Swogat, > Thanks once again for your input it really helped much. > > instead of running mentioned those three provisioning steps i used > alternate method and passed directly in deploy command. now my current > deploy command is: > > openstack overcloud deploy --templates \ > --networks-file /home/stack/templates/custom_network_data.yaml \ > --vip-file /home/stack/templates/custom_vip_data.yaml \ > --baremetal-deployment > /home/stack/templates/overcloud-baremetal-deploy.yaml \ > --network-config \ > -e /home/stack/templates/environment.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml > \ > -e /home/stack/templates/ironic-config.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ > -e /home/stack/containers-prepare-parameter.yaml > > The files as suggested by you are well created. > > But once we run the deployment I get this error for all nodes: > > 0:00:16.064781 | 1.16s > 2022-05-31 19:13:00.276954 | 525400ef-b928-9ded-fecc-000000000094 | > TASK | Run tripleo_os_net_config_module with network_config > 2022-05-31 19:40:30.061582 | 525400ef-b928-9ded-fecc-000000000094 | > FATAL | Run tripleo_os_net_config_module with network_config | > overcloud-controller-1 | error={"msg": "Data could not be sent to remote > host \"30.30.30.117\". Make sure this host can be reached over ssh: ssh: > connect to host 30.30.30.117 port 22: No route to host\r\n"} > 2022- > > > Baremetal node list are showing in as active. 
> > (undercloud) [stack at undercloud ~]$ openstack baremetal node list > /usr/lib64/python3.6/site-packages/_yaml/__init__.py:23: > DeprecationWarning: The _yaml extension module is now located at yaml._yaml > and its location is subject to change. To use the LibYAML-based parser and > emitter, import from `yaml`: `from yaml import CLoader as Loader, CDumper > as Dumper`. > DeprecationWarning > > +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+ > | UUID | Name | Instance UUID > | Power State | Provisioning State | Maintenance | > > +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+ > | 1a4d873c-f9f7-4504-a3af-92c11f954171 | node-a | > 901453a1-183f-4de8-aaab-0f38be2be455 | power on | active | > False | > | d18610fc-9532-410c-918e-8efc326c89f8 | node-b | > d059b94a-8357-4f8e-a0d8-15a24b0c1afe | power on | active | > False | > | b69f2d5a-5b18-4453-8843-15c6af79aca0 | node-c | > f196ef3a-7950-47b9-a5ae-751f06b18f75 | power on | active | > False | > | 8a38c584-f812-4ebc-a0b1-4299f0917637 | node-d | > 1636517c-2ab2-43d7-8205-9f02c5290207 | power on | active | > False | > > +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+ > > Some config is missing it seems, please check once and advise. > > > On Tue, May 31, 2022 at 11:40 AM Swogat Pradhan > wrote: > >> Hi Lokendra, >> You need to generate another file also in the following step: openstack >> overcloud node provision --stack overcloud --overcloud-ssh-key >> /home/stack/sshkey/id_rsa overcloud-baremetal-deploy.yaml also you need >> to pass another parameter --network-config. >> example: >> openstack overcloud node provision --stack overcloud >> --overcloud-ssh-key /home/stack/sshkey/id_rsa *--network-config* *--output >> overcloud-baremetal-deployed.yaml* overcloud-baremetal-deploy.yaml >> >> And then all these output files will be passed on to the openstack >> overcloud deploy command. >> NOTE: when passing --network-config parameter in node provision step, it >> creates a directory in /etc/os-net-config and in it creates a file >> config.yaml, do check the indentation of that file. 
(in my case the >> indentation was wrong when i was using bondind everytime, so i had to >> manually change the script and run a while loop and then my node provision >> step was successful) >> >> On Tue, May 31, 2022 at 8:59 AM Lokendra Rathour < >> lokendrarathour at gmail.com> wrote: >> >>> Hi Swogat, >>> I tried generating the scripts as used by you in your deployments using >>> the >>> >>> >>> #openstack overcloud network provision --stack overcloud --output >>> networks-deployed-environment.yaml custom_network_data.yaml >>> # openstack overcloud network vip provision --stack overcloud --output >>> vip-deployed-environment.yaml custom_vip_data.yaml >>> # openstack overcloud node provision --stack overcloud >>> --overcloud-ssh-key /home/stack/sshkey/id_rsa >>> overcloud-baremetal-deploy.yaml >>> >>> and used the first two in the final deployment script, but it gives the >>> error: >>> >>> heatclient.exc.HTTPInternalServerError: ERROR: Internal Error >>> 2022-05-30 14:14:39.772 479668 ERROR >>> tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent >>> call last):\n', ' File "/usr/lib/python3.6/ted_stack\n >>> nested_stack.validate()\n', ' File >>> "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in >>> wrapper\n result = f(*args, ine 969, in validate\n result = >>> res.validate()\n', ' File >>> "/usr/lib/python3.6/site-packages/heat/engine/resources/openstack/neutron/port.py", >>> line 454site-packages/heat/engine/resources/openstack/neutron/neutron.py", >>> line 43, in validate\n res = super(NeutronResource, self).validate()\n', >>> ' File "/un return self.validate_template()\n', ' File >>> "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in >>> validate_template\n self.t.rpy", line 200, in >>> _validate_service_availability\n raise ex\n', >>> 'heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron >>> is not avaieutron network endpoint is not in service catalog.\n', '\nDuring >>> handling of the above exception, another exception occurred:\n\n', >>> 'Traceback (most recens/stack_resource.py", line 75, in >>> validate_nested_stack\n nested_stack.validate()\n', ' File >>> "/usr/lib/python3.6/site-packages/osprofiler/profiler.py"thon3.6/site-packages/heat/engine/stack.py", >>> line 969, in validate\n result = res.validate()\n', ' File >>> "/usr/lib/python3.6/site-packages/heat/engine/ateResource, >>> self).validate()\n', ' File >>> "/usr/lib/python3.6/site-packages/heat/engine/resources/stack_resource.py", >>> line 65, in validate\n self.validources/stack_resource.py", line 81, in >>> validate_nested_stack\n ex, path=[self.stack.t.RESOURCES, path])\n', >>> 'heat.common.exception.StackValidationFaileeploy/overcloud/tripleo-heat-templates/deployed-server/deployed-server.yaml>: >>> HEAT-E99001 Service neutron is not available for resource type >>> OS::TripleO::vice catalog.\n', '\nDuring handling of the above exception, >>> another exception occurred:\n\n', 'Traceback (most recent call last):\n', ' >>> File "/usr/lib/pline 320, in validate_nested_stack\n >>> nested_stack.validate()\n', ' File >>> "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in >>> wrappe/heat/engine/stack.py", line 969, in validate\n result = >>> res.validate()\n', ' File >>> "/usr/lib/python3.6/site-packages/heat/engine/resources/template_relidate()\n', >>> ' File >>> "/usr/lib/python3.6/site-packages/heat/engine/resources/stack_resource.py", >>> line 65, in validate\n self.validate_nested_stack()\n'.py", line 81, in 
>>> validate_nested_stack\n ex, path=[self.stack.t.RESOURCES, path])\n', >>> 'heat.common.exception.StackValidationFailed: >>> ResourceTypeUnavaimplates/puppet/compute-role.yaml>.resources.NovaCompute>> reason: neutron network endpoint is not in service catalog.\n', '\nDuring >>> handling of the above exception, >>> another/lib/python3.6/site-packages/heat/common/context.py", line 416, in >>> wrapped\n return func(self, ctx, *args, **kwargs)\n', ' File >>> "/usr/lib/python3.6/sirce_name, template_id)\n', ' File >>> "/usr/lib/python3.6/site-packages/heat/engine/service.py", line 756, in >>> _parse_template_and_validate_stack\n stack.v line 160, in wrapper\n >>> result = f(*args, **kwargs)\n', ' File >>> "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 969, in >>> validate\n resesources/stack_resource.py", line 65, in validate\n >>> self.validate_nested_stack()\n', ' File >>> "/usr/lib/python3.6/site-packages/heat/engine/resources/oph=[self.stack.t.RESOURCES, >>> path])\n', 'heat.common.exception.StackValidationFailed: >>> ResourceTypeUnavailable: >>> resources.Compute.resources.0.resources.NovaCompute: >>> HEAT-E9900::ControlPlanePort, reason: neutron network endpoint is not in >>> service catalog.\n']. >>> >>> Request you to check once, please. >>> >>> >>> >>> >>> On Mon, May 30, 2022 at 11:06 AM Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Hi Swogat, >>>> Thanks once again. >>>> >>>> with the files as shown below I am running the overcloud deploy for >>>> wallaby using this command: >>>> >>>> (undercloud) [stack at undercloud ~]$ cat deploy_overcloud_working_1.sh >>>> openstack overcloud deploy --templates \ >>>> -n /home/stack/templates/network_data.yaml \ >>>> -r /home/stack/templates/roles_data.yaml \ >>>> -e /home/stack/templates/environment.yaml \ >>>> -e /home/stack/templates/environments/network-isolation.yaml \ >>>> -e /home/stack/templates/environments/network-environment.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>> \ >>>> -e /home/stack/templates/ironic-config.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>> -e /home/stack/containers-prepare-parameter.yaml >>>> (undercloud) [stack at undercloud ~]$ >>>> >>>> >>>> This deployment is on ipv6 using triple0 wallaby, templates, as >>>> mentioned below, are generated using rendering steps and the >>>> network_data.yaml the roles_data.yaml >>>> Steps used to render the templates: >>>> cd /usr/share/openstack-tripleo-heat-templates/ >>>> ./tools/process-templates.py -o >>>> ~/openstack-tripleo-heat-templates-rendered_at_wallaby -n >>>> /home/stack/templates/network_data.yaml -r >>>> /home/stack/templates/roles_data.yaml >>>> >>>> *Now if i try adding the related to VIP port I do get the error as:* >>>> >>>> 2022-05-30 10:37:12.792 979387 WARNING >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template >>>> to file: >>>> /home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/controller-role.yaml >>>> 2022-05-30 10:37:12.792 979387 WARNING >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template >>>> to file: >>>> 
/home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/compute-role.yaml >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured >>>> while running the command: ValueError: The environment is not a valid YAML >>>> mapping data type. >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent >>>> call last): >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>> "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, >>>> self).run(parsed_args) >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>> "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in >>>> run >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, >>>> self).run(parsed_args) >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>> "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = >>>> self.take_action(parsed_args) or 0 >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >>>> line 1189, in take_action >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud stack, parsed_args, >>>> new_tht_root, user_tht_root) >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >>>> line 227, in create_env_files >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud created_env_files, >>>> parsed_args, new_tht_root, user_tht_root) >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >>>> line 204, in build_image_params >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud cleanup=(not >>>> parsed_args.no_cleanup)) >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>> "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 1929, in >>>> process_multiple_environments >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud env_path=env_path, >>>> include_env_in_files=include_env_in_files) >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>> "/usr/lib/python3.6/site-packages/heatclient/common/template_utils.py", >>>> line 326, in process_environment_and_files >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud env = >>>> environment_format.parse(raw_env) >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>> "/usr/lib/python3.6/site-packages/heatclient/common/environment_format.py", >>>> line 50, in parse >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> 
tripleoclient.v1.overcloud_deploy.DeployOvercloud raise >>>> ValueError(_('The environment is not a valid ' >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud ValueError: The >>>> environment is not a valid YAML mapping data type. >>>> 2022-05-30 10:37:14.455 979387 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud >>>> 2022-05-30 10:37:14.457 979387 ERROR openstack [-] The environment is >>>> not a valid YAML mapping data type. >>>> 2022-05-30 10:37:14.457 979387 INFO osc_lib.shell [-] END return value: >>>> 1 >>>> (undercloud) [stack at undercloud ~]$ >>>> >>>> This is more of a syntax error where it is not able to understand the >>>> passed VIP data file: >>>> >>>> undercloud) [stack at undercloud ~]$ cat >>>> /home/stack/templates/vip-data-default-network-isolation.yaml >>>> - >>>> dns_name: overcloud >>>> network: internal_api >>>> - >>>> dns_name: overcloud >>>> network: external >>>> - >>>> dns_name: overcloud >>>> network: ctlplane >>>> - >>>> dns_name: overcloud >>>> network: oc_provisioning >>>> - >>>> dns_name: overcloud >>>> network: j3mgmt >>>> >>>> >>>> Please advise, also please note that similar templates generated in >>>> prior releases such as train/ussuri works perfectly. >>>> >>>> >>>> >>>> Please check the list of *templates *files: >>>> >>>> drwxr-xr-x. 2 stack stack 68 May 30 09:22 environments >>>> -rw-r--r--. 1 stack stack 265 May 27 13:47 environment.yaml >>>> -rw-rw-r--. 1 stack stack 297 May 27 13:47 init-repo.yaml >>>> -rw-r--r--. 1 stack stack 570 May 27 13:47 ironic-config.yaml >>>> drwxrwxr-x. 4 stack stack 4096 May 27 13:53 network >>>> -rw-r--r--. 1 stack stack 6370 May 27 14:26 network_data.yaml >>>> -rw-r--r--. 1 stack stack 11137 May 27 13:53 roles_data.yaml >>>> -rw-r--r--. 
1 stack stack 234 May 30 09:23 >>>> vip-data-default-network-isolation.yaml >>>> >>>> >>>> >>>> (undercloud) [stack at undercloud templates]$ cat environment.yaml >>>> >>>> parameter_defaults: >>>> OvercloudControllerFlavor: control >>>> OvercloudComputeFlavor: compute >>>> ControllerCount: 3 >>>> ComputeCount: 1 >>>> TimeZone: 'Asia/Kolkata' >>>> NtpServer: ['30.30.30.3'] >>>> NeutronBridgeMappings: datacentre:br-tenant >>>> NeutronFlatNetworks: datacentre >>>> (undercloud) [stack at undercloud templates]$ >>>> >>>> >>>> >>>> (undercloud) [stack at undercloud templates]$ cat ironic-config.yaml >>>> >>>> parameter_defaults: >>>> NovaSchedulerDefaultFilters: >>>> - AggregateInstanceExtraSpecsFilter >>>> - AvailabilityZoneFilter >>>> - ComputeFilter >>>> - ComputeCapabilitiesFilter >>>> - ImagePropertiesFilter >>>> IronicEnabledHardwareTypes: >>>> - ipmi >>>> - redfish >>>> IronicEnabledPowerInterfaces: >>>> - ipmitool >>>> - redfish >>>> IronicEnabledManagementInterfaces: >>>> - ipmitool >>>> - redfish >>>> IronicCleaningDiskErase: metadata >>>> IronicIPXEEnabled: true >>>> IronicInspectorSubnets: >>>> - ip_range: 172.23.3.100,172.23.3.150 >>>> >>>> (undercloud) [stack at undercloud templates]$ cat network_data.yaml >>>> >>>> - name: J3Mgmt >>>> name_lower: j3mgmt >>>> vip: true >>>> vlan: 400 >>>> ipv6: true >>>> ipv6_subnet: 'fd80:fd00:fd00:4000::/64' >>>> ipv6_allocation_pools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': >>>> 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] >>>> mtu: 9000 >>>> >>>> >>>> >>>> - name: InternalApi >>>> name_lower: internal_api >>>> vip: true >>>> vlan: 418 >>>> ipv6: true >>>> ipv6_subnet: 'fd00:fd00:fd00:2000::/64' >>>> ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': >>>> 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] >>>> mtu: 9000 >>>> >>>> >>>> - name: External >>>> vip: true >>>> name_lower: external >>>> vlan: 408 >>>> ipv6: true >>>> gateway_ipv6: 'fd00:fd00:fd00:9900::1' >>>> ipv6_subnet: 'fd00:fd00:fd00:9900::/64' >>>> ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': >>>> 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] >>>> mtu: 9000 >>>> >>>> >>>> - name: OCProvisioning >>>> vip: true >>>> name_lower: oc_provisioning >>>> vlan: 412 >>>> ip_subnet: '172.23.3.0/24' >>>> allocation_pools: [{'start': '172.23.3.10', 'end': '172.23.3.50'}] >>>> mtu: 9000 >>>> >>>> >>>> >>>> >>>> (undercloud) [stack at undercloud templates]$ cat roles_data.yaml >>>> >>>> >>>> ############################################################################### >>>> # File generated by TripleO >>>> >>>> ############################################################################### >>>> >>>> ############################################################################### >>>> # Role: Controller >>>> # >>>> >>>> ############################################################################### >>>> - name: Controller >>>> description: | >>>> Controller role that has all the controller services loaded and >>>> handles >>>> Database, Messaging, and Network functions. 
>>>> CountDefault: 1 >>>> tags: >>>> - primary >>>> - controller >>>> # Create external Neutron bridge for SNAT (and floating IPs when >>>> using >>>> # ML2/OVS without DVR) >>>> - external_bridge >>>> networks: >>>> External: >>>> subnet: external_subnet >>>> InternalApi: >>>> subnet: internal_api_subnet >>>> OCProvisioning: >>>> subnet: oc_provisioning_subnet >>>> J3Mgmt: >>>> subnet: j3mgmt_subnet >>>> >>>> >>>> # For systems with both IPv4 and IPv6, you may specify a gateway >>>> network for >>>> # each, such as ['ControlPlane', 'External'] >>>> default_route_networks: ['External'] >>>> HostnameFormatDefault: '%stackname%-controller-%index%' >>>> RoleParametersDefault: >>>> OVNCMSOptions: "enable-chassis-as-gw" >>>> # Deprecated & backward-compatible values (FIXME: Make parameters >>>> consistent) >>>> # Set uses_deprecated_params to True if any deprecated params are >>>> used. >>>> uses_deprecated_params: True >>>> deprecated_param_extraconfig: 'controllerExtraConfig' >>>> deprecated_param_flavor: 'OvercloudControlFlavor' >>>> deprecated_param_image: 'controllerImage' >>>> deprecated_nic_config_name: 'controller.yaml' >>>> update_serial: 1 >>>> ServicesDefault: >>>> - OS::TripleO::Services::Aide >>>> - OS::TripleO::Services::AodhApi >>>> - OS::TripleO::Services::AodhEvaluator >>>> >>>> .. >>>> . >>>> >>>> >>>> ..############################################################################### >>>> # Role: Compute >>>> # >>>> >>>> ############################################################################### >>>> - name: Compute >>>> description: | >>>> Basic Compute Node role >>>> CountDefault: 1 >>>> # Create external Neutron bridge (unset if using ML2/OVS without DVR) >>>> tags: >>>> - compute >>>> - external_bridge >>>> networks: >>>> InternalApi: >>>> subnet: internal_api_subnet >>>> J3Mgmt: >>>> subnet: j3mgmt_subnet >>>> HostnameFormatDefault: '%stackname%-novacompute-%index%' >>>> RoleParametersDefault: >>>> FsAioMaxNumber: 1048576 >>>> TunedProfileName: "virtual-host" >>>> # Deprecated & backward-compatible values (FIXME: Make parameters >>>> consistent) >>>> # Set uses_deprecated_params to True if any deprecated params are >>>> used. >>>> # These deprecated_params only need to be used for existing roles and >>>> not for >>>> # composable roles. >>>> uses_deprecated_params: True >>>> deprecated_param_image: 'NovaImage' >>>> deprecated_param_extraconfig: 'NovaComputeExtraConfig' >>>> deprecated_param_metadata: 'NovaComputeServerMetadata' >>>> deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints' >>>> deprecated_param_ips: 'NovaComputeIPs' >>>> deprecated_server_resource_name: 'NovaCompute' >>>> >>>> deprecated_nic_config_name: 'compute.yaml' >>>> update_serial: 25 >>>> ServicesDefault: >>>> - OS::TripleO::Services::Aide >>>> - OS::TripleO::Services::AuditD >>>> - OS::TripleO::Services::BootParams >>>> >>>> >>>> (undercloud) [stack at undercloud templates]$ cat >>>> environments/network-environment.yaml >>>> >>>> #This file is an example of an environment file for defining the >>>> isolated >>>> #networks and related parameters. >>>> resource_registry: >>>> # Network Interface templates to use (these files must exist). You can >>>> # override these by including one of the net-*.yaml environment files, >>>> # such as net-bond-with-vlans.yaml, or modifying the list here. 
>>>> # Port assignments for the Controller >>>> OS::TripleO::Controller::Net::SoftwareConfig: OS::Heat::None >>>> # Port assignments for the Compute >>>> OS::TripleO::Compute::Net::SoftwareConfig: OS::Heat::None >>>> >>>> >>>> parameter_defaults: >>>> # This section is where deployment-specific configuration is done >>>> # >>>> ServiceNetMap: >>>> IronicApiNetwork: oc_provisioning >>>> IronicNetwork: oc_provisioning >>>> >>>> >>>> >>>> # This section is where deployment-specific configuration is done >>>> ControllerNetworkConfigTemplate: >>>> 'templates/bonds_vlans/bonds_vlans.j2' >>>> ComputeNetworkConfigTemplate: 'templates/bonds_vlans/bonds_vlans.j2' >>>> >>>> >>>> >>>> # Customize the IP subnet to match the local environment >>>> J3MgmtNetCidr: 'fd80:fd00:fd00:4000::/64' >>>> # Customize the IP range to use for static IPs and VIPs >>>> J3MgmtAllocationPools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': >>>> 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] >>>> # Customize the VLAN ID to match the local environment >>>> J3MgmtNetworkVlanID: 400 >>>> >>>> >>>> >>>> # Customize the IP subnet to match the local environment >>>> InternalApiNetCidr: 'fd00:fd00:fd00:2000::/64' >>>> # Customize the IP range to use for static IPs and VIPs >>>> InternalApiAllocationPools: [{'start': 'fd00:fd00:fd00:2000::10', >>>> 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] >>>> # Customize the VLAN ID to match the local environment >>>> InternalApiNetworkVlanID: 418 >>>> >>>> >>>> >>>> # Customize the IP subnet to match the local environment >>>> ExternalNetCidr: 'fd00:fd00:fd00:9900::/64' >>>> # Customize the IP range to use for static IPs and VIPs >>>> # Leave room if the external network is also used for floating IPs >>>> ExternalAllocationPools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': >>>> 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] >>>> # Gateway router for routable networks >>>> ExternalInterfaceDefaultRoute: 'fd00:fd00:fd00:9900::1' >>>> # Customize the VLAN ID to match the local environment >>>> ExternalNetworkVlanID: 408 >>>> >>>> >>>> >>>> # Customize the IP subnet to match the local environment >>>> OCProvisioningNetCidr: '172.23.3.0/24' >>>> # Customize the IP range to use for static IPs and VIPs >>>> OCProvisioningAllocationPools: [{'start': '172.23.3.10', 'end': >>>> '172.23.3.50'}] >>>> # Customize the VLAN ID to match the local environment >>>> OCProvisioningNetworkVlanID: 412 >>>> >>>> >>>> >>>> # List of Neutron network types for tenant networks (will be used in >>>> order) >>>> NeutronNetworkType: 'geneve,vlan' >>>> # Neutron VLAN ranges per network, for example >>>> 'datacentre:1:499,tenant:500:1000': >>>> NeutronNetworkVLANRanges: 'datacentre:1:1000' >>>> # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 >>>> miimon=100" >>>> # for Linux bonds w/LACP, or "bond_mode=active-backup" for OVS >>>> active/backup. >>>> BondInterfaceOvsOptions: "bond_mode=active-backup" >>>> >>>> (undercloud) [stack at undercloud templates]$ >>>> >>>> >>>> (undercloud) [stack at undercloud templates]$ cat >>>> environments/network-isolation.yaml >>>> >>>> # NOTE: This template is now deprecated, and is only included for >>>> compatibility >>>> # when upgrading a deployment where this template was originally used. >>>> For new >>>> # deployments, set "ipv6: true" on desired networks in >>>> network_data.yaml, and >>>> # include network-isolation.yaml. 
>>>> # >>>> # Enable the creation of Neutron networks for isolated Overcloud >>>> # traffic and configure each role to assign ports (related >>>> # to that role) on these networks. >>>> resource_registry: >>>> # networks as defined in network_data.yaml >>>> OS::TripleO::Network::J3Mgmt: ../network/j3mgmt_v6.yaml >>>> OS::TripleO::Network::InternalApi: ../network/internal_api_v6.yaml >>>> OS::TripleO::Network::External: ../network/external_v6.yaml >>>> OS::TripleO::Network::OCProvisioning: ../network/oc_provisioning.yaml >>>> >>>> >>>> # Port assignments for the VIPs >>>> OS::TripleO::Network::Ports::J3MgmtVipPort: >>>> ../network/ports/j3mgmt_v6.yaml >>>> OS::TripleO::Network::Ports::InternalApiVipPort: >>>> ../network/ports/internal_api_v6.yaml >>>> OS::TripleO::Network::Ports::ExternalVipPort: >>>> ../network/ports/external_v6.yaml >>>> OS::TripleO::Network::Ports::OCProvisioningVipPort: >>>> ../network/ports/oc_provisioning.yaml >>>> >>>> >>>> >>>> # Port assignments by role, edit role definition to assign networks >>>> to roles. >>>> # Port assignments for the Controller >>>> OS::TripleO::Controller::Ports::J3MgmtPort: >>>> ../network/ports/j3mgmt_v6.yaml >>>> OS::TripleO::Controller::Ports::InternalApiPort: >>>> ../network/ports/internal_api_v6.yaml >>>> OS::TripleO::Controller::Ports::ExternalPort: >>>> ../network/ports/external_v6.yaml >>>> OS::TripleO::Controller::Ports::OCProvisioningPort: >>>> ../network/ports/oc_provisioning.yaml >>>> # Port assignments for the Compute >>>> OS::TripleO::Compute::Ports::J3MgmtPort: >>>> ../network/ports/j3mgmt_v6.yaml >>>> OS::TripleO::Compute::Ports::InternalApiPort: >>>> ../network/ports/internal_api_v6.yaml >>>> >>>> >>>> >>>> parameter_defaults: >>>> # Enable IPv6 environment for Manila >>>> ManilaIPv6: True >>>> >>>> (undercloud) [stack at undercloud templates]$ >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Tue, May 24, 2022 at 5:04 PM Lokendra Rathour < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> Thanks, I'll check them out. >>>>> will let you know in case it works out. 
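A side note on the network-isolation.yaml shown above: its resource_registry still maps the VIP and per-role port resources to the ../network/ports/*.yaml templates, which create OS::Neutron::Port resources and therefore trigger the HEAT-E99001 "Service neutron is not available" validation error seen earlier, since there is no Neutron behind ephemeral Heat on Wallaby. A quick, hypothetical way to spot such leftover mappings among the files passed with -e (paths are examples based on the listing above):

grep -rn "OS::TripleO::Network::Ports" /home/stack/templates/environments/
grep -rn "OS::Neutron::Port" /home/stack/templates/network/ports/ 2>/dev/null | head

Any environment that still registers these port resources needs to be dropped in favour of the networks/vip/baremetal "deployed" environments generated by the provisioning commands.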
>>>>> >>>>> On Tue, May 24, 2022 at 2:37 PM Swogat Pradhan < >>>>> swogatpradhan22 at gmail.com> wrote: >>>>> >>>>>> Hi, >>>>>> Please find the below templates: >>>>>> These are for openstack wallaby release: >>>>>> >>>>>> (undercloud) [stack at hkg2director workplace]$ cat >>>>>> custom_network_data.yaml >>>>>> - name: Storage >>>>>> name_lower: storage >>>>>> vip: true >>>>>> mtu: 1500 >>>>>> subnets: >>>>>> storage_subnet: >>>>>> ip_subnet: 172.25.202.0/26 >>>>>> allocation_pools: >>>>>> - start: 172.25.202.6 >>>>>> end: 172.25.202.20 >>>>>> vlan: 1105 >>>>>> - name: StorageMgmt >>>>>> name_lower: storage_mgmt >>>>>> vip: true >>>>>> mtu: 1500 >>>>>> subnets: >>>>>> storage_mgmt_subnet: >>>>>> ip_subnet: 172.25.202.64/26 >>>>>> allocation_pools: >>>>>> - start: 172.25.202.72 >>>>>> end: 172.25.202.87 >>>>>> vlan: 1106 >>>>>> - name: InternalApi >>>>>> name_lower: internal_api >>>>>> vip: true >>>>>> mtu: 1500 >>>>>> subnets: >>>>>> internal_api_subnet: >>>>>> ip_subnet: 172.25.201.192/26 >>>>>> allocation_pools: >>>>>> - start: 172.25.201.198 >>>>>> end: 172.25.201.212 >>>>>> vlan: 1104 >>>>>> - name: Tenant >>>>>> vip: false # Tenant network does not use VIPs >>>>>> mtu: 1500 >>>>>> name_lower: tenant >>>>>> subnets: >>>>>> tenant_subnet: >>>>>> ip_subnet: 172.25.202.128/26 >>>>>> allocation_pools: >>>>>> - start: 172.25.202.135 >>>>>> end: 172.25.202.150 >>>>>> vlan: 1108 >>>>>> - name: External >>>>>> name_lower: external >>>>>> vip: true >>>>>> mtu: 1500 >>>>>> subnets: >>>>>> external_subnet: >>>>>> ip_subnet: 172.25.201.128/26 >>>>>> allocation_pools: >>>>>> - start: 172.25.201.135 >>>>>> end: 172.25.201.150 >>>>>> gateway_ip: 172.25.201.129 >>>>>> vlan: 1103 >>>>>> >>>>>> (undercloud) [stack at hkg2director workplace]$ cat custom_vip_data.yaml >>>>>> - network: ctlplane >>>>>> #dns_name: overcloud >>>>>> ip_address: 172.25.201.91 >>>>>> subnet: ctlplane-subnet >>>>>> - network: external >>>>>> #dns_name: overcloud >>>>>> ip_address: 172.25.201.150 >>>>>> subnet: external_subnet >>>>>> - network: internal_api >>>>>> #dns_name: overcloud >>>>>> ip_address: 172.25.201.250 >>>>>> subnet: internal_api_subnet >>>>>> - network: storage >>>>>> #dns_name: overcloud >>>>>> ip_address: 172.25.202.50 >>>>>> subnet: storage_subnet >>>>>> - network: storage_mgmt >>>>>> #dns_name: overcloud >>>>>> ip_address: 172.25.202.90 >>>>>> subnet: storage_mgmt_subnet >>>>>> >>>>>> (undercloud) [stack at hkg2director workplace]$ cat >>>>>> overcloud-baremetal-deploy.yaml >>>>>> - name: Controller >>>>>> count: 4 >>>>>> defaults: >>>>>> networks: >>>>>> - network: ctlplane >>>>>> vif: true >>>>>> - network: external >>>>>> subnet: external_subnet >>>>>> - network: internal_api >>>>>> subnet: internal_api_subnet >>>>>> - network: storage >>>>>> subnet: storage_subnet >>>>>> - network: storage_mgmt >>>>>> subnet: storage_mgmt_subnet >>>>>> - network: tenant >>>>>> subnet: tenant_subnet >>>>>> network_config: >>>>>> template: /home/stack/templates/controller.j2 >>>>>> default_route_network: >>>>>> - external >>>>>> instances: >>>>>> - hostname: overcloud-controller-0 >>>>>> name: dc1-controller2 >>>>>> #provisioned: false >>>>>> - hostname: overcloud-controller-1 >>>>>> name: dc2-controller2 >>>>>> #provisioned: false >>>>>> - hostname: overcloud-controller-2 >>>>>> name: dc1-controller1 >>>>>> #provisioned: false >>>>>> - hostname: overcloud-controller-no-ceph-3 >>>>>> name: dc2-ceph2 >>>>>> #provisioned: false >>>>>> #- hostname: overcloud-controller-3 >>>>>> #name: dc2-compute3 >>>>>> #provisioned: false 
>>>>>> >>>>>> - name: Compute >>>>>> count: 5 >>>>>> defaults: >>>>>> networks: >>>>>> - network: ctlplane >>>>>> vif: true >>>>>> - network: internal_api >>>>>> subnet: internal_api_subnet >>>>>> - network: tenant >>>>>> subnet: tenant_subnet >>>>>> - network: storage >>>>>> subnet: storage_subnet >>>>>> network_config: >>>>>> template: /home/stack/templates/compute.j2 >>>>>> instances: >>>>>> - hostname: overcloud-novacompute-0 >>>>>> name: dc2-compute1 >>>>>> #provisioned: false >>>>>> - hostname: overcloud-novacompute-1 >>>>>> name: dc2-ceph1 >>>>>> #provisioned: false >>>>>> - hostname: overcloud-novacompute-2 >>>>>> name: dc1-compute1 >>>>>> #provisioned: false >>>>>> - hostname: overcloud-novacompute-3 >>>>>> name: dc1-compute2 >>>>>> #provisioned: false >>>>>> - hostname: overcloud-novacompute-4 >>>>>> name: dc2-compute3 >>>>>> #provisioned: false >>>>>> >>>>>> - name: CephStorage >>>>>> count: 4 >>>>>> defaults: >>>>>> networks: >>>>>> - network: ctlplane >>>>>> vif: true >>>>>> - network: internal_api >>>>>> subnet: internal_api_subnet >>>>>> - network: storage >>>>>> subnet: storage_subnet >>>>>> - network: storage_mgmt >>>>>> subnet: storage_mgmt_subnet >>>>>> network_config: >>>>>> template: /home/stack/templates/ceph-storage.j2 >>>>>> instances: >>>>>> - hostname: overcloud-cephstorage-0 >>>>>> name: dc2-controller1 >>>>>> #provisioned: false >>>>>> # - hostname: overcloud-cephstorage-1 >>>>>> # name: dc2-ceph2 >>>>>> - hostname: overcloud-cephstorage-1 >>>>>> name: dc1-ceph1 >>>>>> # provisioned: false >>>>>> - hostname: overcloud-cephstorage-2 >>>>>> name: dc1-ceph2 >>>>>> #provisioned: false >>>>>> - hostname: overcloud-cephstorage-3 >>>>>> name: dc2-compute2 >>>>>> #provisioned: false >>>>>> >>>>>> >>>>>> You must use these templates to provision network, vip and nodes. >>>>>> You must use the output files generated during the provisioning step >>>>>> in openstack overcloud deploy command using -e parameter. >>>>>> >>>>>> With regards, >>>>>> Swogat Pradhan >>>>>> >>>>>> >>>>>> On Mon, May 23, 2022 at 8:33 PM Lokendra Rathour < >>>>>> lokendrarathour at gmail.com> wrote: >>>>>> >>>>>>> Hi Swogat, >>>>>>> I tried checking your solution and my templates but could not relate >>>>>>> much. >>>>>>> But issue seems the same >>>>>>> >>>>>>> http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028401.html >>>>>>> >>>>>>> I tried somemore ways but looks like some issue with templates. >>>>>>> Can you please share the templates used to deploy the overcloud. >>>>>>> >>>>>>> Mysetup have 3 controller and 1 compute. >>>>>>> >>>>>>> Thanks once again for reading my mail. >>>>>>> >>>>>>> Waiting for your reply. >>>>>>> >>>>>>> -Lokendra >>>>>>> >>>>>>> On Fri, 20 May 2022, 08:25 Swogat Pradhan, < >>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> Yes I was able to find the issue and fix it. >>>>>>>> The issue was with the overcloud-baremetal-deployed.yaml file i was >>>>>>>> trying to provision controller-0, controller-1 and controller-3 and kept >>>>>>>> controller-2 aside for later, but the tripleo scripts are built in such a >>>>>>>> way that they were taking controller- 0, 1 and 2 inplace of controller-3, >>>>>>>> so the network ports and vip were created for controller 0,1 and 2 but not >>>>>>>> for 3 , so this error was popping off. Also i would request you to check >>>>>>>> the jinja nic templates and once the node provisioning is done check the >>>>>>>> /etc/os-net-config/config.json/yaml file for syntax if using bonded nic >>>>>>>> template. 
>>>>>>>> If you need any more infor please let me know. >>>>>>>> >>>>>>>> With regards, >>>>>>>> Swogat Pradhan >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Fri, May 20, 2022 at 8:01 AM Lokendra Rathour < >>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi Swogat, >>>>>>>>> Thanks for raising this issue. >>>>>>>>> Did you find any solution? to this problem ? >>>>>>>>> >>>>>>>>> Please let me know it might be helpful >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Apr 19, 2022 at 12:43 PM Swogat Pradhan < >>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> I am currently trying to deploy openstack wallaby using tripleo >>>>>>>>>> arch. >>>>>>>>>> I created the network jinja templates, ran the following commands >>>>>>>>>> also: >>>>>>>>>> >>>>>>>>>> #openstack overcloud network provision --stack overcloud --output >>>>>>>>>> networks-deployed-environment.yaml custom_network_data.yaml >>>>>>>>>> # openstack overcloud network vip provision --stack overcloud >>>>>>>>>> --output vip-deployed-environment.yaml custom_vip_data.yaml >>>>>>>>>> # openstack overcloud node provision --stack overcloud >>>>>>>>>> --overcloud-ssh-key /home/stack/sshkey/id_rsa >>>>>>>>>> overcloud-baremetal-deploy.yaml >>>>>>>>>> >>>>>>>>>> and used the environment files in the openstack overcloud deploy >>>>>>>>>> command: >>>>>>>>>> >>>>>>>>>> (undercloud) [stack at hkg2director ~]$ cat deploy.sh >>>>>>>>>> #!/bin/bash >>>>>>>>>> THT=/usr/share/openstack-tripleo-heat-templates/ >>>>>>>>>> CNF=/home/stack/ >>>>>>>>>> openstack overcloud deploy --templates $THT \ >>>>>>>>>> -r $CNF/templates/roles_data.yaml \ >>>>>>>>>> -n $CNF/workplace/custom_network_data.yaml \ >>>>>>>>>> -e ~/containers-prepare-parameter.yaml \ >>>>>>>>>> -e $CNF/templates/node-info.yaml \ >>>>>>>>>> -e $CNF/templates/scheduler-hints.yaml \ >>>>>>>>>> -e $CNF/workplace/networks-deployed-environment.yaml \ >>>>>>>>>> -e $CNF/workplace/vip-deployed-environment.yaml \ >>>>>>>>>> -e $CNF/workplace/overcloud-baremetal-deployed.yaml \ >>>>>>>>>> -e $CNF/workplace/custom-net-bond-with-vlans.yaml >>>>>>>>>> >>>>>>>>>> Now when i run the ./deploy.sh script i encounter an error >>>>>>>>>> stating: >>>>>>>>>> >>>>>>>>>> ERROR openstack [-] Resource >>>>>>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>>>>>> ephemeral Heat. The generated environments from 'openstack overcloud >>>>>>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>>>>>> included with the deployment command.: >>>>>>>>>> tripleoclient.exceptions.InvalidConfiguration: Resource >>>>>>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>>>>>> ephemeral Heat. The generated environments from 'openstack overcloud >>>>>>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>>>>>> included with the deployment command. >>>>>>>>>> 2022-04-19 13:47:16.582 735924 INFO osc_lib.shell [-] END return >>>>>>>>>> value: 1 >>>>>>>>>> >>>>>>>>>> Can someone tell me where the mistake is? >>>>>>>>>> >>>>>>>>>> With regards, >>>>>>>>>> Swogat Pradhan >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>> >>> > > -- > ~ Lokendra > www.inertiaspeaks.com > www.inertiagroups.com > skype: lokendrarathour > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lokendrarathour at gmail.com Tue May 31 18:49:44 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Wed, 1 Jun 2022 00:19:44 +0530 Subject: ERROR openstack [-] Resource OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type OS::Neutron::Port and the Neutron service is not available when using ephemeral Heat.| Openstack tripleo wallaby version In-Reply-To: References: Message-ID: Ok Swogat, Thanks once again. i will try this approach once and will let you know. On Wed, 1 Jun 2022, 00:04 Swogat Pradhan, wrote: > Hi Lokendra, > Like i said, > > NOTE: when passing --network-config parameter in node provision step, it > creates a directory in /etc/os-net-config and in it creates a file > > config.yaml, do check the indentation of that file. (in my case the > indentation was wrong when i was using bonding everytime, so i had to > > manually change the script and run a while loop and then my node > provision step was successful) > this parameter --network-config creates a config.yaml(network config) file > in /etc/os-net-config directory in the overcloud nodes and then ansible > tries to apply the network config from the generated config file. And in > wallaby version if you are using bonding then the syntax in the generated > conf is wrong (atleast was in my case) and then the ansible tries to apply > the network config (with wrong syntax) so your overcloud nodes become > unavailable. > > Please run node provision separately, and keep on monitoring "metalsmith > list" command. Once IP is assigned to the overcloud nodes, ssh to the nodes > and assign a password to heat-admin user so that even if the network > becomes unavailable you still will be able to access the nodes via console > access, then you can visit /etc/os-net-config directory and verify the > config.yaml file. > > With regards, > Swogat pradhan. > > On Tue, May 31, 2022 at 11:56 PM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Hi Swogat, >> Thanks once again for your input it really helped much. >> >> instead of running mentioned those three provisioning steps i used >> alternate method and passed directly in deploy command. now my current >> deploy command is: >> >> openstack overcloud deploy --templates \ >> --networks-file /home/stack/templates/custom_network_data.yaml \ >> --vip-file /home/stack/templates/custom_vip_data.yaml \ >> --baremetal-deployment >> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >> --network-config \ >> -e /home/stack/templates/environment.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >> \ >> -e /home/stack/templates/ironic-config.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >> -e /home/stack/containers-prepare-parameter.yaml >> >> The files as suggested by you are well created. 
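To make the verification advice above concrete, a rough sketch of the checks on a freshly provisioned node (the IP, user and paths come from this thread; the PyYAML one-liner assumes python3 and the yaml module are present on the node, which os-net-config normally pulls in):

watch -n 10 metalsmith list        # wait until each node gets its ctlplane IP
ssh heat-admin@30.30.30.117        # use the ctlplane IP shown by metalsmith
sudo passwd heat-admin             # keep console access usable if os-net-config breaks the network
sudo python3 -c "import yaml; yaml.safe_load(open('/etc/os-net-config/config.yaml'))"
sudo cat /etc/os-net-config/config.yaml   # eyeball the bond/VLAN member indentation

If the YAML load fails or the bond members sit at the wrong indentation level, fix the nic template (or the generated file) before re-running the deploy; otherwise the "Run tripleo_os_net_config_module with network_config" task applies the broken config and the node drops off the network, which is what the "No route to host" failure below looks like.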
>> >> But once we run the deployment I get this error for all nodes: >> >> 0:00:16.064781 | 1.16s >> 2022-05-31 19:13:00.276954 | 525400ef-b928-9ded-fecc-000000000094 | >> TASK | Run tripleo_os_net_config_module with network_config >> 2022-05-31 19:40:30.061582 | 525400ef-b928-9ded-fecc-000000000094 | >> FATAL | Run tripleo_os_net_config_module with network_config | >> overcloud-controller-1 | error={"msg": "Data could not be sent to remote >> host \"30.30.30.117\". Make sure this host can be reached over ssh: ssh: >> connect to host 30.30.30.117 port 22: No route to host\r\n"} >> 2022- >> >> >> Baremetal node list are showing in as active. >> >> (undercloud) [stack at undercloud ~]$ openstack baremetal node list >> /usr/lib64/python3.6/site-packages/_yaml/__init__.py:23: >> DeprecationWarning: The _yaml extension module is now located at yaml._yaml >> and its location is subject to change. To use the LibYAML-based parser and >> emitter, import from `yaml`: `from yaml import CLoader as Loader, CDumper >> as Dumper`. >> DeprecationWarning >> >> +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+ >> | UUID | Name | Instance UUID >> | Power State | Provisioning State | Maintenance | >> >> +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+ >> | 1a4d873c-f9f7-4504-a3af-92c11f954171 | node-a | >> 901453a1-183f-4de8-aaab-0f38be2be455 | power on | active | >> False | >> | d18610fc-9532-410c-918e-8efc326c89f8 | node-b | >> d059b94a-8357-4f8e-a0d8-15a24b0c1afe | power on | active | >> False | >> | b69f2d5a-5b18-4453-8843-15c6af79aca0 | node-c | >> f196ef3a-7950-47b9-a5ae-751f06b18f75 | power on | active | >> False | >> | 8a38c584-f812-4ebc-a0b1-4299f0917637 | node-d | >> 1636517c-2ab2-43d7-8205-9f02c5290207 | power on | active | >> False | >> >> +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+ >> >> Some config is missing it seems, please check once and advise. >> >> >> On Tue, May 31, 2022 at 11:40 AM Swogat Pradhan < >> swogatpradhan22 at gmail.com> wrote: >> >>> Hi Lokendra, >>> You need to generate another file also in the following step: openstack >>> overcloud node provision --stack overcloud --overcloud-ssh-key >>> /home/stack/sshkey/id_rsa overcloud-baremetal-deploy.yaml also you need >>> to pass another parameter --network-config. >>> example: >>> openstack overcloud node provision --stack overcloud >>> --overcloud-ssh-key /home/stack/sshkey/id_rsa *--network-config* *--output >>> overcloud-baremetal-deployed.yaml* overcloud-baremetal-deploy.yaml >>> >>> And then all these output files will be passed on to the openstack >>> overcloud deploy command. >>> NOTE: when passing --network-config parameter in node provision step, it >>> creates a directory in /etc/os-net-config and in it creates a file >>> config.yaml, do check the indentation of that file. 
(in my case the >>> indentation was wrong when i was using bondind everytime, so i had to >>> manually change the script and run a while loop and then my node provision >>> step was successful) >>> >>> On Tue, May 31, 2022 at 8:59 AM Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Hi Swogat, >>>> I tried generating the scripts as used by you in your deployments using >>>> the >>>> >>>> >>>> #openstack overcloud network provision --stack overcloud --output >>>> networks-deployed-environment.yaml custom_network_data.yaml >>>> # openstack overcloud network vip provision --stack overcloud --output >>>> vip-deployed-environment.yaml custom_vip_data.yaml >>>> # openstack overcloud node provision --stack overcloud >>>> --overcloud-ssh-key /home/stack/sshkey/id_rsa >>>> overcloud-baremetal-deploy.yaml >>>> >>>> and used the first two in the final deployment script, but it gives the >>>> error: >>>> >>>> heatclient.exc.HTTPInternalServerError: ERROR: Internal Error >>>> 2022-05-30 14:14:39.772 479668 ERROR >>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent >>>> call last):\n', ' File "/usr/lib/python3.6/ted_stack\n >>>> nested_stack.validate()\n', ' File >>>> "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in >>>> wrapper\n result = f(*args, ine 969, in validate\n result = >>>> res.validate()\n', ' File >>>> "/usr/lib/python3.6/site-packages/heat/engine/resources/openstack/neutron/port.py", >>>> line 454site-packages/heat/engine/resources/openstack/neutron/neutron.py", >>>> line 43, in validate\n res = super(NeutronResource, self).validate()\n', >>>> ' File "/un return self.validate_template()\n', ' File >>>> "/usr/lib/python3.6/site-packages/heat/engine/resource.py", line 1882, in >>>> validate_template\n self.t.rpy", line 200, in >>>> _validate_service_availability\n raise ex\n', >>>> 'heat.common.exception.ResourceTypeUnavailable: HEAT-E99001 Service neutron >>>> is not avaieutron network endpoint is not in service catalog.\n', '\nDuring >>>> handling of the above exception, another exception occurred:\n\n', >>>> 'Traceback (most recens/stack_resource.py", line 75, in >>>> validate_nested_stack\n nested_stack.validate()\n', ' File >>>> "/usr/lib/python3.6/site-packages/osprofiler/profiler.py"thon3.6/site-packages/heat/engine/stack.py", >>>> line 969, in validate\n result = res.validate()\n', ' File >>>> "/usr/lib/python3.6/site-packages/heat/engine/ateResource, >>>> self).validate()\n', ' File >>>> "/usr/lib/python3.6/site-packages/heat/engine/resources/stack_resource.py", >>>> line 65, in validate\n self.validources/stack_resource.py", line 81, in >>>> validate_nested_stack\n ex, path=[self.stack.t.RESOURCES, path])\n', >>>> 'heat.common.exception.StackValidationFaileeploy/overcloud/tripleo-heat-templates/deployed-server/deployed-server.yaml>: >>>> HEAT-E99001 Service neutron is not available for resource type >>>> OS::TripleO::vice catalog.\n', '\nDuring handling of the above exception, >>>> another exception occurred:\n\n', 'Traceback (most recent call last):\n', ' >>>> File "/usr/lib/pline 320, in validate_nested_stack\n >>>> nested_stack.validate()\n', ' File >>>> "/usr/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in >>>> wrappe/heat/engine/stack.py", line 969, in validate\n result = >>>> res.validate()\n', ' File >>>> "/usr/lib/python3.6/site-packages/heat/engine/resources/template_relidate()\n', >>>> ' File >>>> "/usr/lib/python3.6/site-packages/heat/engine/resources/stack_resource.py", >>>> line 65, in 
validate\n self.validate_nested_stack()\n'.py", line 81, in >>>> validate_nested_stack\n ex, path=[self.stack.t.RESOURCES, path])\n', >>>> 'heat.common.exception.StackValidationFailed: >>>> ResourceTypeUnavaimplates/puppet/compute-role.yaml>.resources.NovaCompute>>> reason: neutron network endpoint is not in service catalog.\n', '\nDuring >>>> handling of the above exception, >>>> another/lib/python3.6/site-packages/heat/common/context.py", line 416, in >>>> wrapped\n return func(self, ctx, *args, **kwargs)\n', ' File >>>> "/usr/lib/python3.6/sirce_name, template_id)\n', ' File >>>> "/usr/lib/python3.6/site-packages/heat/engine/service.py", line 756, in >>>> _parse_template_and_validate_stack\n stack.v line 160, in wrapper\n >>>> result = f(*args, **kwargs)\n', ' File >>>> "/usr/lib/python3.6/site-packages/heat/engine/stack.py", line 969, in >>>> validate\n resesources/stack_resource.py", line 65, in validate\n >>>> self.validate_nested_stack()\n', ' File >>>> "/usr/lib/python3.6/site-packages/heat/engine/resources/oph=[self.stack.t.RESOURCES, >>>> path])\n', 'heat.common.exception.StackValidationFailed: >>>> ResourceTypeUnavailable: >>>> resources.Compute.resources.0.resources.NovaCompute: >>>> HEAT-E9900::ControlPlanePort, reason: neutron network endpoint is not in >>>> service catalog.\n']. >>>> >>>> Request you to check once, please. >>>> >>>> >>>> >>>> >>>> On Mon, May 30, 2022 at 11:06 AM Lokendra Rathour < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> Hi Swogat, >>>>> Thanks once again. >>>>> >>>>> with the files as shown below I am running the overcloud deploy for >>>>> wallaby using this command: >>>>> >>>>> (undercloud) [stack at undercloud ~]$ cat deploy_overcloud_working_1.sh >>>>> openstack overcloud deploy --templates \ >>>>> -n /home/stack/templates/network_data.yaml \ >>>>> -r /home/stack/templates/roles_data.yaml \ >>>>> -e /home/stack/templates/environment.yaml \ >>>>> -e /home/stack/templates/environments/network-isolation.yaml \ >>>>> -e /home/stack/templates/environments/network-environment.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>> \ >>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>> (undercloud) [stack at undercloud ~]$ >>>>> >>>>> >>>>> This deployment is on ipv6 using triple0 wallaby, templates, as >>>>> mentioned below, are generated using rendering steps and the >>>>> network_data.yaml the roles_data.yaml >>>>> Steps used to render the templates: >>>>> cd /usr/share/openstack-tripleo-heat-templates/ >>>>> ./tools/process-templates.py -o >>>>> ~/openstack-tripleo-heat-templates-rendered_at_wallaby -n >>>>> /home/stack/templates/network_data.yaml -r >>>>> /home/stack/templates/roles_data.yaml >>>>> >>>>> *Now if i try adding the related to VIP port I do get the error as:* >>>>> >>>>> 2022-05-30 10:37:12.792 979387 WARNING >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template >>>>> to file: >>>>> /home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/controller-role.yaml >>>>> 2022-05-30 
10:37:12.792 979387 WARNING >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] rendering j2 template >>>>> to file: >>>>> /home/stack/overcloud-deploy/overcloud/tripleo-heat-templates/puppet/compute-role.yaml >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured >>>>> while running the command: ValueError: The environment is not a valid YAML >>>>> mapping data type. >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent >>>>> call last): >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>>> "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, >>>>> self).run(parsed_args) >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>>> "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in >>>>> run >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, >>>>> self).run(parsed_args) >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>>> "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = >>>>> self.take_action(parsed_args) or 0 >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>>> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >>>>> line 1189, in take_action >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud stack, parsed_args, >>>>> new_tht_root, user_tht_root) >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>>> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >>>>> line 227, in create_env_files >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud created_env_files, >>>>> parsed_args, new_tht_root, user_tht_root) >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>>> "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", >>>>> line 204, in build_image_params >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud cleanup=(not >>>>> parsed_args.no_cleanup)) >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>>> "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 1929, in >>>>> process_multiple_environments >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud env_path=env_path, >>>>> include_env_in_files=include_env_in_files) >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>>> "/usr/lib/python3.6/site-packages/heatclient/common/template_utils.py", >>>>> line 326, in process_environment_and_files >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud env = >>>>> environment_format.parse(raw_env) >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> 
tripleoclient.v1.overcloud_deploy.DeployOvercloud File >>>>> "/usr/lib/python3.6/site-packages/heatclient/common/environment_format.py", >>>>> line 50, in parse >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud raise >>>>> ValueError(_('The environment is not a valid ' >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud ValueError: The >>>>> environment is not a valid YAML mapping data type. >>>>> 2022-05-30 10:37:14.455 979387 ERROR >>>>> tripleoclient.v1.overcloud_deploy.DeployOvercloud >>>>> 2022-05-30 10:37:14.457 979387 ERROR openstack [-] The environment is >>>>> not a valid YAML mapping data type. >>>>> 2022-05-30 10:37:14.457 979387 INFO osc_lib.shell [-] END return >>>>> value: 1 >>>>> (undercloud) [stack at undercloud ~]$ >>>>> >>>>> This is more of a syntax error where it is not able to understand the >>>>> passed VIP data file: >>>>> >>>>> undercloud) [stack at undercloud ~]$ cat >>>>> /home/stack/templates/vip-data-default-network-isolation.yaml >>>>> - >>>>> dns_name: overcloud >>>>> network: internal_api >>>>> - >>>>> dns_name: overcloud >>>>> network: external >>>>> - >>>>> dns_name: overcloud >>>>> network: ctlplane >>>>> - >>>>> dns_name: overcloud >>>>> network: oc_provisioning >>>>> - >>>>> dns_name: overcloud >>>>> network: j3mgmt >>>>> >>>>> >>>>> Please advise, also please note that similar templates generated in >>>>> prior releases such as train/ussuri works perfectly. >>>>> >>>>> >>>>> >>>>> Please check the list of *templates *files: >>>>> >>>>> drwxr-xr-x. 2 stack stack 68 May 30 09:22 environments >>>>> -rw-r--r--. 1 stack stack 265 May 27 13:47 environment.yaml >>>>> -rw-rw-r--. 1 stack stack 297 May 27 13:47 init-repo.yaml >>>>> -rw-r--r--. 1 stack stack 570 May 27 13:47 ironic-config.yaml >>>>> drwxrwxr-x. 4 stack stack 4096 May 27 13:53 network >>>>> -rw-r--r--. 1 stack stack 6370 May 27 14:26 network_data.yaml >>>>> -rw-r--r--. 1 stack stack 11137 May 27 13:53 roles_data.yaml >>>>> -rw-r--r--. 
1 stack stack 234 May 30 09:23 >>>>> vip-data-default-network-isolation.yaml >>>>> >>>>> >>>>> >>>>> (undercloud) [stack at undercloud templates]$ cat environment.yaml >>>>> >>>>> parameter_defaults: >>>>> OvercloudControllerFlavor: control >>>>> OvercloudComputeFlavor: compute >>>>> ControllerCount: 3 >>>>> ComputeCount: 1 >>>>> TimeZone: 'Asia/Kolkata' >>>>> NtpServer: ['30.30.30.3'] >>>>> NeutronBridgeMappings: datacentre:br-tenant >>>>> NeutronFlatNetworks: datacentre >>>>> (undercloud) [stack at undercloud templates]$ >>>>> >>>>> >>>>> >>>>> (undercloud) [stack at undercloud templates]$ cat ironic-config.yaml >>>>> >>>>> parameter_defaults: >>>>> NovaSchedulerDefaultFilters: >>>>> - AggregateInstanceExtraSpecsFilter >>>>> - AvailabilityZoneFilter >>>>> - ComputeFilter >>>>> - ComputeCapabilitiesFilter >>>>> - ImagePropertiesFilter >>>>> IronicEnabledHardwareTypes: >>>>> - ipmi >>>>> - redfish >>>>> IronicEnabledPowerInterfaces: >>>>> - ipmitool >>>>> - redfish >>>>> IronicEnabledManagementInterfaces: >>>>> - ipmitool >>>>> - redfish >>>>> IronicCleaningDiskErase: metadata >>>>> IronicIPXEEnabled: true >>>>> IronicInspectorSubnets: >>>>> - ip_range: 172.23.3.100,172.23.3.150 >>>>> >>>>> (undercloud) [stack at undercloud templates]$ cat network_data.yaml >>>>> >>>>> - name: J3Mgmt >>>>> name_lower: j3mgmt >>>>> vip: true >>>>> vlan: 400 >>>>> ipv6: true >>>>> ipv6_subnet: 'fd80:fd00:fd00:4000::/64' >>>>> ipv6_allocation_pools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': >>>>> 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] >>>>> mtu: 9000 >>>>> >>>>> >>>>> >>>>> - name: InternalApi >>>>> name_lower: internal_api >>>>> vip: true >>>>> vlan: 418 >>>>> ipv6: true >>>>> ipv6_subnet: 'fd00:fd00:fd00:2000::/64' >>>>> ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': >>>>> 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] >>>>> mtu: 9000 >>>>> >>>>> >>>>> - name: External >>>>> vip: true >>>>> name_lower: external >>>>> vlan: 408 >>>>> ipv6: true >>>>> gateway_ipv6: 'fd00:fd00:fd00:9900::1' >>>>> ipv6_subnet: 'fd00:fd00:fd00:9900::/64' >>>>> ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:9900::10', 'end': >>>>> 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] >>>>> mtu: 9000 >>>>> >>>>> >>>>> - name: OCProvisioning >>>>> vip: true >>>>> name_lower: oc_provisioning >>>>> vlan: 412 >>>>> ip_subnet: '172.23.3.0/24' >>>>> allocation_pools: [{'start': '172.23.3.10', 'end': '172.23.3.50'}] >>>>> mtu: 9000 >>>>> >>>>> >>>>> >>>>> >>>>> (undercloud) [stack at undercloud templates]$ cat roles_data.yaml >>>>> >>>>> >>>>> ############################################################################### >>>>> # File generated by TripleO >>>>> >>>>> ############################################################################### >>>>> >>>>> ############################################################################### >>>>> # Role: Controller >>>>> # >>>>> >>>>> ############################################################################### >>>>> - name: Controller >>>>> description: | >>>>> Controller role that has all the controller services loaded and >>>>> handles >>>>> Database, Messaging, and Network functions. 
>>>>> CountDefault: 1 >>>>> tags: >>>>> - primary >>>>> - controller >>>>> # Create external Neutron bridge for SNAT (and floating IPs when >>>>> using >>>>> # ML2/OVS without DVR) >>>>> - external_bridge >>>>> networks: >>>>> External: >>>>> subnet: external_subnet >>>>> InternalApi: >>>>> subnet: internal_api_subnet >>>>> OCProvisioning: >>>>> subnet: oc_provisioning_subnet >>>>> J3Mgmt: >>>>> subnet: j3mgmt_subnet >>>>> >>>>> >>>>> # For systems with both IPv4 and IPv6, you may specify a gateway >>>>> network for >>>>> # each, such as ['ControlPlane', 'External'] >>>>> default_route_networks: ['External'] >>>>> HostnameFormatDefault: '%stackname%-controller-%index%' >>>>> RoleParametersDefault: >>>>> OVNCMSOptions: "enable-chassis-as-gw" >>>>> # Deprecated & backward-compatible values (FIXME: Make parameters >>>>> consistent) >>>>> # Set uses_deprecated_params to True if any deprecated params are >>>>> used. >>>>> uses_deprecated_params: True >>>>> deprecated_param_extraconfig: 'controllerExtraConfig' >>>>> deprecated_param_flavor: 'OvercloudControlFlavor' >>>>> deprecated_param_image: 'controllerImage' >>>>> deprecated_nic_config_name: 'controller.yaml' >>>>> update_serial: 1 >>>>> ServicesDefault: >>>>> - OS::TripleO::Services::Aide >>>>> - OS::TripleO::Services::AodhApi >>>>> - OS::TripleO::Services::AodhEvaluator >>>>> >>>>> .. >>>>> . >>>>> >>>>> >>>>> ..############################################################################### >>>>> # Role: Compute >>>>> # >>>>> >>>>> ############################################################################### >>>>> - name: Compute >>>>> description: | >>>>> Basic Compute Node role >>>>> CountDefault: 1 >>>>> # Create external Neutron bridge (unset if using ML2/OVS without DVR) >>>>> tags: >>>>> - compute >>>>> - external_bridge >>>>> networks: >>>>> InternalApi: >>>>> subnet: internal_api_subnet >>>>> J3Mgmt: >>>>> subnet: j3mgmt_subnet >>>>> HostnameFormatDefault: '%stackname%-novacompute-%index%' >>>>> RoleParametersDefault: >>>>> FsAioMaxNumber: 1048576 >>>>> TunedProfileName: "virtual-host" >>>>> # Deprecated & backward-compatible values (FIXME: Make parameters >>>>> consistent) >>>>> # Set uses_deprecated_params to True if any deprecated params are >>>>> used. >>>>> # These deprecated_params only need to be used for existing roles >>>>> and not for >>>>> # composable roles. >>>>> uses_deprecated_params: True >>>>> deprecated_param_image: 'NovaImage' >>>>> deprecated_param_extraconfig: 'NovaComputeExtraConfig' >>>>> deprecated_param_metadata: 'NovaComputeServerMetadata' >>>>> deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints' >>>>> deprecated_param_ips: 'NovaComputeIPs' >>>>> deprecated_server_resource_name: 'NovaCompute' >>>>> >>>>> deprecated_nic_config_name: 'compute.yaml' >>>>> update_serial: 25 >>>>> ServicesDefault: >>>>> - OS::TripleO::Services::Aide >>>>> - OS::TripleO::Services::AuditD >>>>> - OS::TripleO::Services::BootParams >>>>> >>>>> >>>>> (undercloud) [stack at undercloud templates]$ cat >>>>> environments/network-environment.yaml >>>>> >>>>> #This file is an example of an environment file for defining the >>>>> isolated >>>>> #networks and related parameters. >>>>> resource_registry: >>>>> # Network Interface templates to use (these files must exist). You >>>>> can >>>>> # override these by including one of the net-*.yaml environment >>>>> files, >>>>> # such as net-bond-with-vlans.yaml, or modifying the list here. 
>>>>> # Port assignments for the Controller >>>>> OS::TripleO::Controller::Net::SoftwareConfig: OS::Heat::None >>>>> # Port assignments for the Compute >>>>> OS::TripleO::Compute::Net::SoftwareConfig: OS::Heat::None >>>>> >>>>> >>>>> parameter_defaults: >>>>> # This section is where deployment-specific configuration is done >>>>> # >>>>> ServiceNetMap: >>>>> IronicApiNetwork: oc_provisioning >>>>> IronicNetwork: oc_provisioning >>>>> >>>>> >>>>> >>>>> # This section is where deployment-specific configuration is done >>>>> ControllerNetworkConfigTemplate: >>>>> 'templates/bonds_vlans/bonds_vlans.j2' >>>>> ComputeNetworkConfigTemplate: 'templates/bonds_vlans/bonds_vlans.j2' >>>>> >>>>> >>>>> >>>>> # Customize the IP subnet to match the local environment >>>>> J3MgmtNetCidr: 'fd80:fd00:fd00:4000::/64' >>>>> # Customize the IP range to use for static IPs and VIPs >>>>> J3MgmtAllocationPools: [{'start': 'fd80:fd00:fd00:4000::10', 'end': >>>>> 'fd80:fd00:fd00:4000:ffff:ffff:ffff:fffe'}] >>>>> # Customize the VLAN ID to match the local environment >>>>> J3MgmtNetworkVlanID: 400 >>>>> >>>>> >>>>> >>>>> # Customize the IP subnet to match the local environment >>>>> InternalApiNetCidr: 'fd00:fd00:fd00:2000::/64' >>>>> # Customize the IP range to use for static IPs and VIPs >>>>> InternalApiAllocationPools: [{'start': 'fd00:fd00:fd00:2000::10', >>>>> 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] >>>>> # Customize the VLAN ID to match the local environment >>>>> InternalApiNetworkVlanID: 418 >>>>> >>>>> >>>>> >>>>> # Customize the IP subnet to match the local environment >>>>> ExternalNetCidr: 'fd00:fd00:fd00:9900::/64' >>>>> # Customize the IP range to use for static IPs and VIPs >>>>> # Leave room if the external network is also used for floating IPs >>>>> ExternalAllocationPools: [{'start': 'fd00:fd00:fd00:9900::10', >>>>> 'end': 'fd00:fd00:fd00:9900:ffff:ffff:ffff:fffe'}] >>>>> # Gateway router for routable networks >>>>> ExternalInterfaceDefaultRoute: 'fd00:fd00:fd00:9900::1' >>>>> # Customize the VLAN ID to match the local environment >>>>> ExternalNetworkVlanID: 408 >>>>> >>>>> >>>>> >>>>> # Customize the IP subnet to match the local environment >>>>> OCProvisioningNetCidr: '172.23.3.0/24' >>>>> # Customize the IP range to use for static IPs and VIPs >>>>> OCProvisioningAllocationPools: [{'start': '172.23.3.10', 'end': >>>>> '172.23.3.50'}] >>>>> # Customize the VLAN ID to match the local environment >>>>> OCProvisioningNetworkVlanID: 412 >>>>> >>>>> >>>>> >>>>> # List of Neutron network types for tenant networks (will be used in >>>>> order) >>>>> NeutronNetworkType: 'geneve,vlan' >>>>> # Neutron VLAN ranges per network, for example >>>>> 'datacentre:1:499,tenant:500:1000': >>>>> NeutronNetworkVLANRanges: 'datacentre:1:1000' >>>>> # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 >>>>> miimon=100" >>>>> # for Linux bonds w/LACP, or "bond_mode=active-backup" for OVS >>>>> active/backup. >>>>> BondInterfaceOvsOptions: "bond_mode=active-backup" >>>>> >>>>> (undercloud) [stack at undercloud templates]$ >>>>> >>>>> >>>>> (undercloud) [stack at undercloud templates]$ cat >>>>> environments/network-isolation.yaml >>>>> >>>>> # NOTE: This template is now deprecated, and is only included for >>>>> compatibility >>>>> # when upgrading a deployment where this template was originally used. >>>>> For new >>>>> # deployments, set "ipv6: true" on desired networks in >>>>> network_data.yaml, and >>>>> # include network-isolation.yaml. 
>>>>> # >>>>> # Enable the creation of Neutron networks for isolated Overcloud >>>>> # traffic and configure each role to assign ports (related >>>>> # to that role) on these networks. >>>>> resource_registry: >>>>> # networks as defined in network_data.yaml >>>>> OS::TripleO::Network::J3Mgmt: ../network/j3mgmt_v6.yaml >>>>> OS::TripleO::Network::InternalApi: ../network/internal_api_v6.yaml >>>>> OS::TripleO::Network::External: ../network/external_v6.yaml >>>>> OS::TripleO::Network::OCProvisioning: ../network/oc_provisioning.yaml >>>>> >>>>> >>>>> # Port assignments for the VIPs >>>>> OS::TripleO::Network::Ports::J3MgmtVipPort: >>>>> ../network/ports/j3mgmt_v6.yaml >>>>> OS::TripleO::Network::Ports::InternalApiVipPort: >>>>> ../network/ports/internal_api_v6.yaml >>>>> OS::TripleO::Network::Ports::ExternalVipPort: >>>>> ../network/ports/external_v6.yaml >>>>> OS::TripleO::Network::Ports::OCProvisioningVipPort: >>>>> ../network/ports/oc_provisioning.yaml >>>>> >>>>> >>>>> >>>>> # Port assignments by role, edit role definition to assign networks >>>>> to roles. >>>>> # Port assignments for the Controller >>>>> OS::TripleO::Controller::Ports::J3MgmtPort: >>>>> ../network/ports/j3mgmt_v6.yaml >>>>> OS::TripleO::Controller::Ports::InternalApiPort: >>>>> ../network/ports/internal_api_v6.yaml >>>>> OS::TripleO::Controller::Ports::ExternalPort: >>>>> ../network/ports/external_v6.yaml >>>>> OS::TripleO::Controller::Ports::OCProvisioningPort: >>>>> ../network/ports/oc_provisioning.yaml >>>>> # Port assignments for the Compute >>>>> OS::TripleO::Compute::Ports::J3MgmtPort: >>>>> ../network/ports/j3mgmt_v6.yaml >>>>> OS::TripleO::Compute::Ports::InternalApiPort: >>>>> ../network/ports/internal_api_v6.yaml >>>>> >>>>> >>>>> >>>>> parameter_defaults: >>>>> # Enable IPv6 environment for Manila >>>>> ManilaIPv6: True >>>>> >>>>> (undercloud) [stack at undercloud templates]$ >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, May 24, 2022 at 5:04 PM Lokendra Rathour < >>>>> lokendrarathour at gmail.com> wrote: >>>>> >>>>>> Thanks, I'll check them out. >>>>>> will let you know in case it works out. 
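The "not a valid YAML mapping data type" failure quoted earlier is consistent
with the list-style VIP data file (vip-data-default-network-isolation.yaml)
being processed as a Heat environment by the deploy command: on Wallaby that
list is only the input to the "network vip provision" step, and only the
environment generated by that step (a mapping) is meant to be passed with -e.
A minimal sketch of that flow, reusing file names already quoted in this
thread; the network names are examples and have to match your own
network_data.yaml:

  cat custom_vip_data.yaml
  - network: ctlplane
    dns_name: overcloud
  - network: internal_api
    dns_name: overcloud
  - network: external
    dns_name: overcloud

  # Turn the list into a Heat environment (a YAML mapping):
  openstack overcloud network vip provision --stack overcloud \
    --output vip-deployed-environment.yaml custom_vip_data.yaml

  # Pass only the generated file to the deploy command:
  openstack overcloud deploy --templates \
    ... \
    -e vip-deployed-environment.yaml

The list file itself never goes after -e; heatclient rejects it because a
Heat environment has to be a mapping, which matches the ValueError shown in
the traceback above.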
>>>>>> >>>>>> On Tue, May 24, 2022 at 2:37 PM Swogat Pradhan < >>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> Please find the below templates: >>>>>>> These are for openstack wallaby release: >>>>>>> >>>>>>> (undercloud) [stack at hkg2director workplace]$ cat >>>>>>> custom_network_data.yaml >>>>>>> - name: Storage >>>>>>> name_lower: storage >>>>>>> vip: true >>>>>>> mtu: 1500 >>>>>>> subnets: >>>>>>> storage_subnet: >>>>>>> ip_subnet: 172.25.202.0/26 >>>>>>> allocation_pools: >>>>>>> - start: 172.25.202.6 >>>>>>> end: 172.25.202.20 >>>>>>> vlan: 1105 >>>>>>> - name: StorageMgmt >>>>>>> name_lower: storage_mgmt >>>>>>> vip: true >>>>>>> mtu: 1500 >>>>>>> subnets: >>>>>>> storage_mgmt_subnet: >>>>>>> ip_subnet: 172.25.202.64/26 >>>>>>> allocation_pools: >>>>>>> - start: 172.25.202.72 >>>>>>> end: 172.25.202.87 >>>>>>> vlan: 1106 >>>>>>> - name: InternalApi >>>>>>> name_lower: internal_api >>>>>>> vip: true >>>>>>> mtu: 1500 >>>>>>> subnets: >>>>>>> internal_api_subnet: >>>>>>> ip_subnet: 172.25.201.192/26 >>>>>>> allocation_pools: >>>>>>> - start: 172.25.201.198 >>>>>>> end: 172.25.201.212 >>>>>>> vlan: 1104 >>>>>>> - name: Tenant >>>>>>> vip: false # Tenant network does not use VIPs >>>>>>> mtu: 1500 >>>>>>> name_lower: tenant >>>>>>> subnets: >>>>>>> tenant_subnet: >>>>>>> ip_subnet: 172.25.202.128/26 >>>>>>> allocation_pools: >>>>>>> - start: 172.25.202.135 >>>>>>> end: 172.25.202.150 >>>>>>> vlan: 1108 >>>>>>> - name: External >>>>>>> name_lower: external >>>>>>> vip: true >>>>>>> mtu: 1500 >>>>>>> subnets: >>>>>>> external_subnet: >>>>>>> ip_subnet: 172.25.201.128/26 >>>>>>> allocation_pools: >>>>>>> - start: 172.25.201.135 >>>>>>> end: 172.25.201.150 >>>>>>> gateway_ip: 172.25.201.129 >>>>>>> vlan: 1103 >>>>>>> >>>>>>> (undercloud) [stack at hkg2director workplace]$ cat >>>>>>> custom_vip_data.yaml >>>>>>> - network: ctlplane >>>>>>> #dns_name: overcloud >>>>>>> ip_address: 172.25.201.91 >>>>>>> subnet: ctlplane-subnet >>>>>>> - network: external >>>>>>> #dns_name: overcloud >>>>>>> ip_address: 172.25.201.150 >>>>>>> subnet: external_subnet >>>>>>> - network: internal_api >>>>>>> #dns_name: overcloud >>>>>>> ip_address: 172.25.201.250 >>>>>>> subnet: internal_api_subnet >>>>>>> - network: storage >>>>>>> #dns_name: overcloud >>>>>>> ip_address: 172.25.202.50 >>>>>>> subnet: storage_subnet >>>>>>> - network: storage_mgmt >>>>>>> #dns_name: overcloud >>>>>>> ip_address: 172.25.202.90 >>>>>>> subnet: storage_mgmt_subnet >>>>>>> >>>>>>> (undercloud) [stack at hkg2director workplace]$ cat >>>>>>> overcloud-baremetal-deploy.yaml >>>>>>> - name: Controller >>>>>>> count: 4 >>>>>>> defaults: >>>>>>> networks: >>>>>>> - network: ctlplane >>>>>>> vif: true >>>>>>> - network: external >>>>>>> subnet: external_subnet >>>>>>> - network: internal_api >>>>>>> subnet: internal_api_subnet >>>>>>> - network: storage >>>>>>> subnet: storage_subnet >>>>>>> - network: storage_mgmt >>>>>>> subnet: storage_mgmt_subnet >>>>>>> - network: tenant >>>>>>> subnet: tenant_subnet >>>>>>> network_config: >>>>>>> template: /home/stack/templates/controller.j2 >>>>>>> default_route_network: >>>>>>> - external >>>>>>> instances: >>>>>>> - hostname: overcloud-controller-0 >>>>>>> name: dc1-controller2 >>>>>>> #provisioned: false >>>>>>> - hostname: overcloud-controller-1 >>>>>>> name: dc2-controller2 >>>>>>> #provisioned: false >>>>>>> - hostname: overcloud-controller-2 >>>>>>> name: dc1-controller1 >>>>>>> #provisioned: false >>>>>>> - hostname: overcloud-controller-no-ceph-3 >>>>>>> name: 
dc2-ceph2 >>>>>>> #provisioned: false >>>>>>> #- hostname: overcloud-controller-3 >>>>>>> #name: dc2-compute3 >>>>>>> #provisioned: false >>>>>>> >>>>>>> - name: Compute >>>>>>> count: 5 >>>>>>> defaults: >>>>>>> networks: >>>>>>> - network: ctlplane >>>>>>> vif: true >>>>>>> - network: internal_api >>>>>>> subnet: internal_api_subnet >>>>>>> - network: tenant >>>>>>> subnet: tenant_subnet >>>>>>> - network: storage >>>>>>> subnet: storage_subnet >>>>>>> network_config: >>>>>>> template: /home/stack/templates/compute.j2 >>>>>>> instances: >>>>>>> - hostname: overcloud-novacompute-0 >>>>>>> name: dc2-compute1 >>>>>>> #provisioned: false >>>>>>> - hostname: overcloud-novacompute-1 >>>>>>> name: dc2-ceph1 >>>>>>> #provisioned: false >>>>>>> - hostname: overcloud-novacompute-2 >>>>>>> name: dc1-compute1 >>>>>>> #provisioned: false >>>>>>> - hostname: overcloud-novacompute-3 >>>>>>> name: dc1-compute2 >>>>>>> #provisioned: false >>>>>>> - hostname: overcloud-novacompute-4 >>>>>>> name: dc2-compute3 >>>>>>> #provisioned: false >>>>>>> >>>>>>> - name: CephStorage >>>>>>> count: 4 >>>>>>> defaults: >>>>>>> networks: >>>>>>> - network: ctlplane >>>>>>> vif: true >>>>>>> - network: internal_api >>>>>>> subnet: internal_api_subnet >>>>>>> - network: storage >>>>>>> subnet: storage_subnet >>>>>>> - network: storage_mgmt >>>>>>> subnet: storage_mgmt_subnet >>>>>>> network_config: >>>>>>> template: /home/stack/templates/ceph-storage.j2 >>>>>>> instances: >>>>>>> - hostname: overcloud-cephstorage-0 >>>>>>> name: dc2-controller1 >>>>>>> #provisioned: false >>>>>>> # - hostname: overcloud-cephstorage-1 >>>>>>> # name: dc2-ceph2 >>>>>>> - hostname: overcloud-cephstorage-1 >>>>>>> name: dc1-ceph1 >>>>>>> # provisioned: false >>>>>>> - hostname: overcloud-cephstorage-2 >>>>>>> name: dc1-ceph2 >>>>>>> #provisioned: false >>>>>>> - hostname: overcloud-cephstorage-3 >>>>>>> name: dc2-compute2 >>>>>>> #provisioned: false >>>>>>> >>>>>>> >>>>>>> You must use these templates to provision network, vip and nodes. >>>>>>> You must use the output files generated during the provisioning step >>>>>>> in openstack overcloud deploy command using -e parameter. >>>>>>> >>>>>>> With regards, >>>>>>> Swogat Pradhan >>>>>>> >>>>>>> >>>>>>> On Mon, May 23, 2022 at 8:33 PM Lokendra Rathour < >>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>> >>>>>>>> Hi Swogat, >>>>>>>> I tried checking your solution and my templates but could not >>>>>>>> relate much. >>>>>>>> But issue seems the same >>>>>>>> >>>>>>>> http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028401.html >>>>>>>> >>>>>>>> I tried somemore ways but looks like some issue with templates. >>>>>>>> Can you please share the templates used to deploy the overcloud. >>>>>>>> >>>>>>>> Mysetup have 3 controller and 1 compute. >>>>>>>> >>>>>>>> Thanks once again for reading my mail. >>>>>>>> >>>>>>>> Waiting for your reply. >>>>>>>> >>>>>>>> -Lokendra >>>>>>>> >>>>>>>> On Fri, 20 May 2022, 08:25 Swogat Pradhan, < >>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> Yes I was able to find the issue and fix it. 
>>>>>>>>> The issue was with the overcloud-baremetal-deployed.yaml file i >>>>>>>>> was trying to provision controller-0, controller-1 and controller-3 and >>>>>>>>> kept controller-2 aside for later, but the tripleo scripts are built in >>>>>>>>> such a way that they were taking controller- 0, 1 and 2 inplace of >>>>>>>>> controller-3, so the network ports and vip were created for controller 0,1 >>>>>>>>> and 2 but not for 3 , so this error was popping off. Also i would request >>>>>>>>> you to check the jinja nic templates and once the node provisioning is done >>>>>>>>> check the /etc/os-net-config/config.json/yaml file for syntax if using >>>>>>>>> bonded nic template. >>>>>>>>> If you need any more infor please let me know. >>>>>>>>> >>>>>>>>> With regards, >>>>>>>>> Swogat Pradhan >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Fri, May 20, 2022 at 8:01 AM Lokendra Rathour < >>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Swogat, >>>>>>>>>> Thanks for raising this issue. >>>>>>>>>> Did you find any solution? to this problem ? >>>>>>>>>> >>>>>>>>>> Please let me know it might be helpful >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, Apr 19, 2022 at 12:43 PM Swogat Pradhan < >>>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> I am currently trying to deploy openstack wallaby using tripleo >>>>>>>>>>> arch. >>>>>>>>>>> I created the network jinja templates, ran the following >>>>>>>>>>> commands also: >>>>>>>>>>> >>>>>>>>>>> #openstack overcloud network provision --stack overcloud >>>>>>>>>>> --output networks-deployed-environment.yaml custom_network_data.yaml >>>>>>>>>>> # openstack overcloud network vip provision --stack overcloud >>>>>>>>>>> --output vip-deployed-environment.yaml custom_vip_data.yaml >>>>>>>>>>> # openstack overcloud node provision --stack overcloud >>>>>>>>>>> --overcloud-ssh-key /home/stack/sshkey/id_rsa >>>>>>>>>>> overcloud-baremetal-deploy.yaml >>>>>>>>>>> >>>>>>>>>>> and used the environment files in the openstack overcloud deploy >>>>>>>>>>> command: >>>>>>>>>>> >>>>>>>>>>> (undercloud) [stack at hkg2director ~]$ cat deploy.sh >>>>>>>>>>> #!/bin/bash >>>>>>>>>>> THT=/usr/share/openstack-tripleo-heat-templates/ >>>>>>>>>>> CNF=/home/stack/ >>>>>>>>>>> openstack overcloud deploy --templates $THT \ >>>>>>>>>>> -r $CNF/templates/roles_data.yaml \ >>>>>>>>>>> -n $CNF/workplace/custom_network_data.yaml \ >>>>>>>>>>> -e ~/containers-prepare-parameter.yaml \ >>>>>>>>>>> -e $CNF/templates/node-info.yaml \ >>>>>>>>>>> -e $CNF/templates/scheduler-hints.yaml \ >>>>>>>>>>> -e $CNF/workplace/networks-deployed-environment.yaml \ >>>>>>>>>>> -e $CNF/workplace/vip-deployed-environment.yaml \ >>>>>>>>>>> -e $CNF/workplace/overcloud-baremetal-deployed.yaml \ >>>>>>>>>>> -e $CNF/workplace/custom-net-bond-with-vlans.yaml >>>>>>>>>>> >>>>>>>>>>> Now when i run the ./deploy.sh script i encounter an error >>>>>>>>>>> stating: >>>>>>>>>>> >>>>>>>>>>> ERROR openstack [-] Resource >>>>>>>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>>>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>>>>>>> ephemeral Heat. 
The generated environments from 'openstack overcloud >>>>>>>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>>>>>>> included with the deployment command.: >>>>>>>>>>> tripleoclient.exceptions.InvalidConfiguration: Resource >>>>>>>>>>> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type >>>>>>>>>>> OS::Neutron::Port and the Neutron service is not available when using >>>>>>>>>>> ephemeral Heat. The generated environments from 'openstack overcloud >>>>>>>>>>> baremetal provision' and 'openstack overcloud network provision' must be >>>>>>>>>>> included with the deployment command. >>>>>>>>>>> 2022-04-19 13:47:16.582 735924 INFO osc_lib.shell [-] END return >>>>>>>>>>> value: 1 >>>>>>>>>>> >>>>>>>>>>> Can someone tell me where the mistake is? >>>>>>>>>>> >>>>>>>>>>> With regards, >>>>>>>>>>> Swogat Pradhan >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>> >>>> >> >> -- >> ~ Lokendra >> www.inertiaspeaks.com >> www.inertiagroups.com >> skype: lokendrarathour >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL:
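Putting the pieces of this thread together, the sequence that the final error
message asks for looks roughly like the following. This is a sketch assembled
from the commands already posted above; the --output name on the node
provision step and the shortened file paths are assumptions here and have to
be adapted to the local environment:

  # 1. Provision networks, VIPs and baremetal nodes up front:
  openstack overcloud network provision --stack overcloud \
    --output networks-deployed-environment.yaml custom_network_data.yaml

  openstack overcloud network vip provision --stack overcloud \
    --output vip-deployed-environment.yaml custom_vip_data.yaml

  openstack overcloud node provision --stack overcloud \
    --overcloud-ssh-key /home/stack/sshkey/id_rsa \
    --output overcloud-baremetal-deployed.yaml \
    overcloud-baremetal-deploy.yaml

  # 2. Include all three generated environments in the deploy command:
  openstack overcloud deploy --templates \
    -r roles_data.yaml \
    -n custom_network_data.yaml \
    -e containers-prepare-parameter.yaml \
    -e networks-deployed-environment.yaml \
    -e vip-deployed-environment.yaml \
    -e overcloud-baremetal-deployed.yaml

As the error text says, the environments generated by the provision steps
have to be part of the deploy command; and, as the May 31 traceback shows,
any environment that still maps the *VipPort or per-role *Port resources to
an OS::Neutron::Port template (as the older rendered network-isolation.yaml
does) will trip the "Neutron service is not available when using ephemeral
Heat" check.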