From tonykarera at gmail.com Mon Oct 2 07:08:40 2023 From: tonykarera at gmail.com (Karera Tony) Date: Mon, 2 Oct 2023 09:08:40 +0200 Subject: Masakari issue Message-ID: Hello Team, I have an OpenStack Wallaby environment deployed using kolla-ansible. So, I shut down all the VMs on the compute nodes because I wanted to upgrade the nodes' kernel. After I rebooted them, all was fine apart from the fact that all the hosts in the instance-ha segment are in maintenance mode. When I try to set it to false, I get the error below. *Error: *Failed to update host. Details ConflictException: 409: Client Error for url: http://x.x.x.x:15868/v1/segments/8d042245-5610-4b84-b611-b633f8f8367c/hosts/066c5654-dd1a-4fa9-a664-dad10b89e202, Host 066c5654-dd1a-4fa9-a664-dad10b89e202 can't be updated as it is in-use to process notifications. Regards Tony Karera -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.kavanagh at canonical.com Mon Oct 2 11:03:42 2023 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Mon, 2 Oct 2023 12:03:42 +0100 Subject: [charms] Feature freeze and release date for 2023.2 'bobcat' charms Message-ID: Hi All The OpenStack 2023.2 feature freeze is happening this Friday. The "23.10" release set of the charms is to support OpenStack 2023.2 'bobcat', which is releasing this week. OpenStack charms will support 2023.2 'bobcat' on Ubuntu 22.04 (Jammy) and the upcoming Ubuntu 23.10 (Mantic). The charms comprise OpenStack, Ceph, OVN and various support charms. We anticipate the release of the charms to the 2023.2/stable channel for the OpenStack charms during the w/b 23rd October, with the target release date of Wednesday 25th October. Please reach out to us on #openstack-charms if you require any further information. Thanks Alex. -- Alex Kavanagh OpenStack Engineering - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From katonalala at gmail.com Mon Oct 2 12:13:46 2023 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 2 Oct 2023 14:13:46 +0200 Subject: [neutron] Bug deputy report (week starting on September 25) Message-ID: Hi, I was the bug deputy for Neutron last week; please see my summary of last week's bugs. *High* * Failed to invoke the API interface to obtain the address group list (https://bugs.launchpad.net/neutron/+bug/2037596 ) - In Progress * [OVN] ``PortBindingChassisEvent`` event is not executing the conditions check (https://bugs.launchpad.net/neutron/+bug/2037717) - In Progress * No possibility to delete ECMP routes ( https://bugs.launchpad.net/ovsdbapp/+bug/2037536 ) - In Progress, OVSDBAPP * Impossible to add a static route if learned route exists ( https://bugs.launchpad.net/ovsdbapp/+bug/2037573 ) - In Progress, OVSDBAPP * Impossible to specify a routing table ( https://bugs.launchpad.net/ovsdbapp/+bug/2037652 ) - In Progress, OVSDBAPP *Medium* * Do not depend on l3-agent for vpn failover ( https://bugs.launchpad.net/neutron/+bug/1999761) - In Progress / VPNaaS * unit tests don't work for 'hardware_vtep' ( https://bugs.launchpad.net/ovsdbapp/+bug/2037568 ) - In Progress, OVSDBAPP *Low hanging fruit / Incomplete* * BGP floating IPs over l2 segmented network in Neutron ( https://bugs.launchpad.net/neutron/+bug/2037263 ) - Doc fix. * OVSDB transaction returned TRY_AGAIN, retrying do_commit ( https://bugs.launchpad.net/neutron/+bug/2037500 ) - I can't reproduce it; the original bug is for Victoria. *RFE* * [OVN] Allow scheduling external ports on non-gateway nodes ( https://open.spotify.com/playlist/2nI2TA9KrHvlcrZWH5xPbz ) RFE approved, discussed on Friday's Drivers Meeting. Regards Lajos Katona -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Mon Oct 2 12:15:44 2023 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 2 Oct 2023 14:15:44 +0200 Subject: [largescale-sig] Next meeting: October 4, 15utc Message-ID: <9617c774-1d6c-a772-819a-23d7af5ac798@openstack.org> Hi everyone, The Large Scale SIG will be meeting this Wednesday in #openstack-operators on OFTC IRC, at 15UTC, our EU+US-friendly time. Kristin will be chairing. You can doublecheck how that UTC time translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20231004T15 Feel free to add topics to the agenda: https://etherpad.opendev.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From jay at gr-oss.io Mon Oct 2 18:45:05 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Mon, 2 Oct 2023 11:45:05 -0700 Subject: [tc] Technical Committee next weekly meeting Tuesday, October 3, 2023 Message-ID: Hello, This is a reminder that the weekly Technical Committee meeting is to be held Tuesday, October 3. As it is the first meeting of the month, we'll be meeting via video chat. Use the following link to connect: https://us06web.zoom.us/j/87108541765?pwd=emlXVXg4QUxrUTlLNDZ2TTllWUM3Zz09. The proposed agenda for the meeting is: * Roll call * Follow up on past action items ** Action items from Sept 26, 2023 meeting: https://meetings.opendev.org/meetings/tc/2023/tc.2023-09-26-18.00.html * Gate health check * Leaderless projects (gmann) * Call for volunteers for Vice-Chair * Open Discussion and Reviews * Register for the PTG ** #link https://openinfra.dev/ptg/ ** #link https://review.opendev.org/q/projects:openstack/governance+is:open Thank you, Jay Faulkner TC Chair -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jay at gr-oss.io Mon Oct 2 20:07:55 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Mon, 2 Oct 2023 13:07:55 -0700 Subject: [ironic][ptg] vPTG Availability Message-ID: Hi all, As stated in the Ironic meeting today, I'm going to work under the assumption that availability for this vPTG is similar to the last one, as the core team has not significantly changed. This means I propose the following time for the Baremetal SIG operator hour: - Monday -- Baremetal SIG Operator Hour: 1300 UTC-1400 UTC And these time windows for Ironic vPTG discussions: - Tuesday, Wednesday, Thursday -- 1300-1700 UTC and/or 2300-2400 UTC To be clear: it's not my expectation we'll be needing 5 hours a day for 3 days, but I'm going to ensure all Ironic sessions are within these time windows. Once we nail down a full list of topics, I'll get a detailed schedule up on the etherpad. I intend to finalize these times Wednesday. Please send feedback by then. Thank you, Jay Faulkner Ironic PTL -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Oct 3 13:30:48 2023 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 3 Oct 2023 13:30:48 +0000 Subject: [neutron][vpnaas][ovn] Pushing VPNaaS support through OVN In-Reply-To: References: Message-ID: Hi everyone, I just wanted to know if it's possible to make any more progress on this; unfortunately, we have missed another release to get this in. Is it possible to get a bit of attention on this? Otherwise, if we know it's working for some folks, should we just merge it right away? Thanks, Mohammed From: Mohammed Naser Date: Thursday, July 20, 2023 at 1:26 AM To: OpenStack Discuss , b.petermann at syseleven.de Subject: [neutron][vpnaas][ovn] Pushing VPNaaS support through OVN Hi all! 
One of the biggest showstoppers for us with OVN is VPNaaS, but I think it's time for us to get involved in getting it over the line. I want to thank Bodo, who has shown a lot of patience in getting the code necessary, and it seems that the code works for them in production as well. I'd like to ask, if possible, for folks to review these two outstanding changes: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/847007 https://review.opendev.org/c/openstack/neutron-vpnaas/+/765353 I know Slawek had some concerns, so I'm hoping that by raising this via the email list, Bodo can address those and over the next bit of time we can get this landed, since it even has actual proper testing both in CI and with user feedback. Thanks! Mohammed -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue Oct 3 18:27:42 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 3 Oct 2023 14:27:42 -0400 Subject: [cinder][stable] proposing Jon Bernard for cinder-stable-maint Message-ID: Hello Argonauts, Jon Bernard has been acting as the cinder release manager for a few cycles now and is familiar with the OpenStack Stable Policy [0] and the cinder project backport policy [1]. I'd like to propose that he be added to the cinder-stable-maint team, which will give him +2 powers on the stable branches. cheers, brian [0] https://docs.openstack.org/project-team-guide/stable-branches.html [1] https://docs.openstack.org/cinder/latest/contributor/backporting.html From jungleboyj at gmail.com Tue Oct 3 19:12:57 2023 From: jungleboyj at gmail.com (Jay Bryant) Date: Tue, 3 Oct 2023 14:12:57 -0500 Subject: [cinder][stable] proposing Jon Bernard for cinder-stable-maint In-Reply-To: References: Message-ID: <85db7090-6333-8bae-6394-e8951bd75130@gmail.com> +2 from me! 
On 10/3/2023 1:27 PM, Brian Rosmaita wrote: > Hello Argonauts, > > Jon Bernard has been acting as the cinder release manager for a few > cycles now and is familiar with the OpenStack Stable Policy [0] and > the cinder project backport policy [1]. I'd like to propose that he > be added to the cinder-stable-maint team, which will give him +2 > powers on the stable branches. > > cheers, > brian > > > [0] https://docs.openstack.org/project-team-guide/stable-branches.html > [1] https://docs.openstack.org/cinder/latest/contributor/backporting.html > From jay at gr-oss.io Tue Oct 3 19:38:58 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Tue, 3 Oct 2023 12:38:58 -0700 Subject: [tc] Monthly video meeting uploaded to youtube Message-ID: Hi all, As usual, the TC held our weekly meeting in video chat. That video has been uploaded to YouTube and is available here: https://www.youtube.com/watch?v=IEYyVIKBhVQ. Thanks, Jay Faulkner TC Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkloppenborg at resetdata.com.au Tue Oct 3 21:48:38 2023 From: kkloppenborg at resetdata.com.au (Karl Kloppenborg) Date: Tue, 3 Oct 2023 21:48:38 +0000 Subject: [neutron][vpnaas][ovn] Pushing VPNaaS support through OVN In-Reply-To: References: Message-ID: Hi Mohammed, What current issues are outstanding with this? Do you have any bug reports that I could look at? I've been working on implementing OVN to replace OVS on my stacks; it would be good to see if there are any issues I can expect. Thanks, Karl. Get Outlook for iOS ________________________________ From: Mohammed Naser Sent: Wednesday, October 4, 2023 12:30:48 AM To: OpenStack Discuss ; b.petermann at syseleven.de Subject: Re: [neutron][vpnaas][ovn] Pushing VPNaaS support through OVN Hi everyone, I just wanted to know if it's possible to make any more progress on this; unfortunately, we have missed another release to get this in. Is it possible to get a bit of attention on this? 
Otherwise, if we know it's working for some folks, should we just merge it right away? Thanks, Mohammed From: Mohammed Naser Date: Thursday, July 20, 2023 at 1:26 AM To: OpenStack Discuss , b.petermann at syseleven.de Subject: [neutron][vpnaas][ovn] Pushing VPNaaS support through OVN Hi all! One of the biggest showstoppers for us with OVN is VPNaaS, but I think it's time for us to get involved in getting it over the line. I want to thank Bodo, who has shown a lot of patience in getting the code necessary, and it seems that the code works for them in production as well. I'd like to ask, if possible, for folks to review these two outstanding changes: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/847007 https://review.opendev.org/c/openstack/neutron-vpnaas/+/765353 I know Slawek had some concerns, so I'm hoping that by raising this via the email list, Bodo can address those and over the next bit of time we can get this landed, since it even has actual proper testing both in CI and with user feedback. Thanks! Mohammed -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Tue Oct 3 22:36:45 2023 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 4 Oct 2023 09:36:45 +1100 Subject: [cinder][stable] proposing Jon Bernard for cinder-stable-maint In-Reply-To: References: Message-ID: On Wed, 4 Oct 2023 at 05:36, Brian Rosmaita wrote: > > Hello Argonauts, > > Jon Bernard has been acting as the cinder release manager for a few > cycles now and is familiar with the OpenStack Stable Policy [0] and the > cinder project backport policy [1]. I'd like to propose that he be > added to the cinder-stable-maint team, which will give him +2 powers on > the stable branches. FWIW: my vote would be "Make it so" Yours Tony. 
From katonalala at gmail.com Wed Oct 4 07:56:00 2023 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 4 Oct 2023 09:56:00 +0200 Subject: [neutron][vpnaas][ovn] Pushing VPNaaS support through OVN In-Reply-To: References: Message-ID: Hi, It is on my todo list to review the latest changes of it, and to check it personally. Lajos Karl Kloppenborg wrote (on Wed, 4 Oct 2023 at 0:49): > Hi Mohammed, > > What current issues are outstanding with this? Do you have any bug reports > that I could look at? > > I've been working on implementing OVN to replace OVS on my stacks; it would > be good to see if there are any issues I can expect. > > Thanks, > Karl. > > Get Outlook for iOS > ------------------------------ > *From:* Mohammed Naser > *Sent:* Wednesday, October 4, 2023 12:30:48 AM > *To:* OpenStack Discuss ; > b.petermann at syseleven.de > *Subject:* Re: [neutron][vpnaas][ovn] Pushing VPNaaS support through OVN > > > Hi everyone, > > > > I just wanted to know if it's possible to make any more progress on this; > unfortunately, we have missed another release to get this in. > > > > Is it possible to get a bit of attention on this? Otherwise, if we know > it's working for some folks, should we just merge it right away? > > > > Thanks, > > Mohammed > > > > *From: *Mohammed Naser > *Date: *Thursday, July 20, 2023 at 1:26 AM > *To: *OpenStack Discuss , > b.petermann at syseleven.de > *Subject: *[neutron][vpnaas][ovn] Pushing VPNaaS support through OVN > > Hi all! > > > > One of the biggest showstoppers for us with OVN is VPNaaS, but I think it's > time for us to get involved in getting it over the line. I want to > thank Bodo, who has shown a lot of patience in getting the code necessary, > and it seems that the code works for them in production as well. 
> > I'd like to ask, if possible, for folks to review these two outstanding > changes: > > > > https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/847007 > > https://review.opendev.org/c/openstack/neutron-vpnaas/+/765353 > > > > I know Slawek had some concerns, so I'm hoping that by raising this via the > email list, Bodo can address those and over the next bit of time we can get this > landed, since it even has actual proper testing both in CI and with user > feedback. > > > > Thanks! > > Mohammed > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Wed Oct 4 12:25:05 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 4 Oct 2023 17:55:05 +0530 Subject: [cinder][stable] proposing Jon Bernard for cinder-stable-maint In-Reply-To: References: Message-ID: Jon has been doing excellent work in managing the stable releases for the past couple of cycles. He will be a great addition to the stable core team. +1 from my side! On Wed, Oct 4, 2023 at 4:12 AM Tony Breeds wrote: > On Wed, 4 Oct 2023 at 05:36, Brian Rosmaita > wrote: > > > > Hello Argonauts, > > > > Jon Bernard has been acting as the cinder release manager for a few > > cycles now and is familiar with the OpenStack Stable Policy [0] and the > > cinder project backport policy [1]. I'd like to propose that he be > > added to the cinder-stable-maint team, which will give him +2 powers on > > the stable branches. > > FWIW: my vote would be "Make it so" > > Yours Tony. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Oct 4 14:07:06 2023 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 4 Oct 2023 16:07:06 +0200 Subject: OpenStack 2023.2 Bobcat is officially released! 
Message-ID: Hello OpenStack community, The official OpenStack 2023.2 Bobcat release announcement has been sent out: https://lists.openstack.org/pipermail/openstack-announce/2023-October/002073.html Thanks to all who were a part of the 2023.2 Bobcat development cycle! This marks the official opening of the openstack/releases repository for 2024.1 Caracal releases, and freezes are now lifted. stable/2023.2 is now a fully normal stable branch, and the normal stable policy applies from now on. Thanks, -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openinfra.dev Wed Oct 4 14:10:06 2023 From: helena at openinfra.dev (Helena Spease) Date: Wed, 4 Oct 2023 09:10:06 -0500 Subject: OpenStack Bobcat Digital Contributor Badge Message-ID: <3B0B2D0A-58BF-4CC3-BE6A-7ED2F463F8D0@openinfra.dev> Hi everyone! Firstly, I would like to congratulate you on the latest OpenStack release! As a token of our appreciation, the OpenInfra Foundation has created a digital badge [1] for you. OpenStack 2023.2, nicknamed Bobcat, is the 28th on-time release of OpenStack, and it wouldn't have been possible without you. Share this digital badge on social media to share with the world what you have accomplished! One of the cool parts of the OpenStack release is the data we can share. For example, the Bobcat release was possible because of the 10,476 changes authored by over 580 contributors all over the world. To ensure that our data stays as up-to-date as possible, please make sure your affiliations are still correct. You can update your affiliation by clicking here[2], logging in to your OpenInfraID profile and scrolling to the very bottom of the page. 
Many congratulations, Helena helena at openinfra.dev Community Programs Coordinator The OpenInfra Foundation [1] https://drive.google.com/file/d/1dudXntnmz1bi1bPyItUGZaZNeX31277X/view?usp=sharing [2] https://openinfra.dev/a/profile -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenStack_BOBCAT_Contributor_White.png Type: image/png Size: 33621 bytes Desc: not available URL: From juliaashleykreger at gmail.com Wed Oct 4 14:15:59 2023 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 4 Oct 2023 07:15:59 -0700 Subject: [ironic][ops] Anyone using ironic-staging-drivers? Message-ID: Greetings folks, I'm curious if any operators out there are using ironic-staging-drivers[0]. It has long been under-appreciated and forever un-official, and some of the developers are thinking we just need to archive the repository at this point. If you are using the ironic-staging-drivers, please let us know along with what driver you're using from it. Thanks! -Julia [0]: https://opendev.org/x/ironic-staging-drivers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jobernar at redhat.com Wed Oct 4 14:24:51 2023 From: jobernar at redhat.com (Jon Bernard) Date: Wed, 4 Oct 2023 14:24:51 +0000 Subject: Cinder Bug Report 2023-10-04 Message-ID: Hello Argonauts, Cinder Bug Meeting Etherpad Undecided - Reader user able to create and delete group snapshot - Status: New - Reader user can create, delete, update and create group from source - Status: New - create-QOS-specs and get-all-associations-for-QOS-specs returns wrong status code - Status: New - Cinder backup fail with s3 driver and volume size exceed 2TB - Status: New - Reader user can create and update volume metadata as well as update and delete volume metadata-item - Status: New Thanks, -- Jon From sbauza at redhat.com Wed Oct 4 14:53:07 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 4 Oct 2023 16:53:07 +0200 Subject: [nova][placement] nova-core team update Message-ID: Hey folks, After some time, I asked our valued contributors Alex Xu, Eric Fried and Lee Yarwood whether they wanted to be removed from our nova-core team, as they don't have time to be around for the moment. As all of them said yes, I eventually deleted their emails from [1]. Hopefully, if they return to the community later, they can be welcomed back quickly on the team if they want. Thank you, all three of you, for everything you helped with on Nova! -Sylvain As a reminder, our nova-core team is open; please look at [2] if you wonder how to join it. [1] https://review.opendev.org/admin/groups/54f6a1ec13b7453596635e8708f1b60bfd281ebd,members [2] https://docs.openstack.org/nova/latest/contributor/how-to-get-involved.html#how-do-i-become-nova-core -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristin at openinfra.dev Wed Oct 4 16:01:40 2023 From: kristin at openinfra.dev (Kristin Barrientos) Date: Wed, 4 Oct 2023 11:01:40 -0500 Subject: OpenInfra Live - Oct. 
5 at 9 a.m. CT / 1400 UTC Message-ID: <51FA5398-FA4A-4F45-8032-5BC6014AF744@openinfra.dev> Hi everyone, This week's OpenInfra Live episode is brought to you by the OpenStack community. Episode: OpenStack 2023.2: Bobcat The OpenStack community released Bobcat, the 28th version of the world's most widely deployed open source cloud infrastructure software, this week. Join us to hear the latest from community leaders about what was delivered in Bobcat and what we can expect in Caracal, OpenStack's 29th release, targeting early April 2024. Speakers: Carlos Silva, Rajat Dhasmana, Sylvain Bauza, Jay Faulkner, Rodolfo Alonso Hernandez, Kendall Nelson Date and time: Oct. 5 at 9 a.m. CT / 1400 UTC You can watch us live on: YouTube: https://www.youtube.com/watch?v=q7WDncK3YuM LinkedIn: https://www.linkedin.com/events/7112158533135081472/comments/ WeChat: the recording will be posted on OpenStack WeChat after the live stream Have an idea for a future episode? Share it now at ideas.openinfra.live. Thanks, Kristin Barrientos Marketing Coordinator OpenInfra Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Wed Oct 4 18:40:19 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 4 Oct 2023 11:40:19 -0700 Subject: [tc] Congratulations to Brian Rosmaita, Vice Chair Message-ID: Hi all, Brian Rosmaita graciously volunteered to serve as Vice-Chair of the Technical Committee for the next cycle. Congratulations Brian, and thank you! -- Jay Faulkner TC Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Wed Oct 4 20:10:07 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Thu, 5 Oct 2023 01:40:07 +0530 Subject: [tc] Congratulations to Brian Rosmaita, Vice Chair In-Reply-To: References: Message-ID: Congratulations Brian!! 
On Thu, Oct 5, 2023 at 12:18 AM Jay Faulkner wrote: > Hi all, > > Brian Rosmaita graciously volunteered to serve as Vice-Chair of the > Technical Committee for the next cycle. Congratulations Brian, and thank > you! > > -- > Jay Faulkner > TC Chair > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akanevsk at redhat.com Wed Oct 4 20:25:03 2023 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Wed, 4 Oct 2023 15:25:03 -0500 Subject: [tc] Congratulations to Brian Rosmaita, Vice Chair In-Reply-To: References: Message-ID: Congrats Brian! On Wed, Oct 4, 2023 at 3:12 PM Rajat Dhasmana wrote: > Congratulations Brian!! > > On Thu, Oct 5, 2023 at 12:18 AM Jay Faulkner wrote: >> Hi all, >> >> Brian Rosmaita graciously volunteered to serve as Vice-Chair of the >> Technical Committee for the next cycle. Congratulations Brian, and thank >> you! >> >> -- >> Jay Faulkner >> TC Chair >> > -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkloppenborg at resetdata.com.au Thu Oct 5 00:17:27 2023 From: kkloppenborg at resetdata.com.au (Karl Kloppenborg) Date: Thu, 5 Oct 2023 00:17:27 +0000 Subject: [tc] Congratulations to Brian Rosmaita, Vice Chair In-Reply-To: References: Message-ID: Congrats mate! Get Outlook for iOS ________________________________ From: Arkady Kanevsky Sent: Thursday, October 5, 2023 7:25:03 AM To: Rajat Dhasmana Cc: Jay Faulkner ; OpenStack Discuss Subject: Re: [tc] Congratulations to Brian Rosmaita, Vice Chair Congrats Brian! On Wed, Oct 4, 2023 at 3:12 PM Rajat Dhasmana > wrote: Congratulations Brian!! On Thu, Oct 5, 2023 at 12:18 AM Jay Faulkner > wrote: Hi all, Brian Rosmaita graciously volunteered to serve as Vice-Chair of the Technical Committee for the next cycle. Congratulations Brian, and thank you! -- Jay Faulkner TC Chair -- Arkady Kanevsky, Ph.D. 
Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Thu Oct 5 14:53:19 2023 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Thu, 5 Oct 2023 16:53:19 +0200 Subject: [ops] [nova] "invalid argument: shares xxx must be in range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update Message-ID: Dear all We have recently updated openstack nova on some AlmaLinux9 compute nodes running Yoga from 1:25.2.0 to 1.25.2.1. After this operation some VMs don't start anymore. In the log it is reported: libvirt.libvirtError: invalid argument: shares \'57344\' must be in range [1, 10000]\n'} libvirt version is 9.0.0-10.3 A quick google search suggests that it is something related to cgroups and it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9 repos). Did I get it right ? Thanks, Massimo -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Oct 5 19:17:04 2023 From: smooney at redhat.com (smooney at redhat.com) Date: Thu, 05 Oct 2023 20:17:04 +0100 Subject: [ops] [nova] "invalid argument: shares xxx must be in range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update In-Reply-To: References: Message-ID: <2b6c6409067a5faf4088801bb4754772c1500bcd.camel@redhat.com> On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote: > Dear all > > We have recently updated openstack nova on some AlmaLinux9 compute nodes > running Yoga from 1:25.2.0 to 1.25.2.1. After this operation some VMs don't > start anymore. In the log it is reported: > > libvirt.libvirtError: invalid argument: shares \'57344\' must be in range > [1, 10000]\n'} > > libvirt version is 9.0.0-10.3 > > > A quick google search suggests that it is something related to cgroups and > it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9 repos). > Did I get it right ? 
not quite it is reated to cgroups but the cause is that in cgroups_v1 the maxvlaue of shares i.e. cpu_shares changed form make int to 10000 in cgroups_v2 so the issue is teh vm requested a cpu share value of 57344 which is not vlaid on an OS that is useing cgroups_v2 libvirt will not clamp the value nor will nova. you have to change the volue in your flavor and resize the vm. > > > > Thanks, Massimo From fprzewozny at opera.com Fri Oct 6 06:58:38 2023 From: fprzewozny at opera.com (=?utf-8?Q?Franciszek_Przewo=C5=BAny?=) Date: Fri, 6 Oct 2023 08:58:38 +0200 Subject: [ops] [nova] "invalid argument: shares xxx must be in range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update In-Reply-To: <2b6c6409067a5faf4088801bb4754772c1500bcd.camel@redhat.com> References: <2b6c6409067a5faf4088801bb4754772c1500bcd.camel@redhat.com> Message-ID: <7EEEE763-876A-425A-A717-6E9B1D1A2A01@opera.com> Hi Massimo, We are using Ubuntu for our environments and we experienced the same issue during upgrade from Yoga/Focal to Yoga/Jammy. On Yoga/Focal cgroups_v1 were used, and cpu_shares parameter value was cpu count * 1024. From Jammy cgroups_v2 have been implemented, and cpu_shares value has been set by default to 100. It has hard limit of 10000, so flavors with more than 9vCPUs won't fit. If you need to fix this issue without stopping VMs, you can set cpu_shares with libvirt command: virsh schedinfo $domain --live cpu_shares=100 for more details about virsh schedinfo visit: https://libvirt.org/manpages/virsh.html#schedinfo BR, Franciszek > On 5 Oct 2023, at 21:17, smooney at redhat.com wrote: > > On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote: >> Dear all >> >> We have recently updated openstack nova on some AlmaLinux9 compute nodes >> running Yoga from 1:25.2.0 to 1.25.2.1. After this operation some VMs don't >> start anymore. 
In the log it is reported: >> >> libvirt.libvirtError: invalid argument: shares \'57344\' must be in range >> [1, 10000]\n'} >> >> libvirt version is 9.0.0-10.3 >> >> >> A quick google search suggests that it is something related to cgroups and >> it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9 repos). >> Did I get it right ? > not quite > > it is related to cgroups, but the cause is that the max value of shares > (i.e. cpu_shares) changed from max int in cgroups_v1 to 10000 in cgroups_v2 > so the issue is the VM requested a cpu share value of 57344 which is not valid on an OS > that is using cgroups_v2. libvirt will not clamp the value, nor will nova. > you have to change the value in your flavor and resize the VM. > >> >> >> >> Thanks, Massimo > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1710 bytes Desc: not available URL: From massimo.sgaravatto at gmail.com Fri Oct 6 07:04:38 2023 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 6 Oct 2023 09:04:38 +0200 Subject: [ops] [nova] "invalid argument: shares xxx must be in range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update In-Reply-To: <2b6c6409067a5faf4088801bb4754772c1500bcd.camel@redhat.com> References: <2b6c6409067a5faf4088801bb4754772c1500bcd.camel@redhat.com> Message-ID: Thanks for your answer. I guess you are referring to the cpu_shares in the flavor quota [*]. Actually, I never explicitly set a value of cpu_shares in the flavor ... [*] https://docs.openstack.org/nova/yoga/admin/resource-limits.html On Thu, Oct 5, 2023 at 9:17 PM wrote: > On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote: > > Dear all > > > > We have recently updated openstack nova on some AlmaLinux9 compute nodes > > running Yoga from 1:25.2.0 to 1.25.2.1. After this operation some VMs > don't > > start anymore. 
In the log it is reported: > > > > libvirt.libvirtError: invalid argument: shares \'57344\' must be in range > > [1, 10000]\n'} > > > > libvirt version is 9.0.0-10.3 > > > > > > A quick google search suggests that it is something related to cgroups > and > > it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9 repos). > > Did I get it right ? > not quite > > it is related to cgroups, but the cause is that the max value > of shares > (i.e. cpu_shares) changed from max int in cgroups_v1 to 10000 in cgroups_v2 > so the issue is the VM requested a cpu share value of 57344 which is not > valid on an OS > that is using cgroups_v2. libvirt will not clamp the value, nor will nova. > you have to change the value in your flavor and resize the VM. > > > > > > > > > Thanks, Massimo > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Fri Oct 6 07:10:47 2023 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 6 Oct 2023 09:10:47 +0200 Subject: [ops] [nova] "invalid argument: shares xxx must be in range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update In-Reply-To: <2b6c6409067a5faf4088801bb4754772c1500bcd.camel@redhat.com> <7EEEE763-876A-425A-A717-6E9B1D1A2A01@opera.com> References: <2b6c6409067a5faf4088801bb4754772c1500bcd.camel@redhat.com> <7EEEE763-876A-425A-A717-6E9B1D1A2A01@opera.com> Message-ID: Thanks a lot Franciszek! I was indeed seeing the problem with a big VM with 56 vCPUs, while I didn't see the issue with a tiny instance. Thanks again! Cheers, Massimo On Fri, Oct 6, 2023 at 8:58 AM Franciszek Przewoźny wrote: > Hi Massimo, > > We are using Ubuntu for our environments and we experienced the same issue > during upgrade from Yoga/Focal to Yoga/Jammy. On Yoga/Focal cgroups_v1 were > used, and cpu_shares parameter value was cpu count * 1024. From Jammy > cgroups_v2 have been implemented, and cpu_shares value has been set by > default to 100. It has hard limit of 10000, so flavors with more than > 9vCPUs won't fit. 
If you need to fix this issue without stopping VMs, > you can set cpu_shares with the libvirt command: virsh schedinfo $domain --live > cpu_shares=100 > for more details about virsh schedinfo visit: > https://libvirt.org/manpages/virsh.html#schedinfo > > BR, > Franciszek > > On 5 Oct 2023, at 21:17, smooney at redhat.com wrote: > > On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote: > > Dear all > > We have recently updated openstack nova on some AlmaLinux9 compute nodes > running Yoga from 1:25.2.0 to 1.25.2.1. After this operation some VMs don't > start anymore. In the log it is reported: > > libvirt.libvirtError: invalid argument: shares \'57344\' must be in range > [1, 10000]\n'} > > libvirt version is 9.0.0-10.3 > > > A quick google search suggests that it is something related to cgroups and > it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9 repos). > Did I get it right ? > > not quite > > it is related to cgroups, but the cause is that the max value > of shares > (i.e. cpu_shares) changed from a large int in cgroups_v1 to 10000 in cgroups_v2, > so the issue is the VM requested a cpu share value of 57344 which is not > valid on an OS > that is using cgroups_v2. libvirt will not clamp the value, nor will nova; > you have to change the value in your flavor and resize the VM. > > > > > Thanks, Massimo > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Oct 6 07:49:45 2023 From: skaplons at redhat.com (Sławek Kapłoński) Date: Fri, 06 Oct 2023 09:49:45 +0200 Subject: [TC][Monasca] Proposal to mark Monasca as an inactive project Message-ID: <8709827.GXAFRqVoOG@p1gen4> Hi, I just proposed patch [1] to mark Monasca as an inactive project. It was discussed during the last TC meeting on 03.10.2023.
Reasons for that are described in the commit message of [1], but I will also put all of it here: in the previous cycle there were almost no contributions to that project, and the most active contributors in the last 180 days were Elod Illes and Dr. Jens Harbott, who were only fixing some Zuul configuration issues. Here are detailed statistics about the Monasca projects:

Validating Gerrit...
* There are 7 ready for review patches generated within 180 days
* There are 2 not reviewed patches generated within 180 days
* There are 5 merged patches generated within 180 days
* Unreviewed patch rate for patches generated within 180 days is 28.0 %
* Merged patch rate for patches generated within 180 days is 71.0 %
* Here are the top 10 owners for patches generated within 180 days (Name/Account_ID: Percentage):
  - Dr. Jens Harbott : 42.86%
  - Joel Capitao : 28.57%
  - Elod Illes : 28.57%

Validate Zuul... Set buildsets fetch size to 500
* Repo: openstack/monasca-log-api gate job builds success rate: 82%
* Repo: openstack/monasca-statsd gate job builds success rate: 95%
* Repo: openstack/monasca-tempest-plugin gate job builds success rate: 81%
* Repo: openstack/monasca-common gate job builds success rate: 84%
* Repo: openstack/monasca-kibana-plugin gate job builds success rate: 83%
* Repo: openstack/monasca-ceilometer gate job builds success rate: 100%
* Repo: openstack/monasca-events-api gate job builds success rate: 83%
* Repo: openstack/monasca-ui gate job builds success rate: 88%
* Repo: openstack/monasca-specs gate job builds success rate: 100%
* Repo: openstack/monasca-grafana-datasource gate job builds success rate: 100%
* Repo: openstack/monasca-persister gate job builds success rate: 98%
* Repo: openstack/monasca-notification gate job builds success rate: 93%
* Repo: openstack/monasca-thresh gate job builds success rate: 100%
* Repo: openstack/monasca-api gate job builds success rate: 76%
* Repo: openstack/monasca-agent gate job builds success rate: 98%
* Repo: openstack/python-monascaclient gate job builds success rate: 100%
* Repo: openstack/monasca-transform gate job builds success rate: 100%

What's next? According to the "Emerging and inactive projects" document [2], if no volunteers step up who want to maintain this project before Milestone-2 of the 2024.1 cycle (week of Jan 08 2024), there will be no new release of Monasca in the 2024.1 cycle and the TC will discuss whether the project should be retired. So if you are interested in keeping Monasca alive and active, please reach out to the TC to discuss that ASAP. Thx in advance. [1] https://review.opendev.org/c/openstack/governance/+/897520 [2] https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ralonsoh at redhat.com Fri Oct 6 08:26:49 2023 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 6 Oct 2023 10:26:49 +0200 Subject: [neutron] Neutron drivers meeting cancelled Message-ID: Hello Neutrinos: Due to the lack of agenda [1], today's meeting is cancelled. Have a nice weekend. [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Oct 6 10:24:37 2023 From: smooney at redhat.com (smooney at redhat.com) Date: Fri, 06 Oct 2023 11:24:37 +0100 Subject: [ops] [nova] "invalid argument: shares xxx must be in range [1, 10000]" after 1:25.2.0 to 1.25.2.1.
update In-Reply-To: References: <2b6c6409067a5faf4088801bb4754772c1500bcd.camel@redhat.com> Message-ID: <4df42464d0162610620da7225bbdf378c5a08077.camel@redhat.com> On Fri, 2023-10-06 at 09:04 +0200, Massimo Sgaravatto wrote: > Thanks for your answer > > I guess you are referring to the cpu_shares in the flavor quota [*] > > Actually I never explicitly set a value of cpu_shares in the flavor ... if you are seeing this for an instance where it's not explicitly in the flavor, then this is from the old behavior where we set a default share value of flavor.vcpus * 1000. In newer versions of openstack we never set an implicit value: https://github.com/openstack/nova/commit/f77a9fee5b736899ecc39d33e4f4e4012cee751c This change is available in zed+. I know this has been backported downstream to wallaby, so it is possible to do. I believe there was concern with upstream backporting due to cpu starvation if you were heavily using this functionality; that is why it was not backported upstream in the past, but you could apply it to your own cloud. > > > [*] https://docs.openstack.org/nova/yoga/admin/resource-limits.html > > On Thu, Oct 5, 2023 at 9:17 PM wrote: > > > On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote: > > > > Dear all > > > > > > > > We have recently updated openstack nova on some AlmaLinux9 compute nodes > > > > running Yoga from 1:25.2.0 to 1.25.2.1. After this operation some VMs > > > don't > > > > start anymore.
cpu_shares changed form make int to 10000 in cgroups_v2 > > so the issue is teh vm requested a cpu share value of 57344 which is not > > vlaid on an OS > > that is useing cgroups_v2 libvirt will not clamp the value nor will nova. > > you have to change the volue in your flavor and resize the vm. > > > > > > > > > > > > > > Thanks, Massimo > > > > From asma.naz at techavenue.biz Fri Oct 6 11:28:53 2023 From: asma.naz at techavenue.biz (Asma Naz Shariq) Date: Fri, 6 Oct 2023 16:28:53 +0500 Subject: openstack-discuss Digest, Vol 60, Issue 10 ~ Fwaas in Openstack 2023.1 antelope Message-ID: <000601d9f848$49f6b8e0$dde42aa0$@techavenue.biz> Hi Openstack Community! I have set up OpenStack release 2023.1 antelope with Kolla-Ansible . However, I noticed that there is no enable_plugin option in the /etc/kolla/global.yml file. Now, I am trying to install FWaaS (Firewall-as-a-Service) following the instructions provided in this OpenStack's Firewall-as-a-Service (FWaaS) v2 scenario documentation. The documentation states, On Ubuntu and CentOS, modify the [fwaas] section in the /etc/neutron/fwaas_driver.ini file instead of /etc/neutron/neutron.conf. Unfortunately, I cannot find the fwaas_driver.ini file in the neutron-server, neutron-l3-agent, or neutron-openvswitch-agent containers Can someone guide me on how to properly install FWaaS in a Kolla environment using the information from the provided link? 
Best,

From corey.bryant at canonical.com Fri Oct 6 12:32:40 2023 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 6 Oct 2023 08:32:40 -0400 Subject: OpenStack 2023.2 Bobcat for Ubuntu 22.04 LTS Message-ID: The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack 2023.2 Bobcat on Ubuntu 22.04 LTS (Jammy Jellyfish). Details of the Bobcat release can be found at: https://releases.openstack.org/bobcat The Ubuntu Cloud Archive for OpenStack 2023.2 Bobcat can be enabled on Ubuntu 22.04 by running the following command: sudo add-apt-repository cloud-archive:bobcat The Ubuntu Cloud Archive for 2023.2 Bobcat includes updates for: aodh, barbican, ceilometer, ceph (18.2.0), cinder, designate, designate-dashboard, dpdk (22.11.3), glance, gnocchi, heat, heat-dashboard, horizon, ironic, ironic-ui, keystone, magnum, magnum-ui, manila, manila-ui, masakari, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-baremetal, networking-bgpvpn, networking-l2gw, networking-mlnx, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-taas, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, openvswitch (3.2.0), ovn (23.09.0), ovn-octavia-provider, placement, sahara, sahara-dashboard, senlin, swift, trove-dashboard, vitrage, watcher, watcher-dashboard, zaqar, and zaqar-ui. For a full list of packages and versions, please refer to: https://openstack-ci-reports.ubuntu.com/reports/cloud-archive/bobcat_versions.html == Reporting bugs == If you have any issues please report bugs using the 'ubuntu-bug'
tool to ensure that bugs get logged in the right place in Launchpad: sudo ubuntu-bug nova-conductor Thank you to everyone who contributed to OpenStack 2023.2 Bobcat! -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Shih at obspm.fr Fri Oct 6 14:20:32 2023 From: Albert.Shih at obspm.fr (Albert Shih) Date: Fri, 6 Oct 2023 16:20:32 +0200 Subject: [puppet] Openstack & puppet Message-ID: Hi everyone, A few years ago I deployed a very small openstack cluster with puppet, and to learn how openstack works I built my own module (a very simple module, mostly with file + template + hiera). Now, after a few years, I need to re-install everything properly for production. Because all my infrastructure is managed with puppet, I would not be happy to use something like ansible, chef, etc. I also see there is something like Kolla, which uses docker/containers, something I like because of the capability to revert an upgrade. So I have a few questions about puppet & openstack: What is the status of 'supporting' puppet? I know it's an opensource project, so I'm perfectly aware that 'tomorrow' everything can stop, but still, are there already any plans to leave puppet for something else? Is it a good idea to start a new openstack installation with puppet? I see Kolla works with ansible; are there any plans to do the same with puppet? Regards -- Albert SHIH Heure locale/Local time: ven. 06 oct. 2023 16:06:28 CEST From juliaashleykreger at gmail.com Fri Oct 6 15:08:51 2023 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 6 Oct 2023 08:08:51 -0700 Subject: [puppet] Openstack & puppet In-Reply-To: References: Message-ID: Greetings, I think it might be helpful for readers to understand what you're seeking when you ask about *supporting* puppet. There are OpenStack puppet modules, and they could likely use attention, but they were largely used for configuration and not the complete *act* of deployment.
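[Editor's note: that configuration-focused role looks roughly like the following profile. This is a hedged sketch using class names from the openstack/puppet-nova module as commonly documented; parameter names may differ between releases, so verify against the module version you install.]

```puppet
# Sketch of a compute-node profile built on the puppet-openstack modules.
# Puppet manages nova.conf contents and service state on this node;
# it does not orchestrate the multi-node rollout itself.
class profile::openstack_compute (
  String $transport_url,  # e.g. 'rabbit://openstack:secret@rabbit.example.org:5672/'
) {
  # puppet-nova's base class: writes the common nova.conf options.
  class { 'nova':
    default_transport_url => $transport_url,
  }

  # Enables and manages the nova-compute service.
  class { 'nova::compute':
    enabled => true,
  }
}
```

In this model, upgrades (and rollbacks) of the service binaries, the part Kolla addresses with replaceable containers, remain outside puppet's scope, which is exactly the configuration-vs-deployment distinction being drawn here.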
Most *deployment* operations have been facilitated by deployment projects in the wider OpenStack Community. For example, Kolla is one of those projects. So is the question about Kolla supporting puppet-based configuration, and orchestration of Kolla to facilitate the overall act of deployment? Hopefully the questions bring additional context and insight and aid the conversation. -Julia On Fri, Oct 6, 2023 at 7:29 AM Albert Shih wrote: > Hi everyone, > > A few years ago I deployed a very small openstack cluster with puppet, and to > learn how openstack works I built my own module (a very simple module, > mostly with file + template + hiera). > > Now, after a few years, I need to re-install everything properly for > production. > > Because all my infrastructure is managed with puppet, I would not be happy > to use something like ansible, chef, etc. > > I also see there is something like Kolla, which uses docker/containers, > something I like because of the capability to revert an upgrade. > > So I have a few questions about puppet & openstack: > > What is the status of 'supporting' puppet? I know it's an opensource > project, so I'm perfectly aware that 'tomorrow' everything can stop, but > still, are there already any plans to leave puppet for something else? > > Is it a good idea to start a new openstack installation with puppet? > > I see Kolla works with ansible; are there any plans to do the same with > puppet? > > Regards > > -- > Albert SHIH > Heure locale/Local time: > ven. 06 oct. 2023 16:06:28 CEST > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Fri Oct 6 15:17:45 2023 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 6 Oct 2023 17:17:45 +0200 Subject: [ops] [nova] "invalid argument: shares xxx must be in range [1, 10000]" after 1:25.2.0 to 1.25.2.1.
update In-Reply-To: <4df42464d0162610620da7225bbdf378c5a08077.camel@redhat.com> References: <2b6c6409067a5faf4088801bb4754772c1500bcd.camel@redhat.com> <4df42464d0162610620da7225bbdf378c5a08077.camel@redhat.com> Message-ID: Ok thanks I still do not fully understand why I see the problem with nova 1:25.2.1 while it works with v. 1:25.2.0 (without changing any other packages). According to the release notes only this change [*] was introduced [*] https://bugs.launchpad.net/nova/+bug/1941005 Cheers, Massimo On Fri, Oct 6, 2023 at 12:25 PM wrote: > On Fri, 2023-10-06 at 09:04 +0200, Massimo Sgaravatto wrote: > > Thanks for your answer > > > > I guess you are referring to the cpu_shares in the flavor quota [*] > > > > Actually I never explicitly set a value of cpu_shares in the flavor ... > if you are seeing this for an instance where it's not explicitly in the flavor > then > this is from the old behavior where we set a default share value of > flavor.vcpus * 1000 > in newer versions of openstack we never set an implicit value > > https://github.com/openstack/nova/commit/f77a9fee5b736899ecc39d33e4f4e4012cee751c > This change is available in zed+. > I know this has been backported downstream to wallaby, so it is possible to do. > I believe there was concern with upstream backporting due to cpu starvation > if you were heavily using this functionality. That is why it was not > backported > upstream in the past, but you could apply it to your own cloud. > > > > > > > [*] https://docs.openstack.org/nova/yoga/admin/resource-limits.html > > > > On Thu, Oct 5, 2023 at 9:17 PM wrote: > > > > > On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote: > > > > Dear all > > > > > > > > We have recently updated openstack nova on some AlmaLinux9 compute nodes > > > > running Yoga from 1:25.2.0 to 1.25.2.1. After this operation some VMs > > > don't > > > > start anymore.
In the log it is reported: > > > > libvirt.libvirtError: invalid argument: shares \'57344\' must be in > range > > > > [1, 10000]\n'} > > > > libvirt version is 9.0.0-10.3 > > > > > > > > A quick google search suggests that it is something related to > cgroups > > > and > > > > it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9 > repos). > > > > Did I get it right ? > > > not quite > > > > > > it is related to cgroups, but the cause is that the > max value > > > of shares > > > (i.e. cpu_shares) changed from a large int in cgroups_v1 to 10000 in cgroups_v2, > > > so the issue is the VM requested a cpu share value of 57344 which is > not > > > valid on an OS > > > that is using cgroups_v2. libvirt will not clamp the value, nor will > nova; > > > you have to change the value in your flavor and resize the VM. > > > > > > > > > > > > > > > > > > > Thanks, Massimo > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Fri Oct 6 15:38:06 2023 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 6 Oct 2023 10:38:06 -0500 Subject: [tc] Congratulations to Brian Rosmaita, Vice Chair In-Reply-To: References: Message-ID: Congrats Brian! Thank you for continuing to support OpenStack! Jay On 10/4/2023 1:40 PM, Jay Faulkner wrote: > Hi all, > > Brian Rosmaita graciously volunteered to serve as Vice-Chair of the > Technical Committee for the next cycle. Congratulations Brian, and > thank you!
> > -- > Jay Faulkner > TC Chair From fungi at yuggoth.org Fri Oct 6 19:42:57 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Oct 2023 19:42:57 +0000 Subject: [all][infra][tact-sig] Reminder: Mailman v3 migration for OpenStack mailing lists 2023-10-12 In-Reply-To: <20230929235523.gtxaxdw5jzxkh46n@yuggoth.org> References: <20230929235523.gtxaxdw5jzxkh46n@yuggoth.org> Message-ID: <20231006194257.stswy2cpup4dettt@yuggoth.org> Reminder for those who may have missed the initial announcement, as this is less than a week away now... On Thursday, October 12 at 15:30 UTC, the OpenDev Collaboratory systems administrators will be migrating the lists.openstack.org mailing list site from the aging Mailman 2.1.29 server to a newer Mailman 3.3.8 deployment. This maintenance window is expected to last approximately four hours. The key takeaways are as follows: * There will be an extended outage for the site while DNS is updated and configuration, subscriber lists and message archives are imported, but incoming messages should correctly queue at the sender's end and arrive at the conclusion of our work * Because this is on a new server, there are new IP addresses from which list mail will be originating: 162.209.78.70 and 2001:4800:7813:516:be76:4eff:fe04:5423 * Moderation queues will not be copied to the new server, so moderators are encouraged to process any held messages prior to the migration time in order to avoid losing them * Anyone wishing to adjust their list subscriptions, or handle moderation or list configuration after the migration, needs to use the Sign Up button on the Web page to create a new account; it will be linked to your imported roles as soon as you confirm by following a URL from an E-mail message the server sends you, and is global for all sites on the server so you only need to do this once * The software providing Web front-ends for list management and archive browsing is entirely new in Mailman v3 and therefore has a much different 
appearance, though we've added appropriate redirects and frozen copies of old archives in order to accommodate existing hyperlinks. If you have any questions or concerns, feel free to follow up on the service-discuss mailing list or find us in the #opendev channel on the OFTC IRC network. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amy at demarco.com Fri Oct 6 22:13:02 2023 From: amy at demarco.com (Amy Marrich) Date: Fri, 6 Oct 2023 17:13:02 -0500 Subject: [ptl][tc][ops][ptg] Operator + Developers interaction (operator-hours) slots in 2024.1 Caracal PTG Message-ID: <126BAD69-36B9-4285-9049-4EB4041870BF@demarco.com> Hello Everyone/PTL, To promote interaction/feedback from operators, the OpenStack TC would like to bring operators and developers together again at the upcoming PTG. To make it easy for operators and to reduce conflicts among projects, the TC is requesting projects to book 'operator hours' in non-conflicting slots (different time/day slots). How projects can reserve/book these 'operator hour' slots (Action item for projects' PTL/PTG planner): ----------------------------------------------------------------------------------------------------------------- 1. Request Kendall (diablo_rojo) or Fungi (in the #openinfra-events IRC channel) or email ptg at openinfra.dev to register the new track 'operator-hour-', for example 'operator-hour-nova'. 2. Once the track is registered, pick one of the available 'operator-hour-placeholder' slots and book it with the newly registered track 'operator-hour-'. For example: #operator-hour-nova book essex-WedB1 * The idea of booking these slots as 'operator-hour-' is to give operators an easy way to find and join. Otherwise it will be difficult for them to find such a slot/time from etherpads. We request every project to book at least one 'operator hours' slot for operators to join your PTG slot.
Ping me in #openstack-tc or #openinfra-events IRC channel for any query. [1] https://ptg.opendev.org/ptg.html Amy(spotz) From amy at demarco.com Fri Oct 6 22:13:08 2023 From: amy at demarco.com (Amy Marrich) Date: Fri, 6 Oct 2023 17:13:08 -0500 Subject: [ptl][tc][ops][ptg] Operator + Developers interaction (operator-hours) slots in 2024.1 Caracel PTG Message-ID: ?Hello Everyone/PTL, To promote interaction/feedback from operators, the OpenStack TC would like to bring operators and developers together again at the upcoming PTG. To make it easy for operators and less conflict among projects, the TC is requesting projects to book 'operator hours' in non-conflicting slots(different time/day slots). How project can reserve/book these 'operator hour' (Action item for projects PTL/PTG planner): ----------------------------------------------------------------------------------------------------------------- 1. Request Kendall(diablo_rojo) or Fungi (in #openinfra-events IRC channel) or email ptg at openinfra.dev to register the new track 'operator-hour-' for example, 'operator-hour-nova' 2. Once track is registered, pick one of the available 'operator-hour-placeholder' slot and book it with newly registered track 'operator-hour-'. For example #operator-hour-nova book essex-WedB1 * The idea is to book these slots as 'operator-hour-' is to give operators a easy way to find and join. Otherwise it will be difficult for them to find such slot/time from etherpads. We request every project to book at least one 'operator hours' slot for operators to join your PTG slot. Ping me in #openstack-tc or #openinfra-events IRC channel for any query. 
[1] https://ptg.opendev.org/ptg.html Amy(spotz) From amy at demarco.com Fri Oct 6 22:13:22 2023 From: amy at demarco.com (Amy Marrich) Date: Fri, 6 Oct 2023 17:13:22 -0500 Subject: [ptl][tc][ops][ptg] Operator + Developers interaction (operator-hours) slots in 2024.1 Caracel PTG Message-ID: <52BFAF6A-D127-419B-8B88-5C4B3E88AC6D@demarco.com> ?Hello Everyone/PTL, To promote interaction/feedback from operators, the OpenStack TC would like to bring operators and developers together again at the upcoming PTG. To make it easy for operators and less conflict among projects, the TC is requesting projects to book 'operator hours' in non-conflicting slots(different time/day slots). How project can reserve/book these 'operator hour' (Action item for projects PTL/PTG planner): ----------------------------------------------------------------------------------------------------------------- 1. Request Kendall(diablo_rojo) or Fungi (in #openinfra-events IRC channel) or email ptg at openinfra.dev to register the new track 'operator-hour-' for example, 'operator-hour-nova' 2. Once track is registered, pick one of the available 'operator-hour-placeholder' slot and book it with newly registered track 'operator-hour-'. For example #operator-hour-nova book essex-WedB1 * The idea is to book these slots as 'operator-hour-' is to give operators a easy way to find and join. Otherwise it will be difficult for them to find such slot/time from etherpads. We request every project to book at least one 'operator hours' slot for operators to join your PTG slot. Ping me in #openstack-tc or #openinfra-events IRC channel for any query. 
[1] https://ptg.opendev.org/ptg.html

Amy(spotz)

From suzhengwei at inspur.com Sat Oct 7 02:08:08 2023
From: suzhengwei at inspur.com (Sam Su (苏正伟))
Date: Sat, 7 Oct 2023 02:08:08 +0000
Subject: Re: Masakari issue
In-Reply-To:
References:
Message-ID: <255a3f80dd274f8faf747eadb5a61156@inspur.com>

Hello Tony Karera,

Sorry for the response after a few days; I just got back from holiday.

When there are notifications still in 'new', 'running' or 'error' status, Masakari does not allow the host or segment to be updated or deleted [1]. In addition, if a host reboots while you upgrade the nodes' kernel, it triggers a host failure notification and the recovery workflow. To avoid the hosts being set to maintenance mode, we advise setting the segment's 'enabled' value to 'False' before a planned upgrade or maintenance [2].

[1] https://opendev.org/openstack/masakari/src/branch/master/masakari/ha/api.py L220
[2] https://opendev.org/openstack/masakari/src/branch/master/releasenotes/notes/enabled-to-segment-7e6184feb1e4f818.yaml

From: Karera Tony [mailto:tonykarera at gmail.com]
Sent: 2023-10-02 15:09
To: openstack-discuss
Subject: Masakari issue

Hello Team,

I have an OpenStack Wallaby environment deployed using kolla-ansible. So, I shut down all the VMs on the compute nodes because I wanted to upgrade the nodes' kernel. After I rebooted them, all was fine apart from the fact that all the hosts in the instance-ha segment are in maintenance mode. When I try to set it to false, I get the error below.

Error: Failed to update host. Details
ConflictException: 409: Client Error for url: http://x.x.x.x:15868/v1/segments/8d042245-5610-4b84-b611-b633f8f8367c/hosts/066c5654-dd1a-4fa9-a664-dad10b89e202, Host 066c5654-dd1a-4fa9-a664-dad10b89e202 can't be updated as it is in-use to process notifications.

Regards

Tony Karera
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3606 bytes
Desc: not available
URL:

From skaplons at redhat.com Mon Oct 9 08:13:49 2023
From: skaplons at redhat.com (Sławek Kapłoński)
Date: Mon, 09 Oct 2023 10:13:49 +0200
Subject: [neutron] Bug deputy report from week of the 2.10.2023
Message-ID: <7579966.EvYhyI6sBW@p1gen4>

Hi,

I was bug deputy last week. Here's a summary of the bugs from that week:

## Critical
* https://bugs.launchpad.net/neutron/+bug/2038541 - LinuxBridgeARPSpoofTestCase functional tests fail with latest jammy kernel 5.15.0-86.96 - fix proposed by Rodolfo: https://review.opendev.org/c/openstack/neutron/+/897412

## High
* https://bugs.launchpad.net/neutron/+bug/2038234 - PortBindingChassisEvent matches all port types - assigned to jlibosva

## Medium
* https://bugs.launchpad.net/neutron/+bug/2038091 - [OVN] Manual sync misidentifies Octavia OVN Load Balancer health monitor port and deletes metadata port - assigned to Fernando Royo
* https://bugs.launchpad.net/neutron/+bug/2038413 - [OVN] host id in NB database not updated correctly for virtual ports - assigned to Michel Nederlof, fix proposed: https://review.opendev.org/c/openstack/neutron/+/896883
* https://bugs.launchpad.net/neutron/+bug/2038422 - [OVN] virtual ports not working upon failover - assigned to Michel Nederlof, fix proposed: https://review.opendev.org/c/openstack/neutron/+/896884
* https://bugs.launchpad.net/neutron/+bug/2038646 - [RBAC] Update "subnet" policies - assigned to Rodolfo, patch proposed: https://review.opendev.org/c/openstack/neutron/+/897540
* https://bugs.launchpad.net/neutron/+bug/2038655 - DHCP agent scheduler API extension should be supported by ML2/OVN backend - assigned to Slawek, patch proposed: https://review.opendev.org/c/openstack/neutron/+/897528, but a neutron-tempest-plugin change will also be needed

## Low
* https://bugs.launchpad.net/neutron/+bug/2038373 - Segment unit tests are not mocking properly, unassigned; this one
looks like a low-hanging-fruit bug
* https://bugs.launchpad.net/neutron/+bug/2038520 - [UT] Error accesing to row.external_ids in ``TestOvnSbIdlNotifyHandler`` tests - assigned to Rodolfo, fix proposed: https://review.opendev.org/c/openstack/neutron/+/897407
* https://bugs.launchpad.net/neutron/+bug/2038555 - Remove unused tables - assigned to Rodolfo, fix proposed: https://review.opendev.org/c/openstack/neutron/+/897472

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL:

From katonalala at gmail.com Mon Oct 9 09:07:09 2023
From: katonalala at gmail.com (Lajos Katona)
Date: Mon, 9 Oct 2023 11:07:09 +0200
Subject: openstack-discuss Digest, Vol 60, Issue 10 ~ Fwaas in Openstack 2023.1 antelope
In-Reply-To: <000601d9f848$49f6b8e0$dde42aa0$@techavenue.biz>
References: <000601d9f848$49f6b8e0$dde42aa0$@techavenue.biz>
Message-ID:

Hi Asma,

The enable_plugin option is for devstack-based deployments, I suppose. To tell the truth I am not familiar with kolla, but I found this page which speaks about enabling neutron extensions like sfc or vpnaas: https://docs.openstack.org/kolla-ansible/4.0.2/networking-guide.html and these in the group_vars/all.yml: https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/group_vars/all.yml#L819

So to enable vpnaas, I suppose that setting enable_neutron_vpnaas: "yes" does all the magic to install and set up neutron-vpnaas. I can't find neutron-fwaas, but perhaps this is just my lack of experience with kolla. To see what is necessary to configure fwaas, I would check the devstack plugin: https://opendev.org/openstack/neutron-fwaas/src/branch/master/devstack

Best wishes.
Lajos (lajoskatona)

Asma Naz Shariq wrote (on 2023. okt. 6., Fri, 14:37):

> Hi Openstack Community!
> > I have set up OpenStack release 2023.1 antelope with Kolla-Ansible.
> However, I noticed that there is no enable_plugin option in the
> /etc/kolla/global.yml file. Now, I am trying to install FWaaS
> (Firewall-as-a-Service) following the instructions provided in
> OpenStack's Firewall-as-a-Service (FWaaS) v2 scenario documentation.
>
> The documentation states: "On Ubuntu and CentOS, modify the [fwaas] section
> in the /etc/neutron/fwaas_driver.ini file instead of
> /etc/neutron/neutron.conf." Unfortunately, I cannot find the fwaas_driver.ini
> file in the neutron-server, neutron-l3-agent, or neutron-openvswitch-agent
> containers.
>
> Can someone guide me on how to properly install FWaaS in a Kolla environment
> using the information from the provided link?
>
> Best,
>
> -----Original Message-----
> From: openstack-discuss-request at lists.openstack.org
> Sent: Friday, October 6, 2023 1:27 PM
> To: openstack-discuss at lists.openstack.org
> Subject: openstack-discuss Digest, Vol 60, Issue 10
>
> Send openstack-discuss mailing list submissions to
> openstack-discuss at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
> or, via email, send a message with subject or body 'help' to
> openstack-discuss-request at lists.openstack.org
>
> You can reach the person managing the list at
> openstack-discuss-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific than
> "Re: Contents of openstack-discuss digest..."
>
> Today's Topics:
>
> 1. Re: [ops] [nova] "invalid argument: shares xxx must be in
> range [1, 10000]" after 1:25.2.0 to 1:25.2.1 update
> (Massimo Sgaravatto)
> 2. [TC][Monasca] Proposal to mark Monasca as an inactive project
> (Sławek Kapłoński)
> 3.
[neutron] Neutron drivers meeting cancelled
> (Rodolfo Alonso Hernandez)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 6 Oct 2023 09:10:47 +0200
> From: Massimo Sgaravatto
> To: Franciszek Przewoźny
> Cc: OpenStack Discuss , smooney at redhat.com
> Subject: Re: [ops] [nova] "invalid argument: shares xxx must be in
> range [1, 10000]" after 1:25.2.0 to 1:25.2.1 update
> Message-ID: <CALaZjRGh6xnzX12cMgDTYx2yJYddUD9X3oh60JWnrB33ZdEf_Q at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Thanks a lot Franciszek!
> I was indeed seeing the problem with a big VM with 56 vCPUs, while I didn't see
> the issue with a tiny instance.
>
> Thanks again!
>
> Cheers, Massimo
>
> On Fri, Oct 6, 2023 at 8:58 AM Franciszek Przewoźny wrote:
>
> > Hi Massimo,
> >
> > We are using Ubuntu for our environments and we experienced the same
> > issue during the upgrade from Yoga/Focal to Yoga/Jammy. On Yoga/Focal
> > cgroups_v1 were used, and the cpu_shares parameter value was cpu count *
> > 1024. From Jammy on, cgroups_v2 has been used, and the cpu_shares value is set
> > by default to 100. It has a hard limit of 10000, so flavors with more than
> > 9 vCPUs won't fit. If you need to fix this issue without stopping VMs,
> > you can set cpu_shares with the libvirt command:
> > virsh schedinfo $domain --live cpu_shares=100
> > For more details about virsh schedinfo visit:
> > https://libvirt.org/manpages/virsh.html#schedinfo
> >
> > BR,
> > Franciszek
> >
> > On 5 Oct 2023, at 21:17, smooney at redhat.com wrote:
> >
> > On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote:
> >
> > Dear all
> >
> > We have recently updated openstack nova on some AlmaLinux9 compute
> > nodes running Yoga from 1:25.2.0 to 1:25.2.1. After this operation
> > some VMs don't start anymore.
In the log it is reported:
>
> libvirt.libvirtError: invalid argument: shares '57344' must be in
> range [1, 10000]
>
> libvirt version is 9.0.0-10.3
>
> A quick google search suggests that it is something related to cgroups
> and it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9 repos).
> Did I get it right?
>
> not quite
>
> it is related to cgroups, but the cause is that the max value of shares
> (i.e. cpu_shares) changed from max int in cgroups_v1 to 10000 in
> cgroups_v2. so the issue is the vm requested a cpu share value of 57344,
> which is not valid on an OS that is using cgroups_v2; libvirt will not
> clamp the value nor will nova.
> you have to change the value in your flavor and resize the vm.
>
> Thanks, Massimo
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20231006/62a848f5/attachment-0001.htm>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 06 Oct 2023 09:49:45 +0200
> From: Sławek Kapłoński
> To: "openstack-discuss at lists.openstack.org"
> Subject: [TC][Monasca] Proposal to mark Monasca as an inactive project
> Message-ID: <8709827.GXAFRqVoOG at p1gen4>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi,
>
> I just proposed patch [1] to mark Monasca as an inactive project. It was
> discussed during the last TC meeting on 03.10.2023. Reasons for that are
> described in the commit message of [1], but I will also put all of it
> here:
>
> In the previous cycle there were almost no contributions to that project, and
> the most active contributors in the last 180 days were Elod Illes and Dr. Jens
> Harbott, who were fixing some Zuul configuration issues only.
> Here are detailed statistics about the Monasca projects:
>
> Validating Gerrit...
> * There are 7 ready for review patches generated within 180 days > * There are 2 not reviewed patches generated within 180 days > * There are 5 merged patches generated within 180 days > * Unreviewed patch rate for patches generated within 180 days is 28.0 % > * Merged patch rate for patches generated within 180 days is 71.0 % > * Here's top 10 owner for patches generated within 180 days > (Name/Account_ID: Percentage): > - Dr. Jens Harbott : 42.86% > - Joel Capitao : 28.57% > - Elod Illes : 28.57% > Validate Zuul... > Set buildsets fetch size to 500 > * Repo: openstack/monasca-log-api gate job builds success rate: 82% > * Repo: openstack/monasca-statsd gate job builds success rate: 95% > * Repo: openstack/monasca-tempest-plugin gate job builds success rate: 81% > * Repo: openstack/monasca-common gate job builds success rate: 84% > * Repo: openstack/monasca-kibana-plugin gate job builds success rate: 83% > * Repo: openstack/monasca-ceilometer gate job builds success rate: 100% > * Repo: openstack/monasca-events-api gate job builds success rate: 83% > * Repo: openstack/monasca-ui gate job builds success rate: 88% > * Repo: openstack/monasca-specs gate job builds success rate: 100% > * Repo: openstack/monasca-grafana-datasource gate job builds success rate: > 100% > * Repo: openstack/monasca-persister gate job builds success rate: 98% > * Repo: openstack/monasca-notification gate job builds success rate: 93% > * Repo: openstack/monasca-thresh gate job builds success rate: 100% > * Repo: openstack/monasca-api gate job builds success rate: 76% > * Repo: openstack/monasca-agent gate job builds success rate: 98% > * Repo: openstack/python-monascaclient gate job builds success rate: 100% > * Repo: openstack/monasca-transform gate job builds success rate: 100% > > What's next? 
> According to the "Emerging and inactive projects" document [2], if there
> are no volunteers who step up and want to maintain this project before
> Milestone-2 of the 2024.1 cycle (week of Jan 08 2024), there will be no new
> release of Monasca in the 2024.1 cycle and the TC will discuss whether the
> project should be retired.
>
> So if you are interested in keeping Monasca alive and active, please reach
> out to the TC to discuss that ASAP. Thanks in advance.
>
> [1] https://review.opendev.org/c/openstack/governance/+/897520
> [2] https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html
>
> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20231006/138c13c0/attachment-0001.htm>
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: signature.asc
> Type: application/pgp-signature
> Size: 488 bytes
> Desc: This is a digitally signed message part.
> URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20231006/138c13c0/attachment-0001.sig>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 6 Oct 2023 10:26:49 +0200
> From: Rodolfo Alonso Hernandez
> To: openstack-discuss
> Subject: [neutron] Neutron drivers meeting cancelled
> Message-ID: <CAECr9X7LBMRJQyi1sejTCKQopW+PkVA0JUfMh4i4QRD0Z7-eRw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello Neutrinos:
>
> Due to the lack of agenda [1], today's meeting is cancelled.
>
> Have a nice weekend.
>
> [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20231006/cc1d43b0/attachment.htm>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> openstack-discuss mailing list
> openstack-discuss at lists.openstack.org
>
> ------------------------------
>
> End of openstack-discuss Digest, Vol 60, Issue 10
> *************************************************

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tobias.urdin at binero.com Mon Oct 9 13:46:35 2023
From: tobias.urdin at binero.com (Tobias Urdin)
Date: Mon, 9 Oct 2023 13:46:35 +0000
Subject: [puppet] Openstack & puppet
In-Reply-To:
References:
Message-ID:

Hello Albert,

The Puppet OpenStack project is available and can be used, but it has a very low
number of contributors and I would classify it as being in more of a maintenance state.

Takashi has been doing a lot of great work with cleanup, fixing bugs and making
sure a lot of the modules are up-to-date, so it should be usable in its current state.

I think Julia is thinking mostly about how it was used in the Red Hat product before, where
it mostly did configuration, but it can be used to deploy all the software as well. It is
correct that it doesn't handle the lifecycle of an OpenStack cluster, for example upgrades.

We'd be happy to have more contributors for the project to keep it alive :)
You can also join our IRC channel #puppet-openstack on OFTC.

Best regards
Tobias

> On 6 Oct 2023, at 16:20, Albert Shih wrote:
>
> Hi everyone,
>
> A few years ago I deployed a very small openstack cluster with puppet, but to
> learn how openstack works I built my own module (a very simple module,
> mostly with file + template + hiera).
>
> Now, after a few years, I will need to re-install everything properly for
> production.
>
> Because all my infrastructure is managed with puppet, I would not be happy
> to use something like ansible, chef, etc.
> > I also see there is something like Kolla, which uses docker/containers,
> something I like because of the capability to revert an upgrade.
>
> So I've a few questions about puppet & openstack:
>
> What is the status of "supporting" puppet? I know it's an opensource
> project so I'm perfectly aware that "tomorrow" everything can stop, but
> still, is there already any plan to leave puppet for something else?
>
> Is it a good idea to start a new installation of openstack with puppet?
>
> I see Kolla works with ansible; is there any plan to do the same with
> puppet?
>
> Regards
>
> --
> Albert SHIH
> Heure locale/Local time:
> ven. 06 oct. 2023 16:06:28 CEST

From nguyenhuukhoinw at gmail.com Mon Oct 9 14:12:24 2023
From: nguyenhuukhoinw at gmail.com (Nguyễn Hữu Khôi)
Date: Mon, 9 Oct 2023 21:12:24 +0700
Subject: [kolla-ansible][masakari] how segment works
Message-ID:

Hello guys.

I deployed masakari with kolla-ansible, but I have a question:

Can you explain segments to me? My masakari behaves the same way with one or more segments.

https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/wallaby/app-masakari.html

"With Masakari, compute nodes are grouped into failover segments. In the event of a compute node failure, that node's instances are moved onto another compute node within the same segment"

I have 2 segments:

Compute01 >> Compute03 on segment 1
Compute04 >> Compute06 on segment 2

I think that instances should only be evacuated within their own segment, but they can go cross-segment.

Please correct me if I'm wrong.

Thank you.

Nguyen Huu Khoi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nguyenhuukhoinw at gmail.com Mon Oct 9 14:27:44 2023
From: nguyenhuukhoinw at gmail.com (Nguyễn Hữu Khôi)
Date: Mon, 9 Oct 2023 21:27:44 +0700
Subject: [kolla-ansible][masakari] how segment works
In-Reply-To:
References:
Message-ID:

Hello guys.
I found this: https://review.opendev.org/c/openstack/masakari/+/825286

Do we have any ideas on this problem?

Nguyen Huu Khoi

On Mon, Oct 9, 2023 at 9:12 PM Nguyễn Hữu Khôi wrote:

> Hello guys.
>
> I deployed masakari with kolla-ansible, but I have a question:
>
> Can you explain segments to me? My masakari behaves the same way with one
> or more segments.
>
> https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/wallaby/app-masakari.html
>
> "With Masakari, compute nodes are grouped into failover segments. In the
> event of a compute node failure, that node's instances are moved onto
> another compute node within the same segment"
>
> I have 2 segments:
>
> Compute01 >> Compute03 on segment 1
> Compute04 >> Compute06 on segment 2
>
> I think that instances should only be evacuated within their own segment,
> but they can go cross-segment.
>
> Please correct me if I'm wrong.
>
> Thank you.
>
> Nguyen Huu Khoi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jay at gr-oss.io Mon Oct 9 19:23:53 2023
From: jay at gr-oss.io (Jay Faulkner)
Date: Mon, 9 Oct 2023 12:23:53 -0700
Subject: [tc] Technical Committee next weekly meeting Tuesday, October 10, 2023
Message-ID:

Hello,

This is a reminder that the weekly Technical Committee meeting is to be held Tuesday, October 10 at 1800 UTC. It will be held in #openstack-tc on OFTC. The proposed agenda is as follows:

- Roll call
- Follow up on tracked action items: https://meetings.opendev.org/meetings/tc/2023/tc.2023-10-03-18.00.html
- JayF
- Schedule a cross-project performance session at the PTG centered around apparent heavy DB usage in devstack from Keystone and Neutron; ensure sqlalchemy + oslo.db maintainers are invited.
- Appoint Rosmaita as vice chair
- Before next video meeting, write up a short document on pros/cons of moving TC video meetings to jitsi-meet.
- Knikolla
- Complete documentation for unmaintained branch policy in releases.openstack.org
- Investigate or delegate research on DB usage patterns in Keystone in devstack. Due before PTG.
- Slaweq
- To propose a patch to openstack/governance for TC consideration to mark monasca inactive
- Gate health check
- Leaderless projects (gmann)
- PTG Scheduling and agenda
- Open Discussion and Reviews

Thank you,
Jay Faulkner
TC Chair
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fkr at osb-alliance.com Tue Oct 10 06:40:40 2023
From: fkr at osb-alliance.com (Felix Kronlage-Dammers)
Date: Tue, 10 Oct 2023 08:40:40 +0200
Subject: [publiccloud-sig] Reminder - next meeting October 11th - 0700 UTC
Message-ID: <8CB663BE-AB8F-47BF-8972-F65DEDA84728@osb-alliance.com>

Hi everyone,

this Wednesday the next meeting of the public cloud sig is going to happen. This time we will meet on video again (as previously mentioned here [1]). We will meet at 0700 UTC here:

https://conf.scs.koeln:8443/OIF-public-cloud-sig

In parallel we will be on IRC in #openstack-operators as well, of course. I started an agenda (https://etherpad.opendev.org/p/publiccloud-sig-meeting); one of the items I'd like to cover is the upcoming vPTG.

I am very much looking forward to Wednesday morning! :)
felix

[1]
--
Felix Kronlage-Dammers
Product Owner IaaS & Operations Sovereign Cloud Stack
Sovereign Cloud Stack - standardized, built and operated by many
Ein Projekt der Open Source Business Alliance - Bundesverband für digitale Souveränität e.V.
Tel.: +49-30-206539-205 | Matrix: @fkronlage:matrix.org | fkr at osb-alliance.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 862 bytes
Desc: OpenPGP digital signature
URL:

From thuvh87 at gmail.com Tue Oct 10 07:41:18 2023
From: thuvh87 at gmail.com (Hoai-Thu Vuong)
Date: Tue, 10 Oct 2023 14:41:18 +0700
Subject: [kolla] change default domain
In-Reply-To:
References: <5059bebf-4137-91df-d995-e77dd42eea20@pawsey.org.au>
Message-ID:

I found some vars in kolla-ansible:

default_project_domain_name: "Default"
default_project_domain_id: "default"
default_user_domain_name: "Default"
default_user_domain_id: "default"

I think you can change these values (I haven't done the lab yet, please check it):
https://github.com/openstack/kolla-ansible/blob/f59edacf46ef07070b566def523b4ddb7c8f98ae/ansible/group_vars/all.yml#L910

On Thu, Sep 28, 2023 at 6:28 PM Vivian Rook wrote:

> Thank you for your reply!
>
> > Are you talking about the Horizon setting? We put this in globals.yml
>
> No, I mean the openstack domain, the kind that I get when I run:
> $ openstack --os-cloud=kolla-admin domain list
> +----------------------------------+------------------+---------+-------------------------------------------+
> | ID                               | Name             | Enabled | Description                               |
> +----------------------------------+------------------+---------+-------------------------------------------+
> | 3a9492b526334a418a5ce328786c376c | magnum           | True    | Owns users and projects created by magnum |
> | default                          | Default          | True    | The default domain                        |
> | ff60f58e888742cd9974cd85d6c24e54 | heat_user_domain | True    |                                           |
> +----------------------------------+------------------+---------+-------------------------------------------+
>
> In this case I would like the "default" name to be called "surface"
>
> Thank you!
>
> On Wed, Sep 27, 2023 at 9:09 PM Gregory Orange <gregory.orange at pawsey.org.au> wrote:
>
>> On 20/9/23 19:21, Vivian Rook wrote:
>> > I'm looking for a setting to change the default domain name from
>> > "default" to "kolla" Does such an option exist in globals.yml?
>>
>> Are you talking about the Horizon setting? We put this in globals.yml:
>>
>> horizon_keystone_domain_choices:
>>   pawsey: Pawsey
>>   Default: Default
>> horizon_keystone_multidomain: True
>>
>> This results in a dropdown box for Domain at the login screen. Pawsey is
>> selected by default, although a browser cookie remembers which one users
>> have most recently used.
>>
>> HTH,
>> Greg.

> --
> *Vivian Rook (They/Them)*
> Site Reliability Engineer
> Wikimedia Foundation

--
Thu.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wu.wenxiang at 99cloud.net Tue Oct 10 08:05:10 2023
From: wu.wenxiang at 99cloud.net (吴文相)
Date: Tue, 10 Oct 2023 16:05:10 +0800 (GMT+08:00)
Subject: Re: [all][elections][ptl] Project Team Lead Election Conclusion and Results
In-Reply-To:
References:
Message-ID:

Hello, Tony

I already mentioned in #openstack-tc that Skyline is not leaderless - I just managed to miss that I needed to propose myself as PTL, which was an oversight on my behalf.

Please consider this email my candidacy for PTL of Skyline for the TC to consider.

Thanks

Best Regards,
Wu Wenxiang

Original:
From: Tony Breeds
Date: 2023-09-21 07:53:25 (GMT+08:00)
To: OpenStack Discuss , openstack-announce
Subject: [all][elections][ptl] Project Team Lead Election Conclusion and Results

Thank you to the electorate, to all those who voted and to all candidates who put their name forward for Project Team Lead (PTL) in this election. A healthy, open process breeds trust in our decision making capability; thank you to all those who make this process possible.
Now for the results of the PTL election process, please join me in extending congratulations to the following PTLs: * Adjutant : Dale Smith * Barbican : Grzegorz Grasza * Blazar : Pierre Riteau * Cinder : Rajat Dhasmana * Cloudkitty : Rafael Weingartner * Designate : Michael Johnson * Freezer : ge cong * Glance : Pranali Deore * Heat : Takashi Kajinami * Horizon : Vishal Manchanda * Ironic : Jay Faulkner * Keystone : Dave Wilde * Kolla : Michal Nasiadka * Kuryr : Roman Dobosz * Magnum : Jake Yip * Manila : Carlos Silva * Masakari : sam sue * Murano : Rong Zhu * Neutron : Brian Haley * Nova : Sylvain Bauza * Octavia : Gregory Thiemonge * OpenStack Charms : Felipe Reyes * OpenStack Helm : Vladimir Kozhukalov * OpenStackAnsible : Dmitriy Rabotyagov * OpenStackSDK : Artem Goncharov * Puppet OpenStack : Takashi Kajinami * Quality Assurance : Martin Kopec * Solum : Rong Zhu * Storlets : Takashi Kajinami * Swift : Tim Burke * Tacker : Yasufumi Ogawa * Telemetry : Erno Kuvaja * Vitrage : Dmitriy Rabotyagov * Watcher : chen ke * Zaqar : Hao Wang * Zun : Hongbin Lu Elections: * OpenStack_Helm: https://civs1.civs.us/cgi-bin/results.pl?id=E_3cf498bb10adc5b8 Election process details and results are also available here: https://governance.openstack.org/election/ Thank you to all involved in the PTL election process, Yours Tony. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Oct 10 11:49:00 2023 From: zigo at debian.org (Thomas Goirand) Date: Tue, 10 Oct 2023 13:49:00 +0200 Subject: [puppet] Openstack & puppet In-Reply-To: References: Message-ID: <55575d4f-b83b-4e68-a2c0-c127155d4358@debian.org> Hi Albert, My response inline below. On 10/9/23 15:46, Tobias Urdin wrote: > Hello Albert, > > The Puppet OpenStack project is available and can be used but has a very low > amount of contributors and I would classify it being in a more maintenance state. 
I'd like to highlight here that Tobias has been doing an enormous amount of work in the puppet-openstack project, and anyone using puppet should thank him for it.

> Takashi has been doing a lot of great work with cleanup, fixing bugs and making
> sure a lot of the modules are up-to-date, so it should be usable in its current state.

I tried setting up Bobcat recently (just right after I finished the Debian packaging) and it worked. The only thing that I didn't test yet is using user tokens for cinder/nova communication, so my setup should be broken regarding Cinder (as this is required starting with Bobcat), though it should be easy to fix.

> I think Julia is thinking mostly about it being used in the RedHat product before, where
> it mostly did configuration, but it can be used to deploy all the software as well; it's
> correct that it doesn't handle the lifecycle of an OpenStack cluster, for example upgrades.

Yeah, but for a project like the one I maintain, this is easily handled by a small shell script doing the necessary apt work. More on this when we're done enhancing it (as we're trying to reduce the API downtime to a minimum).

Cheers,
Thomas Goirand (zigo)

From gmann at ghanshyammann.com Tue Oct 10 16:13:05 2023
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 10 Oct 2023 09:13:05 -0700
Subject: [all][elections][ptl] Project Team Lead Election Conclusion and Results
In-Reply-To:
References:
Message-ID: <18b1a5d042b.b46c5701114157.1486098444989677814@ghanshyammann.com>

Thanks Wu,

I have added your interest in continuing as PTL to the etherpad below:
- https://etherpad.opendev.org/p/2024.1-leaderless#L55

Feel free to propose the PTL appointment change in governance and the TC will discuss it next. Example:
https://review.opendev.org/c/openstack/governance/+/896585

Also, if you think the PTL election is a little overhead for you, then there is the DPL model available for skyline to adopt.
- https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html

-gmann

---- On Tue, 10 Oct 2023 01:05:10 -0700 Wu Wenxiang wrote ---
> Hello, Tony
> I already mentioned in #openstack-tc that Skyline is not leaderless - I just managed to miss that I needed to propose myself as PTL, which was an oversight on my behalf.
> Please consider this email my candidacy for PTL of Skyline for the TC to consider.
> Thanks
> Best Regards,
> Wu Wenxiang
>
> Original:
> From: Tony Breeds tony at bakeyournoodle.com
> Date: 2023-09-21 07:53:25 (GMT+08:00)
> To: OpenStack Discuss openstack-discuss at lists.openstack.org, openstack-announce openstack-announce at lists.openstack.org
> Subject: [all][elections][ptl] Project Team Lead Election Conclusion and Results

From gmann at ghanshyammann.com Tue Oct 10 17:14:08 2023
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 10 Oct 2023 10:14:08 -0700
Subject: [rally] Need Rally PTL/DPL for this release cycle.
Message-ID: <18b1a94e878.e896dfae118294.3729792224580369562@ghanshyammann.com>

Hi Andrey, and anyone interested in the Rally PTL role,

As you know, Rally is a leaderless project in this cycle [1]. In case you missed the PTL election, or if there is any new PTL volunteer, please reply to this email, add your name in the etherpad below, or even propose the PTL appointment in the governance model.
- https://etherpad.opendev.org/p/2024.1-leaderless

Also, think about the DPL model if elections are a little overhead for this project with a solo maintainer.
- https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html

-gmann

From gmann at ghanshyammann.com Tue Oct 10 17:18:19 2023
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 10 Oct 2023 10:18:19 -0700
Subject: [mistral] Need Mistral PTL/DPL for this release cycle.
Message-ID: <18b1a98befb.f89525c5118525.2024440836433703469@ghanshyammann.com>

Hi Axel, and anyone interested in the Mistral PTL/DPL role,

As you know, Mistral is a leaderless project in this cycle. In case you missed the PTL election, or if there is any new PTL volunteer, please reply to this email, add your name in the etherpad below, or even propose the PTL appointment in the governance model.
- https://etherpad.opendev.org/p/2024.1-leaderless

Also, think about the DPL model if elections are a little overhead for this project.
- https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html -gmann
From axel.vanzaghi at axellink.fr Wed Oct 11 07:51:58 2023 From: axel.vanzaghi at axellink.fr (Axel Vanzaghi) Date: Wed, 11 Oct 2023 07:51:58 +0000 Subject: [mistral] Need Mistral PTL/DPL for this release cycle. In-Reply-To: <18b1a98befb.f89525c5118525.2024440836433703469@ghanshyammann.com> References: <18b1a98befb.f89525c5118525.2024440836433703469@ghanshyammann.com> Message-ID: <-2I57AEhSkn0dJNo9f36KxDSPFHjTFzH0axygJRsWZg05pm4Nq10dmkxwI1r8eCYZFOUv7-JYgHTCx3A-pQ7GVzmaw9z8ANH88gGBwWqCCk=@axellink.fr>
------- Original Message ------- On Tuesday, 10 October 2023 at 7:18 PM, Ghanshyam Mann wrote: > > > Hi Axel, Anyone interested in Mistral PTL/DPL role, > > As you know, Mistral is leaderless project in this cycle, in case you missed > the PTL election or any new PTL volunteer, please reply to this email or > add your name in below etherpad, or even propose the PTL appointment > in governance model. > - https://etherpad.opendev.org/p/2024.1-leaderless > > Also, think about DPL model if election are little overhead for this project. > - https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > -gmann
Hello, Sorry I missed the election; I still volunteer as Mistral PTL if needed, however! Axel Vanzaghi
From jake.yip at ardc.edu.au Wed Oct 11 08:39:59 2023 From: jake.yip at ardc.edu.au (Jake Yip) Date: Wed, 11 Oct 2023 16:39:59 +0800 Subject: [magnum] Cancelling IRC meeting today Message-ID: <6cf58ad4-2d1e-4417-86d2-43189fea7396@ardc.edu.au>
Hi all, Unfortunately I will have to cancel today's meeting due to personal matters. Apologies for the late notice.
Regards, Jake
From rdhasman at redhat.com Wed Oct 11 11:39:35 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 11 Oct 2023 17:09:35 +0530 Subject: [cinder][stable] proposing Jon Bernard for cinder-stable-maint In-Reply-To: References: Message-ID:
This thread has been up for a week and, having heard only positive responses, I've added Jon to the stable core team. Welcome Jon to the cinder stable core group! On Wed, Oct 4, 2023 at 5:55 PM Rajat Dhasmana wrote: > Jon has been doing excellent work in managing the stable releases for the > past couple of cycles. > He will be a great addition to the stable core team. > +1 from my side! > > On Wed, Oct 4, 2023 at 4:12 AM Tony Breeds > wrote: >> On Wed, 4 Oct 2023 at 05:36, Brian Rosmaita >> wrote: >> > >> > Hello Argonauts, >> > >> > Jon Bernard has been acting as the cinder release manager for a few >> > cycles now and is familiar with the OpenStack Stable Policy [0] and the >> > cinder project backport policy [1]. I'd like to propose that he be >> > added to the cinder-stable-maint team, which will give him +2 powers on >> > the stable branches. >> >> FWIW: My vote would be to "Make it so" >> >> Yours Tony. >> >> -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rosmaita.fossdev at gmail.com Wed Oct 11 13:43:10 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 11 Oct 2023 09:43:10 -0400 Subject: [tc] dropping python 3.8 support for 2024.1 Message-ID: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> I thought about this issue some more after yesterday's TC meeting, and tried to articulate why I think keeping python 3.8 support for 2024.1 is a bad idea on the patch: https://review.opendev.org/c/openstack/governance/+/895160 I don't know if my argument there will change anyone's mind, but since a lot of people have already voted on the gerrit review, I wanted to make sure that people are at least aware of it, to wit: I just don't see that py38 support for 2024.1 makes sense. It's not a default version in Ubuntu 22.04, Debian 12, Debian 11, CentOS Stream 9, or Rocky Linux 9, which are the distros specifically called out in the current 2024.1 PTI that this proposal is patching. While we can use Ubuntu 20.04 to run unit tests, we can't run master (2024.1 development) devstack in it, so it doesn't seem to me that Ubuntu 20.04 is a distribution that we can feasibly use for meaningful 2024.1 testing. This implies, in my opinion, that there is a solid reason for dropping python 3.8 support in advance of it going EOL, as required by [0]. Looking at the Python Update Process resolution [1], python 3.8 does not meet the three criteria set out in the "Unit Tests" section: 1. it's not the latest version of Python 3 available in any distro we can feasibly use for testing 2. It's not the default in any of the Linux distros identified in the 2024.1 PTI 3. 
It isn't used to run integration tests at the beginning of the 2024.1 (Caracal) cycle. Add to that the fact that libraries are beginning to drop support [2], add further that py38 will go EOL roughly 6 months after the 2024.1 release (no more security updates); I don't see a reason to wait until a key library forces us to make a change during the development cycle. I'd prefer to do it now. [0] https://governance.openstack.org/tc/reference/pti/python.html#specific-commands [1] https://governance.openstack.org/tc/resolutions/20181024-python-update-process.html#unit-tests [2] https://review.opendev.org/c/openstack/requirements/+/884564
From jobernar at redhat.com Wed Oct 11 14:34:42 2023 From: jobernar at redhat.com (Jon Bernard) Date: Wed, 11 Oct 2023 14:34:42 +0000 Subject: Cinder Bug Report 2023-10-11 Message-ID:
Hello Argonauts, Cinder Bug Meeting Etherpad
Undecided
- Storwize SVC: Logging chap secret in DEBUG logs - Status: New
- IBM SVC: Logging error when failed to create SVC host - Status: New
- i want ask a question, but the label closed - Status: New
- Reader and member users can list and show group-type - Status: New
- Volume retype does not have any visibility into its progress - Status: New
Thanks, -- Jon
From noonedeadpunk at gmail.com Wed Oct 11 17:19:01 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 11 Oct 2023 19:19:01 +0200 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> Message-ID:
To be fair, all these arguments were also raised during the 2023.2 development cycle, with an additional one: we will not be able to drop support during a non-SLURP release without breaking upgrades/compatibility between SLURPs. A decision was taken to keep py3.8 back then with acknowledgement of that fact.
Now, when 2023.2 has been released with py3.8 support, from my perspective there is really no way it can be removed until 2024.1 without breaking someone's upgrades. Although I agree with your arguments, the TC has quite explicitly stated that deprecations should not take place in non-SLURP releases according to [1], and I believe that the reasons and background in this resolution are still valid and should be respected, despite how inconvenient it might be sometimes. Platform consumers also rely on that and plan their maintenance and further work based on promises that are being made by the community. In case a key library drops support for py3.8, I believe we will need to leverage upper-constraints to set the version back for the library to the one which supports py3.8. Either for all python versions, or solely for py3.8, like we already did for py3.6 and py3.8 back in Xena/Yoga. [1] https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html#details On Wed, Oct 11, 2023, 15:45 Brian Rosmaita wrote: > I thought about this issue some more after yesterday's TC meeting, and > tried to articulate why I think keeping python 3.8 support for 2024.1 is > a bad idea on the patch: > > https://review.opendev.org/c/openstack/governance/+/895160 > > I don't know if my argument there will change anyone's mind, but since a > lot of people have already voted on the gerrit review, I wanted to make > sure that people are at least aware of it, to wit: > > I just don't see that py38 support for 2024.1 makes sense. It's not a > default version in Ubuntu 22.04, Debian 12, Debian 11, CentOS Stream 9, > or Rocky Linux 9, which are the distros specifically called out in the > current 2024.1 PTI that this proposal is patching. > > While we can use Ubuntu 20.04 to run unit tests, we can't run master > (2024.1 development) devstack in it, so it doesn't seem to me that > Ubuntu 20.04 is a distribution that we can feasibly use for meaningful > 2024.1 testing.
This implies, in my opinion, that there is a solid > reason for dropping python 3.8 support in advance of it going EOL, as > required by [0]. > > Looking at the Python Update Process resolution [1], python 3.8 does not > meet the three criteria set out in the "Unit Tests" section: > > 1. it's not the latest version of Python 3 available in any distro we > can feasibly use for testing > 2. It's not the default in any of the Linux distros identified in the > 2024.1 PTI > 3. It isn't used to run integration tests at the beginning of the 2024.1 > (Caracal) cycle > > Add to that the fact that libraries are beginning to drop support [2], > add further that py38 will go EOL roughly 6 months after the 2024.1 > release (no more security updates), I don't see a reason to wait until a > key library forces us to make a change during the development cycle. I'd > prefer to do it now. > > [0] > > https://governance.openstack.org/tc/reference/pti/python.html#specific-commands > [1] > > https://governance.openstack.org/tc/resolutions/20181024-python-update-process.html#unit-tests > [2] https://review.opendev.org/c/openstack/requirements/+/884564 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Oct 11 17:31:54 2023 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 11 Oct 2023 10:31:54 -0700 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> Message-ID: <8e168a95-1532-4f9e-b798-fecce53b355e@app.fastmail.com> On Wed, Oct 11, 2023, at 10:19 AM, Dmitriy Rabotyagov wrote: > To be fair, all these arguments were also raised during the 2023.2 > development cycle. With additional one - we will be not able to drop > support during non-SLURP release without breaking > upgrades/compatibility between SLURPs. A decision was taken to keep > py3.8 back then with acknowledgement of that fact. 
> > Now, when 2023.2 has been released with py3.8 support, from my > perspective there is really no way it can be removed until 2024.1 without > breaking someone's upgrades. I'm not sure this is the case? 2023.1 and 2023.2 (the two releases you might upgrade from to 2024.1) both support python 3.9 and python3.10 in addition to python3.8. Your upgrade path would be to first update your runtime on 2023.x from python3.8 to 3.9/3.10 then upgrade to 2024.1. > > > > Despite I agree with your arguments, TC has quite explicitly stated > > that deprecations should not take place in non-SLURP releases according > > to [1] and I believe that reasons and background in this resolution are > > still valid and should be respected, despite how inconvenient it might > > be sometimes. As also platform consumers rely on that and plan their > > maintenance and further work based on promises that are being made by > > community. > > > > In case a key library drops support for py3.8 then I believe we will > > need to leverage upper-constraints to set the version back for the > > library to the one which supports py3.8. Either for all python > > versions, or solely for py3.8, like we already did for py3.6 and py3.8 > > back in Xena/Yoga. I think the big struggle with dropping python3.8 previously was the order of operations was backwards. Services need to drop python versions first, then libraries. This will require coordination across OpenStack, which is what we didn't have the last time we attempted this. > > > > [1] > > https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html#details
From gmann at ghanshyammann.com Wed Oct 11 18:10:29 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 11 Oct 2023 11:10:29 -0700 Subject: [mistral] Need Mistral PTL/DPL for this release cycle.
In-Reply-To: <-2I57AEhSkn0dJNo9f36KxDSPFHjTFzH0axygJRsWZg05pm4Nq10dmkxwI1r8eCYZFOUv7-JYgHTCx3A-pQ7GVzmaw9z8ANH88gGBwWqCCk=@axellink.fr> References: <18b1a98befb.f89525c5118525.2024440836433703469@ghanshyammann.com> <-2I57AEhSkn0dJNo9f36KxDSPFHjTFzH0axygJRsWZg05pm4Nq10dmkxwI1r8eCYZFOUv7-JYgHTCx3A-pQ7GVzmaw9z8ANH88gGBwWqCCk=@axellink.fr> Message-ID: <18b1feedc94.114878d7e224116.2278073573546874371@ghanshyammann.com>
---- On Wed, 11 Oct 2023 00:51:58 -0700 Axel Vanzaghi wrote --- > ------- Original Message ------- > On Tuesday, 10 October 2023 at 7:18 PM, Ghanshyam Mann gmann at ghanshyammann.com> wrote: > > > > > > > > Hi Axel, Anyone interested in Mistral PTL/DPL role, > > > > As you know, Mistral is leaderless project in this cycle, in case you missed > > the PTL election or any new PTL volunteer, please reply to this email or > > add your name in below etherpad, or even propose the PTL appointment > > in governance model. > > - https://etherpad.opendev.org/p/2024.1-leaderless > > > > Also, think about DPL model if election are little overhead for this project. > > - https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > > > -gmann > > Hello, > > Sorry I missed the election, I still volunteer as Mistral PTL if needed however !
Hi Axel, thanks for the response. I have added your interest in the etherpad - https://etherpad.opendev.org/p/2024.1-leaderless Feel free to propose the PTL appointment in the governance repo; example: https://review.opendev.org/c/openstack/governance/+/897922 -gmann > > Axel Vanzaghi > >
From gmann at ghanshyammann.com Wed Oct 11 18:30:04 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 11 Oct 2023 11:30:04 -0700 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> Message-ID: <18b2000c943.c2f420cd224884.6238447139664650904@ghanshyammann.com>
---- On Wed, 11 Oct 2023 06:43:10 -0700 Brian Rosmaita wrote --- > I thought about this issue some more after yesterday's TC meeting, and > tried to articulate why I think keeping python 3.8 support for 2024.1 is > a bad idea on the patch: > > https://review.opendev.org/c/openstack/governance/+/895160 > > I don't know if my argument there will change anyone's mind, but since a > lot of people have already voted on the gerrit review, I wanted to make > sure that people are at least aware of it, to wit: > > I just don't see that py38 support for 2024.1 makes sense. It's not a > default version in Ubuntu 22.04, Debian 12, Debian 11, CentOS Stream 9, > or Rocky Linux 9, which are the distros specifically called out in the > current 2024.1 PTI that this proposal is patching. > > While we can use Ubuntu 20.04 to run unit tests, we can't run master > (2024.1 development) devstack in it, so it doesn't seem to me that > Ubuntu 20.04 is a distribution that we can feasibly use for meaningful > 2024.1 testing. This implies, in my opinion, that there is a solid > reason for dropping python 3.8 support in advance of it going EOL, as > required by [0]. > > Looking at the Python Update Process resolution [1], python 3.8 does not > meet the three criteria set out in the "Unit Tests" section: > > 1.
it's not the latest version of Python 3 available in any distro we > can feasibly use for testing > 2. It's not the default in any of the Linux distros identified in the > 2024.1 PTI > 3. It isn't used to run integration tests at the beginning of the 2024.1 > (Caracal) cycle > > Add to that the fact that libraries are beginning to drop support [2], > add further that py38 will go EOL roughly 6 months after the 2024.1 > release (no more security updates), I don't see a reason to wait until a > key library forces us to make a change during the development cycle. I'd > prefer to do it now. > But this is what we agreed to in the policy change below: keep the python min version as long as we can. - https://review.opendev.org/c/openstack/governance/+/882154 Again, the expectation for keeping/testing python3.8 is very low: just run the unit or functional tests so that we make sure we do not break installation or cause code errors on python3.8. This is not very costly for upstream but a good help for users on older python. I agree on the point that we should drop it if we are not able to test it because it is not available in our supported distros (I think Focal will continue having it) or an external dep/lib hard-breaks us. But we do not have that situation yet, and I will again suggest we deal with it once it happens.
-gmann > [0] > https://governance.openstack.org/tc/reference/pti/python.html#specific-commands > [1] > https://governance.openstack.org/tc/resolutions/20181024-python-update-process.html#unit-tests > [2] https://review.opendev.org/c/openstack/requirements/+/884564 > > From gmann at ghanshyammann.com Wed Oct 11 18:31:51 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 11 Oct 2023 11:31:51 -0700 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: <8e168a95-1532-4f9e-b798-fecce53b355e@app.fastmail.com> References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <8e168a95-1532-4f9e-b798-fecce53b355e@app.fastmail.com> Message-ID: <18b20026a5c.10db8d203224951.7014464526134629656@ghanshyammann.com> ---- On Wed, 11 Oct 2023 10:31:54 -0700 Clark Boylan wrote --- > On Wed, Oct 11, 2023, at 10:19 AM, Dmitriy Rabotyagov wrote: > > To be fair, all these arguments were also raised during the 2023.2 > > development cycle. With additional one - we will be not able to drop > > support during non-SLURP release without breaking > > upgrades/compatibility between SLURPs. A decision was taken to keep > > py3.8 back then with acknowledgement of that fact. > > > > Now, when 2023.2 has been released with py3.8 support, from my > > perspective there is really no way it can be removed until 2024.1 > > without breaking someone's upgrades. > > I'm not sure this is the case? 2023.1 and 2023.2 (the two releases you might upgrade from to 2024.1) both support python 3.9 and python3.10 in addition to python3.8. Your upgrade path would be to first update your runtime on 2023.x from python3.8 to 3.9/3.10 then upgrade to 2024.1. > > > > > Despite I agree with your arguments, TC has quite explicitly stated > > that deprecations should not take place in non-SLURP releases according > > to [1] and I believe that reasons and background in this resolution are > > still valid and should be respected, despite how inconvenient it might > > be sometimes. 
As also platform consumers rely on that and plan their > > maintenance and further work based on promises that are being made by > > community. > > > > In case a key library drops support for py3.8 then I believe we will > > need to leverage upper-constraints to set the version back for the > > library to the one which supports py3.8. Either for all python > > versions, or solely for py3.8, like we already did for py3.6 and py3.8 > > back in Xena/Yoga. > > I think the big struggle with dropping python3.8 previously was the order of operates was backwards. Services need to drop python versions first, then libraries. This will require coordination across openstack which is what we didn't have the last time we attempted this. Yes, and that is why I think this work needs more coordination, and a community-wide goal is one way to do that, like we do for any distro version upgrade in our CI. -gmann > > > > > [1] > > https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html#details > >
From smooney at redhat.com Wed Oct 11 19:07:58 2023 From: smooney at redhat.com (smooney at redhat.com) Date: Wed, 11 Oct 2023 20:07:58 +0100 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> Message-ID: <31cacfcbd9120c87e56efe0f7507f9250d8de7c3.camel@redhat.com>
On Wed, 2023-10-11 at 19:19 +0200, Dmitriy Rabotyagov wrote: > To be fair, all these arguments were also raised during the 2023.2 > development cycle. With additional one - we will be not able to drop > support during non-SLURP release without breaking upgrades/compatibility > between SLURPs. A decision was taken to keep py3.8 back then with > acknowledgement of that fact. > > Now, when 2023.2 has been released with py3.8 support, from my > perspective there is really no way it can be removed until 2024.1 without > breaking someone's upgrades.
> > Despite I agree with your arguments, TC has quite explicitly stated that > deprecations should not take place in non-SLURP releases according to [1] Deprecations are allowed in non-SLURPs; we just can't remove support for a deprecation that has not been released in a SLURP. Ignoring that for a moment, the deprecation policy only applies to application features; we have never applied it to runtimes or minimum python versions. > and I believe that reasons and background in this resolution are still > valid and should be respected, despite how inconvenient it might be > sometimes. As also platform consumers rely on that and plan their > maintenance and further work based on promises that are being made by > community. > > In case a key library drops support for py3.8 then I believe we will need > to leverage upper-constraints to set the version back for the library to > the one which supports py3.8. Either for all python versions, or solely for > py3.8, like we already did for py3.6 and py3.8 back in Xena/Yoga. > > [1] > https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html#details > > On Wed, Oct 11, 2023, 15:45 Brian Rosmaita > wrote: > > I thought about this issue some more after yesterday's TC meeting, and > > tried to articulate why I think keeping python 3.8 support for 2024.1 is > > a bad idea on the patch: > > > > https://review.opendev.org/c/openstack/governance/+/895160 > > > > I don't know if my argument there will change anyone's mind, but since a > > lot of people have already voted on the gerrit review, I wanted to make > > sure that people are at least aware of it, to wit: > > > > I just don't see that py38 support for 2024.1 makes sense. It's not a > > default version in Ubuntu 22.04, Debian 12, Debian 11, CentOS Stream 9, > > or Rocky Linux 9, which are the distros specifically called out in the > > current 2024.1 PTI that this proposal is patching.
> > > > While we can use Ubuntu 20.04 to run unit tests, we can't run master > > (2024.1 development) devstack in it, so it doesn't seem to me that > > Ubuntu 20.04 is a distribution that we can feasibly use for meaningful > > 2024.1 testing. This implies, in my opinion, that there is a solid > > reason for dropping python 3.8 support in advance of it going EOL, as > > required by [0]. > > > > Looking at the Python Update Process resolution [1], python 3.8 does not > > meet the three criteria set out in the "Unit Tests" section: > > > > 1. it's not the latest version of Python 3 available in any distro we > > can feasibly use for testing > > 2. It's not the default in any of the Linux distros identified in the > > 2024.1 PTI > > 3. It isn't used to run integration tests at the beginning of the 2024.1 > > (Caracal) cycle > > > > Add to that the fact that libraries are beginning to drop support [2], > > add further that py38 will go EOL roughly 6 months after the 2024.1 > > release (no more security updates), I don't see a reason to wait until a > > key library forces us to make a change during the development cycle. I'd > > prefer to do it now. 
> > > > [0] > > > > https://governance.openstack.org/tc/reference/pti/python.html#specific-commands > > [1] > > > > https://governance.openstack.org/tc/resolutions/20181024-python-update-process.html#unit-tests > > [2] https://review.opendev.org/c/openstack/requirements/+/884564 > > > >
From jay at gr-oss.io Wed Oct 11 19:40:52 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 11 Oct 2023 12:40:52 -0700 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: <18b2000c943.c2f420cd224884.6238447139664650904@ghanshyammann.com> References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <18b2000c943.c2f420cd224884.6238447139664650904@ghanshyammann.com> Message-ID:
> > > Again, this is very low expectation on keeping/testing python3.8 which is > to run the unit or functional tests so that we make sure we do not break > installation or code error for python3.8. This is not very costly for > upstream > but a good help for users on older python. > I'm going to be honest; I find that more to be a reason to drop it, not a reason to keep it. Taking on the downside of being stuck to python 3.8-supporting libraries without even gaining the upside of being able to know that you can run a fully tested OpenStack on Python 3.8 doesn't seem to be a good value to me. Thanks, Jay Faulkner P.S. Apologies to those who got this twice; I didn't include the list the first time. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From noonedeadpunk at gmail.com Wed Oct 11 20:41:26 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 11 Oct 2023 22:41:26 +0200 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: <31cacfcbd9120c87e56efe0f7507f9250d8de7c3.camel@redhat.com> References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <31cacfcbd9120c87e56efe0f7507f9250d8de7c3.camel@redhat.com> Message-ID:
Deprecations are allowed in non-SLURPs; we just can't remove support for a deprecation that has not been released in a SLURP.
Yes, thanks for correcting me; that's eventually what I meant, though I picked misleading words to express it. Ignoring that for a moment, the deprecation policy only applies to application features; we have never applied it to runtimes or minimum python versions. I'm not sure that's specified in the resolution? To be frank, dropping an application feature is way less of a deal breaker than dropping a python version, imo. I would compare it more with bumping the min libvirt version (which is not supported by some platforms). > Your upgrade path would be to first update your runtime on 2023.x from python3.8 to 3.9/3.10 then upgrade to 2024.1. Sorry, I can't agree with you here, Clark. With that we could also drop platforms as well and tell users to upgrade the OS in between SLURP releases. I know I'm exaggerating a bit here, but it's kinda close. A simple example from operations life: we have to schedule all planned maintenance 6 months before doing it. And if our plan involved, for example, an openstack upgrade to 2024.1, but then, out of the blue, despite a decision that was just taken by the TC, I see that I need to do some OS upgrades before that, it would break plans quite dramatically. This would put us in a position where, by the time of the next possible planned maintenance, the OpenStack version I am using (and was supposed to have been upgraded 6 months ago) is already unmaintained. I am not saying that the py3.8 drop is affecting us specifically in this way (dropping a platform would, though), but just giving a perspective that such decisions might result in quite some overhead for end users, especially when they in a way contradict existing TC decisions. I'm not even saying that such things can easily eat into the time available for upstream contributions... But also, there should be trust in TC decisions from end users, so that they know these can't change overnight and can be relied on. Otherwise it would be pretty hard to convince anyone that you can rely on OpenStack as a project.
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From cboylan at sapwetik.org Wed Oct 11 20:50:00 2023 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 11 Oct 2023 13:50:00 -0700 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <31cacfcbd9120c87e56efe0f7507f9250d8de7c3.camel@redhat.com> Message-ID: <54025d95-8f38-4a52-bf15-05598c2e5b17@app.fastmail.com>
On Wed, Oct 11, 2023, at 1:41 PM, Dmitriy Rabotyagov wrote: >> >> deprecateion are allowed in non slurps we just cant remove supprot for a depreaction >> that has not been release in slurp. > > Yes, thanks for correcting me, that's eventually what I meant, though > picked misguiding words to the express that. > > >> ingoring that for a moment the deprecation poilcy >> only applies to aplciation feature we have never applied it to runtimes or minium python >> version. > > I'm not sure it's specified in resolution? To be frank, dropping an > application feature is way less deal breaker then dropping python > version, imo. I would compare it more with bumping min libvirt version > then (which is not supported by some platform). > >> Your upgrade path would be to first update your runtime on 2023.x from python3.8 to 3.9/3.10 then upgrade to 2024.1. > > Sorry, I can't agree here with you Clark. With that we can also drop > platforms as well and tell users to upgrade OS in between of SLURP > releases. > > I know I m a bit exaggerating here, but it's kinda close. I think users needing to catch up on external dependencies at the boundaries between slurp upgrades is the expectation, and what I described takes that into account. Basically, for a release X, consider any releases A, B, C that a user might be able to directly upgrade to X from, and make that possible. In this case we have that through overlapping python versions on A and B, between old and new, allowing you to get to the new runtime on X.
It is more than an exaggeration: it just isn't realistic to support anything else in my opinion. You'll be stuck supporting ancient everything if you don't take a position like this. > > Simple example of operations life - we have to schedule all planned > maintenances in a 6month before doing them. And if our plan was, for > example, involving openstack upgrade to 2024.1, but then, from the blue > sky, despite decision that was just taken by TC, I see that I need to > do some OS upgrades before that, it would break plans quite > dramatically. This would put into position, that by the time of the > next possible planned maintenance OpenStack version I am using (and was > supposed to be upgraded 6 month ago) is already unmaintained. > > I am not saying that py3.8 drop is affecting us specifically in this > way (dropping a platform would though), but just giving a perspective > that such decisions might result in quite some overhead for end users, > especially when they in a way contradict with existing TC decisions. > I'm not even saying that such thing can easily deduct available time > for upstream contributions... > > But also there should be a trust in TC decisions from end users, so > that they know these can't change overnight and can be relied on. As > otherwise it would be pretty much hard to convince enyone, that you can > rely on OpenStack as a project. We are one week post 2023.2 release. I think now is exactly the time to figure this stuff out for 2024.1. Yes, it could be done earlier, but I don't think it is too late to make decisions that allow the software to be sustainable into the future. 
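[Editor's note: the upper-constraints pinning discussed earlier in the thread is expressed with PEP 508 environment markers in the requirements repository's upper-constraints.txt, so pip applies a different pin per interpreter. A hypothetical sketch of such entries follows; the package name and version pins are made up for illustration, not real OpenStack pins:]

```
# Hypothetical upper-constraints.txt fragment. pip honours the environment
# marker after the semicolon, so py3.8 stays on the last compatible release
# while newer interpreters get the current one.
somelib===2.4.0;python_version>='3.9'
somelib===1.9.0;python_version=='3.8'
```

[Consumers pick these up through the usual `pip install -c upper-constraints.txt ...` flow; this mirrors the per-interpreter pins Dmitriy mentions having used for py3.6/py3.8 back in Xena/Yoga.]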
From noonedeadpunk at gmail.com Wed Oct 11 21:05:24 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 11 Oct 2023 23:05:24 +0200 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: <54025d95-8f38-4a52-bf15-05598c2e5b17@app.fastmail.com> References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <31cacfcbd9120c87e56efe0f7507f9250d8de7c3.camel@redhat.com> <54025d95-8f38-4a52-bf15-05598c2e5b17@app.fastmail.com> Message-ID: 

> It is more than an exaggeration: it just isn't realistic to support
> anything else in my opinion. You'll be stuck supporting ancient everything
> if you don't take a position like this.

Then I'd say the SLURP approach should be cancelled, as I'm no longer sure what real-user situation it is solving, and whether 1 year is really such a long period that it makes everything "ancient".

Because, as I said, removal of py3.8 was raised at the beginning of 2023.2 (and I bet everyone recalls the mess we got into because of that), when we _really_ could have dropped it, just by handling it in a better way. But now, in a non-SLURP release, doing so feels pretty much wrong to me.

And again, I am not protecting py3.8 specifically, but I am standing up for the expectations we set by introducing SLURPs, which IMO was one of the biggest deals for operators lately. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cboylan at sapwetik.org Wed Oct 11 23:23:16 2023 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 11 Oct 2023 16:23:16 -0700 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <31cacfcbd9120c87e56efe0f7507f9250d8de7c3.camel@redhat.com> <54025d95-8f38-4a52-bf15-05598c2e5b17@app.fastmail.com> Message-ID: <2c4ab947-0163-4b6e-8de4-4d508c49cc71@app.fastmail.com> On Wed, Oct 11, 2023, at 2:05 PM, Dmitriy Rabotyagov wrote: >> >> >> It is more than an exaggeration: it just isn't realistic to support anything else in my opinion. You'll be stuck supporting ancient everything if you don't take a position like this. > > Then I'd say that SLURP approach should be cancelled as I'm not sure > any more what real-users situation it is solving then. > And if 1 year is a too long period which makes everything "ancient". It isn't one year. Python 3.8 has been out since October 2019, was part of the Ubuntu Focal release in April 2020, was first supported in OpenStack Victoria (October 2020), and if kept as part of the 2024.1 release would be kept alive in OpenStack until approximately October 2025. Python 3.8 will be EOL'd by its maintainers in ~October 2024. We have seen that when python releases become EOL'd many libraries stop supporting that release. This potentially leads to gaps in security and general bugfixing. I think giving users a clear signal of when they should update runtimes before that runtime becomes problematic is a good thing. > > Because as I said, removal of py3.8 had been raised at beginning of > 2023.2 (and I bet everyone recall the mess we got into because of > that), when we _really_ could drop it, but just by handling that in a > better way. But now in non-SLURP I feel pretty much wrong in doing so. This is the part I do not understand. What is the functional difference for operators going through an upgrade process if we drop it in the .1 release vs the .2 release? 
Let's draw up the resulting upgrade paths.

If 2023.2 had dropped python3.8:

* Run 2023.1 on python3.8
* Convert 2023.1 deployment to python3.10
* Upgrade 2023.1 to 2023.2 or 2024.1

If 2024.1 drops python3.8 support:

* Run 2023.1 on python3.8
* Optionally upgrade deployment to 2023.2
* Convert 2023.x to python3.10
* Upgrade to 2024.1

There isn't a huge difference here. And in both cases users can still skip an OpenStack version in the upgrade process if they choose to do so. The only real difference is if we want people to continue running 2024.1 on python3.8 before converting to probably python3.11 or python3.12 at that point. I don't think that is the sort of signal we want to give because deployments in that situation are very likely to run into struggles with library updates.

> And again, I am not protecting py3.8 specifically, but I am standing
> for expectations that we set by introducing SLURPs, which IMO was one
> of the biggest deals for operators lately.

From skaplons at redhat.com Thu Oct 12 07:42:18 2023 From: skaplons at redhat.com (Sławek Kapłoński) Date: Thu, 12 Oct 2023 09:42:18 +0200 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: <2c4ab947-0163-4b6e-8de4-4d508c49cc71@app.fastmail.com> References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <2c4ab947-0163-4b6e-8de4-4d508c49cc71@app.fastmail.com> Message-ID: <5980355.lOV4Wx5bFT@p1gen4> 

Hi,

On Thursday, 12 October 2023 at 01:23:16 CEST, Clark Boylan wrote:
> On Wed, Oct 11, 2023, at 2:05 PM, Dmitriy Rabotyagov wrote:
> > > It is more than an exaggeration: it just isn't realistic to support anything else in my opinion. You'll be stuck supporting ancient everything if you don't take a position like this.
> >
> > Then I'd say that SLURP approach should be cancelled as I'm not sure
> > any more what real-users situation it is solving then.
> > And if 1 year is a too long period which makes everything "ancient".
>
> It isn't one year. 
Python 3.8 has been out since October 2019, was part of the Ubuntu Focal release in April 2020, was first supported in OpenStack Victoria (October 2020), and if kept as part of the 2024.1 release would be kept alive in OpenStack until approximately October 2025. > > Python 3.8 will be EOL'd by its maintainers in ~October 2024. We have seen that when python releases become EOL'd many libraries stop supporting that release. This potentially leads to gaps in security and general bugfixing. I think giving users a clear signal of when they should update runtimes before that runtime becomes problematic is a good thing. > > > > > Because as I said, removal of py3.8 had been raised at beginning of > > 2023.2 (and I bet everyone recall the mess we got into because of > > that), when we _really_ could drop it, but just by handling that in a > > better way. But now in non-SLURP I feel pretty much wrong in doing so. > > This is the part I do not understand. What is the functional difference for operators going through an upgrade process if we drop it in the .1 release vs the .2 release? Let's draw up the resulting upgrade paths. > > If 2023.2 had dropped python3.8: > > * Run 2023.1 on python3.8 > * Convert 2023.1 deployment to python3.10 > * Upgrade 2023.1 to 2023.2 or 2024.1 > > If 2024.1 drops python3.8 support: > > * Run 2023.1 on python3.8 > * Optionally upgrade deployment to 2023.2 > * Convert 2023.x to python3.10 > * Upgrade to 2024.1 > > There isn't a huge difference here. And in both cases users can still skip an OpenStack version in the upgrade process if they choose to do so. The only real difference is if we want people to continue running 2024.1 on python3.8 before converting to probably python3.11 or python3.12 at that point. I don't think that is the sort of signal we want to give because deployments in that situation are very likely to run into struggles with library updates. That's exactly my understanding. I don't see big difference there too. 
> > > > > And again, I am not protecting py3.8 specifically, but I am standing > > for expectations that we set by introducing SLURPs, which IMO was one > > of the biggest deals for operators lately. > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From noonedeadpunk at gmail.com Thu Oct 12 09:35:24 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Thu, 12 Oct 2023 11:35:24 +0200 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: <5980355.lOV4Wx5bFT@p1gen4> References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <2c4ab947-0163-4b6e-8de4-4d508c49cc71@app.fastmail.com> <5980355.lOV4Wx5bFT@p1gen4> Message-ID: > It isn't one year. We could drop things in April 2023 (for 2023.2, which we didn't do), or we can drop them in April 2024 (for 2024.2), which is roughly 1 year? I think I was talking more about ability to strategically plan such removal so that they were not matching non-SLURP releases. > Python 3.8 will be EOL'd by its maintainers in ~October 2024. We have seen that when python releases become EOL'd many libraries stop supporting that release. This potentially leads to gaps in security and general bugfixing. Well, while I do agree with concerns here, I think it's missing some important notes: 1. Py3.8. is going to be supported by Ubuntu 20.04 until April 2025 at worst, and then there's also ESM till 2030. I bet RHEL has smth like that as well. So security backports for python packages and python itself is not really our pain. 2. We still can test things quite consistently, since we have all our dependencies maintained in upper-constraints.. Basically due to U-C I can build and install Ocata as of today with Py2.7 (I'm not saying it's good or bad, just stating the fact). 
Also, the proposal is quite clear about minimizing testing to the bare minimum, i.e. unit tests, so I don't see how it's a deal breaker here. And it's in fact an operator's responsibility to ensure their environments are secure enough.

> If 2023.2 had dropped python3.8:
>
> * Run 2023.1 on python3.8
> * Convert 2023.1 deployment to python3.10
> * Upgrade 2023.1 to 2023.2 or 2024.1

Yes, that is right. The important thing here is that it would be expected. I would know in advance (with a year in my pocket) that in order to upgrade to 2024.1 or to 2023.2 I need to upgrade Python first, and that is totally fine. Such a thing can be planned, tested and executed in 2 separate steps.

> If 2024.1 drops python3.8 support:
>
> * Run 2023.1 on python3.8
> * Optionally upgrade deployment to 2023.2
> * Convert 2023.x to python3.10
> * Upgrade to 2024.1

And here it's not precisely correct, because with that, the upgrade to 2023.2 is not _really_ optional. And such an upgrade (with a temporary version in between) is really a nightmare from an operational perspective:
* Firstly, it would take 3x more time to get the upgrade done (as you're in fact doing 3 upgrades)
* Secondly, you need to somehow explain to the users that the API versions (or microversions) they get in between are not something they should get used to, because they will change again really soon
* Should you also upload new images for magnum/amphora/trove and fail over everything for this intermediate upgrade?
* Way more time needs to be spent on testing such an upgrade, as there's another release that now needs to be handled
* Last, but not least, each upgrade still brings some disturbance (e.g. a heat-engine restart might leave stacks broken), and now you need to do that twice

So eventually, with all that, you come to the same flow as you described if 2023.2 had dropped python3.8, meaning you should upgrade your 2023.1 deployment to python3.10 and then proceed to 2024.1. Except:

1. 
You relied on the TC's decision that things will not be removed in a non-SLURP, so you invested your time into 2024.1 upgrade preparation (and maybe development of features for 2024.1). Now you need to scrap all your upgrade (or contribution) plans and plan a Python upgrade internally instead, as this basically becomes a prerequisite.
2. A Python upgrade for operations is not only about OpenStack, actually, as you also need to adapt your tooling, scripts and plenty of other stuff to work against the new Python version.
3. If you've already notified end users to expect the new features that 2024.1 will bring to them in 6 months, you have to disappoint them now, since the upgrade won't happen in that timeline, because you need to deal with other things first.

> I don't see a strong relationship with SLURP or non-SLURP

In the motivation part for SLURP there's this: "Upgrades will be supported between 'SLURP' releases, in addition to between adjacent major releases (as they are today)." IMO, by removing py38 now we're breaking that. If you're on 2023.1 and running py3.8 you can't just jump to 2024.1, which means that this upgrade path is not _really_ supported. Or we really need to clarify what we mean by "supported upgrades". I see how I can be wrong in that assumption and that there are arguments against this vision, but I assume this perspective has a right to exist as well.

Again: I really don't care about py3.8, as we have not been running it since the upgrade to 2023.1; we switched to py3.10 right after it. I'm more fighting for consistency between our decisions and the actions we take afterwards, so that any OpenStack user or operator can rely on decisions that were taken and promoted, and so that OpenStack is perceived as a solid platform.

On Thu, 12 Oct 2023 at 
09:42, S?awek Kap?o?ski : > > Hi, > > Dnia czwartek, 12 pa?dziernika 2023 01:23:16 CEST Clark Boylan pisze: > > On Wed, Oct 11, 2023, at 2:05 PM, Dmitriy Rabotyagov wrote: > > >> > > >> > > >> It is more than an exaggeration: it just isn't realistic to support anything else in my opinion. You'll be stuck supporting ancient everything if you don't take a position like this. > > > > > > Then I'd say that SLURP approach should be cancelled as I'm not sure > > > any more what real-users situation it is solving then. > > > And if 1 year is a too long period which makes everything "ancient". > > > > It isn't one year. Python 3.8 has been out since October 2019, was part of the Ubuntu Focal release in April 2020, was first supported in OpenStack Victoria (October 2020), and if kept as part of the 2024.1 release would be kept alive in OpenStack until approximately October 2025. > > > > Python 3.8 will be EOL'd by its maintainers in ~October 2024. We have seen that when python releases become EOL'd many libraries stop supporting that release. This potentially leads to gaps in security and general bugfixing. I think giving users a clear signal of when they should update runtimes before that runtime becomes problematic is a good thing. > > > > > > > > Because as I said, removal of py3.8 had been raised at beginning of > > > 2023.2 (and I bet everyone recall the mess we got into because of > > > that), when we _really_ could drop it, but just by handling that in a > > > better way. But now in non-SLURP I feel pretty much wrong in doing so. > > > > This is the part I do not understand. What is the functional difference for operators going through an upgrade process if we drop it in the .1 release vs the .2 release? Let's draw up the resulting upgrade paths. 
> > > > If 2023.2 had dropped python3.8: > > > > * Run 2023.1 on python3.8 > > * Convert 2023.1 deployment to python3.10 > > * Upgrade 2023.1 to 2023.2 or 2024.1 > > > > If 2024.1 drops python3.8 support: > > > > * Run 2023.1 on python3.8 > > * Optionally upgrade deployment to 2023.2 > > * Convert 2023.x to python3.10 > > * Upgrade to 2024.1 > > > > There isn't a huge difference here. And in both cases users can still skip an OpenStack version in the upgrade process if they choose to do so. The only real difference is if we want people to continue running 2024.1 on python3.8 before converting to probably python3.11 or python3.12 at that point. I don't think that is the sort of signal we want to give because deployments in that situation are very likely to run into struggles with library updates. > > That's exactly my understanding. I don't see big difference there too. > > > > > > > > > And again, I am not protecting py3.8 specifically, but I am standing > > > for expectations that we set by introducing SLURPs, which IMO was one > > > of the biggest deals for operators lately. > > > > > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From pdeore at redhat.com Thu Oct 12 09:59:38 2023 From: pdeore at redhat.com (Pranali Deore) Date: Thu, 12 Oct 2023 15:29:38 +0530 Subject: [Glance] Cancelling IRC Meeting Today Message-ID: Hi All, Cancelling today's glance meeting since there is nothing much on agenda & I will also not be around as i've Dr. appointment around the same time. See you next week ! Thanks & Regards, Pranali Deore -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From smooney at redhat.com Thu Oct 12 11:19:38 2023 From: smooney at redhat.com (smooney at redhat.com) Date: Thu, 12 Oct 2023 12:19:38 +0100 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <2c4ab947-0163-4b6e-8de4-4d508c49cc71@app.fastmail.com> <5980355.lOV4Wx5bFT@p1gen4> Message-ID: <0543f4536d1f4c16006b3f64ade0cde78836e96c.camel@redhat.com> 

On Thu, 2023-10-12 at 11:35 +0200, Dmitriy Rabotyagov wrote:
> > It isn't one year.
>
> We could drop things in April 2023 (for 2023.2, which we didn't do),

Actually, we did: we dropped support for py38 and then re-added it because some projects were not ready to drop it. I strongly believe that was a mistake. To be clear, the 2023.1 release was intended to be the final release to support Python 3.8 and Ubuntu 20.04: https://github.com/openstack/governance/blob/master/reference/runtimes/2023.1.rst It was the release where Ubuntu 20.04 was supported for one additional release for smooth upgrades. When the 2023.2 runtime was selected, we removed support for Ubuntu 20.04 because all users of 2023.2 were expected to have upgraded to Ubuntu 22.04 prior to upgrading OpenStack to 2023.2. https://github.com/openstack/governance/commit/6ecb9c7210b152bc4ad36d1e830f4ed6008c0198 is the original 2023.2 testing runtime, and then in May Python 3.8 was re-added: https://github.com/openstack/governance/commit/05ac02c2acd6117092c628fb5b04a17c640c415e I really don't think that was the correct decision, especially after we had already started removing support for py38.

> or we can drop them in April 2024 (for 2024.2), which is roughly 1
> year? I think I was talking more about ability to strategically plan
> such removal so that they were not matching non-SLURP releases.

If we don't drop support in 2024.1, then I think we must drop it in 2024.2. I don't think it would be reasonable to ask projects to continue to support it, and I honestly don't think it is a reasonable ask for 2024.1 or 2023.2; I was very unhappy with the late re-addition of it in May after it was finally removed.

> > Python 3.8 will be EOL'd by its maintainers in ~October 2024. We have seen that when python releases become EOL'd
> > many libraries stop supporting that release. This potentially leads to gaps in security and general bugfixing.
>
> Well, while I do agree with concerns here, I think it's missing some
> important notes:
> 1. Py3.8 is going to be supported by Ubuntu 20.04 until April 2025 at
> worst, and then there's also ESM till 2030. I bet RHEL has smth like
> that as well. So security backports for python packages and python
> itself is not really our pain.

We do not use packages from the distro in our testing outside the standard library and interpreter, so it is a problem for the libs we use from PyPI.

> 2. We still can test things quite consistently, since we have all our
> dependencies maintained in upper-constraints. Basically due to U-C I
> can build and install Ocata as of today with Py2.7 (I'm not saying
> it's good or bad, just stating the fact)

Not entirely. In CI or a container, yes, but some of our deps will not compile on newer operating systems; e.g. to run nova's pep8 tox env on Wallaby I need to create a VM/container to do that because it won't actually work on a modern operating system. The system packages that we leverage are too new and cause issues.

> Also the proposal is quite
> clear to minimize testing to bare minimal, i.e. unit tests, so I don't
> see how it's a deal breaker here. And it's in fact an operator's
> responsibility to ensure their environments are secure enough. 
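The upper-constraints mechanism discussed above works because each pin can carry a PEP 508 environment marker, so different interpreters resolve to different, still-tested versions of a dependency. Below is a stdlib-only sketch of how such a marker selects a pin; real tooling uses pip and the 'packaging' library, and the package name, versions and constraint lines here are invented for illustration:

```python
import re

# Stdlib-only sketch of evaluating the kind of
# "pkg===X.Y.Z; python_version=='3.8'" markers found in an
# upper-constraints file. Real tooling uses pip and the 'packaging'
# library; the constraint lines below are invented.
CONSTRAINTS = [
    "example-lib===2.4.0; python_version>='3.9'",
    "example-lib===1.9.9; python_version=='3.8'",
]


def _as_tuple(version: str) -> tuple:
    # Compare versions numerically, not lexically ("3.10" > "3.9").
    return tuple(int(part) for part in version.split("."))


def pin_for(python_version: str):
    """Return the pinned version whose marker matches python_version."""
    for line in CONSTRAINTS:
        spec, _, marker = line.partition(";")
        match = re.search(
            r"python_version\s*(==|>=|<=|<|>)\s*'([\d.]+)'", marker)
        if match is None:
            return spec.split("===")[1].strip()  # unconditional pin
        op, ref = match.group(1), _as_tuple(match.group(2))
        cur = _as_tuple(python_version)
        satisfied = {"==": cur == ref, ">=": cur >= ref,
                     "<=": cur <= ref, "<": cur < ref, ">": cur > ref}[op]
        if satisfied:
            return spec.split("===")[1].strip()
    return None
```

Note the tuple comparison: it avoids the classic pitfall where the string "3.10" compares as smaller than "3.9", which is also why real marker evaluation is version-aware rather than plain string comparison.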
> > If 2023.2 had dropped python3.8:
> >
> > * Run 2023.1 on python3.8
> > * Convert 2023.1 deployment to python3.10
> > * Upgrade 2023.1 to 2023.2 or 2024.1
>
> Yes, that is right. Important thing here is that it would be expected.
> Like I know in advance (with a year in my pocket), that in order to
> upgrade to 2024.1 or to 2023.2 I need to upgrade python first, and
> that is totally fine. So such thing can be planned, tested and
> executed in 2 separate steps.

You have already been given that notice, as we notified everyone that Ubuntu 20.04 was last supported in 2023.1, and all other supported distros in that release ship Python 3.9+: Debian 11 and CentOS 9 Stream both already use Python 3.9, and Ubuntu 22.04 ships 3.10. So this is not an issue for the removal of Python 3.8. I don't know of any supported distribution where OpenStack 2023.2+ was shipped on py38, so keeping support in 2023.2 or 2024.1 seems academic to me. Do we know of any operator or deployment that would actually deploy this way?

> > If 2024.1 drops python3.8 support:
> >
> > * Run 2023.1 on python3.8
> > * Optionally upgrade deployment to 2023.2
> > * Convert 2023.x to python3.10
> > * Upgrade to 2024.1

This was not intended to be a supported upgrade path. On Yoga and Zed the only supported runtime that had Python 3.8 was Ubuntu 20.04: https://github.com/openstack/governance/blob/master/reference/runtimes/yoga.rst https://github.com/openstack/governance/blob/master/reference/runtimes/zed.rst So when you upgraded to 2023.1 (the first SLURP release) you did that on 20.04; then, for smooth upgrades, we supported both Ubuntu 20.04 and 22.04 in Antelope (2023.1). So the expectation was: if you're on Ubuntu 20.04 and 2023.1, you would upgrade to 22.04 before doing any further OpenStack upgrades. I'll note that Canonical has supported OpenStack on Ubuntu 22.04 since Yoga https://ubuntu.com/openstack/docs/supported-versions so they expected people to upgrade from 20.04 to 22.04 after upgrading to Yoga on 18.04. We dropped support for it well after any operator should still have been using it, if they installed OpenStack from packages.

What is also relevant to this conversation: OpenStack 2024.1 will be supported on Ubuntu 22.04 and 24.04 by Canonical for smooth upgrades, and we will additionally support 2024.2 on 22.04 for smooth upgrades. So to go from 2023.1 to 2023.2 or 2024.1, the upgrade path is OpenStack 2023.1 to OpenStack 2023.1 on Ubuntu 22.04 and Python 3.10; then and only then can you upgrade to 2023.2 or 2024.1. Note that nova does not support the libvirt/qemu shipped on Ubuntu 20.04 in 2023.2; we declared our next minimum version in Wallaby and deferred the bump twice, as it was originally planned for Z, so all users of the 2023.1 SLURP release were notified that this was going to happen well before we did it.

> > And here it's not precisely correct. As with that, upgrade to 2023.2
> > is not _really_ optional.

It is optional. Ubuntu 20.04 and Python 3.8 were not intended to be supported in 2023.2, which is why you are meant to do the OS upgrade and Python upgrade on 2023.1 or earlier.

> And such upgrade (having temporary version
> in between) from operational perspective is really a nightmare:
> * Firstly it would take a 3x more time get the upgrade done (as you're
> in fact doing 3 upgrades)
> * Secondly you need to somehow explain the users that the API versions
> (or microversions) you get in between is not smth you should get used
> to, because they will change really soon
> * Should you also upload new images for magnum/amphora/trove and
> failover everything for this intermediate upgrade?
> * Way more time you need to spend on testing such upgrade, as there's
> another release now that needs to be handled.
> * Last, but not least, each upgrade still brings some disturbance
> (like restart of heat-engine might get stacks borked), and now you need
> to do that twice
>
> So eventually, with all that, you come to the same flow as you
> described if 2023.2 had dropped python3.8, meaning you should upgrade
> your 2023.1 deployment to python3.10 and then proceed to 2024.1.

Yes, that is the expected flow: upgrade to 3.9+ on 2023.1 before proceeding to 2024.1.

> Except:
> 1. You relied on TC's decision that things will not be removed on
> non-SLURP so invested your time into 2024.1 upgrade preparation (and
> maybe development of features for 2024.1). Now you need to scrap all
> your upgrade (or contribution) plans and plan python upgrade
> internally instead, as this becomes basically a pre-requirement.

I would consider anyone who takes that perspective to have misunderstood the intent of our supported runtimes and TC decisions. I'm sorry, but from my perspective we did not agree to supporting Python 3.8 until 2024.1 when we were selecting the runtime for 2023.1 or agreeing on the new SLURP process. From my perspective it was clear that 2023.1 was intended to be the last release to support either Ubuntu 20.04 or Python 3.8.

> 2. Python upgrade for operations is not only about OpenStack,
> actually. As you also need to adapt your tooling, scripts and plenty
> of stuff to work against the new Python version.
> 3. If you've already notified end-users to expect new features that
> 2024.1 will bring to them in 6 months - you have to disappoint them
> now, since upgrades won't happen in that timeline, because you need to
> deal with other things.

From my perspective, we notified them that 2023.1 would be the last release to support Python 3.8 and then reversed course on that in 2023.2 for what I consider to be invalid reasons. Yes, I know our CI broke when we started enforcing a minimum version, but I think we should have fixed the projects that could not drop 3.8, not re-added it. 
> > I don't see a strong relationship with SLURP or non-SLURP
>
> In the motivation part for SLURP there's that thing:
> "Upgrades will be supported between 'SLURP' releases, in addition to
> between adjacent major releases (as they are today)."
> IMO, with removing py38 now we're breaking that.

And in my view py38 was deprecated for removal in 2023.1, so there is no breakage.

> If you're on 2023.1
> and running py3.8 you can't just jump to 2024.1, which means that this
> upgrade path is not _really_ supported. Or we really need to clarify
> what we mean under "supported upgrades". I see how I can be wrong in
> that assumption and that there're arguments against this vision. But I
> assume this perspective has a right to exist as well.
>
> Again. I really don't care about py3.8 as we are not running it since
> upgrade to 2023.1 as switched to py3.10 right after it.
> I'm more fighting for consistency between our decisions and actions
> that we take afterwards, so that any openstack user or operator could
> rely on decisions that were taken and promoted, so that OpenStack was
> perceived as a solid platform.

I think retroactively adding platform and Python versions that were previously agreed to be dropped undermines that platform and our ability to maintain it. To be clear, as a contributor and core reviewer I felt undermined by the TC when Python 3.8 was re-added in May, and I felt like we sent the wrong message to operators, because we said something was supported when none of the testing-runtime operating systems supported a unified deployment of OpenStack on that version of Python.

> On Thu, 12 Oct 2023 at 09:42, Sławek Kapłoński wrote:
> >
> > Hi,
> >
> > On Thursday, 12 October 2023 at 01:23:16 CEST, Clark Boylan wrote:
> > > On Wed, Oct 11, 2023, at 2:05 PM, Dmitriy Rabotyagov wrote:
> > > > > It is more than an exaggeration: it just isn't realistic to support anything else in my opinion. 
You'll be > > > > > stuck supporting ancient everything if you don't take a position like this. > > > > > > > > Then I'd say that SLURP approach should be cancelled as I'm not sure > > > > any more what real-users situation it is solving then. > > > > And if 1 year is a too long period which makes everything "ancient". > > > > > > It isn't one year. Python 3.8 has been out since October 2019, was part of the Ubuntu Focal release in April 2020, > > > was first supported in OpenStack Victoria (October 2020), and if kept as part of the 2024.1 release would be kept > > > alive in OpenStack until approximately October 2025. > > > > > > Python 3.8 will be EOL'd by its maintainers in ~October 2024. We have seen that when python releases become EOL'd > > > many libraries stop supporting that release. This potentially leads to gaps in security and general bugfixing. I > > > think giving users a clear signal of when they should update runtimes before that runtime becomes problematic is a > > > good thing. > > > > > > > > > > > Because as I said, removal of py3.8 had been raised at beginning of > > > > 2023.2 (and I bet everyone recall the mess we got into because of > > > > that), when we _really_ could drop it, but just by handling that in a > > > > better way. But now in non-SLURP I feel pretty much wrong in doing so. > > > > > > This is the part I do not understand. What is the functional difference for operators going through an upgrade > > > process if we drop it in the .1 release vs the .2 release? Let's draw up the resulting upgrade paths. > > > > > > If 2023.2 had dropped python3.8: > > > > > > ? * Run 2023.1 on python3.8 > > > ? * Convert 2023.1 deployment to python3.10 > > > ? * Upgrade 2023.1 to 2023.2 or 2024.1 > > > > > > If 2024.1 drops python3.8 support: > > > > > > ? * Run 2023.1 on python3.8 > > > ? * Optionally upgrade deployment to 2023.2 > > > ? * Convert 2023.x to python3.10 > > > ? * Upgrade to 2024.1 > > > > > > There isn't a huge difference here. 
And in both cases users can still skip an OpenStack version in the upgrade
> > > process if they choose to do so. The only real difference is if we want people to continue running 2024.1 on
> > > python3.8 before converting to probably python3.11 or python3.12 at that point. I don't think that is the sort of
> > > signal we want to give because deployments in that situation are very likely to run into struggles with library
> > > updates.
> >
> > That's exactly my understanding. I don't see a big difference there either.
> >
> > > > And again, I am not protecting py3.8 specifically, but I am standing
> > > > for expectations that we set by introducing SLURPs, which IMO was one
> > > > of the biggest deals for operators lately.
> >
> > --
> > Slawek Kaplonski
> > Principal Software Engineer
> > Red Hat

From noonedeadpunk at gmail.com Thu Oct 12 12:02:11 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Thu, 12 Oct 2023 14:02:11 +0200 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: <0543f4536d1f4c16006b3f64ade0cde78836e96c.camel@redhat.com> References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <2c4ab947-0163-4b6e-8de4-4d508c49cc71@app.fastmail.com> <5980355.lOV4Wx5bFT@p1gen4> <0543f4536d1f4c16006b3f64ade0cde78836e96c.camel@redhat.com> Message-ID: 

> we dropped support for py38 and then re-added it because some projects were not
> ready to drop it. I strongly believe that was a mistake.

Well, I'm not ready to judge whether it was a mistake or not; there were good arguments from both sides. But from the projects' perspective (not the libraries'), it was pretty much in line to stop supporting it for 2023.2, yes.

> if we don't drop support in 2024.1 then I think we must drop it in 2024.2.
> I don't think it would be reasonable to ask projects to continue to support it, and I honestly don't think it
> is a reasonable ask for 2024.1 or 2023.2; I was very unhappy with the late re-addition of it in May after it was
> finally removed. 
Yes, absolutely, it must go away in 2024.2 for sure.

> > 1. Py3.8 is going to be supported by Ubuntu 20.04 until April 2025 at
> > worst, and then there's also ESM till 2030. I bet RHEL has smth like
> > that as well. So security backports for python packages and python
> > itself is not really our pain.
>
> We do not use packages from the distro in our testing outside the standard library
> and interpreter, so it is a problem for the libs we use from PyPI.

But how much is the security of packages a concern inside a containerized CI, when we're talking about running unit testing only (and not dropping it from setup.cfg)?

> > 2. We still can test things quite consistently, since we have all our
> > dependencies maintained in upper-constraints. Basically due to U-C I
> > can build and install Ocata as of today with Py2.7 (I'm not saying
> > it's good or bad, just stating the fact)
>
> Not entirely. In CI or a container, yes, but some of our deps will not compile
> on newer operating systems; e.g. to run nova's pep8 tox env on Wallaby I need
> to create a VM/container to do that because it won't actually work on a modern
> operating system. The system packages that we leverage are too new and cause issues.

We still have Ubuntu 20.04 among the nodepool images for older releases, which can handle py3.8 nicely. So technically there should be no issues in getting dependencies. And even if external dependencies start dropping support, we have u-c specifically for this reason.

> You have already been given that notice, as we notified everyone that Ubuntu 20.04
> was last supported in 2023.1, and all other supported distros in that release ship Python 3.9+:
> Debian 11 and CentOS 9 Stream both already use Python 3.9, and Ubuntu 22.04 ships 3.10.

Were we? As I can technically run Python 3.8 pretty much easily on one of the supported platforms. 
If I am opening the PTI for 2023.2 (https://governance.openstack.org/tc/reference/runtimes/2023.2.html), what I'm told is: * I need to have Ubuntu 22.04/Debian 11 * I need to have Python 3.10 or 3.9, but 3.8 is also a thing. It's not said anywhere that I can't run py3.8 for $reasons on Ubuntu 22.04, is it? > So this is not an issue for the removal of python 3.8. > I don't know of any supported distribution where OpenStack 2023.2+ was shipped on py38, > so keeping support in 2023.2 or 2024.1 seems academic to me. Do we know of any operator > or deployment that would actually deploy this way? > Have we _anywhere_ in our guidelines connected Python versions to distros? On the contrary, in "Extending support and testing for release with the newer distro version" I see: "When any release bumps the minimum supported distro platform OR python version", meaning these are two distinct things that are handled separately? > > I think retroactively adding a platform and python version that were previously > agreed to be dropped undermines that platform and our ability to maintain it. > To be clear, as a contributor and core reviewer I felt undermined by the TC > when python 3.8 was re-added in May, and I felt like we sent the wrong message > to operators because we said something was supported when none of the testing runtime > operating systems supported a unified deployment of OpenStack on that version of python. No, I don't think we were adding a platform retroactively? As of today, what I see and read makes me think that platform and python version are two distinct things. I know that the intention was slightly different, but that is what we have to deal with as of today. And I don't know how a regular operator should guess that we inside the community perceive this information differently. However, I fully understand your frustration that support was re-added.
I also agree that it was partly my fault for failing to see the absence of a process for removing older python versions when we agreed to remove it, which resulted in pretty much chaos once this started happening, as there was no alignment on how to proceed. And I totally agree that the message sent was wrong. But it was already sent and I'm not sure it can be revoked as of today. So the best we can do now, IMO, is to focus on defining a clean and organized process for Python deprecation in 2024.2. From kurt at garloff.de Thu Oct 12 07:35:54 2023 From: kurt at garloff.de (Kurt Garloff) Date: Thu, 12 Oct 2023 09:35:54 +0200 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: <2c4ab947-0163-4b6e-8de4-4d508c49cc71@app.fastmail.com> References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <31cacfcbd9120c87e56efe0f7507f9250d8de7c3.camel@redhat.com> <54025d95-8f38-4a52-bf15-05598c2e5b17@app.fastmail.com> <2c4ab947-0163-4b6e-8de4-4d508c49cc71@app.fastmail.com> Message-ID: <6D61243E-D482-483C-9E94-C591DCA7F9F6@garloff.de> Hi, On 12 October 2023 at 01:23:16 CEST, Clark Boylan wrote: >It isn't one year. Python 3.8 has been out since October 2019, was part of the Ubuntu Focal release in April 2020, was first supported in OpenStack Victoria (October 2020), and if kept as part of the 2024.1 release would be kept alive in OpenStack until approximately October 2025. > >Python 3.8 will be EOL'd by its maintainers in ~October 2024. We have seen that when python releases become EOL'd many libraries stop supporting that release. This potentially leads to gaps in security and general bugfixing. I think giving users a clear signal of when they should update runtimes before that runtime becomes problematic is a good thing. Fully agree. A strong deprecation is definitely appropriate, as 2024.1 on py3.8 can only be reliably supported with maintenance for about half a year. A warning message whenever you try to run 2024.1 code on py3.8 would be in order.
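A minimal sketch of what such a startup warning could look like (illustrative only, not code from any OpenStack project; the threshold and message text are assumptions):

```python
import sys
import warnings


def warn_if_old_python(minimum=(3, 9)):
    """Emit a DeprecationWarning when the running interpreter is older
    than `minimum`, e.g. called once at service startup."""
    if sys.version_info[:2] < minimum:
        warnings.warn(
            "Python %d.%d is deprecated for this release and will be "
            "unsupported in the next one; please move to %d.%d or newer."
            % (sys.version_info[:2] + minimum),
            DeprecationWarning,
            stacklevel=2,
        )


if __name__ == "__main__":
    warn_if_old_python()
```

Note that outside of `__main__` and test runners, `DeprecationWarning` is hidden by Python's default warning filters, so a real service would more likely emit the message through its own logger.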
This leaves open the question of whether we want to invest the (likely small) effort to keep py3.8 technically working. Do we consider treating clients differently from the server side? I.e. avoid supporting running OpenStack 2024.1 on py3.8 while being more relaxed with users that want to run the SDK & client tools on py3.8 (Ubuntu 20.04 LTS)? PS: I don't see a strong relationship with SLURP or non-SLURP. Upgrades always go through a number of steps, and the important thing is that we take responsibility for documenting and validating them. HTH, -- Kurt From gmann at ghanshyammann.com Thu Oct 12 15:26:57 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 12 Oct 2023 08:26:57 -0700 Subject: [tc] dropping python 3.8 support for 2024.1 In-Reply-To: References: <6e4eb54c-bcf7-721e-1d34-3593b9c5cba8@gmail.com> <2c4ab947-0163-4b6e-8de4-4d508c49cc71@app.fastmail.com> <5980355.lOV4Wx5bFT@p1gen4> <0543f4536d1f4c16006b3f64ade0cde78836e96c.camel@redhat.com> Message-ID: <18b247f80ee.ee5d9c69308820.3949757845772669075@ghanshyammann.com> ---- On Thu, 12 Oct 2023 05:02:11 -0700 Dmitriy Rabotyagov wrote --- > > > > We dropped support for py38 and then re-added it because some projects were not > > ready to drop it. I strongly believe that was a mistake. > > > > Well, I'm not ready to judge whether it was a mistake or not - there were > good arguments on both sides. But from the projects' perspective (not > the libraries'), it was pretty much in line to stop supporting it for > 2023.2, yes. > > > > If we don't drop support in 2024.1 then I think we must drop it in 2024.2. > > I don't think it would be reasonable to ask projects to continue to support it, and I honestly don't think it > > is a reasonable ask for 2024.1 or 2023.2. I was very unhappy with the late re-addition of it in May after it was > > finally removed. > > Yes, absolutely, it must go away in 2024.2 for sure. > > > > 1. Py3.8 is going to be supported by Ubuntu 20.04 until April 2025 at > > > worst, and then there's also ESM till 2030.
I bet RHEL has something like > > > that as well. So security backports for python packages and python > > > itself are not really our pain. > > We do not use packages from the distro in our testing outside the standard library > > and interpreter, so it is a problem for the libs we use from PyPI. > > But how much is the security of packages a concern inside a > containerized CI, when we're talking about running unit tests only > (and not dropping it from setup.cfg)? > > > > 2. We can still test things quite consistently, since we have all our > > > dependencies maintained in upper-constraints. Basically due to U-C I > > > can build and install Ocata as of today with Py2.7 (I'm not saying > > > it's good or bad, just stating the fact) > > Not entirely. In CI or a container, yes, but some of our deps will not compile > > on newer operating systems. E.g. to run nova's pep8 tox env on wallaby I need > > to create a VM/container to do that, because it won't actually work on a modern > > operating system. The system packages that we leverage are too new and cause issues. > > We still have Ubuntu 20.04 among nodepool images for older releases, > which can handle py3.8 nicely. So technically there should be no > issues in getting dependencies. And even if external dependencies > start dropping support, we have u-c specifically for this reason. > > > > You have already been given that notice, as we notified everyone that Ubuntu 20.04 > > was last supported in 2023.1, and all other supported distros in that release ship python 3.9+. > > Debian 11 and CentOS 9 Stream both already use python 3.9; Ubuntu 22.04 ships 3.10. > > Was I, though? I can technically run python 3.8 pretty easily on one > of the supported platforms. If I am opening the PTI for 2023.2 > (https://governance.openstack.org/tc/reference/runtimes/2023.2.html), > what I'm told is: > * I need to have Ubuntu 22.04/Debian 11 > * I need to have Python 3.10 or 3.9, but 3.8 is also a thing.
> > It's not said anywhere that I can't run py3.8 for $reasons on Ubuntu 22.04, is it? Yes, and we also need to keep in mind that the testing runtime is the minimum expectation of supported/tested distro and python versions. We might not be able to support more distro versions, but we can support/test more python versions, which can be done on older distro images or by installing them manually on newer images. I also still feel that dropping it in 2024.2 is the better way, where we give a deprecation in 2024.1 (SLURP), so that any upgrade from 2024.1->2024.2 or 2025.1 will not be an issue. -gmann > > > So this is not an issue for the removal of python 3.8. > > I don't know of any supported distribution where OpenStack 2023.2+ was shipped on py38, > > so keeping support in 2023.2 or 2024.1 seems academic to me. Do we know of any operator > > or deployment that would actually deploy this way? > > > > Have we _anywhere_ in our guidelines connected Python versions to > distros? On the contrary, in "Extending support and testing for > release with the newer distro version" I see: > "When any release bumps the minimum supported distro platform OR > python version", meaning these are two distinct things that are handled > separately? > > > > > I think retroactively adding a platform and python version that were previously > > agreed to be dropped undermines that platform and our ability to maintain it. > > To be clear, as a contributor and core reviewer I felt undermined by the TC > > when python 3.8 was re-added in May, and I felt like we sent the wrong message > > to operators because we said something was supported when none of the testing runtime > > operating systems supported a unified deployment of OpenStack on that version of python. > > No, I don't think we were adding a platform retroactively? As of > today, what I see and read makes me think that platform and python > version are two distinct things.
> I know that the intention was slightly different, but that > is what we have to deal with as of today. And I don't know how a > regular operator should guess that we inside the community perceive > this information differently. > > However, I fully understand your frustration that support was > re-added. And I also agree that it was partly my fault for failing to see > the absence of a process for removing older python versions when we > agreed to remove it, which resulted in pretty much chaos once > this started happening, as there was no alignment on how to > proceed. > > And I totally agree that the message sent was wrong. But it was > already sent and I'm not sure it can be revoked as of today. So the > best we can do now, IMO, is to focus on defining a clean and > organized process for Python deprecation in 2024.2. > >