openstack-discuss search results for query "#eventlet-removal"
openstack-discuss@lists.openstack.org - 150 messages
[tc][all] OpenStack Technical Committee Weekly Summary and Meeting Agenda (2025.2/R-13)
by Goutham Pacha Ravi
Hello Stackers,
We're thirteen weeks away from the coordinated release of OpenStack
2025.2 "Flamingo" [1]. This Thursday marks Milestone-2 of the release
cycle. This is a bug-targeting milestone for OpenStack project teams,
as well as a checkpoint for the OpenStack Release Management team to
line up deliverables that will be part of the coordinated release at
the end of this development cycle. Currently, the number of
deliverables to be released has only changed slightly from the
OpenStack 2025.1 release. No significant deliverables have been
deprecated or removed.
In the past week, the OpenStack Technical Committee did not make any
new governance changes; however, several proposals are under the
community's review. Most significantly, today (2025-06-30) marks the
end of the OpenStack Contributor License Agreement (CLA). The CLA was
adopted in July 2010 (do you feel old yet?) [2], and thousands of
contributors have signed it. However, for over a decade, we've felt
encumbered by it: it was arcane, it was perceived to be more permissive
than Apache v2, and it caused friction with individuals and organizations. We
believe it has limited contributor involvement. Today, we join several
other open source projects in requiring the "Developer Certificate of
Origin" as a way to sign off your contributions. As we close the lid
on the CLA regime, you'll need to "git commit -s" each of your
contributions from 2025-07-01 to adhere to the DCO [3][4]. We do
anticipate hiccups and request your cooperation in ironing out any
issues in the code review system and contributor tooling during this
transition. If you notice something broken, please chime in on this
mailing list or on OFTC's #opendev channel.
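For those who haven't used it before, "git commit -s" appends a
"Signed-off-by" trailer (built from your configured user.name and
user.email) to the commit message; that trailer is what certifies the
DCO. A signed-off commit message ends like this (the subject, name, and
address below are placeholders):

```
Fix flamingo-colored widget rendering

Signed-off-by: Jane Developer <jane@example.com>
```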
=== Weekly Meeting ===
The last weekly meeting of the OpenStack Technical Committee was held
on 2025-06-24 [5]. The meeting was well attended and covered several
topics. The proposal to mark the Cyborg project as inactive [6] was
withdrawn after critical CI fixes were merged. While activity has
resumed for this cycle, there are concerns about the project’s
long-term maintenance, particularly in light of future OpenStack-wide
changes like eventlet removal and dependency updates. Attendees noted
that Cyborg is not "feature complete" and must stay responsive to
ongoing ecosystem changes, even if its core functionality appears
stable. The upcoming cycle's elections will serve as a checkpoint to
reassess the project’s trajectory and whether new maintainers need
onboarding.
We then discussed the timeline of the ongoing effort to phase out
eventlet. A new governance patch [7] proposes a timeline that aligns
with operator and distro expectations, especially in light of Python
3.13 adoption. Python 3.12 will continue to be a fallback for some
time, but when supporting Python 3.13, we hope not to depend on
eventlet across the board. The TC emphasized that continued discussion
on the Gerrit change is necessary to finalize acceptable timelines.
The next major topic concerned setuptools changes that will impact PBR
(Python Build Reasonableness). Setuptools will remove "ScriptWriter"
and "pkg_resources" by October 31, 2025. These removals break current
functionality in PBR and could jeopardize CI and release workflows. A
critical bug was reported against PBR by maintainers of setuptools
[8]. The TC discussed options ranging from vendoring replacements to
rewriting PBR logic or adopting upstream tools despite performance
tradeoffs. A key concern is that PBR’s CI is currently broken and must
be fixed before any meaningful changes can be implemented. We'd like
to seek a volunteer to resolve this issue. Please chime in on OFTC's
#openstack-tc or #openstack-oslo if you're keen to help with this.
In closing, we discussed the OpenInfra Summit 2025 CFP for Forum
Sessions and Project Updates. CFP submissions must be made before
23:45 PST on 2025-07-08 [9].
The next meeting of the OpenStack Technical Committee will be held on
2025-07-01 at 1700 UTC. This meeting will be hosted simultaneously
over Meetpad and IRC. You're welcome to join whichever platform you
prefer. The Meetpad session will be recorded, and the recording will
be shared on the TC's YouTube channel [10]. Please find the agenda and
joining information on the meeting's wiki page [11]. I hope you'll be
able to join us there.
=== Governance Proposals ===
==== Open for review ====
- Add service-types-authority to SDK deliverables |
https://review.opendev.org/c/openstack/governance/+/953548
- Deprecate shade, os-client-config |
https://review.opendev.org/c/openstack/governance/+/953549
- Remove Monasca from active project |
https://review.opendev.org/c/openstack/governance/+/953671
- Make Eventlet removal deadlines more acceptable for operators |
https://review.opendev.org/c/openstack/governance/+/952903
- Require declaration of affiliation from TC Candidates |
https://review.opendev.org/c/openstack/governance/+/949432
=== Upcoming Events ===
- 2025-07-03: OpenStack's 15th Birthday, Colombia User Group:
https://www.meetup.com/colombia-openinfra-user-group/events/308383244
- 2025-07-08: OpenInfra Board meeting: https://board.openinfra.org/
- 2025-07-19: OpenInfra Days, Indonesia: https://2025.openinfra.id/
Thank you very much for reading!
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
[1] 2025.2 "Flamingo" Release Schedule:
https://releases.openstack.org/flamingo/schedule.html
[2] OpenStack and its CLA: https://wiki.openstack.org/wiki/OpenStackAndItsCLA
[3] OpenStack will replace CLA with DCO:
https://governance.openstack.org/tc/resolutions/20250520-replace-the-cla-wi…
[4] OpenStack Contributor Guide to DCO:
https://docs.openstack.org/contributors/common/dco.html
[5] TC Meeting IRC Log 2025-06-24:
https://meetings.opendev.org/meetings/tc/2025/tc.2025-06-24-17.00.html
[6] Cyborg will not be marked inactive:
https://review.opendev.org/c/openstack/governance/+/952798
[7] Timeline changes for the Eventlet Removal goal:
https://review.opendev.org/c/openstack/governance/+/952903
[8] PBR setuptools incompatibility: https://bugs.launchpad.net/pbr/+bug/2107732
[9] CFP for OpenInfra Summit 2025: https://summit2025.openinfra.org/cfp/
[10] OpenStack TC YouTube Channel: https://www.youtube.com/@openstack-tc
[11] TC Meeting Agenda, 2025-07-01:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
Re: [eventlet-removal]When to drop eventlet support
by Ghanshyam Maan
---- On Fri, 13 Jun 2025 08:33:25 -0700 Jay Faulkner <jay@gr-oss.io> wrote ---
>
> On 6/13/25 5:08 AM, Balazs Gibizer wrote:
> > Hi Stackers!
> >
> > I would like to sync about the planned timeline of dropping eventlet
> > support from OpenStack / Oslo.
> >
> > Nova definitely needs at least the full 2026.1 cycle to have a chance
> > to transform the nova-compute service. But this plan already feels
> > stretched based on the progress in the current cycle. So being
> > conservative means we need the 2026.2 cycle as a buffer.
> >
> > Nova would like to keep a release where we support both eventlet and
> > threading in parallel. So that operators can do the switching from
> > eventlet to threading outside of the upgrade procedure. (This was an
> > explicit request from them during the PTG). So 2026.2 could be that
> > version where nova fully supports both concurrency mode, while
> > eventlet can be marked deprecated. Then the 2027.1 release could be
> > the first release dropping eventlet.
> >
> > However we need to align with the SLURP upgrade as well. 2026.1 is a
> > SLURP. But in that release Nova might not be ready to have all
> > services running in threading mode. So the 2026.1 - 2027.1 SLURP
> > upgrade would force the operators to change the concurrency mode
> > during the upgrade itself.
> >
> > I see two ways forward:
> > * A) We say that operators who want to do the concurrency mode change
> > outside of an upgrade could not skip the 2026.2 release, i.e. they
> > cannot do SLURP directly from 2026.1. to 2027.1.
This has a big impact on upgrades and breaks our SLURP model.
> > * B) We keep supporting the eventlet mode in the 2027.1 release as
> > well and only dropping support in 2028.1.
I am in favour of this option.
I was reading the goal doc about the timeline and found something in the
'Completion Criteria' section [1], which says:
- (2027.1) Get usage of Eventlet in oslo deliverables removed;
- "(2027.2) Get Eventlet retired from OpenStack;"
Again, 2027.2 (a non-SLURP release) is mentioned for eventlet retirement. I do not know
if there is any technical reason to do it in a non-SLURP release, or whether it can be moved to a SLURP release.
Maybe hberaud knows.
Anyway, thanks gibi for bringing this up. There are many projects that have not started the
work yet (Nova might have more work compared to others), but I think we should discuss/re-discuss
the timelines considering all projects/challenges. Accordingly, we should update the goal doc with
the exact timelines and the impact for projects that do not finish work as per the timelines
(for example, upgrade issues, workarounds, etc.).
[1] https://governance.openstack.org/tc/goals/selected/remove-eventlet.html#com…
-gmaan
>
> Keeping eventlet running for that long is not something that is a worthy
> investment of time. The oslo libraries are showing a deprecation of
> 2026.2, I've been using that date as the target for all Ironic work as
> well.
>
> Beyond the oslo team (who I don't speak for), there are folks -- like
> Itamar on behalf of GR-OSS -- who are doing work behind the scenes to
> keep eventlet running - barely. I do not expect the GR-OSS investment in
> this work to extend much past the midpoint of 2026.
>
>
> My $.02,
>
> Jay Faulkner
> Open Source Developer
> G-Research Open Source Software
>
>
[neutron][ptg] 2025.2 Flamingo PTG summary
by Brian Haley
Hi all,
Thanks for all that attended the meetings, we had a good turnout for
Neutron, the Nova/Cinder cross-project and the eventlet removal
discussions that were important to the team.
For complete notes see the etherpad [0] but a summary is below. And
please comment if you feel I missed anything.
Thank you,
-Brian
### Epoxy retrospective ###
Like any other PTG, it started with a retrospective of the past cycle,
with the highlights and improvement points.
Good:
- Eventlet removal is going well. Even the transition period (for the
Neutron API with ML2/OVN) finished with good results.
- Permanent "open doors" status to welcome newcomers. Our meetings (team
meeting, drivers meeting) are a good forum for them.
- Removed experimental code (Linuxbridge, IPv6 PD).
- Migration from ML2/Linux Bridge to ML2/OVN discussions have started in
Operators community
(https://etherpad.opendev.org/p/neutron-lb-ovn-migration)
Bad:
- The loss of 2 important core developers.
- No new specs merged in Epoxy cycle.
### Gate Stability ###
Team has been very good at addressing gate issues quickly, and meets on
a regular basis. We have noticed a few issues related to eventlet
changes and are reverting them until we can address them.
### Spring cleanup / Code base modernization ###
At the beginning of every cycle, we always try and cleanup the code
base, for example, removing deprecated and dead code, doc updates, job
consolidation. We have made progress on those already.
- During Epoxy, several changes were made to match the expected output
of pyupgrade/autopep8 for py39+. We will make a similar change in
Flamingo for py310+, as that is now the lower bound (haleyb).
- It was also discussed to include "ruff" to our checklist (mlavalle).
- Now that the OVN agent is in place and the default in the gate, we
discussed removing the OVN metadata agent, as it is unnecessary to have
both.
- CI job reduction: move the ML2/OVS iptables-hybrid jobs to periodic
(and experimental) and determine if iptables-hybrid driver can be moved
to experimental config space.
- TODOs: spend some time addressing the current TODO notes in the code,
fixing what is proposed, already started.
- Remove the unneeded OVN maintenance tasks, according to the comments
in the methods.
- Will make sure beginning/end of cycle docs are up-to-date on things
like CI jobs, etc. (haleyb)
### Eventlet removal ###
The "oslo.service" library will implement two backends (eventlet or
threading), that will allow us to test both implementations during the
removal. It could be useful to have temporary CI jobs, using both
implementations.
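To make the idea concrete, here is a hypothetical sketch (not the actual
oslo.service API; "ThreadingBackend" and "get_backend" are invented
names) of a pluggable concurrency backend, with only the native-threading
side implemented so it runs without eventlet installed:

```python
import threading


class ThreadingBackend:
    """Native-thread backend: spawn each service worker on a daemon thread."""
    name = "threading"

    def spawn(self, fn, *args):
        t = threading.Thread(target=fn, args=args, daemon=True)
        t.start()
        return t


def get_backend(name="threading"):
    # An "eventlet" branch would return a GreenPool-backed equivalent;
    # it is omitted so this sketch stays runnable without eventlet.
    if name == "threading":
        return ThreadingBackend()
    raise ValueError("unknown concurrency backend: %s" % name)
```

Temporary CI jobs could then run the same test suite once per backend
name until the eventlet side is dropped.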
The testing CI (unit test, functional and fullstack) should start the
migration during this cycle, but only once the Neutron service code is
migrated.
Please see [1] and [2] for more information and meetings notes from the
eventlet-specific agenda.
### Migration to SDK (neutronclient removal) ###
Etherpad link:
https://etherpad.opendev.org/p/python-neutronclient_deprecation
During the last cycle a number of Horizon patches merged (thanks Lajos!).
Heat patches are still under review (and currently unattended).
Nova patch is WIP: https://review.opendev.org/c/openstack/nova/+/928022
The Neutron team will add deprecation warnings to the neutronclient code,
as there are still a number of places, in both in-tree and out-of-tree
code, that are using neutronclient.
The fullstack tests will migrate to SDK during this cycle.
### Migrate the OVN L3 scheduler to use HA_Chassis_Group ###
Link: https://bugs.launchpad.net/neutron/+bug/2092271
As recommended by the OVN core team, the usage of ``Gateway_Chassis`` to
bind a ``Logical_Router_Port`` is deprecated. Instead of this,
``HA_Chassis_Group`` should be used. This bug tracks this effort that
should be done during the next cycle.
### An OVN L3 router tool ###
Link: https://bugs.launchpad.net/neutron/+bug/2103521
The goal of this tool is:
- To be able to list the current GW LRP assignments, according to the
``HA_Chassis_Group``. It will show the GW chassis assignment level
(according to the priority) and the number of routers per chassis.
- To be able to reschedule the current assignments. In case of imbalance
(chassis deletion, router deletion), this tool will allow rescheduling
the GW chassis across the existing LRPs.
This is useful to cloud operators, who currently must re-balance LRPs
manually.
Status: Approved
### Alembic idempotency ###
Link: https://bugs.launchpad.net/neutron/+bug/2100770
This request comes from an issue seen in RHOSO18 with a back-ported DB
migration. It is not possible to change the cycle milestones (the last
DB migrations of each cycle), so the downstream migrations (which are
forbidden upstream) are added before these milestones. The problem
happens when a user who is already at the last DB migration step needs
to re-execute the DB migration tool to receive the newly back-ported
migration. That will fail, because it is not possible to execute a DB
migration script twice.
This proposal includes:
- Implementing a new test class that checks any new DB migration to
ensure it works, as we found issues recently.
- Enforcing (by code developers and reviewers) that any new DB migration
is idempotent.
- Supporting this on all existing migrations as well, not just from
today forward.
- Providing a usage guide on the alembic methods, if necessary.
Status: Approved
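As a concrete illustration of what "idempotent" means here (a toy
sqlite3 stand-in, not an actual Neutron/alembic migration): guard the
DDL so re-running the script is a no-op instead of an error:

```python
import sqlite3


def upgrade(conn):
    """Add ports.description only if absent, so a re-run is harmless."""
    cols = [row[1] for row in conn.execute("PRAGMA table_info(ports)")]
    if "description" not in cols:  # the guard makes a second run a no-op
        conn.execute("ALTER TABLE ports ADD COLUMN description TEXT")


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ports (id INTEGER PRIMARY KEY)")
upgrade(conn)
upgrade(conn)  # without the guard, this second run would raise an error
```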
### Stop DB drop migrations, with field drop exceptions ###
Link: https://bugs.launchpad.net/neutron/+bug/2104160
In a DB migration, developers should add a TODO note to drop anything
not used in the next SLURP release. That will prevent the case described
in the bug, where a user executed the "expand" phase migration code with
the old Neutron API code (as specified by the upgrade guide), resulting
in an error due to a missing DB column, which was dropped by a DB
migration script.
The initial output of this bug is a new policy in the development guide.
### Create a restricted default policy for undefined API calls ###
Link: https://bugs.launchpad.net/neutron/+bug/2098737
The goal of this RFE is to define a default restrictive rule for every
non-defined policy in the Neutron API, as "oslo.policy" allows us to
create a "default" policy that will be used in case the API call is not
defined.
The "default" Neutron policy is RULE_ADMIN_OR_OWNER; we propose to
change this to RULE_ADMIN_ONLY. Any new API not defined in the policies
would be executable only by the admin user.
This change cannot be done now, due to the high number of API rules that
are not defined and are therefore using the default rule (see
https://review.opendev.org/c/openstack/neutron/+/945687), but this rule
migration should be a background task for any team member; it will
improve the quality of the RBAC system in Neutron.
Status: Approved
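The mechanism can be pictured with a toy lookup (illustrative only; the
real implementation registers rules through oslo.policy): any API call
without an explicitly defined rule falls through to the restrictive
default:

```python
# Toy model of a default-deny policy table; names mirror the proposal.
RULE_ADMIN_OR_OWNER = "admin_or_owner"
RULE_ADMIN_ONLY = "admin_only"

REGISTERED = {"get_port": RULE_ADMIN_OR_OWNER}  # explicitly defined policies


def effective_rule(api_call):
    """Undefined API calls get the proposed admin-only default."""
    return REGISTERED.get(api_call, RULE_ADMIN_ONLY)
```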
### Nova/Cinder cross-project ###
We discussed two topics at the Nova/Cinder cross-project meeting.
Project-Specific QoS Controls for Granular Resource Management. Driven
by Bloomberg. Link: https://bugs.launchpad.net/neutron/+bug/2102184
Although many questions remain about the implementation, the Neutron
team agreed to accept a spec describing it: a per-project QoS policy
that will apply to a yet-to-be-defined set of resources (ports,
networks, routers, FIPs). Bloomberg agreed to work on this later this
cycle.
For live migration with OVN, Nova currently has a vif-plugged event
timer to work around any missed (or never-sent) notifications.
Link: https://bugs.launchpad.net/nova/+bug/2073254
The code for multiple port bindings is in core OVN (23.09) and Neutron.
The Nova team will propose a patch to remove the condition in Nova and
wait for the vif-plugged event when using the ML2/OVN backend, as the
condition should not happen any more.
Patch link: https://review.opendev.org/c/openstack/nova/+/946950
### Service role permissions ###
The Octavia allowed address pair driver requires new service role
permissions.
Link: https://bugs.launchpad.net/neutron/+bug/2105502
Three patches proposed:
- Add new default policy for device_id field on ports:
https://review.opendev.org/c/openstack/neutron/+/861169
- Allow service role to create/update port device_id:
https://review.opendev.org/c/openstack/neutron/+/947003
- Allow service role more RBAC access for Octavia:
https://review.opendev.org/c/openstack/neutron/+/945329
### L3/DHCP thread pool resizing ###
Link: https://review.opendev.org/c/openstack/neutron/+/938411
This proposal is related to the eventlet removal topic. There is no
direct replacement for ``eventlet.GreenPool``: a resizable pool would
not provide an improvement in speed or memory, and no native thread
library provides this functionality. The default and static maximum
number of threads will be 32 (the current maximum).
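For illustration, the fixed-size stdlib equivalent the agents could use
in place of ``eventlet.GreenPool`` is ``concurrent.futures.ThreadPoolExecutor``.
This is a sketch, not the actual Neutron patch; the 32 matches the
maximum mentioned above:

```python
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 32  # proposed default and static maximum


def process_all(worker, items, max_workers=POOL_SIZE):
    """Run worker over items on at most max_workers native threads."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, items))
```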
### VXLAN physical networks ###
Link: https://bugs.launchpad.net/neutron/+bug/2105855
The goal is to be able to schedule a VXLAN port depending on the
physical mappings as the VTEP interfaces in each compute node could be
connected to different physical networks.
This would help improve the "networking-generic-switch" back end as it
could use this new information.
An alternative to this could be to implement a new type driver: l2vni,
used for tunneled drivers (VXLAN) with a physical back end. A spec is
required to study this proposal further.
### vlan_transparent ###
Link: https://bugs.launchpad.net/neutron/+bug/2092174
Proposal to deprecate and remove the vlan_transparent config option.
By default, the loaded drivers will determine if the functionality is
supported or not.
Status: Approved
### OVN "Add-Only" Sync Mode ###
Link: https://bugs.launchpad.net/neutron/+bug/2099818
Proposal to add a new OVN sync mode option that will only add things to
the OVN database and never remove from either OVN or Neutron.
There were a few worries, as turning off deletes might just make things
more out of sync without addressing any issues being seen in
ovn-controller or ovn-northd.
It was also pointed out that we did spend some time over past cycles
tagging neutron-owned objects in OVN explicitly, and there might just be
some additional edge cases we need to fix. This might address the issue
where other objects change unexpectedly.
This needs further discussion at a future Drivers meeting, as the
submitter did not attend.
### RFE discussions ###
On-demand topic to discuss any in-progress or proposed RFE changes,
please see etherpad for more information.
[0] https://etherpad.opendev.org/p/apr2025-ptg-neutron
[1] https://etherpad.opendev.org/p/neutron-eventlet-deprecation
[2] https://etherpad.opendev.org/p/apr2025-ptg-eventlet
[tc][all] OpenStack Technical Committee Weekly Summary and Meeting Agenda (2025.1/R-10)
by Goutham Pacha Ravi
Hello Stackers,
We're 10 weeks away from the 2025.1 coordinated "Epoxy" release [1].
Starting this week, there are several deadlines that project teams
have adopted to ensure that code, specifications, and documentation
changes are adequately peer-reviewed. As Slawek Kaplonski (slaweq)
shared to this mailing list [2], the nomination period for the
upcoming Technical Committee and Project Team Leads elections will
begin on 2025-02-05 and remain open until 2025-02-19. Early
nominations are highly encouraged in case you'll be unavailable during
this period.
In the last week, the OpenStack Technical Committee selected a
community goal to migrate away from the eventlet library across its
codebase [3]. As with many community goals, this will be a
multi-release goal and needs contributors. Please join the
#openstack-eventlet-removal IRC channel on OFTC to participate in the
discussion.
=== Weekly Meeting ===
Dmitriy Rabotyagov (noonedeadpunk) chaired the TC's weekly IRC meeting
on 2025-01-14 [4]. We discussed the maintenance of the
"openstackdocstheme" Sphinx plugin and called out some early attempts
to switch documentation to a stock theme that's more maintainable and
sustainable. We also reviewed a proposal from Freezer project
maintainers to remove the project team from the "inactive" teams list
[5]. The TC also considered requesting an exception for Freezer to
release within 2025.1. In this vein, we also highlighted the need for
consistent criteria to move projects out of inactive status and better
tooling for indicating project health and activity.
The next meeting of the OpenStack Technical Committee is today,
2025-01-21. This meeting will be hosted via IRC on OFTC's
#openstack-tc channel. Please find the meeting agenda on the wiki [6].
I hope you'll be able to join us there.
=== Governance Proposals ===
==== Merged ====
- Propose to select the eventlet-removal community goal |
https://review.opendev.org/c/openstack/governance/+/934936
- Rework the eventlet-removal goal proposal |
https://review.opendev.org/c/openstack/governance/+/931254
- Add ansible-role-httpd repo to OSA-owned projects |
https://review.opendev.org/c/openstack/governance/+/935694
==== Open for Review ====
- Resolve to adhere to non-biased language |
https://review.opendev.org/c/openstack/governance/+/934907
- Retire Freezer DR | https://review.opendev.org/c/openstack/governance/+/938183
- Retire qdrouterd role |
https://review.opendev.org/c/openstack/governance/+/938193
- Remove Freezer from inactive state |
https://review.opendev.org/c/openstack/governance/+/938938
- Reset the DPL model for oslo project |
https://review.opendev.org/c/openstack/governance/+/939485
- Reset the DPL model for Release project |
https://review.opendev.org/c/openstack/governance/+/939486
- Reset the DPL model for Requirement project |
https://review.opendev.org/c/openstack/governance/+/939487
- Reset the DPL model for Watcher project |
https://review.opendev.org/c/openstack/governance/+/939488
- Define 2025 upstream investment opportunities |
https://review.opendev.org/c/openstack/governance/+/939507
=== Upcoming Events ===
- 2025-01-28: OpenInfra Board Monthly Meeting: https://board.openinfra.org/
- 2025-02-01: FOSDEM 2025 (https://fosdem.org/2025/) OpenStack's 15th
Birthday Celebration
- 2025-02-05: Nominations open for 2025.2 TC/PTL Elections:
https://governance.openstack.org/election/
- 2025-02-28: 2025.1 ("Epoxy") Feature Freeze and release milestone 3 [1]
- 2025-03-06: SCALE 2025 + OpenInfra Days NA
(https://www.socallinuxexpo.org/scale/22x)
Thank you very much for reading!
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
[1] 2025.1 "Epoxy" Release Schedule:
https://releases.openstack.org/epoxy/schedule.html
[2] TC/PTL Election dates for the 2025.2 cycle:
https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack…
[3] Goal to Remove Eventlet from OpenStack:
https://governance.openstack.org/tc/goals/selected/remove-eventlet.html
[4] TC Meeting IRC Log 2025-01-14:
https://meetings.opendev.org/meetings/tc/2025/tc.2025-01-14-18.00.log.html
[5] Remove Freezer from inactive state:
https://review.opendev.org/c/openstack/governance/+/938938
[6] TC Meeting Agenda, 2025-01-21:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
Re: [ops][api][all] What are we going to do about pastedeploy usage?
by Sean Mooney
On 17/07/2025 13:37, Stephen Finucane wrote:
> oslo.service is going through a lot of churn right now as part of the
> eventlet migration. We recently noticed that some unrelated WSGI
> routing code had been inadvertently deprecated, and moved to
> undeprecate this [1].
So honestly, while I think we should likely move off that stack to
FastAPI or Flask etc., I'm not sure we should do this as a community
until the eventlet removal goal is completed.
The main thing that paste gave, which other frameworks didn't at the
time, was the ability to configure the middleware pipeline via a simple
declarative file.
> In the long-term, however, this code doesn't
> really belong in oslo.service and should be moved elsewhere. I took a
> stab at bootstrapping an oslo.wsgi package, and after some hacking I
> arrived at a dependencies list containing the following:
>
> * Paste [2]
> * PasteDeploy [3]
> * Routes [4]
> * WebOb [5]
Long term, I think we could consider all of those to be tech debt that
we want to move off of, right?
We discussed this briefly in the context of the eventlet removal 18
months ago, as you noted, but that is likely not the tech stack we want
an oslo.wsgi to use long term.
>
> As some of you might know, all of these packages are minimally
> maintained bordering on unmaintained. I'm not entirely sure we want to
> want to bootstrap a new project using these libraries as opposed to
> migrating off of them.
exactly this.
> My question is this: how important is the
> pastedeploy framework and the paste.ini files nowadays, particularly
> for deployers?
I would be very interested to see ops feedback here too, as (ignoring
time for a moment) that is the only gap I personally see with moving to
a better-maintained project like FastAPI or Flask, for Nova at least.
Flask is already used by Keystone and Neutron, I think.
If I had time, I would also like to move Watcher off its current
PasteDeploy + pecan + WebOb + WSME stack to FastAPI or Flask.
Pecan had a long period of inactivity; it has picked back up in 2025, so
it's not in a bad place, but WebOb, WSME, and PasteDeploy are definitely
tech debt.
> While their use is relatively consistent across projects
> (see below), not every service uses them and for those that don't, I
> personally haven't heard complaints about their absence. Rather than
> migrating the pastedeploy stuff elsewhere, would it make more sense for
> affected projects to simply define a static set of middleware (with
> some config knobs for those we want to be able to enable/disable) and
> call it a day?
+1. My inclination was to just have a comma-separated list of middleware
to run, in order, as part of the standard service config, defaulting to
what is enabled in paste today.
If needed, provide a way to load extra middleware using
https://opendev.org/openstack/stevedore, the same way we do plugins in
the rest of OpenStack.
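A minimal sketch of that idea (hypothetical, not an existing oslo API;
in a real implementation stevedore would populate the registry from
entry points): a comma-separated option names the middleware, which is
wrapped around the WSGI app in order:

```python
def request_id_middleware(app):
    """Example middleware: tag each request with a (fake) request id."""
    def wrapped(environ, start_response):
        environ.setdefault("example.request_id", "req-123")
        return app(environ, start_response)
    return wrapped


# Hypothetical registry; stevedore entry points would fill this in.
REGISTRY = {"request_id": request_id_middleware}


def build_pipeline(app, pipeline="request_id"):
    """Wrap app in the named middleware; the first name becomes outermost."""
    names = [n.strip() for n in pipeline.split(",") if n.strip()]
    for name in reversed(names):  # reversed so listed order = call order
        app = REGISTRY[name](app)
    return app
```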
>
> Cheers,
> Stephen
>
> PS: This topic came up about 18 months ago [6], but we don't appear to
> have reached a conclusion. Thus my bringing it up again.
>
> [1] https://review.opendev.org/c/openstack/oslo.service/+/954055
I'm glad you spotted this, as yes, only the eventlet web-server part was
intended to be deprecated.
> [2] https://pypi.org/project/Paste/#history
> [3] https://pypi.org/project/PasteDeploy/#history
> [4] https://pypi.org/project/Routes/#history
> [5] https://pypi.org/project/WebOb/#history
> [6] https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack…
>
> ---
>
> fwict, the following services all rely on this combo to build their own
> frameworks, with Nova most likely the progenitor in each case (I'm
> guessing)
>
> * Nova
> * Barbican
> * Cinder
> * Designate
> * Freezer
> * Glance
> * Heat
> * Manila
> * Monasca
> * Neutron
> * Swift
> * Trove
> * Vitrage
> * Watcher
>
> The following services do *not* use these libraries:
>
> * Cyborg (pecan)
> * Ironic (pecan)
> * Keystone (Flask + Flask-RESTful, with some webob)
> * Magnum (pecan)
> * Masakari (homegrown framework using webob)
> * Zaqar (falcon)
> * Zun (Flask)
>
[tc][all] OpenStack Technical Committee Weekly Summary and Meeting Agenda (2025.1/R-24)
by Goutham Pacha Ravi
Hello Stackers,
The Virtual Project Team Gathering (PTG) is next week [1][2]. There
are 32 discussion tracks planned throughout the week, along with
ad-hoc virtual hallway/water cooler discussions. As a result, many
regular weekly IRC meetings will be canceled. Please take a moment to
review the PTG Etherpad for the Technical Committee [3]. We encourage
community-wide participation on topics such as:
- Community Leaders Meeting (2024-10-21 / 1400 UTC): This event is
open to all project maintainers, especially PTLs and DPLs. We'll cover
several topics of community-wide importance.
- Eventlet Removal (2024-10-21 / 1500 UTC): This session will kick off
discussions around addressing the technical debt associated with
OpenStack services migrating away from the eventlet library.
- ...and many more!
There may be some minor changes to the schedule based on other
cross-project topics being planned. The best way to stay up-to-date is
to join OFTC's #openinfra-events channel or watch the event stream on
the PTG website [2].
This week, Indiana University will be hosting the OpenInfra community
for OpenInfra Days, North America (15-16 Oct 2024). This is a hybrid
event, and both in-person and online registrations are still open. If
you're attending in person, I’d love to catch up and discuss how the
OpenStack TC can assist you. Please don’t hesitate to say hello!
=== Weekly Meeting ===
The OpenStack Technical Committee held its weekly meeting on
2024-10-08 [5]. The meeting was well attended. During the meeting, we
approved a change to mark the "kuryr-kubernetes" and
"kuryr-tempest-plugin" projects as "inactive." This means we won’t be
producing any releases from these projects during the 2025.1 "Epoxy"
release cycle, and the repositories may soon be retired. If you depend
on these projects, please let us know. It’s worth noting that there
have been no releases since the 2024.1 "Caracal" cycle. The TC also
spent time brainstorming PTG topics and scheduling. We discussed a
regression in python-openstackclient that necessitates an update to
the recently released 2024.2 "Dalmatian" release. We took action to
strengthen the CI jobs for python-openstackclient, adding more
scenarios such as upgrade testing with "trunk."
The next weekly IRC meeting of the OpenStack Technical Committee is on
2024-10-15, chaired by Dmitriy Rabotyagov (noonedeadpunk). The agenda
is available on the meeting wiki page [6]. I hope you can join us.
Please remember that you can propose topics by editing the wiki page,
and if you do, please include your IRC nick so we can call on you
during the meeting.
=== Governance Proposals ===
==== Merged ====
- Mark kuryr-kubernetes and kuryr-tempest-plugin inactive:
https://review.opendev.org/c/openstack/governance/+/929698
==== Open for Review ====
- Propose to select the eventlet-removal community goal:
https://review.opendev.org/c/openstack/governance/+/931254
- Propose a pop-up team for eventlet-removal:
https://review.opendev.org/c/openstack/governance/+/931978
- Adding Axel Vanzaghi as PTL for Mistral:
https://review.opendev.org/c/openstack/governance/+/927962
=== How to Contact the TC ===
You can reach the TC in several ways:
1. Email: Send an email with the tag [tc] on this mailing list.
2. Ping us using the 'tc-members' keyword on the #openstack-tc IRC channel.
3. Join us at our weekly meeting: The Technical Committee meets every
Tuesday at 1800 UTC [6].
---
=== Upcoming Events ===
- 2024-10-15: OpenInfra Days NA, Indianapolis [4]
- 2024-10-21: OpenInfra Project Teams Gathering: https://openinfra.dev/ptg/
Thank you for reading!
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
[1] 2025.1 "Epoxy" Release Schedule:
https://releases.openstack.org/epoxy/schedule.html
[2] "Epoxy" PTG Schedule: https://ptg.opendev.org/ptg.html
[3] OpenStack TC PTG Etherpad:
https://etherpad.opendev.org/p/oct2024-ptg-os-tc
[4] OpenInfra Days North America:
https://ittraining.iu.edu/explore-topics/titles/oid-iu/
[5] TC Meeting IRC Log, 2024-10-08:
https://meetings.opendev.org/meetings/tc/2024/tc.2024-10-08-18.00.log.html
[6] TC Meeting Agenda, 2024-10-15:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[watcher] Call for maintainers - MAAS integration
by Douglas Viroel
Hi everyone,
At the last Watcher weekly meeting [1], the team discussed the impacts of
the eventlet removal on Watcher's code, and it was brought up that the
MAAS (Metal as a Service) integration is affected.
At the moment, we do not have any active members in Watcher with MAAS
experience, CI has no job running and testing this integration, and we
have no documentation of how it works.
This is a call for maintainers interested in supporting MAAS in Watcher to
help fill these gaps.
It is worth mentioning that, as discussed at the PTG [2], we also plan to
mark this integration as *experimental* because it currently lacks minimal
testing and documentation (as is also the case for some other
integrations). But we will cover this topic in a different thread.
[1]
https://meetings.opendev.org/meetings/watcher/2025/watcher.2025-05-15-12.01…
[2] https://etherpad.opendev.org/p/apr2025-ptg-watcher#L71
Thanks and regards,
--
Douglas Viroel - dviroel
[glance] Epoxy PTG summary
by Abhishek Kekane
Hi Team,
We held our Epoxy virtual PTG from 21st to 25th October 2024. Thanks to
everyone who joined the virtual PTG sessions. Over Google Meet we had lots
of discussion around different topics for glance, glance + cinder, eventlet
removal, etc. You can find the topics discussed in the etherpad [1], along
with notes from each session and the recordings of each discussion.
Here I am going to highlight a few important topics that we are going to
target this cycle.
# Property protection
Access to extra properties through Glance's public API calls may be
restricted to certain sets of users using a property protections
configuration file, with either role-based or policy-based protection.
During this session we decided to conduct a survey to find out who is
using this feature, whether they use role-based or policy-based
protection, and whether we can remove one of the two and refactor
accordingly. If not, glance will update the property protection
documentation to cover reserved properties.
Recording:
https://drive.google.com/file/d/1WrhkM2dVk6E8KQVDJCt1Bc2aaHi-txGJ/view?usp=…
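For context, Glance's roles-based format keys each section of the property
protections file on a regex over property names. A sketch of what this can
look like (the `x_billing_` namespace and role names here are purely
illustrative, not from the session):

```ini
# glance-api.conf excerpt enabling property protections
[DEFAULT]
property_protection_file = /etc/glance/property-protections.conf
property_protection_rule_format = roles

# property-protections.conf: each section name is a regex matched
# against property names; values list the roles allowed per operation
[^x_billing_.*]
create = admin,billing
read = admin,billing
update = admin
delete = admin
```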
# Periodic job for cache-cleaner and cache-pruner
At the moment we have command-line utilities to perform these jobs, which
need to be configured as cron jobs. If we move this into the service as a
periodic job, we can eliminate maintenance of these command-line tools and
avoid external configuration processes. In this session we decided to
introduce two new admin-only API calls that will help clean the cached
images. This will make for an easier deployment process.
Recording:
https://drive.google.com/file/d/1pmYmVD72zl97gdXRWegYZskDvP50wi6E/view?usp=…
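As a rough illustration of the mechanism being proposed (not Glance's
actual implementation, which is undecided), an in-service periodic job can
be a simple loop driven by an interruptible wait:

```python
import threading

def run_periodically(interval, task, stop_event):
    """Invoke task() every `interval` seconds until stop_event is set."""
    # Event.wait doubles as an interruptible sleep: it returns True as
    # soon as stop_event is set, ending the loop without a full sleep.
    while not stop_event.wait(timeout=interval):
        task()

# Example: a stand-in for cache-pruner logic that stops itself
# after three runs (names here are hypothetical).
calls = []
stop = threading.Event()

def prune_cache():
    calls.append("pruned")
    if len(calls) >= 3:
        stop.set()

run_periodically(0.01, prune_cache, stop)
```

This is the same shape cron provides, but it lives inside the service
process and shuts down cleanly with it.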
# New API to list cached images on all glance nodes
Now that we have a centralized database as one of the cache drivers, we
can add an API to list images cached on different nodes, giving the end
user an overall view of the cache. The original plan was to list all
images cached across the nodes, but during the session we found it better
to get a picture of a single image's cache status across nodes. We will
add a new API call (admin only?) which will accept an image ID as input
and return details of the nodes where that image is cached.
Recording:
https://drive.google.com/file/d/1fv0tBEWbtHcKKfMVLVwc8lPHwN5OM3uh/view?usp=…
# Distributed image download if filesystem is enabled
We already use distributed imports to avoid configuring shared staging
areas across glance nodes; similarly, we can use this mechanism to
download an image when the filesystem driver is used and the operator
wants to avoid configuring a shared filesystem with more than one glance
node. We decided to introduce a new configuration option,
`distributed_download_strategy`, with values `None`, `proxy`, and
`redirect`. If it is `None`, we will return a 404 Not Found error to the
user. If it is `proxy`, we will proxy the request to the node where the
image data is actually present. If it is `redirect`, we will return a
fully accessible image URL that the user can use to issue a new download
request.
Recording:
https://drive.google.com/file/d/1W5mwifcEfne5HJfNRJ84EyuYGc7NCDJK/view?usp=…
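A sketch of how the proposed option could appear in glance-api.conf (the
option name and values come from the session notes; nothing here is
merged, and the final spelling may change):

```ini
[DEFAULT]
# Proposed in this PTG session; subject to change before implementation.
# None     -> return 404 Not Found if the image data is not local
# proxy    -> proxy the request to the node holding the image data
# redirect -> return a fully accessible image URL for a new request
distributed_download_strategy = proxy
```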
# Remove deprecated features/options
* sqlite cache driver - remove in the F cycle
* Glance scrubber - remove in Epoxy
* Windows support - sync with gmann to check whether we should remove it
now or follow the two-cycle deprecation process
* glance-cache-prefetcher, glance-cache-cleaner, glance-cache-pruner -
deprecate these in Epoxy and remove them in the G cycle
Recording:
https://drive.google.com/file/d/1DFmpy6JRIZw9OTPaw19FD4bW7SR36iNF/view?usp=…
# Deprecating python-glanceclient
We need to merge the pending patches in OSC on a priority basis. We will
announce the deprecation of glanceclient on the openstack mailing list,
which will help stakeholders plan their OSC migration over the next two
cycles. The plan is to deprecate the shell in Epoxy, with a deprecation
warning on each command, and then remove glanceclient in the G cycle.
Recording:
https://drive.google.com/file/d/1gL6oEpoRL1igjervLFLkNE6eHVWJp3A2/view?usp=…
# Horizon Feature Gaps - Cross project session with Horizon team
The Horizon team is stepping up to provide a user interface for new
features introduced in glance. You can find the details in the etherpad
[2]. The glance team will help them migrate from glanceclient to OSC, and
will verify and review the new user interface.
Recording:
https://drive.google.com/file/d/12AeTnaQytUhG_tHWN06G78kFYg2YuSGl/view?usp=…
# S3 support
In this session we discussed the feature gaps between S3 and the other
glance backends. The plan is to find out how we can configure the S3 store
using swift or ceph, and then add an upstream CI job to find out whether
any currently supported features fail.
Recording:
https://drive.google.com/file/d/1MKEGh7qR9HgCnlhsa4MDjH6Rpl0OaUXe/view?usp=…
# Add Glance as first-line defense for image format attacks
Here we want glance to act as the primary defender against image-format
security vulnerabilities. You can find the initial idea in the glance spec
[3]. In this session we discussed:
* adding a new disk-format, `gpt`
* leaving `raw` as unstructured data, which nova will reject booting from
* migration of existing images from `raw` to `gpt`
* a LUKS inspector (see the image encryption session for more details)
* the impact on the image conversion plugin if the `gpt` disk-format is
added
Recording:
https://drive.google.com/file/d/1xWM7IMCuZ_UenlMUd_rh1KmeuLY6bzSS/view?usp=…
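For context on what a `gpt` inspector might check: per the UEFI
specification's on-disk layout, a GPT disk carries the ASCII signature
"EFI PART" at the start of LBA 1. This is a sketch of that check, not code
agreed in the session:

```python
import io

GPT_SIGNATURE = b"EFI PART"

def looks_like_gpt(stream, sector_size=512):
    """Return True if the stream carries a GPT header at LBA 1."""
    # The GPT header begins at byte offset `sector_size` (LBA 1) and
    # opens with the 8-byte signature "EFI PART".
    stream.seek(sector_size)
    return stream.read(8) == GPT_SIGNATURE

# A fabricated 1 KiB "disk" with the signature in place:
disk = io.BytesIO(b"\x00" * 512 + GPT_SIGNATURE + b"\x00" * 504)
print(looks_like_gpt(disk))  # True
```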
# Image Encryption(cross project session with cinder and nova)
Here we discussed changes to the currently proposed design in light of
CVE-2024-32498, and how to detect whether an image is encrypted. The plan
is to re-propose the spec with changes such as:
* keeping the qcow format as it is and discovering encrypted qcow images
through metadata (`encryption_key_id`)
* introducing LUKS as a new disk-format
Recording: To be updated*
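On the LUKS inspector idea: both LUKS1 and LUKS2 headers open with the
same 6-byte magic followed by a big-endian version field, so a minimal
probe (a sketch based on the LUKS on-disk format, not agreed code) needs
only the first few bytes of the image:

```python
LUKS_MAGIC = b"LUKS\xba\xbe"

def probe_luks(header: bytes):
    """Return the LUKS version (1 or 2), or None if not a LUKS header."""
    # Bytes 0-5 hold the magic; bytes 6-7 hold a big-endian version.
    if len(header) < 8 or header[:6] != LUKS_MAGIC:
        return None
    return int.from_bytes(header[6:8], "big")

print(probe_luks(LUKS_MAGIC + b"\x00\x02"))   # 2 (a LUKS2 header)
print(probe_luks(b"QFI\xfb" + b"\x00" * 4))   # None (qcow2 magic)
```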
# New Location APIs Adoption in Nova and Cinder
As the image encryption cross-project session took much of the time, we
decided to discuss this topic in the following weekly meeting. The plan is
to make nova and cinder use the new location API calls so that we can get
rid of the split deployment (glance internal vs. glance external) in this
cycle.
# Eventlet removal
It is a community goal to remove eventlet from OpenStack by 2027.2 (see
https://governance.openstack.org/tc/goals/proposed/remove-eventlet.html).
Eventlet is causing issues with every new CPython release (at the time of
writing, tests are hanging on CPython 3.13). This is a SLURP release, so
we cannot make "big" changes, but:
* we can get "easy" fixes out of the way
* we should start working on this now so we can merge patches right at
the start of the F cycle
Glance planning for eventlet removal:
* WSGI server
  - make sure all jobs are running on uwsgi
  - deprecate/disable the eventlet WSGI server
  - remove it in 3-4 cycles
* Plan for the Epoxy cycle
  - deprecate eventlet/WSGI-related config options
  - make devstack configure uwsgi by default
  - migrate jobs to use uwsgi
  - migrate from the eventlet thread pool to a native or I/O thread pool
* Plan for the F cycle
  - fix any issues that occur with uwsgi
* Plan for the G cycle
  - remove the WSGI functionality
Recording:
https://drive.google.com/file/d/1J5nYOCwp2MrrFZAIIvg1YGH4UnaVpk6s/view?usp=…
Apart from the above topics, you can find other miscellaneous topics
discussed in the PTG etherpad [1].
If you have any questions/suggestions, please join us in our weekly
meeting (each Thursday in the #openstack-meeting IRC channel at 1400 UTC).
*PS: Since image encryption was a cross-project session with cinder and
nova, I will update this thread with the recording once cinder publishes
it.
[1] https://etherpad.opendev.org/p/oct2024-ptg-glance
[2] https://etherpad.opendev.org/p/horizon-feature-gap
[3] https://review.opendev.org/c/openstack/glance-specs/+/925111
Thanks and Regards,
Abhishek Kekane
[nova][elections][ptl] Application for the PTL role 2026.1
by René Ribaud
Hello,
I am announcing my new candidacy for the PTL role for Nova and Placement.
Serving as PTL in the Flamingo cycle was both a rewarding and instructive
experience. It allowed me to learn the responsibilities and workload of the
role, build stronger connections across the community, and help contributors
move their work forward.
In the next cycle, I want to continue fostering a welcoming and
collaborative
environment for all contributors.
I will continue and improve our efforts to enhance bug triage, reduce the
number of open bugs, and support timely reviews to help features progress
toward completion.
I am also eager to follow and participate in the Eventlet removal effort,
which began in Flamingo and represents a major step for OpenStack. This is
a particularly interesting challenge—not highly visible to most users,
but crucial for the long-term health, performance, and maintainability of
the project.
My main focus, and the most important thing to me, will remain ensuring
that Nova and Placement stay robust, maintainable, and aligned with our
users’ needs.
Thank you for your trust so far, I’d be happy to continue serving in
this role.
Thanks,
René
[cinder][PTG] 2025.2 Flamingo PTG summary
by Jon Bernard
Hello everyone, thanks for a very productive PTG for the Flamingo cycle.
Below is a brief summary of the highlights. The raw etherpad notes are
here [1] and both the summary and transcribed etherpad notes are
captured in our wiki here [2].
# Eventlet Removal
This cycle we will try to complete the removal of eventlet from our
codebase. We discussed our current usage, including calls to sleep() and
tpool.Proxy(). We agreed to split the work between Eric and me. I hope
we can get these patches up and reviewed with time to spare before the
release. An idea to revive our fake volume driver was raised, to
potentially gain greater confidence in high-concurrency test
configurations.
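For the sleep() calls mentioned above, one common eventlet-free pattern
(a sketch of the general technique, not necessarily what the cinder
patches will do) is to wait on a threading.Event so that long sleeps stay
interruptible at shutdown:

```python
import threading

class PeriodicWorker:
    """Replaces eventlet.sleep() with an interruptible Event.wait()."""

    def __init__(self):
        self._stopped = threading.Event()

    def sleep(self, seconds):
        # Returns True if woken early by stop(), False if the full
        # interval elapsed -- mirroring threading.Event.wait semantics.
        return self._stopped.wait(timeout=seconds)

    def stop(self):
        self._stopped.set()

worker = PeriodicWorker()
worker.stop()
print(worker.sleep(60))  # True: returns immediately instead of blocking
```

A plain time.sleep() would work too, but it cannot be interrupted, which
matters for services that must stop promptly on SIGTERM.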
# Testing Volume Replication
Although volume replication is support by a few of our in-tree drivers,
there are no integration tests that verify correct behaviour in tempest.
Discussions on adding tempest tests lead to Liron being interested in
driving this effort. Changes will need to be made to devstack-ceph
plugin to support deploying multiple ceph clusters, as well as gate
configuration to run all of this in a multi-node environment. There are
a few important moving pieces which may take some time to sort out, but
we agreed to commit to working towards a solution in this cycle.
# Follow-up from Epoxy
We have a few patch series that we didn't have time to review by the
Epoxy cycle deadline: volume type metadata, encryption format, and
dm-clone, to name a few. Reviewing and gathering feedback needs to be
prioritized in the first half of the cycle to make sure we can
confidently land these in Flamingo.
# CI status
We should review our 3rd-party CI instances to make sure they're
actually running, running the correct tests in the context of the
related change, and reporting results that a gerrit reviewer can view.
It was noted that some of them fall short of some or all of these
requirements. It would be helpful to gather their current status so
that we can raise awareness with the vendors.
# 2023.2 Final Stable Release
On April 30, the stable/2023.2 branch will have its final release.
This will mark the end of cinderlib. We still have a few backports that
need reviews; see the etherpad for details.
# Backup Improvements
crohmann raised two patches that both improve the speed of
cinder-backup's startup routine and decouple a volume's backup status
from its overall status. Eric and Brian agreed to review them to help
move these patches forward.
# RBD Geometry Option
We have an old patch to make RBD's disk geometry configurable. This
reportedly improves the performance of Windows guests significantly. We
agreed to make sure it gets reviewed this cycle.
# Stephen's Patches
We have 3 patches carried over from last cycle that need review. These
include an OpenAPI series, a patch to map block storage AZs to Compute
AZs, and updates to OSC. It would be really nice to get these reviewed
in this cycle.
# Tobias' Patches
We have several patches from Tobias, all related to cinder-backup
improvements, including adding backup service support to
os-availability-zone, support for filters in query parameters, the
addition of a summary API, and potentially adding a `type` field to the
API to open the possibility of supporting backup types beyond just
`full` or `incremental`.
[1] https://etherpad.opendev.org/p/apr2025-ptg-cinder
[2] https://wiki.openstack.org/wiki/CinderFlamingoPTGSummary
--
Jon