On Thu, Apr 17, 2025 at 7:07 PM Clay Gerrard <clay.gerrard@gmail.com> wrote:
> - Swift: Evaluating alternatives with impressive performance results (FastWsgi showing 10x better performance!).

A swift core maintainer did a "hello world" application benchmark comparison of eventlet & fast-wsgi and reported something similar:

FastWSGI server is about ~10 times faster than Eventlet WSGI server

AFAIK it's non-trivial and unproven whether a comparison like that would hold up in a relatively complex application that's actually making backend requests to other http/rest APIs, or talking to ancillary services like memcache and auth systems, inline w/ every request/response.

This mailing thread is just a summary of all the things that have been said about the Eventlet removal during all the sessions of the PTG, so this sentence is just a quote of the same thing you refer to.
You can find further details here https://removal.eventlet.org/guide/openstack/flamingo-ptg/
 

IIUC, FastWSGI is using "one thread per request" (actually, re-reading the code, that's no longer obvious to me: it's using libuv to handle connections and parse sockets, but are all the call_wsgi_application calls just blocking!?). IMHO, on modern systems a WSGI server using "one thread per request" is probably reasonable for the kind of concurrency a swift proxy server running CPU-intensive operations like encryption and erasure coding (and even decoding JWTs) will want to handle. I.e., scale wide: more proxies doing less concurrent RPS. E.g., with 100s of proxy-application worker nodes we turn max_clients down from the default 1000 to closer to 100 as part of a solution to provide some back-pressure against overly aggressive concurrent request spikes. On the storage nodes, OTOH, one-thread-per-request would probably *help* provide more *consistent* performance in the face of full disks hitting seeks and flushes, when their blocking filesystem and sqlite database calls are starving the hub anyway.
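For what it's worth, a max_clients-style back-pressure knob doesn't even have to live in the server: the idea can be sketched as plain WSGI middleware. Everything below is illustrative (nothing to do with FastWSGI's actual internals), just the "cap in-flight requests and shed the rest" shape:

```python
# Sketch: cap concurrent in-flight requests in a one-thread-per-request
# WSGI server, analogous to turning eventlet wsgi's max_clients down
# from 1000 to ~100 for back-pressure. Illustrative names throughout.
import threading

def bounded(app, max_clients=100):
    gate = threading.BoundedSemaphore(max_clients)

    def middleware(environ, start_response):
        if not gate.acquire(blocking=False):
            # Shed load instead of queueing unboundedly.
            start_response('503 Service Unavailable',
                           [('Retry-After', '1')])
            return [b'overloaded']
        try:
            return app(environ, start_response)
        finally:
            # Caveat: a real version would hold the slot until the
            # response iterable is closed, or streaming responses
            # would escape the cap.
            gate.release()
    return middleware
```

Alternatively the cap can sit on the accept side (block before spawning the handler thread), which pushes back-pressure into the listen backlog instead of returning 503s.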

The "hardest" part of a swift migration off eventlet from my perspective is the gratuitous use of lightweight concurrency to talk to multiple backend services *within a request* - i.e. each request greenlet spawns/waits on *multiple* greenlets (think trio nursery) like "connect to 12 backend servers and stream each EC chunk for every client segment concurrently" or "start a metadata server update request while waiting on a thread doing an fsync and if either errors add a entry to a repair journal before responding" using custom wrappers around GreenPool like a GreenAsyncPile https://github.com/NVIDIA/swift/blob/master/swift/common/utils/__init__.py#L1927

Can a FastWSGI one-thread-per-request server still share a ThreadPoolExecutor if you want to offload waiting on a blocking call while running other coroutines talking to non-blocking APIs (like a socket) within each request thread?  I don't see why not...
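That mix-and-match is at least straightforward on asyncio via run_in_executor: a shared pool absorbs the blocking call while the event loop keeps servicing the non-blocking coroutines in the same request. A minimal sketch (slow_fsync and update_metadata are made-up stand-ins for the fsync/metadata-update pair above):

```python
# Sketch: offload a blocking call to a shared ThreadPoolExecutor while
# another coroutine in the same request keeps running on the event loop.
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

POOL = ThreadPoolExecutor(max_workers=4)   # shared across requests

def slow_fsync():
    time.sleep(0.05)                       # blocking syscall stand-in
    return "synced"

async def update_metadata():
    await asyncio.sleep(0.01)              # non-blocking backend chatter
    return "updated"

async def handle():
    loop = asyncio.get_running_loop()
    fsync = loop.run_in_executor(POOL, slow_fsync)   # off the event loop
    meta = asyncio.create_task(update_metadata())    # stays on the loop
    # gather preserves argument order regardless of completion order
    return await asyncio.gather(fsync, meta)

results = asyncio.run(handle())
```

Unlike eventlet's tpool, the handoff is explicit, but the semantics ("don't starve the hub on a blocking syscall") are the same.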

Honestly, the idea of a focused effort to wean a large system like swift off of eventlet is daunting to say the least; it's certainly difficult to do "optionally" or "experimentally". It seems like once we port some small backend service like ... idk, the account-auditor - even "just" to use some kind of asyncio wrapper ... I think you just want to *delete* any kind of "optional" monkey patching?  Or is there some way towards a magic translator (awaitlet?) that lets us experimentally trade "transparent single thread w/ implicit concurrency" for "multiple threads each with explicit cooperative concurrency"? IIUC, at some point near the bottom of the layers of translation wrappers (which help avoid plumbing "async def" and "return await" all the way from your "__main__ asyncio.run" to the very bottom of your app) - you DO *eventually* get to the bottom and have to s/from eventlet.green.http.client import HTTPConnection/from some_async_lib import AwaitableHTTPConnectionLike/ *right*???  Like, at a minimum you have to "__main__ s/monkey_patch/asyncio.run/" - then you get to play games where you can say "return awaitlet" instead of every function being a generator/coroutine ... but whereas "chunk = sock.read()" would "just" block the greenlet you'd spawned while "doing other stuff in that same request", we do now actually have to write *some* async def coroutines?  Or maybe at the top you "if BETTER_CONCURRENCY: asyncio.run else monkey_patch" and at the bottom do "return AwaitableConnection if BETTER_CONCURRENCY else AwaitableLookingButActuallySomehowJustBoringBustedGreenTransparentBlockingConnection"???
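To make the top-of-the-stack half of that toggle concrete, here's a minimal sketch of the "__main__ s/monkey_patch/asyncio.run/" fork. BETTER_CONCURRENCY, async_main, and green_main are all made up for illustration, not swift's actual plan, and the bottom-of-the-stack connection swap is the part this sketch conveniently dodges:

```python
# Sketch: one entry point that either runs the legacy monkey-patched
# green path, or the same top-level logic under asyncio. Hypothetical
# names throughout (BETTER_CONCURRENCY, async_main, green_main).
import asyncio
import os

BETTER_CONCURRENCY = os.environ.get("BETTER_CONCURRENCY") == "1"

async def async_main():
    # explicit "async def" / "await" only exists on this side of the fork
    return "ran-on-asyncio"

def green_main():
    # unchanged eventlet code path; blocking-looking calls stay as-is
    return "ran-on-eventlet"

def main(better_concurrency=BETTER_CONCURRENCY):
    if better_concurrency:
        # the "__main__ s/monkey_patch/asyncio.run/" substitution
        return asyncio.run(async_main())
    import eventlet                    # only imported on the legacy path
    eventlet.monkey_patch()
    return green_main()
```

The hard part is that async_main and green_main can't actually share the bottom layers until something like the AwaitableConnection-vs-green-HTTPConnection swap exists, which is the open question above.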

I'd love to do some targeted experiments to get a feel for the options; I'm not sure where to start!

On Thu, Apr 17, 2025 at 10:44 AM Herve Beraud <hberaud@redhat.com> wrote:
Following the intensive discussions at the April 2025 Project Teams Gathering about the Eventlet removal initiative, this thread summarizes the current status, challenges, and strategies of all the OpenStack teams from a cross-project perspective.

A full and detailed report is available at: https://removal.eventlet.org/guide/openstack/flamingo-ptg/

## Python 3.13 Compatibility

The countdown has begun! As documented in the April 2025 PTG discussions, the OpenStack community is accelerating efforts to remove Eventlet dependencies across all projects. This initiative has become critical due to Eventlet's compatibility problems with Python 3.13 and the upcoming "GILectomy" (PEP 703).

With Ubuntu 25.04 planning to ship Python 3.13 by default, we're facing a hard deadline for this work:

https://discourse.ubuntu.com/t/plucky-puffin-release-schedule/36461

## Migration Status

### Significant Progress
- Octavia: Fully migrated since 2017! Their approach is now documented as a community case study.
- Mistral: Almost there with a comprehensive migration approach.
- Neutron: Significant progress with numerous patches already merged.
- Glance: Can be deployed without Eventlet, though some optional features still need work.

### Work in Progress
- Nova: Planning a service-by-service migration with dual-mode support during transition.
- Swift: Evaluating alternatives with impressive performance results (FastWSGI showing 10x better performance!).
- Manila, Cinder, Heat: All have active plans for the Flamingo cycle.
- Designate, Blazar, Watcher: Starting their migration journeys.
- Ironic: Facing complex challenges, particularly with the Ironic Python Agent component.

## Technical Spotlight: Oslo.service's Threading Backend

A key development is the new threading backend for oslo.service that eliminates the Eventlet dependency:
- Review in progress: https://review.opendev.org/c/openstack/oslo.service/+/945720
- No longer provides WSGI support
- Each service will need to deprecate implementations dependent on Eventlet's WSGI server

## Migration Strategies: One Size Doesn't Fit All

Projects are adopting diverse approaches:

- Dual-mode support: Nova and Glance are supporting both Eventlet and native threads during transition
- Canary approach: Swift is considering starting with proxy nodes
- Component-by-component: Cinder is starting with the Volume Manager
- Complete replacement: Heat is planning a full discontinuation of WSGI server implementations

## Nova's Detailed Roadmap

Nova has a comprehensive two-cycle plan:

Flamingo Cycle (2025.2):
- API modernization
- Architecture updates with environment variable controls
- Performance improvements
- Test transition to native threading

Guppy Cycle (2026.1):
- Core event loop conversion
- Adoption of oslo.service's new threading backend

## Issues to Watch: RabbitMQ Heartbeat Problems

The long-running RabbitMQ heartbeat story, in brief:
- Timeouts and API failures in "green" environments
- Partial solution using pthreads exists but has logging issues
- Track the fix: https://review.opendev.org/c/openstack/oslo.log/+/937729

## How to Get Involved

- Join the #openstack-eventlet-removal channel on OFTC
- Review the official guide: https://removal.eventlet.org/
- Look for patches under the "eventlet-removal" topic: https://review.opendev.org/q/prefixtopic:%22eventlet-removal%22

## Recommended Reading

For teams starting their migration:
1. Official goal documentation: https://governance.openstack.org/tc/goals/selected/remove-eventlet.html
2. Migration preparation guide: https://removal.eventlet.org/guide/preparing-for-migration/
3. Octavia case study: https://removal.eventlet.org/guide/case-studies/octavia/

## Stay Updated

- The complete version of this report is available online at: https://removal.eventlet.org/guide/openstack/flamingo-ptg/
- Track the progress of the migration across all projects at: https://removal.eventlet.org/guide/openstack/#migration-status

Thanks for reading!

PS: This PTG summary is based on the etherpads following the April 2025 PTG discussions (https://ptg.opendev.org/etherpads.html). For more details, refer to the full report (https://removal.eventlet.org/guide/openstack/flamingo-ptg/).


--
Clay Gerrard
210 788 9431


--
Hervé Beraud
Principal Software Engineer at Red Hat
irc: hberaud