So yes, unfortunately there's no good fix because of how uWSGI works. The default in OpenStack Helm is to run a single uWSGI process, which would normally be the Kubernetes way (no process manager inside the pod; scale horizontally with more pods instead). But with only one process, uWSGI can't even answer the Kubernetes health check while it's servicing a request, so Kubernetes marks the pod unhealthy and sends it a signal to shut down. Except uWSGI has never handled signals correctly. OpenStack Helm tries to work around that by setting
https://opendev.org/openstack/openstack-helm/src/commit/79d4b68951d17a00d8455d8ff26ee51cadbe2bf1/glance/values.yaml#L424 to “gracefully_kill_them_all”. Unfortunately that just spikes the HTTP router, and uWSGI dies on the next epoll() fire, so it never finishes the in-flight request.
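You can see the probe-starvation part of this without uWSGI at all. Here's a toy single-worker HTTP server (standing in for uWSGI with processes = 1): a slow request occupies the only worker, and a health check with a kubelet-style 1-second timeout fails. The paths and timings are made up for illustration:

```python
import threading, time, http.server, urllib.request

# Toy single-worker server: like uWSGI with processes = 1, it handles
# exactly one request at a time, so a slow request starves /healthz.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/slow":
            time.sleep(2)  # simulates a long-running API request
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep the demo quiet

srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)  # single-threaded
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

# Fire the slow request in the background...
threading.Thread(
    target=lambda: urllib.request.urlopen(f"http://127.0.0.1:{port}/slow"),
    daemon=True,
).start()
time.sleep(0.2)  # let it reach the worker first

# ...then act as the kubelet liveness probe with a 1s timeout.
try:
    urllib.request.urlopen(f"http://127.0.0.1:{port}/healthz", timeout=1)
    probe_ok = True
except OSError:  # covers socket timeouts and URLError
    probe_ok = False

print(probe_ok)  # False: the only worker was busy, so the probe failed
```

With more than one worker, the health check would land on an idle one and succeed, which is exactly why bumping the process count papers over the problem.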
The “hack” is to increase the number of uWSGI processes, so a slow request can no longer monopolize the only worker. But the real fix is to move away from uWSGI entirely.
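In plain uWSGI terms the workaround looks roughly like this (this is a hand-written ini sketch, not the chart's actual values keys, and the worker count is an arbitrary example):

```ini
[uwsgi]
; Workaround: run several workers so one slow request can't
; starve the Kubernetes probe endpoint.
processes = 4
; Make SIGTERM actually terminate the instance instead of
; uWSGI's default "brutal reload" behavior.
die-on-term = true
```

In OpenStack Helm you'd express the equivalent through the chart's values overrides rather than a raw ini file.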
Hope that helps.
—
Doug
On Feb 1, 2025, at 7:00 AM, cbh12032@gmail.com wrote:
Did you find the solution?
I ran into exactly the same problem as you.