AFAIK we are running vanilla Swift. Clients usually connect to the Swift endpoint by running "swift" or "openstack container" commands, and Swift uses exabgp to route the traffic. I'm not exactly sure what the Java library is doing, but I can dig more into that if it would help. I see swift_proxy_server and swift_haproxy containers running on every Swift node. Is that not the normal configuration?

According to [1] it's pretty easy to reproduce: "You can reproduce this by issuing a GET request for a few hundred MB file and never consuming the response, but keep the client socket open. Swift will log a 499 but the socket does not always close." (A minimal repro sketch is included at the end of this message.)

On Friday, June 24, 2022, 12:02:05 PM EDT, Clay Gerrard <clay.gerrard@gmail.com> wrote:

On Fri, Jun 24, 2022 at 10:29 AM Albert Braden <ozzzo@yahoo.com> wrote:
"Having another look at that issue, it sounds like slow client shouldn't be handled by OpenStack services but rather with a load balancer, especially if the service is Internet facing"
I don't understand what is being recommended here.
I think they were suggesting that an http proxy application - maybe haproxy - will have more options to protect network resources from misbehaving clients than the swift proxy application, like kicking off keep-alive connections after a while, or slow clients that tie up resources.
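For illustration, the kind of protection being described maps to haproxy's client-side timeouts. A minimal sketch, with made-up section names, addresses, and values (not taken from this deployment):

    # Illustrative haproxy timeouts only; names and values are assumptions, tune for real traffic.
    defaults
        mode http
        timeout connect         5s
        timeout client          30s    # drop clients that stall while receiving a response
        timeout http-request    10s    # drop clients that never finish sending a request
        timeout http-keep-alive 10s    # close idle keep-alive connections
        timeout server          60s

    frontend swift_fe
        bind *:8080
        default_backend swift_be

    backend swift_be
        server proxy1 127.0.0.1:8081 check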
We have 60 Swift servers, and customer traffic goes directly to those servers. It seems like a load-balancer would be a performance-reducing bottleneck.
That's cool, do you use round robin dns or something?
Is there any hope of getting this bug fixed?
If we can reproduce the problem you're seeing, there's some chance we could offer a solution through just a code change, but it's going to be difficult if the repro requires haproxy in the pipeline. If there is a problem w/o haproxy, it might have more to do with eventlet.wsgi or python's base http server than swift... can you confirm the issue occurs when clients talk directly to the python/eventlet/swift application? -- Clay Gerrard
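For reference, here is a minimal sketch of the slow-client reproduction quoted from [1] above, pointed directly at a swift proxy node so haproxy is out of the picture. The endpoint URL, token, and object path are placeholders, not values from this thread:

    # Minimal slow-client repro sketch (illustrative; endpoint/token/object are placeholders).
    # Open a GET for a large object, never read the body, and hold the client socket open.
    import time
    import requests

    PROXY_URL = "http://swift-proxy.example.com:8080"     # hypothetical proxy endpoint
    TOKEN = "AUTH_tk_placeholder"                          # hypothetical auth token
    OBJ = "/v1/AUTH_test/container/large-object"           # hypothetical few-hundred-MB object

    resp = requests.get(PROXY_URL + OBJ,
                        headers={"X-Auth-Token": TOKEN},
                        stream=True)   # stream=True: headers are read, body is not consumed
    print("status:", resp.status_code)

    # Hold the connection open without reading; Swift should eventually log a 499.
    # Watch the proxy node's established sockets (e.g. with ss/netstat) to see whether they close.
    time.sleep(600)
    resp.close()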