[Openstack] Memory leaks from greenthreads

Johannes Erdfelt johannes at erdfelt.com
Wed Feb 29 22:32:34 UTC 2012


On Wed, Feb 29, 2012, Joshua Harlow <harlowja at yahoo-inc.com> wrote:
> Just a thought I was having, that others might want to chime in on.
> 
> Has there been any thinking around only using eventlet/greenlet for
> webserver endpoints and using something like multiprocessing for
> everything else?
> 
> I know it's a fundamental change, but it would force people to think
> about how to break up their code into something that would work with
> a message-passing architecture (this is already happening with
> nova + rabbitmq). Nova is a good example, but my thought was to go even
> further and have anything that needs to run for a long time (i.e. an
> equivalent of a nova manager) that is shared inside an application also
> be a separate "process" with a queue for message passing. Then maybe
> eventlet/greenlet isn't needed at all? This would force good
> interfaces, and we wouldn't have to worry about missing a monkey patch.
> Maybe the Python people plan for multiprocessing to replace
> eventlet/greenlet in the end anyway?
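
If I follow, every long-running service becomes its own OS process that
only talks to the rest of the system through a queue. A minimal sketch
of that shape using multiprocessing directly (all names made up; this
isn't nova code):

    import multiprocessing

    def handle(msg):
        # Hypothetical handler; a real manager would dispatch on the
        # message contents.
        print('got message: %r' % (msg,))

    def manager_loop(queue):
        # The long-running "manager" is a separate OS process sharing
        # nothing with the caller; everything arrives as a message.
        while True:
            msg = queue.get()
            if msg is None:  # shutdown sentinel
                break
            handle(msg)

    if __name__ == '__main__':
        queue = multiprocessing.Queue()
        worker = multiprocessing.Process(target=manager_loop, args=(queue,))
        worker.start()
        queue.put({'method': 'run_instance', 'args': {}})
        queue.put(None)
        worker.join()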

I personally would be for an effort to remove eventlet. It does a pretty
good job, but monkey patching is error prone. For instance, my patch for
this eventlet memory leak also had to work around lockfile patching the
threading.Thread object.
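
The failure mode is almost always ordering: monkey patching swaps out
stdlib objects in place, so anything that grabbed a reference before the
patch ran keeps the native version. A contrived example of the hazard:

    import threading
    NativeThread = threading.Thread    # reference taken before patching

    import eventlet
    eventlet.monkey_patch()            # threading.Thread is now green

    print(threading.Thread is NativeThread)  # False: two Thread types
                                              # now coexist in the process

Any library that replaces or subclasses threading.Thread itself, the way
lockfile does, is racing against exactly that.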

The biggest problem is how the exception context gets cleared when
eventlet switches greenthreads. It's very hard to determine which
exception handlers need a sprinkling of save_and_reraise_exception() to
work around that issue. That isn't the only reason that function exists,
but it's difficult to know when it's necessary.
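
Roughly, the helper snapshots sys.exc_info() before the cleanup code
runs and re-raises it afterward, so a greenthread switch inside the
handler can't clobber it. A sketch of the idea (Python 2 syntax, not the
exact nova code):

    import contextlib
    import sys

    @contextlib.contextmanager
    def save_and_reraise_exception():
        # Snapshot the in-flight exception before cleanup runs; if the
        # cleanup does I/O, eventlet can switch greenthreads and the
        # exception context gets replaced or cleared underneath us.
        type_, value, tb = sys.exc_info()
        try:
            yield
        except Exception:
            raise                  # the cleanup itself failed; surface that
        raise type_, value, tb     # re-raise the saved exception

    def spawn_instance():
        raise RuntimeError('boot failed')  # stand-in for a failing operation

    def cleanup_networking():
        pass  # stand-in; imagine this yields to the eventlet hub

    try:
        spawn_instance()
    except Exception:
        with save_and_reraise_exception():
            cleanup_networking()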

That said, the work won't be trivial, especially with things like the
connection pooling for XenAPI.
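
To make that concrete: the xenapi driver keeps a pool of logged-in
sessions that greenthreads borrow and return, something along these
lines, using eventlet.pools as an illustration (URL and credentials made
up; not the actual driver code):

    import eventlet.pools
    import XenAPI  # the XenServer bindings

    class XenAPISessionPool(eventlet.pools.Pool):
        # Caps how many XenAPI connections one process holds open;
        # greenthreads can share the pool because they share the process.
        def create(self):
            session = XenAPI.Session('https://xenserver.example.com')
            session.login_with_password('root', 'secret')
            return session

    pool = XenAPISessionPool(max_size=4)
    session = pool.get()
    try:
        session.xenapi.VM.get_all()
    finally:
        pool.put(session)

Split that across OS processes and each one needs its own pool and its
own logins, which is part of why the conversion isn't mechanical.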

JE