[openstack-dev] memcache connections and multiprocess api

Pitucha, Stanislaw Izaak stanislaw.pitucha at hp.com
Fri Oct 11 15:08:02 UTC 2013


Hi all,
I'm seeing a lot of memcache connections from my api hosts to memcached, and the number is far higher than I'd expect.
The way I understand the code at the moment, there's a configured number of api workers (20 in my case), and each worker gets its own greenthread pool (currently left at the default size of 1000). That seems unreasonably large to me. I'm not sure what the model for sql connections is, but memcache at least ends up with a connection per greenthread... and in practice those connections almost never disconnect.
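A rough back-of-the-envelope for the worst case, assuming one memcache client per greenthread (the numbers are just the ones from my deployment):

    api_workers = 20
    wsgi_pool_size = 1000        # eventlet greenthread pool per worker
    clients_per_greenthread = 1  # one memcache client per greenthread
    worst_case = api_workers * wsgi_pool_size * clients_per_greenthread
    print(worst_case)            # 20000 potential connections from one host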

This results in hundreds of idle connections to memcache at the moment, which quickly hits any reasonable open files limit on the memcached side.
Has anyone seen this behavior before and tried tweaking the pool_size? I'd expect that 1000 greenthreads in one process's pool is too many for any typical use case, apart from trying not to miss bursts of connections (but those will have to wait on the db and rpc pools anyway, and there's a 128-connection listen backlog for that).
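On the pool_size angle, a minimal sketch of what I mean, assuming you can hand eventlet.wsgi a custom pool where the api server sets up its listener (standalone here just to illustrate, not the actual nova wiring):

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['ok\n']

    # cap concurrency at 100 greenthreads instead of the default 1000
    pool = eventlet.GreenPool(100)
    wsgi.server(eventlet.listen(('0.0.0.0', 8080)), app, custom_pool=pool)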

So... has anyone looked at fixing this in the context of memcache connections? Lower wsgi pool_size? Timing out wsgi greenthreads? Pooling memcache connections?
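On the pooling idea, something along these lines is what I had in mind: a bounded pool of clients shared by all greenthreads in a worker, built on eventlet.pools.Pool. Just a sketch, with the pool size and server list made up:

    import eventlet
    eventlet.monkey_patch()

    import memcache
    from eventlet import pools

    MEMCACHE_SERVERS = ['127.0.0.1:11211']  # placeholder

    class MemcachePool(pools.Pool):
        def create(self):
            return memcache.Client(MEMCACHE_SERVERS)

    # at most 10 connections per worker process, however many
    # greenthreads the wsgi pool allows
    memcache_pool = MemcachePool(max_size=10)

    def cache_get(key):
        # item() checks a client out and returns it when the with
        # block exits; other greenthreads wait until one is free
        with memcache_pool.item() as client:
            return client.get(key)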

Regards,
Stanisław Pitucha
Cloud Services 
Hewlett Packard



