[Openstack] Memory leaks from greenthreads

Vishvananda Ishaya vishvananda at gmail.com
Tue Mar 6 20:33:36 UTC 2012


There isn't an easy repro case for the problem, which is why it snuck in in the first place. IIRC it only happened when running with reasonable concurrency on a multi-machine install. One thing that was very obvious during debugging, though, is that the number of threads started was incorrect: with a db_pool size of 10, for example, you often got 14 threads, and even a db_pool size of 1 would start 3+ threads and produce strange results.
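(Not a repro of the eventlet bug, just an illustration of the invariant it violated: a pool of size N should never have more than N concurrent users. A minimal sketch with plain stdlib threads, no eventlet required; all names are hypothetical.)

```python
import threading

class BoundedPool:
    """Hand out at most `size` 'connections' at once.

    The semaphore caps concurrent holders, so `peak` should never
    exceed the pool size -- the property the buggy db_pool broke.
    """
    def __init__(self, size):
        self._sem = threading.Semaphore(size)
        self._lock = threading.Lock()
        self.in_use = 0
        self.peak = 0  # highest concurrent usage observed

    def get(self):
        self._sem.acquire()
        with self._lock:
            self.in_use += 1
            self.peak = max(self.peak, self.in_use)

    def put(self):
        with self._lock:
            self.in_use -= 1
        self._sem.release()

def worker(pool, barrier):
    pool.get()
    try:
        barrier.wait(timeout=5)  # force three workers to overlap
    finally:
        pool.put()

pool = BoundedPool(3)
barrier = threading.Barrier(3)
threads = [threading.Thread(target=worker, args=(pool, barrier))
           for _ in range(9)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(pool.peak)  # prints 3 -- never more than the pool size
```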

If you can verify that it works in a reasonably sized install with multiple concurrent requests then we can try putting it back in experimentally with a flag to gate it.

Vish

On Mar 6, 2012, at 10:44 AM, Chris Behrens wrote:

> 
> I wonder if the issue is gone as well.
> 
> After the db_pool code was removed, I was doing some separate testing of sqlalchemy's with_lockmode(), and I was able to reproduce a traceback from sqlalchemy that looked extremely similar or identical to the traceback we were hitting when using eventlet's db_pool. This was many months ago; I moved on to other things and didn't keep any notes on what I was doing. A couple of months ago I decided I should go look at it again, and I could no longer reproduce the issue, doing what I believe was the same test I was doing initially.
> 
> So: I wasn't testing db_pool, but I thought I was seeing the same traceback. And then I couldn't reproduce it anymore. So I wonder if these were the same issue, and whether the issue is gone now.
> 
> I say it's worth trying the db_pool code again, but I'd make it an option so that we can enable or disable it.
> 
> - Chris
> 
> On Mar 6, 2012, at 9:39 AM, Yuriy Taraday wrote:
> 
>> So far I've had no luck trying to reproduce the problem that appeared
>> with db_pool. Maybe it's gone?
>> Can someone walk me through the way to that bug? I think we should fix
>> it if it's not fixed already, bring back db_pool, and then eventlet
>> will be all good again. Am I right?
>> 
>> Kind regards, Yuriy.
>> 
>> 
>> 
>> On Fri, Mar 2, 2012 at 04:57, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
>>> I agree.  It would be awesome if someone could actually make it work.  We
>>> had a totally broken version using the eventlet db pool 6 months ago.
>>> 
>>> Vish
>>> 
>>> On Mar 1, 2012, at 4:20 PM, Joshua Harlow wrote:
>>> 
>>> Sad, especially since so much is using the database :-(
>>> 
>>> On 3/1/12 2:43 PM, "Adam Young" <ayoung at redhat.com> wrote:
>>> 
>>> On 03/01/2012 02:48 PM, Vishvananda Ishaya wrote:
>>>> On Mar 1, 2012, at 9:39 AM, Adam Young wrote:
>>>> 
>>>>> What would the drawbacks be? The first thing people would probably
>>>>> point to in Eventlet's favor is performance. I don't have hard numbers
>>>>> comparing Eventlet to Apache HTTPD, but I do know that Apache is used in
>>>>> enough high-volume sites that I would not be overly concerned. The
>>>>> traffic in an OpenStack deployment to a Keystone server is going to be
>>>>> about two orders of magnitude less than any other traffic, and is highly
>>>>> unlikely to be the bottleneck.
>>>> How did you arrive at this number? Every user has to hit keystone before
>>>> making a request to any other service (unless they already have a token) and
>>>> each service needs to authenticate that token. Any request that hits
>>>> multiple services will hit keystone multiple times.  Without caching,
>>>> keystone is by far the busiest service in an openstack install. Caching
>>>> should fix some of this, but I don't know that I would expect it to be two
>>>> orders of magnitude less.
>>>> 
>>>> Vish
>>> 
>>> 
>>> Seeing as the SQLAlchemy code is blocking on each request, I suspect
>>> that performance is now soundly *not* a reason to want to stick with
>>> eventlet. My statement that Eventlet is performant was based on the
>>> assumption that the benefits of using greenthreads are realized. It
>>> looks like that is not the case.
>>> 
>>> _______________________________________________
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to     : openstack at lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://help.launchpad.net/ListHelp
>>> 
>> 




