[openstack-dev] [Neutron] The three API server multi-worker process patches.

Baldwin, Carl (HPCS Neutron) carl.baldwin at hp.com
Fri Sep 6 16:35:25 UTC 2013


This is a great lead on 'pool_recycle'.  Thank you.  Last night I was
poking around in the sqlalchemy pool code but hadn't yet come to a
complete solution.  I will do some testing on this today and hopefully
have an updated patch out soon.
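
For anyone else reproducing this, the usual SQLAlchemy advice for forked
workers is to have each child discard the pool it inherited from the parent
and open its own connections.  Very roughly (untested sketch; serve_api() is
just a stand-in for the real worker loop, and the URL is a placeholder):

import os
from sqlalchemy import create_engine

# Placeholder URL; the real one comes from Neutron's configuration.
engine = create_engine('mysql://user:secret@127.0.0.1/neutron')

def serve_api():
    """Stand-in for the per-worker WSGI serve loop."""
    pass

def start_worker():
    pid = os.fork()
    if pid == 0:
        # Child: throw away the connection pool inherited from the parent
        # so this worker opens fresh MySQL connections instead of sharing
        # sockets with its siblings.
        engine.dispose()
        serve_api()
        os._exit(0)
    return pid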

Carl

From:  Yingjun Li <liyingjun1988 at gmail.com>
Reply-To:  OpenStack Development Mailing List
<openstack-dev at lists.openstack.org>
Date:  Thursday, September 5, 2013 8:28 PM
To:  OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
Subject:  Re: [openstack-dev] [Neutron] The three API server multi-worker
process patches.


+1 for Carl's patch; I have abandoned mine.

As for the `MySQL server has gone away` problem, I fixed it by setting
'pool_recycle' to 1 in db/api.py.
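
That value just gets passed through to create_engine, roughly like this
(the URL is a placeholder, and the exact plumbing in db/api.py may look a
bit different):

from sqlalchemy import create_engine

# pool_recycle=1 tells SQLAlchemy to close and reopen any pooled
# connection that is more than one second old, so a stale connection is
# never reused for long.
engine = create_engine('mysql://user:secret@127.0.0.1/neutron',
                       pool_recycle=1)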

On Friday, September 6, 2013, Nachi Ueno wrote:

Hi Folks

We chose https://review.openstack.org/#/c/37131/ as the patch to move
forward with.  We are also continuing the discussion on that review.

Best
Nachi



2013/9/5 Baldwin, Carl (HPCS Neutron) <carl.baldwin at hp.com>:
> Brian,
>
> As far as I know, no consensus was reached.
>
> A problem was discovered that happens when spawning multiple processes.
> In my testing, the mysql connection seems to "go away" within 10-60
> seconds of the workers being spawned, causing a seemingly random API call
> to fail.  After that, everything is okay.  This must be due to some
> interaction between forking the process and the mysql connection pool.
> It needs to be solved, but I haven't had the time to look into it this
> week.
>
> I'm not sure if the other proposal suffers from this problem.
>
> Carl
>
> On 9/4/13 3:34 PM, "Brian Cline" <bcline at softlayer.com> wrote:
>
>>Was any consensus on this ever reached? It appears both reviews are still
>>open. I'm partial to review 37131 as it attacks the problem more
>>concisely and, as mentioned, combines the efforts of the two more
>>effective patches. I would echo Carl's sentiment that it's an easy
>>review, minus the few minor behaviors discussed on the review thread
>>today.
>>
>>We feel very strongly about these making it into Havana -- being confined
>>to a single neutron-server instance per cluster or region is a huge
>>bottleneck. It is essentially the only controller process showing massive
>>CPU churn in environments with constant instance churn or sudden large
>>batches of new instance requests.
>>
>>In Grizzly, this behavior caused addresses not to be issued to some
>>instances during boot, because quantum-server thought the DHCP agents had
>>timed out and were no longer available, when in reality they were just
>>backlogged (waiting on quantum-server, it seemed).
>>
>>Is it realistically looking like this patch will make the cut for h3?
>>
>>--
>>Brian Cline
>>Software Engineer III, Product Innovation
>>
>>SoftLayer, an IBM Company
>>4849 Alpha Rd, Dallas, TX 75244
>>214.782.7876 direct  |  bcline at softlayer.com
>>
>>
>>-----Original Message-----
>>From: Baldwin, Carl (HPCS Neutron) [mailto:carl.baldwin at hp.com]
>>Sent: Wednesday, August 28, 2013 3:04 PM
>>To: Mark McClain
>>Cc: OpenStack Development Mailing List
>>Subject: [openstack-dev] [Neutron] The three API server multi-worker
>>process patches.
>>
>>All,
>>
>>We've known for a while now that some duplication of work happened with
>>respect to adding multiple worker processes to the neutron-server.  A few
>>missteps led to three patches being written independently of each other.
>>
>>Can we settle on one and accept it?
>>
>>I have changed my patch at the suggestion of one of the other two
>>authors, Peter Feiner, in an attempt to find common ground.  It now uses
>>openstack common code, so it is more concise than any of the original
>>three and should be pretty easy to review.  I'll admit to some bias
>>toward my own implementation, but most importantly, I would like one of
>>these implementations to land and start seeing broad usage in the
>>community sooner rather than later.
>>
>>Carl Baldwin
>>
>>PS Here are the two remaining patches.  The third has been abandoned.
>>
>>https://review.openstack.org/#/c/37131/
>>https://review.openstack.org/#/c/36487/
>>
>>
>
>



