Hi,

Having the DB "max connections" at ~1000 is not unreasonable; I have been doing it for a long time. It is also related to the number of nodes running the services. For example, in Nova it is related to the number of nodes running APIs, conductors, schedulers...

Cheers,
Belmiro

On Fri, Oct 21, 2022 at 5:07 PM Arnaud Morin <arnaud.morin@gmail.com> wrote:
Hey all,
TLDR: How can I fine-tune the number of DB connections for OpenStack services?
Long story, with some inline questions:
I am trying to figure out the maximum number of DB connections we should allow on our DB cluster.
For this discussion, I will use the neutron RPC service as an example, but I think nova behaves similarly.
To do so, I identified a few parameters that I can tweak:
- rpc_workers [1]
- max_pool_size [2]
- max_overflow [3]
- executor_thread_pool_size [4]
Their defaults are:
- rpc_workers: half of the CPU threads available (i.e. nproc / 2)
- max_pool_size: 5
- max_overflow: 50
- executor_thread_pool_size: 64
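To make the rpc_workers default concrete, here is a minimal Python sketch of how I read it (assuming the "half of nproc" rule above; this is not the actual neutron code):

    import multiprocessing

    # rpc_workers defaults to half of the CPU threads reported by nproc.
    nproc = multiprocessing.cpu_count()
    default_rpc_workers = nproc // 2
    print(default_rpc_workers)  # 20 on a 40-thread machine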
Now imagine I have a server with 40 cores, so rpc_workers will be 20. Each worker will have a DB pool with 5+50 connections available, and each worker will use up to 64 "green" threads.
The theoretical max connections that I should set on my database is then: rpc_workers * (max_pool_size + max_overflow) = 20 * (5 + 50) = 1100
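For reference, the same arithmetic as a tiny Python sketch (the variable names are mine, the values are the ones from the example above):

    rpc_workers = 20                  # 40 cores / 2
    max_pool_size = 5
    max_overflow = 50

    per_worker = max_pool_size + max_overflow      # 55 connections per worker
    theoretical_max = rpc_workers * per_worker     # 1100 connections total
    print(theoretical_max)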
Q1: am I right here? I have the feeling that this is huge.
Now, let's assume each thread consumes 1 connection from the DB pool. Under heavy load, I am afraid that the 64 threads could exceed max_pool_size + max_overflow (55).
Also, I noticed that some green threads were consuming more than 1 connection from the pool, so I can reach the max even sooner!
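To illustrate the exhaustion scenario, here is a standalone sketch using SQLAlchemy's QueuePool directly (not neutron code; the tiny pool sizes only keep the demo short, they stand in for max_pool_size/max_overflow):

    import sqlite3
    from sqlalchemy.pool import QueuePool
    from sqlalchemy.exc import TimeoutError as PoolTimeout

    # A pool with 2 persistent + 3 overflow connections, a scaled-down
    # version of a max_pool_size=5 / max_overflow=50 worker.
    pool = QueuePool(
        lambda: sqlite3.connect(":memory:"),
        pool_size=2,
        max_overflow=3,
        timeout=1,  # seconds to wait for a free connection before giving up
    )

    # Simulate threads that each hold one connection.
    held = [pool.connect() for _ in range(5)]  # pool_size + max_overflow

    try:
        pool.connect()  # one more "thread" has nowhere to go
    except PoolTimeout:
        print("pool exhausted: connection request timed out")

    for conn in held:
        conn.close()  # return the connections to the pool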
Another thing: I notice that I have 21 RPC workers, not 20. Is that normal?
[1] https://docs.openstack.org/neutron/latest/configuration/neutron.html#DEFAULT...
[2] https://docs.openstack.org/neutron/latest/configuration/neutron.html#databas...
[3] https://docs.openstack.org/neutron/latest/configuration/neutron.html#databas...
[4] https://docs.openstack.org/neutron/latest/configuration/neutron.html#DEFAULT...
Cheers,
Arnaud.