[large-scale][oslo.messaging] RPC workers and db connection

Arnaud Morin arnaud.morin at gmail.com
Mon Oct 24 09:30:49 UTC 2022


Yup, I was thinking of exactly something like that. I checked the
source code of oslo_metrics again, but so far it only covers rabbit
messaging metrics.

It would be a good addition to include something there to capture the
db metrics as well.
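
For example, something along these lines could work (just a rough
sketch, assuming direct access to the SQLAlchemy engine that oslo.db
wraps; the DSN and pool values below are placeholders):

    import sqlalchemy
    from sqlalchemy import event

    # Placeholder DSN; pool values mirror the oslo.db defaults
    engine = sqlalchemy.create_engine(
        "mysql+pymysql://user:secret@dbhost/neutron",
        pool_size=5, max_overflow=50)

    @event.listens_for(engine, "checkout")
    def on_checkout(dbapi_conn, conn_record, conn_proxy):
        # QueuePool already tracks the numbers we would want to export
        pool = engine.pool
        print("db pool: in_use=%d idle=%d overflow=%d"
              % (pool.checkedout(), pool.checkedin(), pool.overflow()))

Replacing the print() with a real metrics client would then give pool
usage over time.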

On our side, we use "Percona Monitoring and Management" [1] as a
sidecar, which helps us a lot in identifying which part of OpenStack is
consuming db resources.

[1] https://www.percona.com/software/pmm/quickstart

Cheers,
Arnaud.

On 24.10.22 - 09:25, Felix Hüttner wrote:
> Hi everyone,
> 
> We have also struggled to find reasonable values for these settings; they are currently based on experience rather than actual data.
> Does anyone by chance know of a metric that can show the usage of the database connection pools?
> 
> Otherwise that might be something to add to oslo.metrics maybe?
> 
> --
> Felix Huettner
> 
> > -----Original Message-----
> > From: Arnaud Morin <arnaud.morin at gmail.com>
> > Sent: Monday, October 24, 2022 10:58 AM
> > To: Belmiro Moreira <moreira.belmiro.email.lists at gmail.com>
> > Cc: discuss openstack <openstack-discuss at lists.openstack.org>
> > Subject: Re: [large-scale][oslo.messaging] RPC workers and db connection
> >
> > Hey Belmiro,
> >
> > Thanks for your answer.
> >
> > Having ~1000 connections for each service seems like a lot to me.
> > In the example below, I only talked about the neutron RPC service on one
> > node.
> > In our biggest region, we are using multiple nodes (8 neutron
> > controllers), each running both neutron API and neutron RPC.
> >
> > So we can end up with something like 16k connections for neutron
> > alone :(
> >
> > Of course, we limited that by lowering the default values, but we are
> > still struggling to figure out the correct values.
> >
> > Cheers,
> >
> > Arnaud.
> >
> >
> > On 22.10.22 - 18:37, Belmiro Moreira wrote:
> > > Hi,
> > > having the DB "max connections" at ~1000 is not unreasonable, and I have
> > > been doing it for a long time.
> > > It also depends on the number of nodes running the services. In Nova, for
> > > example, it depends on the number of nodes running APIs, conductors,
> > > schedulers...
> > >
> > > cheers,
> > > Belmiro
> > >
> > > On Fri, Oct 21, 2022 at 5:07 PM Arnaud Morin <arnaud.morin at gmail.com> wrote:
> > >
> > > > Hey all,
> > > >
> > > > TLDR: How can I fine-tune the number of DB connections for OpenStack
> > > >       services?
> > > >
> > > >
> > > > Long story, with some inline questions:
> > > >
> > > > I am trying to figure out the maximum number of db connections we should
> > > > allow on our db cluster.
> > > >
> > > > For this explanation, I will use the neutron RPC service as an example,
> > > > but I think nova behaves similarly.
> > > >
> > > > To do so, I identified a few parameters that I can tweak:
> > > > rpc_workers [1]
> > > > max_pool_size [2]
> > > > max_overflow [3]
> > > > executor_thread_pool_size [4]
> > > >
> > > >
> > > > rpc_workers defaults to half the available CPU threads (half of nproc)
> > > > max_pool_size defaults to 5
> > > > max_overflow defaults to 50
> > > > executor_thread_pool_size defaults to 64
> > > >
> > > > Now imagine I have a server with 40 cores,
> > > > so rpc_workers will be 20.
> > > > Each worker will have a DB pool with 5+50 connections available.
> > > > Each worker will use up to 64 "green" threads.
> > > >
> > > > The theoretical max connections that I should then set on my database is:
> > > > rpc_workers * (max_pool_size + max_overflow) = 20 * (5 + 50) = 1100
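> > > >
> > > > For illustration (placeholder numbers, not a recommendation), these
> > > > knobs map to neutron.conf roughly like this:
> > > >
> > > >   [DEFAULT]
> > > >   rpc_workers = 20
> > > >   executor_thread_pool_size = 64
> > > >
> > > >   [database]
> > > >   max_pool_size = 5
> > > >   max_overflow = 50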
> > > >
> > > > Q1: am I right here?
> > > > I have the feeling that this is huge.
> > > >
> > > > Now, let's assume each thread consumes 1 connection from the DB pool.
> > > > Under heavy load, I am afraid that the 64 threads could exceed
> > > > max_pool_size+max_overflow (64 > 5+50 = 55).
> > > >
> > > > Also, I noticed that some green threads were consuming more than 1
> > > > connection from the pool, so I can reach the max even sooner!
> > > >
> > > > Another thing: I noticed that I have 21 RPC workers, not 20. Is that
> > > > normal?
> > > >
> > > >
> > > > [1]
> > > > https://docs.openstack.org/neutron/latest/configuration/neutron.html#DEFAULT.rpc_workers
> > > > [2]
> > > > https://docs.openstack.org/neutron/latest/configuration/neutron.html#database.max_pool_size
> > > > [3]
> > > > https://docs.openstack.org/neutron/latest/configuration/neutron.html#database.max_overflow
> > > > [4]
> > > > https://docs.openstack.org/neutron/latest/configuration/neutron.html#DEFAULT.executor_thread_pool_size
> > > >
> > > > Cheers,
> > > >
> > > > Arnaud.
> > > >
> > > >
> 


