properly sizing openstack controlplane infrastructure

Daniel Speichert daniel at speichert.pl
Tue Apr 30 16:17:20 UTC 2019


----- Original Message -----
> From: "Hartwig Hauschild" <openstack at hauschild.it>
> To: openstack-discuss at lists.openstack.org
> Sent: Tuesday, April 30, 2019 9:30:22 AM
> Subject: properly sizing openstack controlplane infrastructure

> The requirements we've got are basically "here's 50 compute-nodes, make sure
> whatever you're building scales upwards from there".

It depends on what your end goal is: 100? 500? >1000 nodes?
At some point things like Nova Cells will help (or become a necessity).

> The pike-stack has three servers as control-plane, each of them with 96G of
> RAM and they don't seem to have too much room left when coordinating 14
> compute-nodes.

96 GB of RAM per controller is far more than enough for 14 compute nodes.
There's likely room for improvement in your configuration.
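One common culprit (a sketch, not your actual config): by default most OpenStack services spawn one API/worker process per CPU core, so a big controller ends up running far more processes than a small fleet of computes needs. Capping the worker counts in nova.conf usually brings memory use down a lot. The values below are illustrative only:

```ini
# nova.conf -- illustrative values; the default for each of these is the
# number of CPU cores on the host, which is overkill for ~14 computes.
[DEFAULT]
osapi_compute_workers = 4
metadata_workers = 4

[conductor]
workers = 4
```

The same pattern (a `workers`-style option defaulting to the core count) applies to neutron-server, glance-api, cinder-api, etc., so it's worth auditing each service.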

> We're thinking about splitting the control-nodes into infrastructure
> (db/rabbit/memcache) and API.
> 
> What would I want to look for when sizing those control-nodes? I've not been
> able to find any references for this at all, just rather nebulous '8G RAM
> should do' which is around what our rabbit currently inhales.

You might want to check out Performance Docs:
https://docs.openstack.org/developer/performance-docs/

For configuration tips, I'd suggest looking at what openstack-ansible
or similar deployment projects ship as "battle-tested" configuration.
It's a good baseline to start from before tuning things yourself.
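On the RabbitMQ side, how much it "inhales" is bounded by its memory high watermark (by default 40% of host RAM) rather than any fixed size, so on a shared 96 GB controller it will happily grow well past 8 GB. If you want to pin it down, you can set an absolute cap (value below is illustrative, not a recommendation):

```ini
# rabbitmq.conf -- illustrative; replaces the default relative watermark
# (0.4 of host RAM) with an absolute ceiling.
vm_memory_high_watermark.absolute = 8GB
```

When the watermark is hit RabbitMQ starts blocking publishers, so size it with some headroom rather than at the observed steady-state usage.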

-Daniel


