properly sizing openstack controlplane infrastructure

Fox, Kevin M Kevin.Fox at pnnl.gov
Tue Apr 30 15:55:11 UTC 2019


I've run that same network config at about 70 nodes with no problems, and
the same setup without DVR at 150 nodes.

Your memory usage seems very high. I ran 150 nodes off a small 16 GB server
ages ago, so it might be worth double-checking where that RAM is going.
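
If you want to see where it goes, a quick sketch along these lines tallies
resident memory per service (Python with psutil; the service-name patterns
are an assumption you'd adapt to your deployment):

    # Sums resident memory (RSS) per OpenStack-related service on a controller.
    # The name patterns below are placeholders -- adjust to your deployment.
    import collections
    import psutil

    SERVICES = ("nova", "neutron", "keystone", "glance", "cinder",
                "rabbitmq", "mysqld", "memcached")

    totals = collections.Counter()
    for proc in psutil.process_iter(["name", "memory_info"], ad_value=None):
        name = (proc.info["name"] or "").lower()
        mem = proc.info["memory_info"]
        if mem is None:
            continue
        for svc in SERVICES:
            if svc in name:
                totals[svc] += mem.rss
                break

    for svc, rss in totals.most_common():
        print(f"{svc:12s} {rss / 1024 ** 2:8.0f} MiB")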

Thanks,
Kevin
________________________________________
From: Hartwig Hauschild [openstack at hauschild.it]
Sent: Tuesday, April 30, 2019 8:30 AM
To: openstack-discuss at lists.openstack.org
Subject: properly sizing openstack controlplane infrastructure

Hi everyone,

A colleague and I have recently been tasked with redesigning my employer's
OpenStack infrastructure, and it turned out that starting over will be easier
than fixing the existing stack, as we've managed to paint ourselves into a
corner quite thoroughly.

The requirements we've got are basically "here's 50 compute-nodes, make sure
whatever you're building scales upwards from there".

We have the existing Pike stack as a reference, but we don't really know how
the different services scale as the number of compute nodes grows.

The Pike stack has three servers as its control plane, each with 96 GB of
RAM, and they don't seem to have much headroom left coordinating just 14
compute nodes.

We're thinking about splitting the control nodes into infrastructure nodes
(DB/RabbitMQ/memcached) and API nodes.

What should I be looking at when sizing those control nodes? I've not been
able to find any references for this at all, just the rather nebulous '8 GB
of RAM should do', which is roughly what our RabbitMQ currently inhales.
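
For lack of better references we've been playing with a back-of-envelope
model like the one below, just to structure our thinking. Every number in it
is a placeholder assumption, not a measurement; the point is only to separate
the fixed per-service baseline from the part that grows with the number of
compute nodes:

    # Back-of-envelope sizing sketch -- all numbers are assumed placeholders.
    BASELINE_GB = {                      # rough idle footprint per service
        "mysql/galera": 4.0,
        "rabbitmq": 8.0,
        "memcached": 2.0,
        "keystone": 2.0,
        "nova (api/conductor/scheduler)": 6.0,
        "neutron-server": 6.0,
        "glance/cinder/horizon": 4.0,
    }
    PER_COMPUTE_GB = 0.25                # assumed growth per compute node
    HEADROOM = 1.5                       # keep ~50% free for spikes/upgrades

    def controller_ram_gb(compute_nodes: int) -> float:
        # RAM per controller if all services run on every control node.
        return (sum(BASELINE_GB.values())
                + PER_COMPUTE_GB * compute_nodes) * HEADROOM

    for n in (14, 50, 150):
        print(f"{n:4d} compute nodes -> ~{controller_ram_gb(n):.0f} GB per controller")

If anyone has real numbers to plug into something like that, that is exactly
the kind of reference we're missing.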

Also: we're currently running Neutron in an OVS/DVR/VXLAN configuration.
Does that scale properly to 50 nodes and beyond, or should we look into
offloading the networking onto something else?
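
For context: with DVR every compute node runs its own L3 agent, so the number
of agents neutron-server has to track grows linearly with the node count.
During testing we'd keep an eye on that with a small openstacksdk sketch like
the one below (the attribute names agent_type/is_alive/host are our reading
of the SDK, so please double-check against your version):

    # Counts Neutron agents per type and flags dead ones, via openstacksdk.
    # Credentials come from clouds.yaml / OS_CLOUD as usual.
    import collections
    import openstack

    conn = openstack.connect()
    counts = collections.Counter()
    dead = []
    for agent in conn.network.agents():
        counts[agent.agent_type] += 1
        if not agent.is_alive:
            dead.append((agent.agent_type, agent.host))

    for agent_type, count in sorted(counts.items()):
        print(f"{agent_type:30s} {count}")
    for agent_type, host in dead:
        print(f"DEAD: {agent_type} on {host}")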

I'm aware that these questions may sound a bit funny, but we have to spec
out the control-plane hardware before we can start testing, and we'd prefer
not to have to retrofit the servers because we goofed up the sizing.

Thanks for your time,

--
Cheers,
        Hardy



