[OpenStack-Infra] opensource infra: server sizes

Shadi Akiki shadi at autofitcloud.com
Mon Aug 19 16:36:13 UTC 2019


> the sizing details for control plane servers are not really listed anywhere


If they're not listed anywhere, I suppose that nobody follows up on the
sizing until something breaks?
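
Following up wouldn't need much tooling, for what it's worth; since the
sizing apparently only lives in nova, a periodic audit could be a few
lines of openstacksdk. A rough, untested sketch (the cloud names here
are invented):

    # Sketch: list every server's flavor in each cloud so the sizing
    # can be reviewed periodically. Assumes clouds.yaml entries exist;
    # the cloud names are made up.
    import openstack

    for cloud_name in ("example-rax", "example-vexxhost"):
        conn = openstack.connect(cloud=cloud_name)
        for server in conn.list_servers():
            print(cloud_name, server.name, server.flavor)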

> Sometimes the cloud provider will provide us with custom flavors, or ask
> us to use a particular variant


Why would they ask for a particular variant? Is it because these resources
are donated by the cloud provider? That's my best guess.

> for something not load-balanced that requires production downtime, it's a
> very manual process


Is it also the case that the load-balancing settings are not recorded
anywhere, e.g. the minimum and maximum number of machines in a
load-balanced cluster, or the machine flavor?
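
For example, I'd have expected a central record per load-balanced service,
something like the sketch below (every field name and value is my own
guess, not anything that actually exists in opendev/system-config):

    # Hypothetical per-service record; all values invented for
    # illustration, just to show the kind of data I mean.
    gitea_pool = {
        "cloud": "example-provider",  # which cloud hosts the pool
        "flavor": "example-8vcpu",    # made-up flavor name
        "min_instances": 2,           # smallest acceptable pool size
        "max_instances": 8,           # ceiling for scale-out
    }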

> It's just a trade-off between development time to create and maintain
> that, and how often we actually start control-plane servers.

Are the control-plane servers the only cloud cost that would outweigh the
development costs?
I'm surprised there isn't already a tool out there that interfaces with
rrdtool and/or Cacti to help with this.
rrdtool has been around for about 20 years now [1] [2].

[1] https://tobi.oetiker.ch/webtools/appreciators.txt
[2] https://github.com/oetiker/rrdtool-1.x/commit/37fc663811528ddf3ded4fe236ea26f4f76fa32d#diff-dee0aab09da2b4d69b6722a85037700d
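
The kind of check I have in mind is only a few lines against an RRD file.
A rough sketch with the Python rrdtool bindings (the file name, the data
source layout, and the 20% threshold are all invented):

    # Sketch: average a week of CPU data from a cacti-style RRD and
    # flag idle servers as downsize candidates. The path, the choice
    # of the first data source, and the threshold are made up.
    import rrdtool

    (start, end, step), ds_names, rows = rrdtool.fetch(
        "cpu_user.rrd", "AVERAGE", "--start", "-1w")
    samples = [row[0] for row in rows if row[0] is not None]
    if samples:
        avg = sum(samples) / len(samples)
        if avg < 20.0:
            print("avg CPU %.1f%% over a week: downsize candidate" % avg)
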
--
Shadi Akiki
Founder & CEO, AutofitCloud
https://autofitcloud.com/
+1 813 579 4935


On Thu, Aug 15, 2019 at 3:15 AM Ian Wienand <iwienand at redhat.com> wrote:

> On Tue, Aug 13, 2019 at 11:44:36AM +0200, Shadi Akiki wrote:
> > 2- how the allocated resource can be downsized (which I was hoping to
> find
> > in the opendev/system-config <https://opendev.org/opendev/system-config>
> >  repo)
>
> You are correct that the sizing details for control plane servers are
> not really listed anywhere.
>
> This is really an artifact of us manually creating control-plane
> servers.  When we create a new control-plane server, we use the launch
> tooling in [1] where you will see we manually select a flavor size.
> This is dependent on the cloud we launch the server in and the flavors
> they provide us.
>
> There isn't really a strict rule on what flavor is chosen; it's more
> art than science :) Basically, the smallest that seems appropriate
> for what the server is doing.
>
> After the server is created the exact flavor used is not recorded
> separately (i.e. other than querying nova directly).  So there is no
> central YAML file or anything with the server and the flavor it was
> created with.  Sometimes the cloud provider will provide us with
> custom flavors, or ask us to use a particular variant.
>
> So in terms of resizing the servers, we are limited to the flavors
> provided to us by the providers, which varies.  In terms of the
> practicality of resizing, as I'm sure you know this can be harder or
> easier depending on a wide variety of things on the provider's side.  We
> have resized servers before when it becomes clear they're not
> performing (recently adding swap to the gitea servers comes to mind).
> Depending on the type of service it varies; for something not
> load-balanced that requires production downtime, it's a very manual
> process.
>
> Nobody is opposed to making any of this more programmatic, I'm sure.
> It's just a trade-off between development time to create and maintain
> that, and how often we actually start control-plane servers.
>
> In terms of ask.o.o, that is an "8 GB Performance" flavor, as defined
> by RAX's flavors.  This was rebuilt when we upgraded it to Xenial as
> an 8GB node (from 4GB), as investigation at the time showed 4GB was a
> bit tight [2].  8GB is the next flavor size up from 4GB that RAX
> provides.
>
> I hope this helps!
>
> -i
>
> [1] https://opendev.org/opendev/system-config/src/branch/master/launch
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2018-April/129078.html
>
>