Also, let's keep in mind that nova alone (with placement) will spawn 64 threads on such a setup by default. Beyond that, everything really depends on the set of services to be launched on that setup. So from a deployment-tooling perspective you have all the data required to roll out a setup that does not OOM instantly, at the cost of the number of requests the services can process in parallel.

On Mon, 16 Jun 2025, 15:24 Dmitriy Rabotyagov, <noonedeadpunk@gmail.com> wrote:
If you try to use a 32 GB box with 16 cores as an OpenStack controller, it will blow up with the default number of workers for WSGI and/or eventlet apps (see the sketch at the end of this message).
While you can argue this should not be used as a production setup, it is totally valid for sandboxes, and we want to provide consistent and reliable behavior for users.
But my argument was not about whether/how we want to fine-tune deployments; it was also about understanding and providing the means to define what is needed, as well as the potential ability to revert in the worst case as a temporary workaround. So, from what I understand today, some variables and logic would still need to be introduced.
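To make the sizing concern concrete, here is a minimal, illustrative sketch of the kind of capping a sandbox deployment could apply in nova.conf on a 16-core / 32 GB controller. The option names (osapi_compute_workers, metadata_workers, [conductor]/workers) are existing nova options whose defaults otherwise follow the CPU count; the values are only an example of keeping memory in check, not a recommendation, and APIs hosted under an external WSGI server would instead be capped in that server's own config:

    # Illustrative only: cap worker counts so that a small controller does
    # not OOM with every service spawning one worker per CPU core.
    [DEFAULT]
    osapi_compute_workers = 4
    metadata_workers = 2

    [conductor]
    workers = 4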
On Mon, 16 Jun 2025, 14:43 Sean Mooney, <smooney@redhat.com> wrote:
On 16/06/2025 13:27, Dmitriy Rabotyagov wrote:
Saying it's FUD is not helpful.
We got a direct ask from operators and some cores not to do a hard switchover.
And while I wanted to support only one model for each binary at a time, we were specifically asked to make it configurable.
> In the latter case, your only available action is to help fix bugs. It
> is not up to the operators to second-guess what may or may not work.
Correct, we are not planning to document how to change the mode; we were planning to only use this configuration in CI, and operators would be told, for a given release, to deploy this way.
Well, we'd need to have that communicated so that deployment tooling could adapt its setup to the changes. For instance, in OSA the number of eventlet workers is calculated based on the system facts (see the sketch below), so we'd need to change that logic and also suggest how users should treat the new logic on their systems.
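For illustration (this is an approximation of that heuristic, not OSA's exact code), with a formula along the lines of "half the vCPUs, capped at 16", a 16-core controller currently ends up with something like this rendered into nova.conf:

    [DEFAULT]
    # Assumed fact-based heuristic, e.g. min(max(vcpus // 2, 1), 16);
    # the exact formula and cap are an assumption for illustration.
    osapi_compute_workers = 8
    metadata_workers = 8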
Why is OSA doing that at all today? We generally don't recommend changing those values from the defaults unless you really know what you're doing. I don't think other installers do that: TripleO, kolla-ansible, and our new Golang-based installer do not, nor does DevStack, so it's surprising to me that OSA would change such low-level values by default.
We will document any new config options we add, and we are documenting how to tune the new thread-pool options, but we do not expect installation tools to modify them by default. We are explicitly not basing the options on the amount of resources on the host, i.e. dynamically calculating them from the number of CPU cores.
For example, we are explicitly setting the number of scatter_gather threads in the dedicated thread pool to 5, because it's a nice small number that will work for most people out of the box.
Can you adjust it? Yes, but it scales with the number of nova cells you have, and 99% of deployments won't have more than 5 cells.
Using information about the host where the API is deployed to infer that value would be incorrect. You can really only make an informed decision about how to tune it by monitoring the usage of the pool.
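As a sketch (the option name below is illustrative, not necessarily the name the in-progress patches will land with), the idea is an explicit, static value that you only raise if you run more cells or monitoring shows the pool is saturated:

    [DEFAULT]
    # Hypothetical/illustrative option name for the dedicated
    # scatter_gather pool: default 5, raise only if you have more than
    # 5 cells or the pool is observed to be exhausted.
    cell_worker_thread_pool_size = 5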
That's how we expect most of the other tuning options to go as well.
Our defaults in nova tend to be higher than you would actually need in a real environment, so while it may make sense to reduce them, we try to make sure they work out of the box for most people.
gibi is building up
https://review.opendev.org/c/openstack/nova/+/949364/13/doc/source/admin/con...
as part of nova's move to encode this, but our goal is that deployment tools should not need to be modified to tune these values by default.
So it will be documented, in a way, after all.
This is an internal implementation detail; however, we are not prepared to deprecate using eventlet until we are convinced that we can run properly without it.
> For beginners, this would be a horrible nightmare if default options
> simply wouldn't work. We *must* ship OpenStack working by default.

No one is suggesting we do otherwise.

> Cheers,
>
> Thomas Goirand (zigo)