On 2021-04-07 09:20:55 -0700 (-0700), Clark Boylan wrote: [...]
This change was made at the request of mnaser to better support resource allocation in vexxhost (the flavors we now use follow their standard memory:CPU ratio). One (likely bad) option would be to select a flavor based on memory rather than CPU count. In that case I think we would go from 8 vCPU + 32GB of memory to 2 vCPU + 8GB of memory.
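As a concrete sketch of what selecting by memory could look like, Nodepool's OpenStack driver can match flavors by min-ram instead of flavor-name, so the launcher takes the smallest flavor offering at least that much memory; the provider and label names below are made up for illustration:

    providers:
      - name: example-cloud
        cloud: example
        pools:
          - name: main
            labels:
              - name: ubuntu-focal
                diskimage: ubuntu-focal
                # match by memory (MiB) rather than naming a flavor
                min-ram: 8192

The flavor actually matched, and therefore the vCPU count, would then vary from cloud to cloud, which is presumably why this is flagged as a likely bad option.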
At the time I was surprised the change merged so quickly [...]
Based on the commit message and the fact that we were pinged in IRC to review, I got the impression it was relatively urgent.
I suspect that the kernel limit is our best option. We can set this via DIB_BOOTLOADER_DEFAULT_CMDLINE [0], which I expect will work in many cases across the various distros. The problem with this approach is that we would need different images for the places we want to boot with more memory (the -expanded labels, for example).
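For illustration, a build using that variable might look something like the following; the mem= value and the element list are just a sketch, not our actual image definitions:

    # Bake a hard memory cap into the image's default kernel command line
    export DIB_BOOTLOADER_DEFAULT_CMDLINE="nofb nomodeset mem=8G"
    disk-image-create -o ubuntu-focal ubuntu-minimal vm bootloader

The kernel's mem= parameter caps usable RAM no matter how large the flavor is, which is exactly why the -expanded labels would need a separate image built with a larger cap or none at all.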
For completeness, other possibilities are:

* Convince the clouds that the nova flavor is the best place to control this and set them appropriately (see the sketch below)
* Don't use clouds that can't set appropriate flavors
* Accept Fungi's argument in the IRC log above and accept that memory, like other resources such as disk IOPS and network, will be variable
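For the first option in that list, the ask of each cloud would be roughly an appropriately shaped flavor; the flavor name and disk size here are placeholders:

    openstack flavor create --vcpus 8 --ram 32768 --disk 80 opendev-standard-8

Whether a provider can accommodate a non-standard memory:CPU ratio in their resource allocation is, of course, the original sticking point here.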
To be clear, this was mostly a "devil's advocate" argument, and not really my opinion. We saw first-hand that disparate memory sizing in HPCloud was allowing massive memory usage jumps to merge in OpenStack, and took action back then to artificially limit the available memory at boot. We now have fresh evidence from the Zuul community that this hasn't ceased to be a problem. On the other hand, we also see projects merge changes which significantly increase disk utilization and then can't run in some environments where we get smaller disks (or which depend on having multiple network interfaces, or specific addressing schemes, or certain CPU flags, or...), so the heterogeneity problem isn't limited exclusively to memory.
* Kernel module that inspects some attribute at boot time and sets mem appropriately [...]
Not to downplay the value of the donated resources, because they really are very much appreciated, but they currently account for less than 5% of our aggregate node count, so having to maintain multiple nearly identical images or doing a lot of additional engineering work seems like it may outweigh any immediate benefits. With the increasing use of special node labels like expanded, nested-virt and NUMA, it might make more sense to just limit this region to not supplying standard nodes, which sidesteps the problem for now.
--
Jeremy Stanley