[openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata
melanie witt
melwittt at gmail.com
Tue Jan 16 21:24:12 UTC 2018
Hello Stackers,
This is a heads up to any of you using the AggregateCoreFilter,
AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler.
In <= Newton, these filters have effectively allowed operators to set
overcommit ratios per aggregate rather than per compute node.
Beginning in Ocata, there is a behavior change where aggregate-based
overcommit ratios will no longer be honored during scheduling. Instead,
overcommit values must be set on a per compute node basis in nova.conf.
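For example, here is a minimal nova.conf sketch for setting overcommit
per compute node (the option names are the standard allocation ratio
options; the values are illustrative only, so substitute whatever
ratios your aggregates were providing):

    [DEFAULT]
    # Illustrative ratios only -- set these on each compute node to
    # match the ratios you previously set via aggregate metadata.
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5
    disk_allocation_ratio = 1.0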
Details: as of Ocata, instead of considering all compute nodes at the
start of scheduler filtering, an optimization has been added to query
resource capacity from placement and prune the compute node list with
the result *before* any filters are applied. Placement tracks resource
capacity and usage and does *not* track aggregate metadata [1]. Because
of this, placement cannot consider aggregate-based overcommit and will
exclude compute nodes that do not have capacity based on per compute
node overcommit.
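To sketch the arithmetic (assuming the standard placement inventory
model), placement derives each node's capacity per resource class as
roughly:

    usable = (total - reserved) * allocation_ratio

so a compute node with 8 physical cores, 0 reserved, and a per-node
cpu_allocation_ratio of 16.0 advertises 128 VCPU of capacity. A request
exceeding what remains of that capacity prunes the node *before* any
filters run, and an aggregate-level ratio never enters the calculation.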
How to prepare: if you have been relying on per aggregate overcommit,
during your upgrade to Ocata, you must change to using per compute node
overcommit ratios in order for your scheduling behavior to stay
consistent. Otherwise, you may notice increased NoValidHost scheduling
failures as the aggregate-based overcommit is no longer being
considered. You can safely remove the AggregateCoreFilter,
AggregateRamFilter, and AggregateDiskFilter from your enabled_filters
and you do not need to replace them with any other core/ram/disk
filters. The placement query takes care of the core/ram/disk filtering
instead, so CoreFilter, RamFilter, and DiskFilter are redundant.
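As a rough sketch (the remaining filter list below is just the Ocata
default set minus the redundant filters, not a recommendation for your
deployment), the resulting config might look like:

    [filter_scheduler]
    # Aggregate[Core|Ram|Disk]Filter and CoreFilter/RamFilter/DiskFilter
    # are intentionally absent; the placement query covers core/ram/disk.
    enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter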
Thanks,
-melanie
[1] Placement has been a clean slate for resource management. Prior to
placement, there were long-standing, never-addressed conflicts between
the different methods for setting overcommit ratios, such as: "Which
value wins if a compute node has overcommit set AND its aggregate has
it set? Which takes precedence?" and "If a compute node is in more than
one aggregate, which overcommit value should be taken?" Those
ambiguities were not something desirable to bring forward into
placement.