I propose using a "group_policy=same_tree:$GROUP_A:$GROUP_B" query parameter that lets users describe affinity constraints between the resources involved in different RequestGroups in the request spec.
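For concreteness, a request using (hypothetical) numbered groups 1 and 2 might look something like the following, with the resource classes chosen purely for illustration:

    GET /allocation_candidates
        ?resources1=VCPU:2,MEMORY_MB:2048
        &resources2=NET_BW_EGR_KILOBIT_PER_SEC:1000
        &group_policy=same_tree:1:2

i.e. whichever providers end up satisfying groups 1 and 2 must live in the same subtree.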
At first glance this seems pretty reasonable. Does anyone hate it?
We've talked about this previously. The two objections raised were:

a) It assumes the meaning of "same tree" is "one level down from the root". This satisfies NUMA affinity, and also allows us to do things like [1] (scroll down to the pretty picture where networking agents are subtree roots). But it may prove too limiting in the future if, for example, we need to represent sockets *under* NUMA nodes and do L3 cache affinity. (Come to think of it, if we need to do [1] in the presence of NUMA and need to affine the network devices to the CPUs, what would that whole tree look like, and how would it affect this proposal?) A toy sketch of this interpretation follows after the references.

b) It assumes the various pieces of the request (flavor, image, port, device profile) are able to know each other's request group numbers ahead of time. Otherwise we need to provide some other mechanism for the scheduler code that dynamically assigns the numbers [2] to understand which ones need to be (sub)grouped together. IIUC this has been Sundar's main objection. The numbering problem is likewise sketched below.

efried

[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005111.htm...
[2] https://opendev.org/openstack/nova/src/branch/master/nova/scheduler/utils.py...
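To make objection (a) concrete, here is a toy Python sketch (invented names, not placement code) of the "one level down from the root" interpretation, assuming the usual two-level NUMA modeling:

    # Toy sketch: "same tree" == "same child of the root provider".
    # Parent links for a two-level tree under compute node root 'cn'.
    PARENT = {
        'numa0': 'cn', 'numa1': 'cn',       # NUMA nodes under the root
        'nic0': 'numa0', 'nic1': 'numa1',   # devices under the NUMA nodes
    }

    def subtree_root(rp):
        # Walk up until we reach a direct child of the root.
        while PARENT[rp] != 'cn':
            rp = PARENT[rp]
        return rp

    def same_tree(*rps):
        # Providers are affine iff they share the same subtree root.
        return len({subtree_root(rp) for rp in rps}) == 1

    print(same_tree('nic0', 'numa0'))  # True: both under numa0
    print(same_tree('nic0', 'nic1'))   # False: different NUMA subtrees

Note that subtree_root always resolves to the NUMA level here: if sockets lived *under* the NUMA nodes, this fixed anchor point could not express same-socket (L3 cache) affinity, which is the limitation described above.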
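And to make objection (b) concrete, a hypothetical sketch (invented names; the real number assignment happens in nova/scheduler/utils.py [2]) of why the request pieces can't know each other's group numbers ahead of time:

    # Group numbers are handed out only when the pieces of the request
    # are merged at scheduling time.
    flavor_groups = [{'resources': {'VCPU': 2, 'MEMORY_MB': 2048}}]
    port_groups = [{'resources': {'NET_BW_EGR_KILOBIT_PER_SEC': 1000}}]

    def number_groups(*sources):
        # Numbers follow merge order, so no individual source (flavor,
        # image, port, device profile) knows up front which number its
        # group -- or anyone else's -- will end up with.
        numbered = {}
        for groups in sources:
            for group in groups:
                numbered[len(numbered) + 1] = group
        return numbered

    print(number_groups(flavor_groups, port_groups))
    # -> {1: <flavor group>, 2: <port group>}; a flavor that wanted
    #    "same_tree:1:2" would have had to guess both numbers up front.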