[placement][nova][ptg] NUMA Topology with placement

Chris Dent cdent+os at anticdent.org
Wed Apr 10 12:32:09 UTC 2019

From the cross-project etherpad [1]

* Spec: https://review.openstack.org/#/c/552924/

This is probably the biggest topic, in the sense that modeling NUMA
in placement and how we do that has a big impact across a large
number of other pending features, including several specs that state
things like "this would be different if we had NUMA in placement".

Similarly, if we do have NUMA in placement, we also end up with
questions about, and requirements on, the following:

* JSON payload for getting allocation candidates

* increased complexity in protecting driver provided traits

* resource provider - request group mapping

* resource providers with traits but no resources

* resource provider (subtree) affinity

And this probably cascades over to dedicated CPUs, CPU
capabilities, network bandwidth management, etc.
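To make the "request group mapping" concern above concrete, here is a
small sketch (not nova code; the resource amounts are made up) of how
a client might build a granular GET /allocation_candidates query using
numbered request groups, where each group is expected to be satisfied
by a single provider such as one NUMA node:

```python
# Hypothetical helper, not part of nova or placement: builds the
# query string for a granular allocation candidates request using
# numbered request groups (resources1, resources2, ...), in the
# style placement supports in recent microversions.
from urllib.parse import urlencode

def numa_candidates_query(groups, group_policy="isolate"):
    """Build a GET /allocation_candidates query string where each
    numbered group's resources must come from a single resource
    provider (e.g. one NUMA node provider)."""
    params = {}
    for n, resources in enumerate(groups, start=1):
        params[f"resources{n}"] = ",".join(
            f"{rc}:{amount}" for rc, amount in resources.items())
    # "isolate" asks that distinct numbered groups land on
    # distinct providers.
    params["group_policy"] = group_policy
    return urlencode(params)

# Example: a guest with two NUMA cells, each needing 2 VCPU and 2G.
query = numa_candidates_query([
    {"VCPU": 2, "MEMORY_MB": 2048},  # guest NUMA cell 0
    {"VCPU": 2, "MEMORY_MB": 2048},  # guest NUMA cell 1
])
print(query)
```

The open question in the bullets above is how the response then tells
the caller which provider satisfied which numbered group, so nova can
map guest NUMA cells to host NUMA nodes when claiming.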

From the placement perspective, the problem isn't representing the
NUMA info in placement, it's getting candidates back out in a useful
fashion once they are in there, so the resources can be claimed. It
would be useful if someone could make explicit and enumerate:

* The ways (if any) in which the current handling of nested
   providers does not support _writing_ NUMA-related info to placement.

* The ways (we know there are some) in which the current handling of
   allocation candidates and the underlying database queries do not
   support effective use of NUMA info once it is in placement.
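On the _writing_ side, the shape involved is just a nested provider
tree. As an illustration only (names and inventory sizes are
invented, and this is not the actual virt driver code), a two-socket
host might be reported like this:

```python
# Illustrative nested provider tree for a two-NUMA-node host.
# The root compute node provider holds no CPU/memory inventory
# itself; each child NUMA node provider holds its share.
compute_node = {
    "name": "compute0",
    "inventories": {},  # resources live on the children
    "children": [
        {"name": "compute0_NUMA0",
         "inventories": {"VCPU": 8, "MEMORY_MB": 16384},
         "children": []},
        {"name": "compute0_NUMA1",
         "inventories": {"VCPU": 8, "MEMORY_MB": 16384},
         "children": []},
    ],
}

def subtree_total(tree, rc):
    """Sum one resource class over a provider and its descendants."""
    return tree["inventories"].get(rc, 0) + sum(
        subtree_total(child, rc) for child in tree["children"])

print(subtree_total(compute_node, "VCPU"))  # 16
```

Writing this tree is the part current nested-provider support largely
handles; the harder part, per the second bullet, is querying it back
out as usable allocation candidates.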


[1] https://etherpad.openstack.org/p/ptg-train-xproj-nova-placement

Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent
