[placement][nova][ptg] NUMA Topology with placement
Chris Dent
cdent+os at anticdent.org
Wed Apr 10 12:32:09 UTC 2019
From the cross-project etherpad [1]
* Spec: https://review.openstack.org/#/c/552924/
This is probably the biggest topic, in the sense that how we model NUMA
in placement has a big impact across a large number of other pending
features, including several specs that state things like "this would be
different if we had NUMA in placement".
Similarly, if we do have NUMA in placement, we also end up with
questions about, and requirements for:
* json payload for getting allocation candidates:
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004724.html
* increased complexity in protecting driver provided traits
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004779.html
* resource provider - request group mapping
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004819.html
* resource providers with traits but no resources
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004817.html
* resource provider (subtree) affinity
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004782.html
And this probably cascades over to dedicated CPUs, CPU capabilities,
network bandwidth management, and so on.
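To make the modeling side concrete, here is a minimal sketch (not a
proposal) of writing a two-NUMA-node host to placement as a nested
provider tree with the existing API. It assumes a valid token, a
placement endpoint URL, and a microversion that supports nested
providers (1.14 or later); the names, numbers, and endpoint are all
made up:

    import uuid
    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # hypothetical
    HEADERS = {
        # any microversion >= 1.14 allows parent_provider_uuid
        'OpenStack-API-Version': 'placement 1.30',
        'X-Auth-Token': 'REDACTED',  # assume a valid keystone token
    }

    def create_provider(name, parent_uuid=None):
        """Create a resource provider, optionally as a child."""
        body = {'name': name, 'uuid': str(uuid.uuid4())}
        if parent_uuid:
            body['parent_provider_uuid'] = parent_uuid
        resp = requests.post(f'{PLACEMENT}/resource_providers',
                             json=body, headers=HEADERS)
        resp.raise_for_status()
        return body['uuid']

    def set_inventory(rp_uuid, inventories):
        """Replace inventory on a provider (generation 0 when new)."""
        body = {'resource_provider_generation': 0,
                'inventories': inventories}
        resp = requests.put(
            f'{PLACEMENT}/resource_providers/{rp_uuid}/inventories',
            json=body, headers=HEADERS)
        resp.raise_for_status()

    # Root provider: the compute node, holding non-NUMA-affine resources.
    root = create_provider('compute0')
    set_inventory(root, {'DISK_GB': {'total': 500}})

    # One child provider per NUMA node, each with its own CPU and memory.
    for node in (0, 1):
        child = create_provider(f'compute0_numa{node}', parent_uuid=root)
        set_inventory(child, {
            'VCPU': {'total': 8},
            'MEMORY_MB': {'total': 16384},
        })

Which is to say: the write side mostly works today; the interesting
part is what happens when we ask for candidates against that tree.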
From the placement perspective, the problem isn't representing the
NUMA info in placement; it's getting candidates back out in a useful
fashion once they are in there, so the resources can be claimed. It
would be useful if someone could make explicit and enumerate:
* What (if any) ways the current handling of nested
providers does not support _writing_ NUMA-related info to placement.
* What (we know there are some) ways the current handling of allocation
candidates and the underlying database queries do not support
effective use of NUMA info once it is in placement.
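For the second bullet, the kind of request we need to serve well
against such a tree is a granular allocation candidates query. A
sketch, continuing the one above (same PLACEMENT and HEADERS), and
assuming a microversion of at least 1.29 so that nested providers
participate in the results; group_policy=isolate forces the two
numbered groups onto different providers:

    # Two isolated groups of CPU+memory, e.g. one per requested NUMA node.
    params = {
        'resources1': 'VCPU:2,MEMORY_MB:2048',
        'resources2': 'VCPU:2,MEMORY_MB:2048',
        'group_policy': 'isolate',
    }
    resp = requests.get(f'{PLACEMENT}/allocation_candidates',
                        params=params, headers=HEADERS)
    resp.raise_for_status()
    candidates = resp.json()

    # Each allocation request spreads the numbered groups across
    # providers, but the response does not say which group landed on
    # which provider: that's the request group mapping gap linked above.
    for ar in candidates['allocation_requests']:
        for rp_uuid, alloc in ar['allocations'].items():
            print(rp_uuid, alloc['resources'])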
Thanks.
[1] https://etherpad.openstack.org/p/ptg-train-xproj-nova-placement
--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent