[placement][nova][ptg] resource provider affinity
openstack at fried.cc
Sat May 4 20:02:12 UTC 2019
> It looks like that this can be done without impacting the performance of
> existing requests that have no queryparam for affinity,
Well, the concern is that doing this at _merge_candidates time (i.e. in
python) may be slow. But yeah, let's not solve that until/unless we see
it's truly a problem.
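To make the concern concrete, a merge-time affinity filter would look something like this. This is only an illustrative sketch, not the actual _merge_candidates code; the data shapes (`candidates` as group-to-provider mappings, `parent_of` as a parent map) and the helper names are assumptions, and the subtree check mirrors the semantics being discussed (one of the named providers is an ancestor of all the others):

```python
# Illustrative sketch only: post-filter allocation candidates so that the
# providers satisfying the named request groups all fall in one subtree.
# Data shapes here are assumptions, not placement's real internals.

def ancestors_and_self(rp, parent_of):
    """Return the chain of providers from rp up to the root, inclusive."""
    chain = []
    while rp is not None:
        chain.append(rp)
        rp = parent_of.get(rp)
    return chain

def same_subtree(rps, parent_of):
    """True if one of the providers is an ancestor (or self) of the rest."""
    for root in rps:
        if all(root in ancestors_and_self(rp, parent_of) for rp in rps):
            return True
    return False

def filter_candidates(candidates, groups, parent_of):
    """Keep candidates whose providers for `groups` share one subtree.

    candidates: list of {group_id: resource_provider_uuid}
    parent_of:  {rp_uuid: parent_rp_uuid or None}
    """
    return [c for c in candidates
            if same_subtree([c[g] for g in groups], parent_of)]
```

Walking ancestor chains like this for every candidate is O(candidates x groups x tree depth) in Python, which is exactly the potential slowness being deferred above.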
> but I'd like to say that looking into tracking the PCPU feature in Nova
> and seeing the related problems should precede any Nova related items to
> model NUMA in Placement.
To be clear, placement doesn't need any changes for this. I definitely
don't think we should wait for it to land before starting on the
placement side of the affinity work.
> I thought the negative folks were just refusing to be with the
> positive folks.
> Looks like there are use cases where we need multiple group_resources?
Yes, certainly eventually we'll need this, even just for positive
affinity. Example: I want two VCPUs, two chunks of memory, and two
accelerators. Each VCPU/memory/accelerator combo must be affined to the
same NUMA node so I can maximize the performance of the accelerator. But
I don't care whether both combos come from the same or different NUMA
nodes. What I want to get in return is:
(1) NUMA1 has VCPU:1,MEMORY_MB:1024,FPGA:1; NUMA2 likewise
(2) NUMA1 has everything
(3) NUMA2 has everything
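In the strawman syntax under discussion, that request might be spelled along these lines. This is illustrative only: the numbered `resourcesN` granular groups are the existing API, but `group_resources` is the proposal in this thread, not a released parameter.

```python
from urllib.parse import urlencode

# Two VCPU/memory/FPGA combos; each combo affined within one subtree,
# but the two combos are free to land on the same or different NUMA nodes.
params = [
    ("resources1", "VCPU:1,MEMORY_MB:1024"),
    ("resources2", "FPGA:1"),
    ("group_resources", "1:2"),   # groups 1 and 2 share a subtree
    ("resources3", "VCPU:1,MEMORY_MB:1024"),
    ("resources4", "FPGA:1"),
    ("group_resources", "3:4"),   # groups 3 and 4 share a subtree
]
query = urlencode(params)
```

A request like this should admit all three of the results above, since nothing ties group pair (1,2) to group pair (3,4).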
Slight aside, could we do this with can_split and just one same_subtree?
I'm not sure you could expect the intended result from that combination.
Intuitively, I think it *either* means you don't get (1), *or* it
means you can get (1)-(3) *plus* things like:
(4) NUMA1 has VCPU:2,MEMORY_MB:2048; NUMA2 has FPGA:2
> - I want 1, 2 in the same subtree, and 3, 4 in the same subtree but the
> two subtrees should be separated:
> * group_resources=1:2:!3:!4&group_resources=3:4
Right, and this too.
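For what it's worth, parsing that strawman value is cheap. A sketch, assuming the `!`-prefix negative syntax from the quote above (again, a proposal, not an implemented API):

```python
def parse_group_resources(value):
    """Split a strawman group_resources value like "1:2:!3:!4" into
    positive and negative group-id sets. Illustrative only."""
    positive, negative = set(), set()
    for token in value.split(":"):
        if token.startswith("!"):
            negative.add(token[1:])
        else:
            positive.add(token)
    return positive, negative
```

So the cost isn't in the syntax; it's in applying the negative constraints across candidates.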
As a first pass, I would be fine with supporting only positive affinity.
And if it makes things significantly easier, supporting only a single
group_resources per call.