[placement][ptg] Aggregate on root spans whole tree policy:
From the etherpad [1]:
* Last PTG for Stein, we decided the following policies and have done so in Stein:

A) Aggregate on root spans whole tree for ``member_of=`` requests in 'GET /allocation_candidates'
B) This spanning policy doesn't apply to granular requests ``member_of<N>=`` or to requests in 'GET /resource_providers'
C) This change is a bug fix without microversion

However, I now feel that policy B is weird. Consider a case where only granular requests are used in the request. If an operator puts aggA on the root, whether aggA applies to the child depends on how the request was constructed. That's very difficult for operators to debug...

This is from Tetsuro, so perhaps he can add some additional info, but basically I think what's being requested here is some discussion on whether changing B is warranted.

[1] https://etherpad.openstack.org/p/placement-ptg-train

--
Chris Dent ٩◔̯◔۶
https://anticdent.org/
freenode: cdent  tw: @anticdent
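Concretely, the two request styles that policies A and B distinguish look like this (the aggregate value is a placeholder; Tetsuro's reply below walks through a full example):

    # Non-granular: policy A applies, so an aggregate on the root
    # spans the whole tree.
    GET /allocation_candidates?resources=VCPU:2&member_of=<aggA>

    # Granular (numbered suffix): policy B says spanning does not apply.
    GET /allocation_candidates?resources1=VCPU:1&member_of1=<aggA>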
A) Aggregate on root spans whole tree for ``member_of=`` requests in 'GET /allocation_candidates'
B) This spanning policy doesn't apply to granular requests ``member_of<N>=`` or to requests in 'GET /resource_providers'
C) This change is a bug fix without microversion
For example, let's say two non-root NUMA RPs have the resources. In this case, if the root compute node RP is in aggA, aggA applies to the NUMA nodes *IF* a user requests only one NUMA node via the non-granular request, while aggA does *NOT* apply to the NUMA nodes when a user wants, for example, two separate NUMA nodes using only granular requests.

Setup
-----

* Add compute1 to aggA by putting aggA on the root RP of compute1
* compute1 has two NUMA nodes, and each NUMA RP has 4 VCPUs

Actual
------

* GET /allocation_candidates?resources=VCPU:2&member_of=<aggA>

  -> returns 2 allocation requests, one for each NUMA node of compute1, since placement thinks the NUMA nodes are in aggA

* GET /allocation_candidates?resources1=VCPU:1&member_of1=<aggA>&resources2=VCPU:1&member_of2=<aggA>

  -> returns nothing, since it is a granular request, so placement thinks the NUMA nodes are not in aggA

Expected
--------

The latter should return 1 allocation request on compute1 which contains one allocation to one NUMA node and the other allocation to the other NUMA node.

In other words, whether an RP is in an aggregate or not must NOT depend on how the user searches for it, IMO. The aggregate is not a dynamic thing. We'd like to be able to answer the question "Is it in aggA?" simply, without any request info (a small sketch of this static reading follows after this message).

"Is it in aggA?" "Well, that depends on how you ask. Non-granularly speaking, it is in aggA, but granularly speaking, it is not." ...would be too difficult. A resource provider is not a Schrödinger's cat.

On 2019/04/09 22:00, Chris Dent wrote:
From the etherpad [1]:
* Last PTG for Stein, we decided the following policies and have done so in Stein
A) Aggregate on root spans whole tree for ``member_of=`` requests in 'GET /allocation_candidates'
B) This spanning policy doesn't apply to granular requests ``member_of<N>=`` or to requests in 'GET /resource_providers'
C) This change is a bug fix without microversion
However, I now feel that policy B is weird. Consider a case where only granular requests are used in the request. If an operator puts aggA on the root, whether aggA applies to the child depends on how the request was constructed. That's very difficult for operators to debug...
This is from Tetsuro, so perhaps he can add some additional info, but basically I think what's being requested here is some discussion on whether changing B is warranted.
--
Tetsuro Nakamura <tetsuro.nakamura.bc@hco.ntt.co.jp>
NTT Network Service Systems Laboratories
TEL: 0422 59 6914 (National) / +81 422 59 6914 (International)
3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan
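To make the expected semantics above concrete: under the reading Tetsuro describes, aggregate membership is a static property of the provider tree, answerable without any request context. A minimal sketch, assuming a toy provider model (the Provider class and effective_aggregates helper are illustrative names, not placement's real internals):

    from dataclasses import dataclass, field
    from typing import Optional, Set

    @dataclass
    class Provider:
        name: str
        # Aggregates placed directly on this RP.
        aggregates: Set[str] = field(default_factory=set)
        parent: Optional["Provider"] = None

        def root(self) -> "Provider":
            # Walk up the tree to the root provider.
            node = self
            while node.parent is not None:
                node = node.parent
            return node

    def effective_aggregates(rp: Provider) -> Set[str]:
        # "Aggregate on root spans whole tree": an RP is in every
        # aggregate placed on it directly plus every aggregate placed
        # on its root, regardless of how the query is phrased.
        return rp.aggregates | rp.root().aggregates

    # Tetsuro's setup: aggA on the root RP, two NUMA child RPs.
    compute1 = Provider("compute1", {"aggA"})
    numa0 = Provider("compute1_numa0", parent=compute1)
    numa1 = Provider("compute1_numa1", parent=compute1)

    # Both NUMA RPs are in aggA whether the request is granular or not.
    assert effective_aggregates(numa0) == {"aggA"}
    assert effective_aggregates(numa1) == {"aggA"}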
On Apr 9, 2019, at 8:00 AM, Chris Dent <cdent+os@anticdent.org> wrote:
* Last PTG for Stein, we decided the following policies and have done so in Stein
A) Aggregate on root spans whole tree for ``member_of=`` requests in 'GET /allocation_candidates'
B) This spanning policy doesn't apply to granular requests ``member_of<N>=`` or to requests in 'GET /resource_providers'
C) This change is a bug fix without microversion
However, I now feel that policy B is weird. Consider a case where only granular requests are used in the request. If an operator puts aggA on the root, whether aggA applies to the child depends on how the request was constructed. That's very difficult for operators to debug...
It seems that a lot of the other efforts around trees have the goal of keeping trees and subtrees more integral, rather than separate pieces. Given that, it would make sense that if the root is in an aggregate, the entire tree is in the agg.

I'm trying to remember why we decided on policy B, but my brain is failing me.

-- Ed Leafe
On 04/09/2019 09:00 AM, Chris Dent wrote:
From the etherpad [1]:
* Last PTG for Stein, we decided the following policies and have done so in Stein
A) Aggregate on root spans whole tree for ``member_of=`` requests in 'GET /allocation_candidates'
B) This spanning policy doesn't apply to granular requests ``member_of<N>=`` or to requests in 'GET /resource_providers'
C) This change is a bug fix without microversion
However, I now feel that policy B is weird. Consider a case where only granular requests are used in the request. If an operator puts aggA on the root, whether aggA applies to the child depends on how the request was constructed. That's very difficult for operators to debug...
This is from Tetsuro, so perhaps he can add some additional info, but basically I think what's being requested here is some discussion on whether changing B is warranted.
We have a similar issue with traits.

I actually think there should be a single "apply membership or traits using self-and-children" policy. I've been unable to think of any use case that would *not* be serviced by this policy.

Providers that are matched for a particular request group's resources should THEN have any member_of constraints applied to them and their children. Same for traits, IMHO.

In other words, the algorithm for matching allocation candidates should do the following in order for each request group:

1) Find the provider IDs having capacity for the resources contained in the request group

2) If there is a member_of constraint in this request group, reduce the matched set to only those providers that are associated (or have any children associated) with the aggregates listed in the member_of constraint

3) If there is a required trait constraint in this request group, reduce the matched set to only those providers that have the required trait(s) or where their children have the required traits

4) If there is a forbidden trait constraint in this request group, remove from the matched set any providers that have the forbidden trait(s) or where their children have the forbidden traits

Repeat for each request group, applying the group_policy=isolate constraint when there is more than one granular request group.

Best,
-jay
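A rough sketch of those four steps as literally stated, applied per request group. The RP shape and the match_group signature are assumptions made for illustration only; placement does this filtering in SQL, not like this:

    from typing import Dict, Iterable, List, Optional, Set

    class RP:
        def __init__(self, name: str,
                     inventory: Optional[Dict[str, int]] = None,
                     aggregates: Optional[Set[str]] = None,
                     traits: Optional[Set[str]] = None):
            self.name = name
            self.inventory = inventory or {}   # resource class -> capacity
            self.aggregates = aggregates or set()
            self.traits = traits or set()
            self.children: List["RP"] = []

    def subtree(rp: RP) -> Iterable[RP]:
        # The provider itself plus all of its descendants
        # ("self-and-children").
        yield rp
        for child in rp.children:
            yield from subtree(child)

    def match_group(providers: Iterable[RP],
                    resources: Dict[str, int],
                    member_of: Set[str] = frozenset(),
                    required: Set[str] = frozenset(),
                    forbidden: Set[str] = frozenset()) -> List[RP]:
        matched = []
        for rp in providers:
            # 1) capacity for every resource class in the request group
            if any(rp.inventory.get(rc, 0) < amt
                   for rc, amt in resources.items()):
                continue
            # 2) member_of: the provider, or any child, is associated
            #    with one of the requested aggregates
            if member_of and not any(n.aggregates & member_of
                                     for n in subtree(rp)):
                continue
            # 3) required traits: each one on the provider or a child
            if not all(any(t in n.traits for n in subtree(rp))
                       for t in required):
                continue
            # 4) forbidden traits: reject if on the provider or a child
            if any(n.traits & forbidden for n in subtree(rp)):
                continue
            matched.append(rp)
        return matched

    # Repeat match_group() for each request group; applying
    # group_policy=isolate across multiple granular groups is a
    # separate step not shown here.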
On Apr 21, 2019, at 5:20 PM, Jay Pipes <jaypipes@gmail.com> wrote:
On 04/09/2019 09:00 AM, Chris Dent wrote:
From the etherpad [1]:

* Last PTG for Stein, we decided the following policies and have done so in Stein:

A) Aggregate on root spans whole tree for ``member_of=`` requests in 'GET /allocation_candidates'
B) This spanning policy doesn't apply to granular requests ``member_of<N>=`` or to requests in 'GET /resource_providers'
C) This change is a bug fix without microversion

However, I now feel that policy B is weird. Consider a case where only granular requests are used in the request. If an operator puts aggA on the root, whether aggA applies to the child depends on how the request was constructed. That's very difficult for operators to debug...

This is from Tetsuro, so perhaps he can add some additional info, but basically I think what's being requested here is some discussion on whether changing B is warranted.
We have a similar issue with traits.
I actually think there should be a single "apply membership or traits using self-and-children" policy. I've been unable to think of any use case that would *not* be serviced by this policy.
Not only that, but I can’t imagine a scenario where we would want membership of one RP but forbid membership by a child. That would be… strange.

-- Ed Leafe
On 04/22/2019 09:00 AM, Ed Leafe wrote:
On Apr 21, 2019, at 5:20 PM, Jay Pipes <jaypipes@gmail.com> wrote:
On 04/09/2019 09:00 AM, Chris Dent wrote:
From the etherpad [1]:

* Last PTG for Stein, we decided the following policies and have done so in Stein:

A) Aggregate on root spans whole tree for ``member_of=`` requests in 'GET /allocation_candidates'
B) This spanning policy doesn't apply to granular requests ``member_of<N>=`` or to requests in 'GET /resource_providers'
C) This change is a bug fix without microversion

However, I now feel that policy B is weird. Consider a case where only granular requests are used in the request. If an operator puts aggA on the root, whether aggA applies to the child depends on how the request was constructed. That's very difficult for operators to debug...

This is from Tetsuro, so perhaps he can add some additional info, but basically I think what's being requested here is some discussion on whether changing B is warranted.
We have a similar issue with traits.
I actually think there should be a single "apply membership or traits using self-and-children" policy. I've been unable to think of any use case that would *not* be serviced by this policy.
Not only that, but I can’t imagine a scenario where we would want membership of one RP but forbid membership by a child. That would be… strange.
Yuup. +100.

-jay
On Sun, 21 Apr 2019, Jay Pipes wrote:
In other words, the algorithm for matching allocation candidates should do the following in order for each request group:
1) Find the provider IDs having capacity for the resources contained in the request group

2) If there is a member_of constraint in this request group, reduce the matched set to only those providers that are associated (or have any children associated) with the aggregates listed in the member_of constraint

3) If there is a required trait constraint in this request group, reduce the matched set to only those providers that have the required trait(s) or where their children have the required traits

4) If there is a forbidden trait constraint in this request group, remove from the matched set any providers that have the forbidden trait(s) or where their children have the forbidden traits
Is this the same as or sort of the opposite of the "traits flow down" or "everything is spanning" ideas discussed near http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005201.htm... ?

Whichever it is, I think both have merit because they provide a somewhat universal way to interpret trait and aggregate membership, which will help make this stuff more clear.

I'm tending to think that resolving this question is the one main thing we should do at the PTG, but I'm not clear where to fit it in the schedule without conflicting with a nova topic, while also being able to include Jay in the discussion.

--
Chris Dent ٩◔̯◔۶
https://anticdent.org/
freenode: cdent  tw: @anticdent
On 04/28/2019 07:18 PM, Chris Dent wrote:
On Sun, 21 Apr 2019, Jay Pipes wrote:
In other words, the algorithm for matching allocation candidates should do the following in order for each request group:
1) Find the provider IDs having capacity for the resources contained in the request group

2) If there is a member_of constraint in this request group, reduce the matched set to only those providers that are associated (or have any children associated) with the aggregates listed in the member_of constraint

3) If there is a required trait constraint in this request group, reduce the matched set to only those providers that have the required trait(s) or where their children have the required traits

4) If there is a forbidden trait constraint in this request group, remove from the matched set any providers that have the forbidden trait(s) or where their children have the forbidden traits
Is this the same as or sort of the opposite of the "traits flow down" or "everything is spanning" ideas discussed near
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005201.htm...
It represents the same idea as "traits flow down".

Best,
-jay