[placement][nova][ptg] Resource provider - request group mapping
openstack at fried.cc
Wed Apr 10 14:31:13 UTC 2019
> It is on my TODO list to create a story for it in placement and move
> the spec to the placement repo. I don't know when I will reach this
> item on my list, sorry.
I was getting ready to volunteer (again) to help move the ball on this
because it's really important that we get this done.
But then I started thinking, is it really? The workarounds we have in
the client-side code right now are pretty sucky, but they work. The
effort of $subject is an optimization and suck-reducer, but is it
crucial? Probably not. Though I would like to hear from Cyborg before we
decide we can live without it for Train.
> When I move the spec I can add the open questions from the nova spec
> review to the placement spec directly to help continuity. Is that OK?
> Pinging Cyborg folks. Does Cyborg need something similar?
I know for sure this is a yes (somewhere around ?), but I won't be
able to express the details as well as Sundar.
> I can own the first alternative in the spec.
I'll champion the one I described in the third comment at , where we
add a "mappings" dict next to "allocations". IMO, it's a tad cleaner
because it's per "allocations" rather than per "allocations.$rp". That
said, both of these options:
- Provide the same information: which request groups got satisfied by
which providers.
- Violate the "black box" principle and require one side or the other to
work around it (either the client removes, or placement ignores, the new
key on PUT /allocations). As I said further down in , I don't care about
this.
- Maintain the existing levels of hierarchy for the existing elements,
which Chris explained was important (see bottom five comments at ).
- Don't require correlation by list index, which was the only thing I
was a hard -1 on.
So if anyone has a strong preference for , I'm not going to fight hard.
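As a rough sketch of the option I'm championing (the exact response shape is still under discussion, and the UUIDs and group suffixes here are invented), a single allocation candidate with a "mappings" dict next to "allocations" might look like:

```python
# Hypothetical shape (all UUIDs invented) of one allocation candidate
# with the proposed "mappings" key as a sibling of "allocations",
# correlating suffixed request groups to the providers that satisfied them.
CN_UUID = "8d09cd99-ab19-4b85-9ba5-5056a0e26eb6"
SHR_UUID = "4e30eb34-3b8c-4f2e-9d21-2f9e0b3a7c11"

candidate = {
    "allocations": {
        CN_UUID: {"resources": {"VCPU": 2, "MEMORY_MB": 2048}},
        SHR_UUID: {"resources": {"DISK_GB": 50}},
    },
    # Per-candidate (not per-"allocations.$rp") mapping of request group
    # suffix to the provider(s) that satisfied it.
    "mappings": {
        "": [CN_UUID],    # the unsuffixed group
        "1": [SHR_UUID],  # resources1
    },
}

# A client can now correlate groups to providers without re-deriving the
# match itself; every mapped provider appears among the allocations.
for suffix, providers in candidate["mappings"].items():
    for rp in providers:
        assert rp in candidate["allocations"]
```

The point of hanging "mappings" off the candidate rather than off each provider is that the existing "allocations" structure stays byte-for-byte what it is today.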
Note that they also both *don't* provide information about which
*resource* satisfied which request group. E.g. this spec doesn't help us
with the "multiple disks" problem:
resources1=DISK_GB:50&resources2=DISK_GB:25&group_policy=none may result
in one RP providing DISK_GB:75, request_groups=[resources1,resources2].
I'm assuming we don't care (yet).
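To make the gap concrete, here's a sketch (provider UUID invented) of what such a merged result could look like under either proposal:

```python
# Hypothetical result for
#   resources1=DISK_GB:50&resources2=DISK_GB:25&group_policy=none
# when a single provider satisfies both groups: the allocations are
# merged into DISK_GB:75, and the mapping only records that both
# groups landed on the same provider.
RP_UUID = "b7a4c2e1-9f3d-4c6a-8e51-0d2f6b9c3a44"

candidate = {
    "allocations": {RP_UUID: {"resources": {"DISK_GB": 75}}},
    "mappings": {"1": [RP_UUID], "2": [RP_UUID]},
}

# Both groups map to the same provider, so the original 50 GB / 25 GB
# split -- which "disk" is which -- is not recoverable from the mapping.
assert candidate["mappings"]["1"] == candidate["mappings"]["2"]
```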