[placement][nova][ptg] resource provider affinity

Alex Xu soulxu at gmail.com
Mon Apr 29 05:02:57 UTC 2019

Chris Dent <cdent+os at anticdent.org> wrote on Sun, Apr 28, 2019 at 10:14 PM:

> On Sun, 28 Apr 2019, Eric Fried wrote:
> > We've talked about this previously. The two objections raised were:
> >
> > a) It assumes the meaning of "same tree" is "one level down from the
> > root".
> Does it? I had casually interpreted
> "group_policy=same_tree:$GROUP_A:$GROUP_B" as meaning '$GROUP_B is
> somewhere within the tree rooted at $GROUP_A at any level' but it
> could just as easily be interpreted a few different ways, including
> what you say.
> > b) It assumes the various pieces of the request (flavor, image, port,
> > device profile) are able to know each others' request group numbers
> > ahead of time. Or we need provide some other mechanism for the scheduler
> > code that dynamically assigns the numbers [2] to understand which ones
> > need to be (sub)grouped together. IIUC this has been Sundar's main
> > objection.
> As I understand things, this is going to be a problem in most of the
> proposals, for at least one of the many participants in the
> interactions that lead to a complex workload landing.
> Jay suggested extending the JSON schema to allow groups that are
> named like resources_compute, required_network. That might allow for
> some conventions to emerge but still requires some measure of
> knowledge from the participants.

I thought placement, cyborg, neutron, etc. don't know what is being built.
Placement doesn't know what it is building from 'GET /a_c'; it just returns
the right RPs matching the request. Cyborg and neutron each only return a
device or a port requirement. So only nova knows we are building a VM, and
thus nova should know the affinity of those resources.
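To make the ambiguity discussed above concrete, here is a minimal sketch of the two readings of "same_tree" against a toy resource-provider tree. The tree layout and helper names are my own illustration, not placement's actual data model or API:

```python
# Hypothetical RP tree: child -> parent (None marks the root).
# Illustrative only; placement models trees differently internally.
PARENTS = {
    "compute_node": None,
    "numa0": "compute_node",
    "fpga0": "numa0",
}

def ancestors(rp):
    """Yield rp's ancestors, nearest first."""
    parent = PARENTS[rp]
    while parent is not None:
        yield parent
        parent = PARENTS[parent]

def same_tree_any_depth(a, b):
    """Reading 1: b is anywhere within the subtree rooted at a."""
    return a == b or a in ancestors(b)

def same_tree_one_level(a, b):
    """Reading 2: b is exactly one level below a."""
    return PARENTS.get(b) == a

# fpga0 is inside compute_node's subtree, but two levels down,
# so the two readings disagree:
print(same_tree_any_depth("compute_node", "fpga0"))  # True
print(same_tree_one_level("compute_node", "fpga0"))  # False
```

Under reading 1, `group_policy=same_tree:$GROUP_A:$GROUP_B` would accept any candidate where $GROUP_B's provider descends from $GROUP_A's; under reading 2, only direct children qualify.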

> I suspect some form of knowledge is going to be needed. Limiting it
> would be good.
> Also good is making sure that from placement's standpoint the
> knowledge is merely symbolic.
> --
> Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
> freenode: cdent                                         tw: @anticdent
