Let me contribute another idea here. I'm pretty sure I haven't explored all the cases, given my limited view.

So I'm thinking we can keep using the query string and build a tree structure out of the request group numbers. I know about the numbered request group problem for Cyborg and Neutron, but I think there must be some way to describe which instance NUMA node a Cyborg device will be attached to. So I guess it isn't the fault of numbered request groups; maybe we are just missing a way to describe that.

For the case in the spec https://review.openstack.org/#/c/650476, an instance with one NUMA node and two VFs from different networks, we can write it as below:

?resources=DISK_GB:10&
resources1=VCPU:2,MEMORY_MB:128&
resources1.1=VF:1&required1.1=NET_A&
resources1.2=VF:1&required1.2=NET_B
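
(Just to illustrate the shape of this, below is a minimal Python sketch that assembles such a query string from a suffix -> parameters mapping. The build_query() helper and the dict layout are my own invention for this mail, not existing Nova or placement code, and the nested suffixes are of course only the proposed syntax.)

    from urllib.parse import urlencode

    def build_query(groups):
        """Flatten {suffix: {param: value}} into a GET query string.

        '' is the un-numbered group, '1' a numbered group, and '1.1'
        a (proposed) nested group under group 1.
        """
        params = []
        for suffix, group in groups.items():
            for param, value in group.items():
                params.append((param + suffix, value))
        return urlencode(params)

    # One NUMA node, two VFs from different networks (the example above).
    groups = {
        '': {'resources': 'DISK_GB:10'},
        '1': {'resources': 'VCPU:2,MEMORY_MB:128'},
        '1.1': {'resources': 'VF:1', 'required': 'NET_A'},
        '1.2': {'resources': 'VF:1', 'required': 'NET_B'},
    }
    print(build_query(groups))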

Another example: we request an instance with two NUMA nodes, 2 VCPUs and 128MB of memory in each node. Each node has two VFs coming from different PFs for HA.

?resources=DISK_GB:10&
resources1=VCPU:2,MEMORY_MB:128&
resources1.1=VF:1&
resources1.2=VF:1&
resources2=VCPU:2,MEMORY_MB:128&
resources2.1=VF:1&
resources2.2=VF:1&
group_policy=isolate&
group_policy1=isolate&
group_policy2=isolate

The `group_policy` ensures that `resources1` and `resources2` aren't satisfied by the same RP. The `group_policy1` ensures that the `resources1.x` groups aren't satisfied by the same RP, and `group_policy2` ensures that the `resources2.x` groups aren't satisfied by the same RP.
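
(To make the isolate semantics concrete, here is a rough sketch of the check the proposed per-group policy would perform, assuming we already know which resource provider satisfied each suffix. The data structures are made up for illustration; this is not how placement implements group_policy today.)

    def violates_isolate(provider_by_suffix, parent):
        """Return True if two sub-groups of `parent` share a provider.

        provider_by_suffix: e.g. {'1.1': 'pf0', '1.2': 'pf1'}
        parent: the group whose children must be isolated, e.g. '1'.
        """
        children = [rp for suffix, rp in provider_by_suffix.items()
                    if suffix.startswith(parent + '.')]
        return len(children) != len(set(children))

    # group_policy1=isolate would reject a candidate where both VFs of
    # the first NUMA node land on the same PF:
    assert violates_isolate({'1.1': 'pf0', '1.2': 'pf0'}, '1')
    assert not violates_isolate({'1.1': 'pf0', '1.2': 'pf1'}, '1')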

For the Cyborg case, I think we can propose a flavor extra spec like below:
accel:device_profile.[numa node id]=<profile_name>

Then we will know which instance NUMA node the user wants the Cyborg device attached to.
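
(A minimal sketch of how Nova could read that proposed extra spec into a NUMA node -> device profile mapping; parse_accel_profiles() is a hypothetical helper name, and the extra spec itself is only the proposal above.)

    def parse_accel_profiles(extra_specs):
        """Map NUMA node id -> device profile name from flavor extra specs."""
        profiles = {}
        for key, profile_name in extra_specs.items():
            if key.startswith('accel:device_profile.'):
                numa_node = int(key.split('.', 1)[1])
                profiles[numa_node] = profile_name
        return profiles

    extra_specs = {
        'hw:numa_nodes': '2',
        'accel:device_profile.0': 'my_gpu_profile',
    }
    print(parse_accel_profiles(extra_specs))   # {0: 'my_gpu_profile'}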

Cyborg only needs to return un-numbered request groups; Nova will then use all the `hw:xxx` extra specs and `accel:device_profile.[numa node id]` to generate a placement request like the one above.

For example, if it is a PCI device under the first NUMA node, the extra spec will be 'accel:device_profile.0=<profile_name>', and Cyborg can return a simple request 'resources=CYBORG_PCI_XX_DEVICE:1'; then we merge this into the request group 'resources1=VCPU:2,MEMORY_MB:128,CYBORG_PCI_XX_DEVICE:1'. If the PCI device has a special trait, then Cyborg should return the request group as 'resources=CYBORG_PCI_XX_DEVICE:1&required=SOME_TRAIT', and Nova merges this into the placement request as 'resources1.1'.
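
(Here is a sketch of that merge step with made-up data structures: Cyborg hands back an un-numbered group, and Nova either folds its resources into the NUMA node's numbered group or, when traits are involved, attaches it as a nested sub-group so the trait constrains only the device. None of this is existing Nova code; it just illustrates the idea, and the node-to-group numbering is my assumption.)

    def merge_cyborg_group(numa_groups, numa_node, cyborg_group):
        # Assumption: instance NUMA node 0 maps to request group '1'.
        node_suffix = str(numa_node + 1)
        if not cyborg_group.get('required'):
            # No traits: fold the device resources into the node's group.
            numa_groups[node_suffix]['resources'] += ',' + cyborg_group['resources']
        else:
            # Traits: give the device its own nested sub-group, so the
            # trait only constrains the device, not the whole node.
            n = 1 + sum(1 for s in numa_groups
                        if s.startswith(node_suffix + '.'))
            numa_groups['%s.%d' % (node_suffix, n)] = dict(cyborg_group)
        return numa_groups

    numa_groups = {'1': {'resources': 'VCPU:2,MEMORY_MB:128'}}
    merge_cyborg_group(numa_groups, 0, {'resources': 'CYBORG_PCI_XX_DEVICE:1'})
    # -> {'1': {'resources': 'VCPU:2,MEMORY_MB:128,CYBORG_PCI_XX_DEVICE:1'}}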

Chris Dent <cdent+os@anticdent.org> wrote on Tue, Apr 9, 2019 at 8:42 PM:

Spec: https://review.openstack.org/650476

From the commit message:

     To support NUMA and similar concepts, this proposes the ability
     to request resources from different providers nested under a
     common subtree (below the root provider).

There's much in the feature described by the spec and the surrounding
context that is frequently a source of contention in the placement
group, so working through this spec is probably going to require
some robust discussion. Doing most of that before the PTG will help
make sure we're not going in circles in person.

Some of the areas of potential contention:

* Solutions adequate for limited but maybe not all use cases
* Strict trait constructionism
* Evolving the complexity of placement solely for the satisfaction
   of hardware representation in Nova
* Inventory-less resource providers
* Developing new features in placement before existing features are
   fully used in client services
* Others?

I list these not because they are deal breakers or the only thing
that matters, but because they have presented stumbling blocks in
the past and we may as well work to address them (or make an
agreement to punt them until later); otherwise there will be
lingering dread.

And, beyond all that squishy stuff, there is the necessary
discussion over the solution described in the spec. There are
several alternatives listed in the spec, and a few more in the
comments. We'd like to figure out the best solution that can
actually be done in a reasonable amount of time, not the best
solution in the absolute.

Discuss!

--
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent