Sorry, I missed the mailing list address in my reply, so there is probably some discussion and some replies missing from the latest email. I'm replying to the mailing list address with those replies, so that other people can catch up on our discussion.

Sean Mooney <smooney@redhat.com> wrote on Mon, Apr 15, 2019 at 9:54 PM:

On Mon, 2019-04-15 at 21:04 +0800, Alex Xu wrote:
Let me contribute another idea here. I'm pretty sure I haven't explored all the cases, given my limited view.
So I'm thinking we can continue to use the query string to build a tree structure with the request group numbers. I know about the numbered request group problem for Cyborg and Neutron, but I think there must be some way to describe which instance NUMA node the Cyborg device will be attached to. So I guess it isn't the fault of numbered request groups; maybe we are just missing a way to describe that.
For the case in the spec https://review.openstack.org/#/c/650476, an instance with one NUMA node and two VFs from different networks, we can write it as below:
?resources=DISK_GB:10&
 resources1=VCPU:2,MEMORY_MB:128&
 resources1.1=VF:1&required1.1=NET_A&
 resources1.2=VF:1&required1.2=NET_B

I'm not sure what NET_A and NET_B correspond to; as they are not prefixed with CUSTOM_, that implies they are standard traits, but how would you map a dynamically created Neutron network to resource providers as traits? I can see, and have argued for, doing something similar for Neutron physnets, as they are mostly static and can be applied by the Neutron agent to the RP they create using a CUSTOM_PHYSNET_<physnet name> trait, but I don't see how NET_A would work.
Yes, it is CUSTOM_PHYSNET_NET_A/CUSTOM_PHYSNET_NET_B; I just used a simplified version. The case I want to show is two VFs from different physical networks.
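To spell it out with the full trait names, the request would look like below (just a sketch; I'm assuming the physnet traits are created on the PF RPs the way Sean describes above):

?resources=DISK_GB:10&
 resources1=VCPU:2,MEMORY_MB:128&
 resources1.1=VF:1&required1.1=CUSTOM_PHYSNET_NET_A&
 resources1.2=VF:1&required1.2=CUSTOM_PHYSNET_NET_B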
Another example: we request an instance with two NUMA nodes, with 2 vCPUs and 128 MB of memory in each node. Each node has two VFs that come from different PFs, for HA.
?resources=DISK_GB:10&
 resources1=VCPU:2,MEMORY_MB:128&
 resources1.1=VF:1&
 resources1.2=VF:1&
 resources2=VCPU:2,MEMORY_MB:128&
 resources2.1=VF:1&
 resources2.2=VF:1&
 group_policy=isolate&
 group_policy1=isolate&
 group_policy2=isolate
This gets messy, as there is no way to express that I have a 2 NUMA node guest and I want a VF from either NUMA node without changing the grouping and group policies.
It can be done by:

GET /allocation_candidates?
 resources=DISK_GB:10,VF:1&
 resources1=VCPU:2,MEMORY_MB:128&
 resources2=VCPU:2,MEMORY_MB:128&
 group_policy=isolate

The DISK_GB and VF are in an un-numbered request group, so they may come from any RP in the tree. From http://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/gran... : "The semantic for the (single) un-numbered grouping is unchanged. That is, it may still return results from different RPs in the same tree (or, when "shared" is fully implemented, the same aggregate)."
If we went down this road then we would have to generate this request dynamically (I'm OK with that), but that would mean the operator should never add resources:... extra specs to the flavor.
Yes, I prefer generating it from the extra specs rather than asking the operator to write such a complex request by hand. The operator can continue to use the 'resources' extra spec, and we can merge that into the generated one.
Personally I would like to move in the direction of creating the placement queries dynamically and not requiring or allowing operators to specify resources in the flavor, as it's the only way I can see to be able to generate a query like the one above. The main gap I see in enabling that is that we have no NUMA information from Neutron with regard to which NUMA node we should attach the VF to, so we can't create the request above without changing the Neutron API.
Yes, we are on the same side. For the Neutron problem, see below.
The `group_policy` ensures that resources1 and resources2 aren't coming from the same RP. The `group_policy1` ensures that the `resources1.x` groups aren't coming from the same RP. The `group_policy2` ensures that the `resources2.x` groups aren't coming from the same RP.
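To make the mapping concrete, the two NUMA node request above is meant to match a host provider tree shaped roughly like this (a hypothetical layout; the provider names are just for illustration):

compute node (DISK_GB)                <- resources
 +-- NUMA node 0 (VCPU, MEMORY_MB)    <- resources1
 |    +-- PF 0 (VF)                   <- resources1.1
 |    +-- PF 1 (VF)                   <- resources1.2
 +-- NUMA node 1 (VCPU, MEMORY_MB)    <- resources2
      +-- PF 2 (VF)                   <- resources2.1
      +-- PF 3 (VF)                   <- resources2.2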
For the Cyborg case, I think we can propose the flavor extra spec as below:

accel:device_profile.[numa node id]=<profile_name>
I think this could work in the short term, but honestly I think we should not do this. In the long term we would want to allow the device_profile to be passed on the nova boot command line and to manage quota/billing of devices outside of flavors.
We can also allow specifying the guest NUMA node id in the boot command. But what I want to say is that the problem is we are missing a way to specify that info for Neutron and Cyborg. The other proposals in the spec don't resolve this problem, and I think this problem isn't the fault of numbered request groups.
We will also want to provide a policy attribute, I think, for virtual-to-host NUMA affinity for devices.
The other aspect is that we currently do not create a PCI root complex per NUMA node; until we do that, we can't support requesting a Cyborg device per NUMA node. The NUMA node id in accel:device_profile.[numa node id]=<profile_name> should be the guest NUMA node, not a host NUMA node.
Yes, the "[numa node id]" in "accel:device_profile.[numa node id]" is the guest NUMA node id. Just like other extra specs such as "hw:numa_cpus.0=1,2", we are using the guest NUMA node id in those extra specs.
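For example, a flavor for a two NUMA node guest that wants the device on guest node 0 could carry extra specs along these lines (a sketch only; the profile name is a placeholder):

hw:numa_nodes=2
hw:numa_cpus.0=0,1
hw:numa_cpus.1=2,3
accel:device_profile.0=<profile_name>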
Personally I would prefer to create the PCI root complex per NUMA node first and automatically assign the device to the correct root complex before allowing the end user to request a Cyborg device to be attached to a specific guest NUMA node, as I think accel:device_profile.[numa node id]=<profile_name> might be too constraining while also leaking too much host-specific information via our API if it is used to select placement resource providers and therefore host NUMA nodes.
Then we will know which instance NUMA node the user wants the Cyborg device to be attached to.
Cyborg only needs to return an un-numbered request group; Nova will then use all the 'hw:xxx' extra specs and 'accel:device_profile.[numa node id]' to generate a placement request like the one above.
For example, if it is a PCI device under the first guest NUMA node, the extra spec will be 'accel:device_profile.0=<profile_name>'. Cyborg can return a simple request 'resources=CYBORG_PCI_XX_DEVICE:1', and we merge this into the request group as 'resources1=VCPU:2,MEMORY_MB:128,CYBORG_PCI_XX_DEVICE:1'. If the PCI device has a special trait, then Cyborg should return the request group as 'resources1=CYBORG_PCI_XX_DEVICE:1&required1=SOME_TRAIT', and Nova merges this into the placement request as 'resources1.1'.
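Putting that together with the single NUMA node request from earlier, the generated request would end up looking something like this (a sketch; CYBORG_PCI_XX_DEVICE and SOME_TRAIT are placeholder names):

?resources=DISK_GB:10&
 resources1=VCPU:2,MEMORY_MB:128&
 resources1.1=CYBORG_PCI_XX_DEVICE:1&required1.1=SOME_TRAIT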
Chris Dent <cdent+os@anticdent.org> wrote on Tue, Apr 9, 2019 at 8:42 PM:
Spec: https://review.openstack.org/650476
From the commit message:
To support NUMA and similar concepts, this proposes the ability to request resources from different providers nested under a common subtree (below the root provider).
There's much in the feature described by the spec and the surrounding context that is frequently a source of contention in the placement group, so working through this spec is probably going to require some robust discussion. Doing most of that before the PTG will help make sure we're not going in circles in person.
Some of the areas of potential contention:
* Adequate for limited but maybe not all use case solutions
* Strict trait constructionism
* Evolving the complexity of placement solely for the satisfaction of hardware representation in Nova
* Inventory-less resource providers
* Developing new features in placement before existing features are fully used in client services
* Others?
I list these not because they are deal breakers or the only things that matter, but because they have presented stumbling blocks in the past and we may as well work to address them (or agree to punt them until later); otherwise there will be lingering dread.
And, beyond all that squishy stuff, there is the necessary discussion over the solution described in the spec. There are several alternatives listed in the spec, and a few more in the comments. We'd like to figure out the best solution that can actually be done in a reasonable amount of time, not the best solution in the absolute.
Discuss!
-- Chris Dent ٩◔̯◔۶
freenode: cdent tw: @anticdent