2018-06-05 22:53 GMT+08:00 Eric Fried <openstack@fried.cc>:
> Alex-
>
> Allocations for an instance are pulled down by the compute manager and
> passed into the virt driver's spawn method since [1]. An allocation
> comprises a consumer, provider, resource class, and amount. Once we can
> schedule to trees, the allocations pulled down by the compute manager
> will span the tree as appropriate. So in that sense, yes, nova-compute
> knows which amounts of which resource classes come from which providers.

Eric, thanks, that is the thing I missed. Initially I thought we would
return the allocations from the scheduler down to the compute manager. I
see we already pull the allocations in the compute manager now.
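For concreteness: the allocations blob that gets pulled down and handed
to the driver is keyed by resource provider UUID, so a VF satisfied by a
child provider shows up under that child's UUID. A rough sketch (the
UUIDs, resource classes, and amounts are all made up):

    # Sketch of allocations spanning a provider tree: the compute node
    # (root provider) supplies VCPU/MEMORY_MB, while the VF comes from
    # a child provider representing one PF. All values illustrative.
    allocations = {
        "11111111-1111-1111-1111-111111111111": {  # root (compute node)
            "resources": {"VCPU": 2, "MEMORY_MB": 2048},
        },
        "22222222-2222-2222-2222-222222222222": {  # child (PF #1)
            "resources": {"SRIOV_NET_VF": 1},
        },
    }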

> However, if you're asking about the situation where we have two
> different allocations of the same resource class coming from two
> separate providers: Yes, we can still tell which RCxAMOUNT is associated
> with which provider; but No, we still have no inherent way to correlate
> a specific one of those allocations with the part of the *request* it
> came from. If just the provider UUID isn't enough for the virt driver
> to figure out what to do, it may have to figure it out by looking at the
> flavor (and/or image metadata), inspecting the traits on the providers
> associated with the allocations, etc. (The theory here is that, if the
> virt driver can't tell the difference at that point, then it actually
> doesn't matter.)
>
> [1] https://review.openstack.org/#/c/511879/
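To make the trait-inspection idea concrete, here is a rough sketch of
how a driver might disambiguate two VF allocations. Note that
get_provider_traits() below is a hypothetical stand-in for however the
driver actually fetches a provider's traits, not an existing nova API:

    def get_provider_traits(rp_uuid):
        # Hypothetical stand-in for a GET /resource_providers/{uuid}/traits
        # lookup; hardcoded here only to keep the sketch self-contained.
        traits = {
            "22222222-2222-2222-2222-222222222222": {"CUSTOM_PHYSNET_PUBLIC"},
        }
        return traits.get(rp_uuid, set())

    def pick_vf_provider(allocations, wanted_trait):
        # Pick the VF-providing provider whose traits match what the
        # flavor/image asked for.
        for rp_uuid, alloc in allocations.items():
            if "SRIOV_NET_VF" not in alloc["resources"]:
                continue
            if wanted_trait in get_provider_traits(rp_uuid):
                return rp_uuid
        return None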

> On 06/05/2018 09:05 AM, Alex Xu wrote:
> > Maybe I missed something. Is there any way nova-compute can know which
> > child resource provider the resources are allocated from? For example,
> > the host has two PFs. The request asks for one VF, so nova-compute
> > needs to know which PF (resource provider) the VF is allocated from.
> > As I understand it, currently we only return a list of alternative
> > resource providers to nova-compute, and those alternatives are root
> > resource providers.
> >
> > 2018-06-05 21:29 GMT+08:00 Jay Pipes <jaypipes@gmail.com>:
> >
> > On 06/05/2018 08:50 AM, Stephen Finucane wrote:
> >
> >     I thought nested resource providers were already supported by
> >     placement? To the best of my knowledge, what is /not/ supported
> >     is virt drivers using these to report NUMA topologies but I
> >     doubt that affects you. The placement guys will need to weigh in
> >     on this as I could be missing something but it sounds like you
> >     can start using this functionality right now.
> >
> > To be clear, this is what placement and nova *currently* support
> > with regards to nested resource providers:
> >
> > 1) When creating a resource provider in placement, you can specify a
> > parent_provider_uuid and thus create trees of providers. This was
> > placement API microversion 1.14. Also included in this microversion
> > was support for displaying the parent and root provider UUID for
> > resource providers.
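(As an illustration, creating a child provider is an ordinary provider
POST plus the parent field and the 1.14 microversion header; the
endpoint, token, and UUID below are placeholders:)

    # Illustrative: create a child provider (e.g. a PF) under an
    # existing compute-node provider via the placement API.
    import requests

    PLACEMENT = "http://placement.example.com"      # placeholder
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",              # placeholder
        "OpenStack-API-Version": "placement 1.14",  # parent support
    }
    requests.post(
        PLACEMENT + "/resource_providers",
        headers=HEADERS,
        json={
            "name": "compute1:pf1",
            "parent_provider_uuid": "11111111-1111-1111-1111-111111111111",
        },
    )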

> > 2) The nova "scheduler report client" (terrible name, it's mostly
> > just the placement client at this point) understands how to call
> > placement API 1.14 and create resource providers with a parent provider.
> >
> > 3) The nova scheduler report client uses a ProviderTree object [1]
> > to cache information about the hierarchy of providers that it knows
> > about. For nova-compute workers managing hypervisors, that means the
> > ProviderTree object contained in the report client is rooted in a
> > resource provider that represents the compute node itself (the
> > hypervisor). For nova-compute workers managing baremetal, that means
> > the ProviderTree object contains many root providers, each
> > representing an Ironic baremetal node.
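(A rough sketch of building such a tree with ProviderTree; the method
names follow nova/compute/provider_tree.py [1], but treat the exact
signatures and the inventory fields as approximate:)

    from nova.compute.provider_tree import ProviderTree

    CN_UUID = "11111111-1111-1111-1111-111111111111"  # made up

    tree = ProviderTree()
    tree.new_root("compute1", CN_UUID, 0)    # root = the compute node
    tree.new_child("compute1:pf1", CN_UUID)  # child hanging off the root
    # Real inventory records carry more fields (reserved, max_unit,
    # allocation_ratio, ...); abbreviated here.
    tree.update_inventory("compute1:pf1", {"SRIOV_NET_VF": {"total": 8}})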

> > 4) The placement API's GET /allocation_candidates endpoint now
> > understands the concept of granular request groups [2]. Granular
> > request groups are only relevant when a user wants to specify that
> > child providers in a provider tree should be used to satisfy part of
> > an overall scheduling request. However, this support is not yet
> > complete -- see #5 below.
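(For reference, the granular syntax numbers the request groups in the
query string, per the spec in [2]; the resource classes and trait below
are illustrative:)

    # Unnumbered group: satisfiable anywhere in the tree. Numbered
    # group 1: must be satisfied by a single provider, e.g. one PF.
    PLACEMENT = "http://placement.example.com"  # placeholder
    url = (PLACEMENT + "/allocation_candidates"
           "?resources=VCPU:2,MEMORY_MB:2048"
           "&resources1=SRIOV_NET_VF:1"
           "&required1=CUSTOM_PHYSNET_PUBLIC")  # trait name is made up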

> > The following parts of the nested resource providers modeling are
> > *NOT* yet complete, however:
> >
> > 5) GET /allocation_candidates does not currently return *results*
> > when granular request groups are specified. So, while the placement
> > service understands the *request* for granular groups, it doesn't
> > yet have the ability to constrain the returned candidates
> > appropriately. Tetsuro is actively working on this functionality in
> > this patch series:
> >
> > https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates
> >
> > 6) The virt drivers need to implement the update_provider_tree()
> > interface [3] and construct the tree of resource providers along
> > with appropriate inventory records for each child provider in the
> > tree. Both libvirt and XenAPI virt drivers have patch series up that
> > begin to take advantage of the nested provider modeling. However, a
> > number of concerns [4] about in-place nova-compute upgrades when
> > moving from a single resource provider to a nested provider tree
> > model were raised, and we have begun brainstorming how to handle the
> > migration of existing data in the single-provider model to the
> > nested provider model. [5] We are blocking any reviews on patch
> > series that modify the local provider modeling until these migration
> > concerns are fully resolved.
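(A minimal sketch of what a driver-side implementation might look like;
the method name and parameters follow the interface linked in [3], but
the body, provider names, and inventory numbers are illustrative only:)

    # Report two PF child providers under the compute node. A real
    # driver would discover PFs from the hypervisor, not hardcode them.
    def update_provider_tree(self, provider_tree, nova_compute_node_uuid):
        for pf in ("pf1", "pf2"):
            name = "%s_%s" % (nova_compute_node_uuid, pf)
            if not provider_tree.exists(name):
                provider_tree.new_child(name, nova_compute_node_uuid)
            provider_tree.update_inventory(
                name, {"SRIOV_NET_VF": {"total": 8}})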

> > 7) The scheduler does not currently pass granular request groups to
> > placement. Once #5 and #6 are resolved, and once the
> > migration/upgrade path is resolved, clearly we will need to have the
> > scheduler start making requests to placement that represent the
> > granular request groups and have the scheduler pass the resulting
> > allocation candidates to its filters and weighers.
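(On the request side, those granular groups would come from numbered
flavor extra specs as described in the spec at [2]; the values below are
illustrative:)

    # One granular group asking for a VF from a provider carrying a
    # given (made-up) physnet trait.
    extra_specs = {
        "resources1:SRIOV_NET_VF": "1",
        "trait1:CUSTOM_PHYSNET_PUBLIC": "required",
    }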

> > Hope this helps highlight where we currently are and the work still
> > left to do (in Rocky) on nested resource providers.
> >
> > Best,
> > -jay
> >
> > [1] https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py
> >
> > [2] https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html
> >
> > [3] https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833
> >
> > [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html
> >
> > [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade