[openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

Alex Xu soulxu at gmail.com
Tue Jun 5 23:28:18 UTC 2018


2018-06-05 22:53 GMT+08:00 Eric Fried <openstack at fried.cc>:

> Alex-
>
>         Since [1], allocations for an instance are pulled down by the
> compute manager and passed into the virt driver's spawn method.  An
> allocation comprises a consumer, provider, resource class, and amount.
> Once we can schedule to trees, the allocations pulled down by the compute
> manager will span the tree as appropriate.  So in that sense, yes,
> nova-compute knows which amounts of which resource classes come from
> which providers.
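>
> For illustration, the allocations dict handed to spawn looks roughly like
> this (shape from memory, UUIDs made up), keyed by resource provider UUID:
>
>     allocations = {
>         "11111111-aaaa-...": {  # the compute node (root) provider
>             "resources": {"VCPU": 2, "MEMORY_MB": 2048},
>         },
>         "22222222-bbbb-...": {  # a child provider, e.g. a PF
>             "resources": {"SRIOV_NET_VF": 1},
>         },
>     }
>
> Once we schedule to trees, child providers simply show up as additional
> keys in that dict.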
>

Eric, thanks, that is the thing I missed. Initially I thought we would
return the allocations from the scheduler down to the compute manager,
but I see we already pull the allocations in the compute manager now.


>
>         However, if you're asking about the situation where we have two
> different allocations of the same resource class coming from two
> separate providers: Yes, we can still tell which RCxAMOUNT is associated
> with which provider; but No, we still have no inherent way to correlate
> a specific one of those allocations with the part of the *request* it
> came from.  If just the provider UUID isn't enough for the virt driver
> to figure out what to do, it may have to figure it out by looking at the
> flavor (and/or image metadata), inspecting the traits on the providers
> associated with the allocations, etc.  (The theory here is that, if the
> virt driver can't tell the difference at that point, then it actually
> doesn't matter.)
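>
> As a rough sketch of what I mean by "figure it out" (the function and its
> inputs are invented here purely for illustration; the traits would come
> from something like GET /resource_providers/{uuid}/traits):
>
>     def pick_vf_provider(allocations, provider_traits, physnet_trait):
>         """Return the provider UUID whose VF allocation carries the
>         requested physnet trait (illustrative only)."""
>         for rp_uuid, alloc in allocations.items():
>             if "SRIOV_NET_VF" not in alloc.get("resources", {}):
>                 continue
>             if physnet_trait in provider_traits.get(rp_uuid, set()):
>                 return rp_uuid
>         return None
>
> I.e. the provider UUID plus whatever traits/metadata hang off that
> provider is the correlation key.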
>
> [1] https://review.openstack.org/#/c/511879/
>
> On 06/05/2018 09:05 AM, Alex Xu wrote:
> > Maybe I missed something. Is there anyway the nova-compute can know the
> > resources are allocated from which child resource provider? For example,
> > the host has two PFs. The request is asking one VF, then the
> > nova-compute needs to know the VF is allocated from which PF (resource
> > provider). As my understand, currently we only return a list of
> > alternative resource provider to the nova-compute, those alternative is
> > root resource provider.
> >
> > 2018-06-05 21:29 GMT+08:00 Jay Pipes <jaypipes at gmail.com>:
> >
> >     On 06/05/2018 08:50 AM, Stephen Finucane wrote:
> >
> >         I thought nested resource providers were already supported by
> >         placement? To the best of my knowledge, what is /not/ supported
> >         is virt drivers using these to report NUMA topologies, but I
> >         doubt that affects you. The placement guys will need to weigh in
> >         on this, as I could be missing something, but it sounds like you
> >         can start using this functionality right now.
> >
> >
> >     To be clear, this is what placement and nova *currently* support
> >     with regards to nested resource providers:
> >
> >     1) When creating a resource provider in placement, you can specify a
> >     parent_provider_uuid and thus create trees of providers. This was
> >     placement API microversion 1.14. Also included in this microversion
> >     was support for displaying the parent and root provider UUID for
> >     resource providers.
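> >
> >     As a rough sketch of what that looks like against the placement API
> >     (a standalone example, not nova code; the endpoint, token and names
> >     below are placeholders):
> >
> >         import requests
> >
> >         PLACEMENT = "http://placement.example.com/placement"
> >         HEADERS = {
> >             "X-Auth-Token": "<a valid token>",
> >             "OpenStack-API-Version": "placement 1.14",
> >         }
> >         # Create a child provider under the compute node provider.
> >         body = {
> >             "name": "compute1_pf_0000:04:00.0",
> >             "parent_provider_uuid": "<uuid of the compute node RP>",
> >         }
> >         requests.post(PLACEMENT + "/resource_providers",
> >                       json=body, headers=HEADERS)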
> >
> >     2) The nova "scheduler report client" (terrible name, it's mostly
> >     just the placement client at this point) understands how to call
> >     placement API microversion 1.14 and create resource providers with
> >     a parent provider.
> >
> >     3) The nova scheduler report client uses a ProviderTree object [1]
> >     to cache information about the hierarchy of providers that it knows
> >     about. For nova-compute workers managing hypervisors, that means the
> >     ProviderTree object contained in the report client is rooted in a
> >     resource provider that represents the compute node itself (the
> >     hypervisor). For nova-compute workers managing baremetal, that means
> >     the ProviderTree object contains many root providers, each
> >     representing an Ironic baremetal node.
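> >
> >     A tiny sketch of that shape (a standalone snippet, not report
> >     client code; I'm going from memory on the method names, so treat
> >     them as approximate and check [1] for the real interface):
> >
> >         from nova.compute.provider_tree import ProviderTree
> >
> >         tree = ProviderTree()
> >         # Root provider: the compute node (hypervisor) itself.
> >         tree.new_root("compute1",
> >                       "11111111-2222-3333-4444-555555555555", 1)
> >         # Child provider hanging off the compute node, e.g. a PF.
> >         tree.new_child("compute1_pf_0000:04:00.0",
> >                        "11111111-2222-3333-4444-555555555555")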
> >
> >     4) The placement API's GET /allocation_candidates endpoint now
> >     understands the concept of granular request groups [2]. Granular
> >     request groups are only relevant when a user wants to specify that
> >     child providers in a provider tree should be used to satisfy part of
> >     an overall scheduling request. However, this support is not yet
> >     complete -- see #5 below.
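> >
> >     For example, using the numbered-suffix syntax from the spec [2]
> >     (the trait below is made up for illustration), a request like:
> >
> >         GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512
> >             &resources1=SRIOV_NET_VF:1&required1=CUSTOM_PHYSNET_NET1
> >
> >     says "the unnumbered group can be satisfied anywhere in the tree,
> >     but the VF and its physnet trait must be satisfied together by a
> >     single provider (e.g. one PF child)".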
> >
> >     The following parts of the nested resource providers modeling are
> >     *NOT* yet complete, however:
> >
> >     5) GET /allocation_candidates does not currently return *results*
> >     when granular request groups are specified. So, while the placement
> >     service understands the *request* for granular groups, it doesn't
> >     yet have the ability to constrain the returned candidates
> >     appropriately. Tetsuro is actively working on this functionality in
> >     this patch series:
> >
> >     https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates
> >
> >     6) The virt drivers need to implement the update_provider_tree()
> >     interface [3] and construct the tree of resource providers along
> >     with appropriate inventory records for each child provider in the
> >     tree. Both libvirt and XenAPI virt drivers have patch series up that
> >     begin to take advantage of the nested provider modeling. However, a
> >     number of concerns [4] about in-place nova-compute upgrades when
> >     moving from a single resource provider to a nested provider tree
> >     model were raised, and we have begun brainstorming how to handle the
> >     migration of existing data in the single-provider model to the
> >     nested provider model. [5] We are blocking any reviews on patch
> >     series that modify the local provider modeling until these migration
> >     concerns are fully resolved.
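> >
> >     To make the interface concrete, a driver implementation will look
> >     something like the following (a bare sketch, not taken from either
> >     patch series; inventory fields are abbreviated and the PF naming is
> >     invented -- see [3] for the actual contract):
> >
> >         def update_provider_tree(self, provider_tree, nodename):
> >             # Inventory on the root (compute node) provider.
> >             provider_tree.update_inventory(nodename, {
> >                 "VCPU": {"total": 8},
> >                 "MEMORY_MB": {"total": 16384},
> >             })
> >             # A child provider for a device, with its own inventory.
> >             pf_name = "%s_pf_0000:04:00.0" % nodename
> >             if not provider_tree.exists(pf_name):
> >                 provider_tree.new_child(pf_name, nodename)
> >             provider_tree.update_inventory(pf_name, {
> >                 "SRIOV_NET_VF": {"total": 4},
> >             })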
> >
> >     7) The scheduler does not currently pass granular request groups to
> >     placement. Once #5 and #6 are resolved, and once the
> >     migration/upgrade path is resolved, clearly we will need to have the
> >     scheduler start making requests to placement that represent the
> >     granular request groups and have the scheduler pass the resulting
> >     allocation candidates to its filters and weighers.
> >
> >     Hope this helps highlight where we currently are and the work still
> >     left to do (in Rocky) on nested resource providers.
> >
> >     Best,
> >     -jay
> >
> >
> >     [1] https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py
> >
> >     [2] https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html
> >
> >     [3] https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833
> >
> >     [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html
> >
> >     [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade
> >
> >