[openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

Alex Xu soulxu at gmail.com
Tue Jun 5 14:05:20 UTC 2018


Maybe I missed something. Is there any way for nova-compute to know which
child resource provider the resources were allocated from? For example,
say the host has two PFs and the request asks for one VF; nova-compute
then needs to know which PF (resource provider) the VF was allocated
from. As I understand it, we currently only return a list of alternative
resource providers to nova-compute, and those alternatives are root
resource providers.
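To make the question concrete: an allocation request from placement maps resource-provider UUIDs to the resources drawn from each. A toy sketch (UUIDs and resource classes invented, shape modeled on the `allocations` mapping in recent placement microversions) of how a consumer could read off which provider supplies a given resource class:

```python
# Hypothetical shape of one allocation request as returned inside
# GET /allocation_candidates (UUIDs invented for illustration).
candidate = {
    "allocations": {
        "11111111-1111-1111-1111-111111111111": {   # a PF child provider
            "resources": {"SRIOV_NET_VF": 1}
        },
        "00000000-0000-0000-0000-000000000000": {   # root compute node
            "resources": {"VCPU": 2, "MEMORY_MB": 2048}
        },
    }
}

def providers_supplying(candidate, resource_class):
    """Return UUIDs of the providers contributing the given class."""
    return [
        rp_uuid
        for rp_uuid, alloc in candidate["allocations"].items()
        if resource_class in alloc["resources"]
    ]

print(providers_supplying(candidate, "SRIOV_NET_VF"))
```

The open question in this thread is whether the providers named in those mappings are child providers (PFs) or only root providers.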

2018-06-05 21:29 GMT+08:00 Jay Pipes <jaypipes at gmail.com>:

> On 06/05/2018 08:50 AM, Stephen Finucane wrote:
>
>> I thought nested resource providers were already supported by placement?
>> To the best of my knowledge, what is /not/ supported is virt drivers using
>> these to report NUMA topologies but I doubt that affects you. The placement
>> guys will need to weigh in on this as I could be missing something but it
>> sounds like you can start using this functionality right now.
>>
>
> To be clear, this is what placement and nova *currently* support with
> regards to nested resource providers:
>
> 1) When creating a resource provider in placement, you can specify a
> parent_provider_uuid and thus create trees of providers. This was placement
> API microversion 1.14. Also included in this microversion was support for
> displaying the parent and root provider UUID for resource providers.
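As a sketch of point 1, this is roughly what the request body for creating a child provider looks like at microversion 1.14 (names and UUIDs invented; only the `parent_provider_uuid` field is the point here):

```python
import json

# Hypothetical body for POST /resource_providers at placement
# microversion 1.14, which added parent_provider_uuid.
pf_provider = {
    "name": "compute1_pf0",
    "uuid": "22222222-2222-2222-2222-222222222222",
    # Pointing at the compute-node provider makes this a child
    # in that provider's tree.
    "parent_provider_uuid": "00000000-0000-0000-0000-000000000000",
}

body = json.dumps(pf_provider)
print(body)
```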
>
> 2) The nova "scheduler report client" (terrible name, it's mostly just the
> placement client at this point) understands how to call placement API 1.14
> and create resource providers with a parent provider.
>
> 3) The nova scheduler report client uses a ProviderTree object [1] to
> cache information about the hierarchy of providers that it knows about. For
> nova-compute workers managing hypervisors, that means the ProviderTree
> object contained in the report client is rooted in a resource provider that
> represents the compute node itself (the hypervisor). For nova-compute
> workers managing baremetal, that means the ProviderTree object contains
> many root providers, each representing an Ironic baremetal node.
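A toy stand-in for the ProviderTree cache (the real one is linked as [1]; this only illustrates the two shapes described above, single-rooted for hypervisors and multi-rooted for Ironic):

```python
# Toy illustration only -- not nova's ProviderTree API.
class ToyProviderTree:
    def __init__(self):
        self._parents = {}          # provider -> parent (None = root)

    def new_root(self, name):
        self._parents[name] = None

    def new_child(self, name, parent):
        self._parents[name] = parent

    def roots(self):
        return [n for n, p in self._parents.items() if p is None]

# Hypervisor case: one root (the compute node) with PF children.
tree = ToyProviderTree()
tree.new_root("cn")
tree.new_child("pf0", "cn")
tree.new_child("pf1", "cn")
print(tree.roots())                 # single root

# Ironic case: many roots, one per baremetal node.
bm = ToyProviderTree()
for node in ("node1", "node2", "node3"):
    bm.new_root(node)
print(len(bm.roots()))
```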
>
> 4) The placement API's GET /allocation_candidates endpoint now understands
> the concept of granular request groups [2]. Granular request groups are
> only relevant when a user wants to specify that child providers in a
> provider tree should be used to satisfy part of an overall scheduling
> request. However, this support is as yet incomplete -- see #5 below.
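For illustration, a granular request adds numbered `resourcesN`/`requiredN` parameters to GET /allocation_candidates, each numbered group to be satisfied by a single provider; a sketch following the granular-resource-requests spec linked as [2] (resource classes and trait values invented):

```python
from urllib.parse import urlencode

params = {
    # Unnumbered group: satisfiable anywhere in the provider tree.
    "resources": "VCPU:2,MEMORY_MB:2048",
    # Numbered group 1: must come from a single provider, e.g. one VF
    # drawn from one PF child provider with the required trait.
    "resources1": "SRIOV_NET_VF:1",
    "required1": "CUSTOM_PHYSNET_NET1",
}

query = urlencode(params)
print("/allocation_candidates?" + query)
```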
>
> The following parts of the nested resource providers modeling are *NOT*
> yet complete, however:
>
> 5) GET /allocation_candidates does not currently return *results* when
> granular request groups are specified. So, while the placement service
> understands the *request* for granular groups, it doesn't yet have the
> ability to constrain the returned candidates appropriately. Tetsuro is
> actively working on this functionality in this patch series:
>
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates
>
> 6) The virt drivers need to implement the update_provider_tree() interface
> [3] and construct the tree of resource providers along with appropriate
> inventory records for each child provider in the tree. Both libvirt and
> XenAPI virt drivers have patch series up that begin to take advantage of
> the nested provider modeling. However, a number of concerns [4] about
> in-place nova-compute upgrades when moving from a single resource provider
> to a nested provider tree model were raised, and we have begun
> brainstorming how to handle the migration of existing data in the
> single-provider model to the nested provider model. [5] We are blocking any
> reviews on patch series that modify the local provider modeling until these
> migration concerns are fully resolved.
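A hedged sketch of what a driver's update_provider_tree() might do, following the interface linked as [3]. The MiniTree stub below stands in for nova's ProviderTree purely so the sketch is self-contained; the method names mirror ProviderTree's, but the inventory values and the `_pf0` naming scheme are invented:

```python
class MiniTree:
    """Minimal stub of nova's ProviderTree, for illustration only."""
    def __init__(self, root):
        self.parents = {root: None}
        self.inventory = {}

    def exists(self, name):
        return name in self.parents

    def new_child(self, name, parent):
        self.parents[name] = parent

    def update_inventory(self, name, inv):
        self.inventory[name] = inv

def update_provider_tree(provider_tree, nodename):
    # Inventory on the root compute-node provider.
    provider_tree.update_inventory(nodename, {
        "VCPU": {"total": 16},
        "MEMORY_MB": {"total": 65536},
    })
    # Add a child provider for one PF and give it VF inventory.
    pf_name = nodename + "_pf0"     # hypothetical naming scheme
    if not provider_tree.exists(pf_name):
        provider_tree.new_child(pf_name, nodename)
    provider_tree.update_inventory(pf_name, {"SRIOV_NET_VF": {"total": 8}})

tree = MiniTree("compute1")
update_provider_tree(tree, "compute1")
print(tree.parents["compute1_pf0"])   # the PF hangs off the root
```

The upgrade concern in [4] and [5] is exactly the transition from reporting all of this inventory on the single root provider to splitting it across children like this.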
>
> 7) The scheduler does not currently pass granular request groups to
> placement. Once #5 and #6 are resolved, and once the migration/upgrade path
> is resolved, clearly we will need to have the scheduler start making
> requests to placement that represent the granular request groups and have
> the scheduler pass the resulting allocation candidates to its filters and
> weighers.
>
> Hope this helps highlight where we currently are and the work still left
> to do (in Rocky) on nested resource providers.
>
> Best,
> -jay
>
>
> [1] https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py
>
> [2] https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html
>
> [3] https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833
>
> [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html
>
> [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade
>
>
>