[openstack-dev] Unsubscribe

Henry Nash henrynash9 at mac.com
Tue Jun 5 14:09:55 UTC 2018



> On 5 Jun 2018, at 14:56, Eric Fried <openstack at fried.cc> wrote:
> 
> To summarize: Cyborg could model things nested-wise, but there would be
> no way to schedule against them yet.
> 
> Couple of clarifications inline.
> 
> On 06/05/2018 08:29 AM, Jay Pipes wrote:
>> On 06/05/2018 08:50 AM, Stephen Finucane wrote:
>>> I thought nested resource providers were already supported by
>>> placement? To the best of my knowledge, what is /not/ supported is
>>> virt drivers using these to report NUMA topologies, but I doubt that
>>> affects you. The placement guys will need to weigh in on this, as I
>>> could be missing something, but it sounds like you can start using
>>> this functionality right now.
>> 
>> To be clear, this is what placement and nova *currently* support with
>> regards to nested resource providers:
>> 
>> 1) When creating a resource provider in placement, you can specify a
>> parent_provider_uuid and thus create trees of providers. This was
>> placement API microversion 1.14. Also included in this microversion was
>> support for displaying the parent and root provider UUID for resource
>> providers.
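>> 
>> For illustration, creating a child provider at that microversion might
>> look roughly like this (a minimal sketch using python-requests; the
>> endpoint, token, and names are made up):
>> 
>>     import requests
>> 
>>     # Hypothetical endpoint and token; placement normally sits behind
>>     # keystone auth, which is elided here for brevity.
>>     PLACEMENT = 'http://placement.example.com/placement'
>>     HEADERS = {
>>         'X-Auth-Token': '<token>',
>>         # The microversion that added parent_provider_uuid support.
>>         'OpenStack-API-Version': 'placement 1.14',
>>     }
>> 
>>     resp = requests.post(
>>         PLACEMENT + '/resource_providers',
>>         headers=HEADERS,
>>         json={
>>             'name': 'compute-node-1_pf0',
>>             'parent_provider_uuid': '<uuid-of-compute-node-provider>',
>>         },
>>     )
>>     resp.raise_for_status()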
>> 
>> 2) The nova "scheduler report client" (terrible name; it's mostly just
>> the placement client at this point) understands how to call placement
>> API 1.14 and create resource providers with a parent provider.
>> 
>> 3) The nova scheduler report client uses a ProviderTree object [1] to
>> cache information about the hierarchy of providers that it knows about.
>> For nova-compute workers managing hypervisors, that means the
>> ProviderTree object contained in the report client is rooted in a
>> resource provider that represents the compute node itself (the
>> hypervisor). For nova-compute workers managing baremetal, that means the
>> ProviderTree object contains many root providers, each representing an
>> Ironic baremetal node.
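>> 
>> For a feel of the ProviderTree API in [1], building a small tree might
>> look like this (a simplified sketch; exact signatures may differ
>> between revisions, and the UUIDs are placeholders):
>> 
>>     from nova.compute.provider_tree import ProviderTree
>> 
>>     tree = ProviderTree()
>>     # Root provider representing the hypervisor itself.
>>     tree.new_root('compute-node-1', '<cn-uuid>', generation=0)
>>     # Child providers hang off the root, e.g. one per physical GPU.
>>     tree.new_child('compute-node-1_gpu0', '<cn-uuid>')
>>     tree.new_child('compute-node-1_gpu1', '<cn-uuid>')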
>> 
>> 4) The placement API's GET /allocation_candidates endpoint now
>> understands the concept of granular request groups [2]. Granular request
>> groups are only relevant when a user wants to specify that child
>> providers in a provider tree should be used to satisfy part of an
>> overall scheduling request. However, this support is not yet complete --
>> see #5 below.
> 
> Granular request groups are also usable/useful when sharing providers
> are in play. That functionality is complete on both the placement side
> and the report client side (see below).
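> 
> To illustrate the request shape, a granular query might look like this
> (a sketch reusing the python-requests setup from earlier; the
> numbered-group syntax needs a newer microversion than 1.14 -- it
> arrived in 1.25):
> 
>     # Unnumbered group: resources that may be spread across the root
>     # and sharing providers. Numbered group: resources that must all
>     # be satisfied by a single provider.
>     headers = dict(HEADERS, **{'OpenStack-API-Version': 'placement 1.25'})
>     params = {
>         'resources': 'VCPU:2,MEMORY_MB:4096',
>         'resources1': 'VGPU:1',
>     }
>     resp = requests.get(PLACEMENT + '/allocation_candidates',
>                         headers=headers, params=params)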
> 
>> The following parts of the nested resource providers modeling are *NOT*
>> yet complete, however:
>> 
>> 5) GET /allocation_candidates does not currently return *results* when
>> granular request groups are specified. So, while the placement service
>> understands the *request* for granular groups, it doesn't yet have the
>> ability to constrain the returned candidates appropriately. Tetsuro is
>> actively working on this functionality in this patch series:
>> 
>> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates
>> 
>> 
>> 6) The virt drivers need to implement the update_provider_tree()
>> interface [3] and construct the tree of resource providers along with
>> appropriate inventory records for each child provider in the tree. Both
>> libvirt and XenAPI virt drivers have patch series up that begin to take
>> advantage of the nested provider modeling. However, a number of concerns
>> [4] about in-place nova-compute upgrades when moving from a single
>> resource provider to a nested provider tree model were raised, and we
>> have begun brainstorming how to handle the migration of existing data in
>> the single-provider model to the nested provider model. [5] We are
>> blocking any reviews on patch series that modify the local provider
>> modeling until these migration concerns are fully resolved.
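>> 
>> As a rough sketch of what #6 asks of a driver (parameter names
>> approximate [3]; the GPU enumeration helper and inventory figures are
>> invented, and generation handling is elided):
>> 
>>     def update_provider_tree(self, provider_tree, nova_cn):
>>         # Report one child provider per physical GPU rather than
>>         # lumping all inventory onto the root compute node provider.
>>         for gpu in self._enumerate_gpus():  # hypothetical helper
>>             child = '%s_%s' % (nova_cn, gpu.name)
>>             if not provider_tree.exists(child):
>>                 provider_tree.new_child(child, nova_cn)
>>             provider_tree.update_inventory(child, {
>>                 'VGPU': {
>>                     'total': gpu.vgpu_count,
>>                     'reserved': 0,
>>                     'min_unit': 1,
>>                     'max_unit': gpu.vgpu_count,
>>                     'step_size': 1,
>>                     'allocation_ratio': 1.0,
>>                 },
>>             })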
>> 
>> 7) The scheduler does not currently pass granular request groups to
>> placement.
> 
> The code is in place to do this [6] - so the scheduler *will* pass
> granular request groups to placement if your flavor specifies them.  As
> noted above, such flavors will be limited to exploiting sharing
> providers until Tetsuro's series merges.  But no further code work is
> required on the scheduler side.
> 
> [6] https://review.openstack.org/#/c/515811/
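> 
> For example, a flavor expressing one numbered group might carry extra
> specs like these (a sketch; the trait name is made up):
> 
>     # Granular flavor extra specs: the '1' suffix groups these
>     # together so they must be satisfied by a single provider.
>     extra_specs = {
>         'resources1:VGPU': '1',
>         'trait1:CUSTOM_GPU_CLASS_A': 'required',
>     }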
> 
>> Once #5 and #6 are complete, and once the migration/upgrade
>> path is resolved, we will clearly need to have the scheduler start
>> making requests to placement that represent the granular request groups
>> and have the scheduler pass the resulting allocation candidates to its
>> filters and weighers.
>> 
>> Hope this helps highlight where we currently are and the work still left
>> to do (in Rocky) on nested resource providers.
>> 
>> Best,
>> -jay
>> 
>> 
>> [1]
>> https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py
>> 
>> [2]
>> https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html
>> 
>> 
>> [3]
>> https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833
>> 
>> 
>> [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html
>> 
>> [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade
>> 