[openstack-dev] [nova] Working toward Queens feature freeze and RC1

Eric Fried openstack at fried.cc
Thu Jan 4 21:44:05 UTC 2018


Folks-

>> - NRP affordance in GET /allocation_candidates
>>    . PATCHES: -
>>    . STATUS: Not proposed
>>    . PRIORITY: Critical
>>    . OWNER: jaypipes
>>    . DESCRIPTION: In the current master branch, the placement API will
>> report allocation candidates from [(a single non-sharing provider) and
>> (sharing providers associated via aggregate with that non-sharing
>> provider)].  It needs to be enhanced to report allocation candidates
>> from [(non-sharing providers in a tree) and (sharing providers
>> associated via aggregate with any of those non-sharing providers)].
>> This is critical for two reasons: 1) Without it, NRP doesn't provide any
>> interesting use cases; and 2) It is prerequisite to the remainder of the
>> Queens NRP work, listed below.
>>    . ACTION: Jay to sling some code
> 
> Just as an aside... while I'm currently starting this work, until the
> virt drivers and eventually the generic device manager or PCI device
> manager is populating parent/child information for resource providers,
> there's nothing that will be returned in the GET /allocation_candidates
> response w.r.t. nested providers.
> 
> So, yes, it's kind of a prerequisite, but until inventory records are
> being populated from the compute nodes, the allocation candidates work
> is going to be purely academic/test-only.
> 
> Best,
> -jay

Agreed, it's more of a tangled web than a linear sequence.  My thought
was that it doesn't make sense for virt drivers to expose their
inventory in tree form until doing so affords them some benefit.
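
To make that concrete, here is roughly the shape I'd expect a
nested-provider allocation candidate to take once the work above lands.
Just a sketch: the provider keys and amounts are invented, and the
exact payload depends on the placement microversion Jay's series
settles on.

    # One candidate spanning a provider tree plus a sharing provider.
    # All UUIDs and amounts are invented for illustration.
    candidate = {
        "allocation_requests": [{
            "allocations": {
                # The non-sharing root provider (the compute node).
                "compute-node-uuid": {
                    "resources": {"VCPU": 2, "MEMORY_MB": 2048},
                },
                # A child provider in the same tree, e.g. a physical GPU.
                "pgpu-child-uuid": {
                    "resources": {"VGPU": 1},
                },
                # A sharing provider associated via aggregate, e.g. a
                # shared storage pool.
                "shared-storage-uuid": {
                    "resources": {"DISK_GB": 20},
                },
            },
        }],
        # provider_summaries (one entry per provider above) omitted.
        "provider_summaries": {},
    }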

But on that point, I forgot to mention that Xen is trying to do just
that in Queens for VGPU support.  They already have a WIP [1] that
would consume the WIPs at the top of the
ComputeDriver.update_provider_tree() series [2].

[1] https://review.openstack.org/#/c/521041/
[2] https://review.openstack.org/#/c/521685/
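
For anyone who hasn't looked at the series yet, here's a minimal sketch
of what a driver-side implementation might look like, assuming the
interface in [2] lands roughly as drafted.  The ProviderTree calls
(exists, new_child, update_inventory) follow that WIP as I read it;
_list_physical_gpus() and the VGPU numbers are made up for
illustration.

    from nova.virt import driver

    class SketchVGPUDriver(driver.ComputeDriver):
        """Illustration only; not the actual Xen WIP."""

        def update_provider_tree(self, provider_tree, nodename):
            # Expose one child resource provider per physical GPU,
            # parented to the compute node's root provider.
            for gpu in self._list_physical_gpus():  # hypothetical helper
                child_name = '%s_%s' % (nodename, gpu.address)
                if not provider_tree.exists(child_name):
                    provider_tree.new_child(child_name, nodename)
                provider_tree.update_inventory(child_name, {
                    'VGPU': {
                        'total': gpu.vgpu_capacity,
                        'reserved': 0,
                        'min_unit': 1,
                        'max_unit': 1,
                        'step_size': 1,
                        'allocation_ratio': 1.0,
                    },
                })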

I also don't necessarily agree that we need PCI manager changes or a
generic device manager for this to work.  As long as the virt driver
knows how to a) expose the resources in its provider tree, b) consume
the allocation candidate coming from the scheduler, and c) create/attach
resources based on that info, those other pieces would just get in the
way.  I'm hoping the Xen VGPU use case proves that.
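
To illustrate (b) and (c): continuing the sketch above, the consumption
side could stay just as thin.  This assumes the allocations dict (keyed
by resource provider UUID) gets passed down to spawn() the way the vGPU
series intends; _gpu_for_provider() and _attach_vgpu() are hypothetical
driver-internal helpers.

        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, allocations, network_info=None,
                  block_device_info=None):
            for rp_uuid, alloc in allocations.items():
                if alloc['resources'].get('VGPU'):
                    # Map the provider UUID back to the physical GPU we
                    # exposed in update_provider_tree(), then attach a
                    # vGPU from it (both helpers hypothetical).
                    gpu = self._gpu_for_provider(rp_uuid)
                    self._attach_vgpu(instance, gpu)
            # ...then proceed with the normal guest creation flow.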

E


