[openstack-dev] [nova] placement/resource providers update 7

Jay Pipes jaypipes at gmail.com
Mon Jan 16 15:17:10 UTC 2017


On 01/11/2017 01:11 PM, Chris Dent wrote:
> On Fri, 6 Jan 2017, Chris Dent wrote:
>
>> ## can_host, aggregates in filtering
>>
>> There's still some confusion (from at least me) on whether the
>> can_host field is relevant when making queries to filter resource
>> providers. Similarly, when requesting resource providers to satisfy a
>> set of resources, we don't (unless I've completely missed it) return
>> resource providers (as compute nodes) that are associated with other
>> resource providers (by aggregate) that can satisfy a resource
>> requirement. Feels like we need to work backwards from a test or use
>> case and see what's missing.
>
> At several points throughout the day I've been talking with edleafe
> about this to see whether "knowing about aggregates (or can_host)" when
> making a request to `GET /resource_providers?resources=<some resources>`
> needs to be dealt with on a scale of now, soon, later.
>
> After much confusion I think we've established that for now we don't
> need to. But we need to confirm so I said I'd write something down.
>
> This conclusion rests on three assumptions:
>
> * The value of 'local_gb' on the compute_node object is any disk the
>   compute_node can see/use and the concept of associating with shared
>   disk by aggregates is not something that is real yet[0].

Yes.

> * Any query for resources from the scheduler client is going to
>   include a VCPU requirement of at least one (meaning that every
>   resource provider returned will be a compute node[1]).

Meh, we *could* do that, but for now it's unnecessary since the only 
provider records currently being created by the resource tracker 
(scheduler report client) are compute node provider records.
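
For a concrete picture, here's a rough sketch (plain Python with 
requests; the endpoint, token, microversion and amounts are invented for 
illustration, and this is not the actual report client code) of the kind 
of query the filter scheduler ends up making:

    import requests

    # Hypothetical endpoint and token; a real deployment discovers the
    # placement endpoint from the service catalog via keystoneauth.
    PLACEMENT = 'http://placement.example.com/placement'
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',
               'OpenStack-API-Version': 'placement 1.4'}

    # Every request asks for at least one VCPU, so (for now) every
    # provider that comes back is a compute node provider.
    resp = requests.get(
        PLACEMENT + '/resource_providers',
        params={'resources': 'VCPU:1,MEMORY_MB:512,DISK_GB:20'},
        headers=HEADERS)

    for rp in resp.json()['resource_providers']:
        print(rp['uuid'], rp['name'])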

> * Claiming the consumption of some of that local_gb by the resource
>   tracker is the resource tracker's problem and not something we're
>   talking about here[2].

Yes.

> If all that's true, then we're getting pretty close to near-term
> joy on limiting the number of hosts the filter scheduler needs to
> filter[3].

Yes, and the joy merged. So, we're in full joy mode.

> If it's not true (for the near term), can someone explain why not
> and what we need to do to fix it?
>
> In the longer term:
>
> Presumably the resource tracker will start reporting inventory
> without DISK_GB when using shared disk, and shared disk will be
> managed via aggregate associations. When that happens, the query
> to GET /resource_providers will need a way to say "only give me
> compute nodes that can either satisfy this resource request
> directly or via associated stuff". Something tidier than:
>
>     GET
> /resource_providers?resources=<something>&I_only_want_capable_or_associated_compute_nodes=True

The request from the scheduler will not change at all. The user is 
requesting some resources; where those resources live is not a concern 
of the user.
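
What would change in that world is how inventory gets reported, not the 
request. Roughly (invented endpoint, token and UUIDs; the providers are 
assumed to already exist; this is not the actual resource tracker code), 
the compute node stops reporting DISK_GB, a shared disk pool provider 
carries it, and an aggregate ties the two together:

    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # hypothetical
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',              # hypothetical
               'OpenStack-API-Version': 'placement 1.1'}
    CN = 'COMPUTE-NODE-UUID'          # hypothetical provider UUIDs
    POOL = 'SHARED-DISK-POOL-UUID'
    AGG = 'AGGREGATE-UUID'

    # The compute node reports only VCPU and MEMORY_MB inventory.
    requests.put(
        '%s/resource_providers/%s/inventories' % (PLACEMENT, CN),
        json={'resource_provider_generation': 0,
              'inventories': {'VCPU': {'total': 16},
                              'MEMORY_MB': {'total': 32768}}},
        headers=HEADERS)

    # The shared disk pool carries the DISK_GB inventory.
    requests.put(
        '%s/resource_providers/%s/inventories' % (PLACEMENT, POOL),
        json={'resource_provider_generation': 0,
              'inventories': {'DISK_GB': {'total': 10240}}},
        headers=HEADERS)

    # Both providers are associated with the same aggregate, which is
    # what lets placement connect them when answering the query.
    for uuid in (CN, POOL):
        requests.put(
            '%s/resource_providers/%s/aggregates' % (PLACEMENT, uuid),
            json=[AGG],
            headers=HEADERS)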

> The techniques to do that, if I understand correctly, are in an
> email from Jay that some of us received a while ago with a subject of
> "Some attachments to help with resource providers querying".
> Butterfly joins and such like.

Yes, indeed. The can_host field -- probably better named as "is_shared" 
or something like that -- can simplify some of the more complex join 
conditions that querying with associated shared resource pools brings 
into play. But it's more of an optimization (server side) than anything 
else. Thus, I'd prefer that we keep can_host out of any REST API interfaces.
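
To make the "capable or associated" idea concrete, here's a toy 
illustration in plain Python of the filtering the server has to do (the 
real thing is SQL joins; all the data structures here are invented for 
the example):

    def _satisfies(inventory, wanted):
        # True if a single provider's inventory covers every requested
        # resource class at the requested amount.
        return all(inventory.get(rc, 0) >= amt
                   for rc, amt in wanted.items())

    def filter_compute_nodes(compute_nodes, shared_providers, wanted):
        """Return compute nodes that can satisfy the request either from
        their own inventory or with help from a shared provider that is
        in one of the same aggregates."""
        selected = []
        for cn in compute_nodes:
            # Whatever the compute node cannot cover itself...
            unmet = {rc: amt for rc, amt in wanted.items()
                     if cn['inventory'].get(rc, 0) < amt}
            if not unmet:
                selected.append(cn)
                continue
            # ...might be covered by a shared pool (e.g. shared disk)
            # associated through a common aggregate.
            for sp in shared_providers:
                if (set(cn['aggregates']) & set(sp['aggregates'])
                        and _satisfies(sp['inventory'], unmet)):
                    selected.append(cn)
                    break
        return selected

The is_shared/can_host hint just lets the server take that second branch 
cheaply; nothing about it needs to appear in the request the scheduler 
sends.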

Best,
-jay

> Thoughts, questions, clarifications?
>
> [0] This is different from the issue with allocations not needing to
> be recorded when the instance has non-local disk (is volume backed):
> https://review.openstack.org/#/c/407180/ . Here we are talking about
> recording compute node inventory.
>
> [1] This ignores, for the moment, that (unless someone has been
> playing around) there are no resource providers being created in the
> placement API that are not compute nodes.
>
> [2] But for reference will presumably come from the work started
> here https://review.openstack.org/#/c/407309/ .
>
> [3] That work starts here: https://review.openstack.org/#/c/392569/
>