[placement][ptg] Resource Provider Partitioning

Jay Pipes jaypipes at gmail.com
Tue Apr 9 13:14:54 UTC 2019

On 04/08/2019 10:25 AM, Chris Dent wrote:
>  From the etherpad [1]:
> * do we need this?
> * what is it?
> * who is going to drive it?
> As I recall, resource provider partitioning (distinct from
> allocation partitioning) is a way of declaring that a set of
> resource providers are in a thing. This would allow, for example,
> one placement to service multiple OpenStack clouds or for a
> placement to be a part of a single pane of glass system in a FOG or
> edge setup.
> This was mentioned during Stein nova discussions [2] but since then
> I've not personally heard a lot of discussion on this topic so it's
> unclear if it is a pressing issue. Do we want to build it so they
> come, or wait until they come and then build it?
> The discussion at [2] mentions the possibility of an
> 'openstack-shard' header (instead of query parameter) that would be
> sent with any request to placement.
> There is, however, no substantive discussion on the internal
> implementation. Options:
> * Do nothing (see above)
> * Internally manipulate aggregates (all these resource providers
>    belong to shard X).

The problem with this implementation is that resource providers can 
belong to zero or many aggregates. A "shard" or "source partition", on 
the other hand, is something that a provider *must* belong to exactly 
one of.

> * Add a 1:1 or 1:N relation between an RP and a shard uuid in the
>    DB.

1:1 is the only thing that makes sense to me. Therefore, it should be a 
field on the resource_providers table (source_id, partition_id, or 
similar).

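As a sketch of the data-model argument (hypothetical Python, not 
placement's actual schema code; all names here are made up), 
single-valued membership belongs on the provider record itself, while 
the many-valued associations stay in their own relationships:

```python
from dataclasses import dataclass, field
from typing import Set


@dataclass
class ResourceProvider:
    """Hypothetical provider model; names are assumptions."""
    uuid: str
    # Aggregates and traits are many-valued: zero or more of each,
    # so they live in separate association relationships.
    aggregates: Set[str] = field(default_factory=set)
    traits: Set[str] = field(default_factory=set)
    # A partition is single-valued: exactly one per provider, which
    # is why it belongs as a column on the provider itself rather
    # than being modeled as an aggregate or trait association.
    partition_id: str = 'default'
```

Making the column NOT NULL (with a default partition for upgrades) is 
what would enforce the "exactly one" constraint at the schema level, 
rather than in application code.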
> * Use a trait! [3]

Same problem as aggregates. A provider can have zero or more traits, 
therefore we would run into the same unholy mess that we currently have 
in Nova aggregate metadata for "availability zones": we need a bunch of 
hack code to make sure that nobody associates a compute service with 
multiple aggregates *if* those aggregates have different 
availability_zone metadata keys.
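The kind of hack code in question looks roughly like this (a 
simplified sketch of the consistency check, not Nova's actual 
implementation; the record layout is made up):

```python
def check_az_consistency(host, aggregates):
    """Reject adding a host to aggregates with conflicting AZs.

    Needed only because the availability zone lives in many-valued
    aggregate metadata instead of being a single-valued attribute
    of the host. `aggregates` is a list of dicts like
    {'hosts': [...], 'metadata': {'availability_zone': ...}}.
    """
    zones = {
        agg['metadata']['availability_zone']
        for agg in aggregates
        if host in agg['hosts']
        and 'availability_zone' in agg['metadata']
    }
    if len(zones) > 1:
        raise ValueError(
            'host %s would be in multiple availability zones: %s'
            % (host, sorted(zones)))
```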

Yuck. This is why getting the data model right is so important... and 
why bolting on attributes to the wrong entity or cramming relational 
data into a JSON blob always ends up biting us in the long run.

> But before we get into implementation details we should discuss the
> use cases for this (if any), the need to do it (if any), and the
> people who will do it (if any). All three of those are thin at
> this point.

As I mentioned in the other thread on consumer types (what you are 
calling allocation partitioning for some reason), the best *current* 
use case for these partitions/types is solving quota usage 
calculations efficiently using the placement data model.
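With a partition identifier attached to each allocation's provider, a 
usage calculation reduces to a single group-by over allocation records 
(a toy sketch under that assumption; the record layout is made up):

```python
from collections import defaultdict


def usage_by_partition(allocations):
    """Sum allocated amounts per (partition, resource class).

    `allocations` is an iterable of dicts like
    {'partition': ..., 'resource_class': ..., 'used': int} --
    i.e. allocation rows already joined to their provider's
    partition identifier.
    """
    totals = defaultdict(int)
    for alloc in allocations:
        key = (alloc['partition'], alloc['resource_class'])
        totals[key] += alloc['used']
    return dict(totals)
```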


> [1] https://etherpad.openstack.org/p/placement-ptg-train
> [2] around line 243 on https://etherpad.openstack.org/p/nova-ptg-stein
>      where both types (allocation/rp) of partitioning are discussed.
> [3] Not for the trait strict constructionists.
