[openstack-dev] [nova] [placement] aggregates associated with multiple resource providers

Chris Dent cdent+os at anticdent.org
Tue May 31 17:06:21 UTC 2016

On Tue, 31 May 2016, Jay Pipes wrote:

> Kinda. What the compute node needs is an InventoryList object containing all 
> inventory records for all resource classes both local to it as well as 
> associated to it via any aggregate-resource-pool mapping.

Okay, that mostly makes sense. A bit different from what I've proved
out so far, but plenty of room to make it go that way.

> The SQL for generating this InventoryList is the following:

Presumably this would be a method on the InventoryList object.
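
Something like this, perhaps. This is only a toy sketch with made-up
names and an in-memory stand-in for the aggregate association tables;
the real method would run the SQL Jay posted against the placement DB:

```python
# Hypothetical sketch, not nova's actual objects API: an InventoryList
# classmethod that gathers inventory records for a compute node's own
# resource provider plus any providers associated with it via shared
# aggregate membership.

class Inventory:
    def __init__(self, resource_provider_id, resource_class, total):
        self.resource_provider_id = resource_provider_id
        self.resource_class = resource_class
        self.total = total


class InventoryList:
    def __init__(self, objects):
        self.objects = objects

    @classmethod
    def get_all_by_resource_provider(cls, local_rp_id, aggregate_map,
                                     inventories):
        # aggregate_map: rp_id -> set of aggregate ids; a stand-in for
        # the aggregate / resource-pool association tables in the SQL.
        local_aggs = aggregate_map.get(local_rp_id, set())
        rp_ids = {local_rp_id}
        # Any provider sharing an aggregate with the local one counts.
        rp_ids |= {rp for rp, aggs in aggregate_map.items()
                   if aggs & local_aggs}
        return cls([inv for inv in inventories
                    if inv.resource_provider_id in rp_ids])


# Usage: cn1 and pool-a share aggregate agg-y, cn2 does not.
aggregate_map = {'cn1': {'agg-y'}, 'pool-a': {'agg-y'}, 'cn2': {'agg-z'}}
inventories = [Inventory('cn1', 'VCPU', 16),
               Inventory('pool-a', 'DISK_GB', 1000),
               Inventory('cn2', 'DISK_GB', 500)]
il = InventoryList.get_all_by_resource_provider('cn1', aggregate_map,
                                                inventories)
```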

> We can deal with multiple shared storage pools per aggregate at a later time. 
> Just take the first resource provider in the list of inventory records 
> returned from the above SQL query that corresponds to the DISK_GB resource 
> class and that is resource provider you will deduct from.

So this seems rather fragile and pretty user-hostile. We're creating an
opportunity for people to easily replace their existing bad tracking of
disk usage with a different style of bad tracking of disk usage.

If we assign two different shared disk resource pools to the same
aggregate we've got a weird situation (unless we explicitly order
the resource providers by something).
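
To make that concrete, here's a toy illustration (hypothetical helper,
not real placement code) of the "take the first DISK_GB provider" rule:
with two pools in one aggregate, which provider gets the deduction
depends entirely on the ordering of the query results.

```python
# Sketch of the "just take the first resource provider ... that
# corresponds to the DISK_GB resource class" rule. Records are
# (provider_id, resource_class) pairs standing in for inventory rows.

def pick_disk_provider(inventory_records):
    """Return the provider id of the first DISK_GB record, or None."""
    for rp_id, resource_class in inventory_records:
        if resource_class == 'DISK_GB':
            return rp_id
    return None


# Two shared pools in one aggregate: the "winner" flips with ordering,
# since nothing in the rule pins down which row comes back first.
records_a = [('pool-1', 'DISK_GB'), ('pool-2', 'DISK_GB')]
records_b = [('pool-2', 'DISK_GB'), ('pool-1', 'DISK_GB')]
```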

Maybe that's fine for now, but it seems we need to be aware of it, not
only for ourselves, but also in the documentation when we tell people
how to start using resource pools: oh, by the way, for now, just
associate one shared disk pool with an aggregate.

> Assume only a single resource provider of DISK_GB. It will be either a 
> compute node's resource provider ID or a resource pool's resource provider 
> ID.


> For this initial work, my idea was to have some code that, on creation of a 
> resource pool and its association with an aggregate, if that resource pool 
> has an inventory record with resource_class of DISK_GB then remove any 
> inventory records with DISK_GB resource class for any compute node's (local) 
> resource provider ID associated with that aggregate. This way we ensure the 
> existing behaviour that a compute node either has local disk or it uses 
> shared storage, but not both.

So let me translate that to make sure I get it:

* node X exists, has inventory of DISK_GB
* node X is in aggregate Y
* resource pool A is created
* two possible paths now: first associating the aggregate with the pool
   or first adding inventory to the pool
* in either case, when aggregate Y is associated, if the pool has
   DISK_GB, traverse the nodes in aggregate Y and drop the disk
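
As a sketch of that association hook (all names here are hypothetical,
standing in for the real objects and DB calls):

```python
# Toy version of the hook Jay describes: when a resource pool with
# DISK_GB inventory is associated with an aggregate, drop the local
# DISK_GB inventory from every compute node in that aggregate, so a
# node has either local disk or shared storage, never both.

def associate_pool_with_aggregate(pool_inventory, aggregate_nodes):
    """pool_inventory: dict resource_class -> total for the pool.
    aggregate_nodes: list of per-node inventory dicts, mutated in
    place. Returns the resource classes removed from the nodes."""
    if 'DISK_GB' not in pool_inventory:
        return []
    for node in aggregate_nodes:
        # Drop local disk inventory; shared storage takes over.
        node.pop('DISK_GB', None)
    return ['DISK_GB']


# Node X has local disk; associating a DISK_GB pool strips it.
nodes = [{'VCPU': 8, 'DISK_GB': 200}, {'DISK_GB': 100}]
removed = associate_pool_with_aggregate({'DISK_GB': 1000}, nodes)
```

Note the one-way nature of this, which is exactly the disassociation
question below.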

So, effectively, any time we associate an aggregate we need to
inspect its nodes?

What happens if we ever disassociate an aggregate from a resource pool?
Do the nodes in the aggregate have some way to get their local Inventory
back or are we going to assume that the switch to shared is one way?

In my scribbles when I was thinking this through (the ones that led
to the start of this thread) I had imagined that rather than finding
both the resource pool and compute node resource providers when
finding available disk, we'd instead see if there was a resource
pool, use it if it was there, and if not, just use the compute node.
Therefore if the resource pool was ever disassociated, we'd be back
to where we were before without needing to reset any state in the
artifacts.
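
In sketch form (hypothetical helper), the lookup I had in mind is just
a fallback, so disassociation needs no cleanup at all:

```python
# Prefer the shared pool's provider when one is associated; otherwise
# fall back to the compute node's local provider. Disassociating the
# pool (pool_rp=None) naturally restores local disk accounting without
# having to rewrite any inventory records.

def disk_provider(local_rp, pool_rp=None):
    """Return the resource provider id to deduct DISK_GB from."""
    return pool_rp if pool_rp is not None else local_rp
```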

Chris Dent               (╯°□°)╯︵┻━┻            http://anticdent.org/
freenode: cdent                                         tw: @anticdent
