[openstack-dev] [nova] [placement] aggregates associated with multiple resource providers
Cheng, Yingxin
yingxin.cheng at intel.com
Tue May 31 03:22:09 UTC 2016
Hi, cdent:
This problem arises because the RT (resource tracker) only knows that it must consume DISK resource on its host; it doesn't know exactly which resource provider the consumption should be placed on. That is to say, the RT still needs to *find* the correct resource provider in step 4. Step 4 thus leads directly to the problem you've encountered: the RT can find two resource providers providing DISK_GB, but it doesn't know which one is right.
The problem is that the RT is forced to choose a resource provider when step 4 finds more than one of them. However, the scheduler should already know which resource provider it chose when it made its placement decision, yet it doesn't send this information to compute nodes. In other words, there is a missing work item in the g-r-p blueprint, something like "improve the filter scheduler so that it can make correct decisions with generic resource pools": the scheduler should tell the compute-node RT not only about the resource consumptions on the compute-node resource provider, but also where to consume shared resources, i.e. the related resource-provider IDs.
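For illustration, here is a minimal sketch of what the scheduler could pass to the compute node alongside the selected host (the structure and key names are invented for this example, not from the spec):

    # Hypothetical structure attached to the scheduler's host selection:
    # it maps each resource class to the UUID of the provider the
    # scheduler chose, including shared providers such as a storage pool.
    selection = {
        'host': 'compute-1',
        'allocations': {
            'VCPU': 'rp-uuid-of-compute-1',
            'MEMORY_MB': 'rp-uuid-of-compute-1',
            'DISK_GB': 'rp-uuid-of-shared-storage-pool',
        },
    }

With something like this, the RT never has to re-discover the right DISK_GB provider in step 4; it simply writes its claims against the provider IDs it was handed.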
Hope this helps.
--
Regards
Yingxin
On 5/30/16, 06:19, "Chris Dent" <cdent+os at anticdent.org> wrote:
>
>I'm currently doing some thinking on step 4 ("Modify resource tracker
>to pull information on aggregates the compute node is associated with
>and the resource pools available for those aggregates.") of the
>work items for the generic resource pools spec[1] and I've run into
>a brain teaser that I need some help working out.
>
>I'm not sure if I've run into an issue, or am just being ignorant. The
>latter is quite likely.
>
>This gets a bit complex (to me) but: The idea for step 4 is that the
>resource tracker will be modified such that:
>
>* if the compute node being claimed by an instance is a member of some
> aggregates
>* and one of those aggregates is associated with a resource provider
>* and the resource provider has inventory of resource class DISK_GB
>
>then rather than claiming disk on the compute node, claim it on the
>resource provider.
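>A rough sketch of that conditional (the helper names here are
>invented, not the real DB API):
>
>    def _disk_claim_target(compute_node):
>        # Hypothetical helpers for the traversal described below.
>        aggs = aggregates_for_host(compute_node.host)
>        providers = providers_with_inventory(aggs, 'DISK_GB')
>        if providers:
>            # Claim DISK_GB against the shared provider rather than
>            # against the compute node itself...
>            return providers[0]  # ...but which one, if len() > 1?
>        return compute_node      # no shared pool: claim locally
>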
>
>The first hurdle to overcome when doing this is to trace the path
>from compute node, through aggregates, to a resource provider. We
>can get a list of aggregates by host, and then we can use those
>aggregates to get a list of resource providers by joining across
>ResourceProviderAggregates, and we can join further to get just
>those ResourceProviders which have Inventory of resource class
>DISK_GB.
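>In SQLAlchemy terms that join chain might look roughly like this
>(model and column names are approximate, from memory):
>
>    providers = (session.query(ResourceProvider)
>        .join(ResourceProviderAggregate,
>              ResourceProviderAggregate.resource_provider_id ==
>              ResourceProvider.id)
>        .join(Inventory,
>              Inventory.resource_provider_id == ResourceProvider.id)
>        .filter(ResourceProviderAggregate.aggregate_id.in_(agg_ids))
>        .filter(Inventory.resource_class_id == DISK_GB)
>        .all())  # a list, possibly with more than one element
>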
>
>The issue here is that the result is a list. As far as I can tell
>we can end up with >1 ResourceProviders providing DISK_GB for this
>host because it is possible for a host to be in more than one
>aggregate and it is necessary for an aggregate to be able to associate
>with more than one resource provider.
>
>If the above is true and we can find two resource providers providing
>DISK_GB how does:
>
>* the resource tracker know where (to which provider) to write its
> disk claim?
>* the scheduler (the next step in the work items) make choices and
> declarations amongst providers? (Yes, place on that node, but use disk provider
> X, not Y)
>
>If the above is not true, why is it not true? (show me the code
>please)
>
>If the above is an issue, but we'd like to prevent it, how do we fix it?
>Do we need to make it so that when we associate an aggregate with a
>resource provider we check to see that it is not already associated with
>some other provider of the same resource class? This would be a
>troubling approach: as things currently stand we can add Inventory of
>any class, and aggregates, to a provider at any time, so the checking
>that would need to happen is at least bi-directional, if not
>multi-directional, and that level of complexity is not a great
>direction to be going in.
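>To make the shape of that check concrete (hypothetical names again):
>
>    def associate_aggregate(provider, aggregate):
>        # Refuse the association if another provider reachable via
>        # this aggregate already offers an overlapping resource class.
>        for other in providers_for_aggregate(aggregate):
>            overlap = (inventory_classes(other) &
>                       inventory_classes(provider))
>            if other != provider and overlap:
>                raise Conflict('aggregate already serves %s' % overlap)
>
>...and a symmetric check would be needed every time Inventory is added
>to a provider that is already in an aggregate, which is the
>bi-directional checking I mean above.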
>
>So, yeah, if someone could help me tease this out, that would be
>great, thanks.
>
>
>[1] http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/generic-resource-pools.html#work-items
>
>--
>Chris Dent (╯°□°)╯︵┻━┻ http://anticdent.org/
>freenode: cdent tw: @anticdent