[openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"
cdent+os at anticdent.org
Wed Feb 17 11:59:43 UTC 2016
On Wed, 17 Feb 2016, Cheng, Yingxin wrote:
> To better illustrate the differences between shared-state, resource-
> provider and legacy scheduler, I've drawn 3 simplified pictures
> emphasizing the location of resource view, the location of claim and
> resource consumption, and the resource update/refresh pattern in three
> kinds of schedulers. Hoping I'm correct in the "resource-provider
> scheduler" part.
That's a useful visual aid, thank you. It aligns pretty well with my
understanding of each idea.
A thing that may be missing, which may help in exploring the usefulness
of each idea, is a representation of resources which are separate
from compute nodes and shared by them, such as shared disk or pools
of network addresses. In addition, some would argue that we need to
see bare-metal nodes for a complete picture.
One of the driving motivations of the resource-provider work is to
make it possible to adequately and accurately track and consume the
shared resources. The legacy scheduler currently fails to do that
well. As you correctly point out, it does this by having "strict
centralized consistency" as a design goal.
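To make the shared-resource problem concrete, here is a minimal sketch (the class and names are illustrative, not Nova's actual resource-provider API) of a pool, such as shared disk, that is consumed by instances landing on different compute nodes alongside each node's local resources:

```python
# Hypothetical sketch: a resource provider tracks total capacity and usage
# for one resource class. A "shared" provider (e.g. a shared-disk pool) is
# simply one consumed by instances on more than one compute node.

class ResourceProvider:
    """Tracks total capacity and current usage for one resource class."""

    def __init__(self, name, total):
        self.name = name
        self.total = total
        self.used = 0

    def can_fit(self, amount):
        return self.used + amount <= self.total

    def consume(self, amount):
        if not self.can_fit(amount):
            raise ValueError("%s: insufficient capacity" % self.name)
        self.used += amount


# Node-local providers plus one disk pool shared by both nodes.
node1_vcpu = ResourceProvider("node1-VCPU", total=16)
node2_vcpu = ResourceProvider("node2-VCPU", total=16)
shared_disk = ResourceProvider("shared-DISK_GB", total=1000)

# An instance on node1 consumes local VCPUs and shared disk; the pool's
# usage must be visible when placing instances on node2 as well.
node1_vcpu.consume(4)
shared_disk.consume(200)
```

A per-compute-node view (as in the legacy scheduler) has nowhere natural to hang `shared_disk`, which is part of why it tracks such resources poorly.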
> As can be seen in the illustrations, the main compatibility issue
> between shared-state and resource-provider scheduler is caused by the
> different location of claim/consumption and the assumed consistent
> resource view. IMO unless the claims are allowed to happen in both
> places (resource tracker and resource-provider db), it seems difficult
> to make shared-state and resource-provider scheduler work together.
Yes, but doing claims twice feels intuitively redundant.
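For the shared-state side of that comparison, a rough sketch (hypothetical names, not Nova code) of a claim made in the compute node's own resource tracker shows why a second, central claim would duplicate the same consume operation:

```python
# Illustrative sketch: in the shared-state design the claim happens in the
# compute node's resource tracker; in the resource-provider design it
# happens against the central DB. Doing both means consuming the same
# resources twice in two places.

import threading


class ResourceTracker:
    """Compute-node-local authority over the node's free resources."""

    def __init__(self, free_vcpus):
        self.free_vcpus = free_vcpus
        self._lock = threading.Lock()

    def claim(self, vcpus):
        # The node itself decides whether the claim fits, so a scheduler
        # working from a slightly stale view just gets a retryable failure
        # rather than an inconsistent placement.
        with self._lock:
            if vcpus > self.free_vcpus:
                return False
            self.free_vcpus -= vcpus
            return True


tracker = ResourceTracker(free_vcpus=4)
tracker.claim(2)   # fits
tracker.claim(2)   # fits exactly
tracker.claim(1)   # stale scheduler view: rejected, scheduler retries
```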
As I've explored this space I've often wondered why we feel it is
necessary to persist the resource data at all. Your shared-state
model is appealing because it lets the concrete resource(-provider)
be the authority about its own resources. That is information which
it can broadcast as it changes or on intervals (or both) to other
things which need that information. That feels like the correct
architecture in a massively distributed system, especially one where
resources are not scarce.
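The broadcast idea above can be sketched roughly as follows (the interfaces are invented for illustration): each compute node is the authority for its own resources and pushes its state to subscribers when it changes, on an interval, or both:

```python
# Minimal sketch of a compute node broadcasting its own resource state to
# interested parties (e.g. schedulers), rather than persisting it centrally.

class ComputeNode:
    def __init__(self, name, free_vcpus):
        self.name = name
        self.free_vcpus = free_vcpus
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def _broadcast(self):
        for cb in self.subscribers:
            cb(self.name, self.free_vcpus)

    def consume(self, vcpus):
        self.free_vcpus -= vcpus
        self._broadcast()      # push as state changes ...

    def heartbeat(self):
        self._broadcast()      # ... and/or on a periodic interval


# A scheduler keeps only a cached view, refreshed by the broadcasts;
# the node remains the authority.
view = {}
node = ComputeNode("node1", free_vcpus=8)
node.subscribe(lambda name, free: view.update({name: free}))
node.consume(2)
```

A scheduler's cached `view` may briefly lag the node, which is acceptable precisely because the node, not the cache, decides whether a claim succeeds.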
The advantage of a centralized datastore for that information is
that it provides administrative control (e.g. reserving resources for
other needs) and visibility. That level of command and control seems
to be something people really want (unfortunately).
Chris Dent  http://anticdent.org/
freenode: cdent tw: @anticdent