[openstack-dev] [nova] placement/resource providers update 22
cdent+os at anticdent.org
Fri May 5 16:08:30 UTC 2017
Placement and resource providers update 22. Please let me know if
anything is incorrect or missing.
If you're going to be in Boston there are some placement related
sessions that may be worth your while:
* Scheduler Wars: A New Hope
* Scheduler Wars: Revenge of the Split
* Behind the Scenes with Placement and Resource Tracking in Nova
* Comparing Kubernetes and OpenStack Resource Management
(I guess we'll have to see "NUMA Strikes Back" some other time.)
Next week there will be no scheduler subteam meeting, nor a
placement and resource providers update, but efforts will be made to
summarize placement-related stuff that happens at the Forum.
# What Matters Most
Progress has begun on dealing with claims against the placement API.
Engaging with that is the top priority. There's plenty of other work
in progress too which needs to advance. Lots of links within.
# What's Changed
In addition to the work on claims, work has started on managing
resources that are shared via aggregates. When fully operational
this will finally allow correct consumption of shared disk!
Idempotent PUT for resource classes merged, which raises the max
microversion: https://review.openstack.org/#/c/448791/ .
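As a sketch of what that idempotency enables, a client can PUT the same
custom resource class repeatedly and get the same result. The helper
below only assembles the request pieces (URL and microversion header);
the base URL is a placeholder and the actual HTTP call and auth are
left out as assumptions:

```python
def build_resource_class_put(base_url, name, microversion="1.7"):
    """Return (url, headers) for an idempotent resource-class PUT."""
    if not name.startswith("CUSTOM_"):
        raise ValueError("custom resource classes must start with CUSTOM_")
    url = "%s/resource_classes/%s" % (base_url.rstrip("/"), name)
    headers = {
        # Microversion 1.7 is where PUT /resource_classes/{name}
        # became an idempotent create-or-verify.
        "OpenStack-API-Version": "placement %s" % microversion,
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_resource_class_put(
    "http://placement", "CUSTOM_BAREMETAL_GOLD")
```

Because the PUT is idempotent, retrying the same request after a
timeout is safe: a second identical PUT is a no-op rather than an
error.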
# Help Wanted
(This section not changed since last week)
Areas where volunteers are needed.
* General attention to bugs tagged placement:
* Helping to create API documentation for placement (see the Docs
section below).
* Helping to create and evaluate functional tests of the resource
tracker and the ways in which it and nova-scheduler use the
reporting client. For some info, talk to edleafe; he has a work
in progress that seeks input and assistance.
* Performance testing. If you have access to some nodes, some basic
benchmarking and profiling would be very useful. See the
performance section below.
# Main Themes
## Claims in the Scheduler
Work has started on the placement-claims blueprint:
We intentionally left some detail out of the spec because we knew
that we would find some edge cases while the implementation is in
progress.
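In the placement model, a claim amounts to writing allocations for a
consumer against one or more resource providers. The builder below is
illustrative, not the actual nova code; the field names follow the
shape of the pre-1.12 PUT /allocations/{consumer_uuid} body, and the
function name is an assumption of mine:

```python
def build_claim(provider_uuid, vcpus, ram_mb, disk_gb):
    """Payload sketch for PUT /allocations/{consumer_uuid} (list form)."""
    return {
        "allocations": [{
            "resource_provider": {"uuid": provider_uuid},
            "resources": {
                # Standard resource class names from placement.
                "VCPU": vcpus,
                "MEMORY_MB": ram_mb,
                "DISK_GB": disk_gb,
            },
        }],
    }

payload = build_claim(
    "00000000-0000-0000-0000-000000000001", 2, 2048, 20)
```

The point of claiming in the scheduler is that this write happens
before the instance build starts, so a racing scheduler loses at the
allocation write rather than late on the compute node.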
## Traits
The main API is in place. Debate raged over how best to manage updates
of standard os-traits. In the end a cache similar to the one used
for resource classes was created:
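A toy version of the kind of cache described above: a trait-name to id
map, repopulated on a miss. The refresh callable stands in for the
real database query, and the class and method names here are my
invention, not nova's:

```python
class TraitCache:
    """Minimal sketch of a lazily refreshed name -> id trait cache."""

    def __init__(self, refresh):
        self._refresh = refresh   # callable returning {name: id}
        self._ids = {}

    def id_from_name(self, name):
        if name not in self._ids:
            # Cache miss: repopulate the whole map from the source.
            self._ids = self._refresh()
        # Raises KeyError if the trait is genuinely unknown.
        return self._ids[name]

cache = TraitCache(
    lambda: {"HW_CPU_X86_AVX2": 1, "STORAGE_DISK_SSD": 2})
```

Refreshing the whole map on any miss keeps the cache trivially
consistent with newly added standard traits at the cost of an
occasional extra query.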
Work will be required at some point on filtering resource providers
based on traits, and adding traits to resource providers from the
resource tracker. There's been some discussion on that in the
reviews of shared providers (below) because it will be a part of
the same mass (MASS!) of SQL.
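The filtering semantics can be stated outside SQL as a simple set
test: a provider qualifies when its trait set is a superset of the
required traits. This is an illustration of the intended behavior,
not the actual query:

```python
def filter_by_traits(providers, required):
    """providers: mapping of provider name -> set of trait names."""
    required = set(required)
    # Keep providers whose traits include every required trait.
    return [name for name, traits in providers.items()
            if required <= traits]
```

For example, requiring HW_CPU_X86_AVX2 keeps only the providers that
advertise it.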
## Shared Resource Providers
Work and review on this are in progress at:
Reviewers should be aware that the patches, at least as of today,
are structured in a way that evolves from the current state to the
eventual desired state in a way that duplicates some effort and
code. This was done intentionally by Jay to make the testing and
review more incremental. It's probably best to read through the
entire stack before jumping to any conclusions. I know that I got
very concerned and confused by some of the duplication until I was
informed that it's just part of the process: the end goal ought to
be pretty clean.
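The core idea can be sketched without the SQL: a compute node with no
local DISK_GB can still satisfy a disk request if a sharing provider
(one marked with the MISC_SHARES_VIA_AGGREGATE trait) is in one of the
same aggregates. The function below is an illustration of that
membership test, not Jay's actual implementation:

```python
def providers_for_disk(computes, sharers):
    """computes/sharers: mapping of provider name -> set of agg ids."""
    matches = []
    for cname, c_aggs in computes.items():
        for sname, s_aggs in sharers.items():
            # A shared aggregate means the sharer's disk inventory
            # can be consumed on behalf of the compute node.
            if c_aggs & s_aggs:
                matches.append((cname, sname))
    return matches
```

So consumption of shared disk becomes an allocation against the
sharing provider, while VCPU and RAM are still allocated against the
compute node.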
## Docs
Several reviews are in progress for documenting the placement API.
This is likely going to take quite a few iterations as we work out
the patterns and tooling, but it's great to see the progress, and
looking at the draft rendered docs makes placement feel like a real
thing™.
We need multiple reviewers on this stuff, early in the process, as
it helps to identify missteps in the phrasing and styling before we
develop bad habits. We've also found some ways in which the general
style of the docs can be improved to say more about when particular
errors might happen, and we'll likely need to make more constructive
use of that. Find me (cdent) or Andrey (avolkov) if you want to help
out or have questions.
## Performance
We're aware that there are some redundancies in the resource tracker
that we'd like to clean up, but it's also the case that we've done no
performance testing on the placement service itself. The request
profile of the resource tracker is going to change as a result of the
claims work, but it won't go away. We ought to do some testing to
make sure there aren't unexpected problems.
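If you want a rough starting point, a mean-latency loop like the one
below is enough for first numbers; wire `call` up to whatever HTTP
client and placement endpoint you have (both are deliberately left
out here):

```python
import time

def benchmark(call, iterations=100):
    """Return mean seconds per invocation of call()."""
    start = time.perf_counter()
    for _ in range(iterations):
        call()
    return (time.perf_counter() - start) / iterations
```

For anything beyond a smoke test, a proper profiler or a tool like
ab/wrk against the WSGI service would give more useful detail.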
## Nested Resource Providers
(This section has not changed since last week)
On hold while attention is given to traits and claims. There's a
stack of code waiting until all of that settles:
## Ironic/Custom Resource Classes
(This section has not changed since last week)
There's a blueprint for "custom resource classes in flavors" that
describes the work that will actually make use of custom resource
classes. The spec has merged, but the implementation has not yet
started.
Over in Ironic some functional and integration tests have started:
There's also a spec in progress discussing ways to filter baremetal
nodes by tenant/project:
# Other Code/Specs
* Work has started on an osc-plugin that can provide a command line
interface to the placement API.
* A devstack change to install that plugin.
* Clean up the interface for getting inventory information.
* Use the DELETE inventories method in the report client. This is
proving somewhat more complicated than initially expected: DELETE
of inventories doesn't give us a new generation for the associated
resource provider.
* Use a specific error message for inventory in use, not just the
db exception. Jay has identified that this too is somewhat more
complicated than initially expected, because for the time being the
message is being parsed client-side.
* Add a test to ensure that placement microversions have no gaps
when there is more than one handler for a URL.
* Add a status check for legacy filters in nova-status.
* Start the removal of the can_host column from the resource
providers database table. We're no longer going to use it; a trait
will indicate shared providers.
* Handle new hosts for updating instance info in the scheduler.
* Don't send instance updates from compute if not using a filter
scheduler.
* Cache headers are not produced by the placement API. This was
assigned to several different people over time, but I'm not sure if
there is any active code.
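The generation issue mentioned above comes from placement's
optimistic-concurrency scheme: every write names the resource
provider generation the writer last saw, and a mismatch means someone
else got there first. A toy model of the refresh-and-retry dance
(class and function names are illustrative, not the report client's
actual API):

```python
class ConflictError(Exception):
    """Raised when a write carries a stale provider generation."""

class Provider:
    def __init__(self):
        self.generation = 0
        self.inventory = {}

    def set_inventory(self, inv, generation):
        if generation != self.generation:
            raise ConflictError("provider generation out of date")
        self.inventory = inv
        self.generation += 1   # every successful write bumps it

def safe_set(provider, inv, seen_generation):
    try:
        provider.set_inventory(inv, seen_generation)
    except ConflictError:
        # Refresh our view of the generation and retry once.
        provider.set_inventory(inv, provider.generation)
```

This is why a DELETE that returns no new generation is awkward: the
client's next write is guaranteed to need a refresh first.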
There's still some lingering stuff on here, some of which is
mentioned elsewhere in this message, but not all.
I suspect there's more; if I missed something, please tell me.
Instead of a cookie, this time you get beer.
Chris Dent ┬──┬◡ﾉ(° -°ﾉ) https://anticdent.org/
freenode: cdent tw: @anticdent