[openstack-dev] [nova] placement/resource providers update 6
Chris Dent
cdent+os at anticdent.org
Fri Dec 16 12:40:37 UTC 2016
This will be the last resource provider/placement update for the
year. The next one will arrive at the end of the first week of January.
There's a lot of work happening related to placement, on multiple
concurrent threads.
# What Matters Most
The most important placement-related stuff right now is getting the
scheduler using a filtered list of resource providers and the things
that fall out from that: increased aggregate awareness in server and
client, miscellaneous bug fixes, and getting CI using placement.
There's a bit about each of those within.
# Recently Merged Stuff
Code was recently merged to make the objects used by the placement
API not remotable. There's a note in
nova/objects/resource_provider.py saying so, but in case you miss
it: Don't add remotable methods in there.
https://review.openstack.org/#/c/404279/
The reason for doing this is that the only published interface to
placement data will be the HTTP API provided by the placement
service.
# Unplanned Stuff
We don't currently have an active plan for when we will implement a
placement client. Though we decided not to use a dedicated client in
the scheduler's report client (because simple requests+JSON works
just fine), we still expect that shared resource providers will
likely be managed via commands that can be run using the
openstackclient.
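For the curious, here's a minimal sketch of the requests+JSON style
meant here; the endpoint and token are placeholders, real code gets
both from keystoneauth and the service catalog:

    import requests

    PLACEMENT = 'http://localhost/placement'   # placeholder endpoint
    HEADERS = {'x-auth-token': 'admin-token',  # placeholder token
               'accept': 'application/json'}

    # List all resource providers known to the placement service.
    resp = requests.get(PLACEMENT + '/resource_providers',
                        headers=HEADERS)
    resp.raise_for_status()
    for rp in resp.json()['resource_providers']:
        print(rp['uuid'], rp['name'])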
Miguel Lavalle started work on a client
https://github.com/miguellavalle/python-placementclient/tree/adding-grp-support
# Pending Planned Work
## Resource Tracker Cleanup and Aggregates Work
There's a lot of work being done in the resource tracker and the
report client to ensure that they work properly with the changes
brought about by resource providers and friends. Jay's done a big
cleanup so tracking Ironic will be more efficient:
https://review.openstack.org/#/c/398469/
That stack provides the base for changes to support management of the
new style of Ironic inventory, custom resource classes:
https://review.openstack.org/#/c/404472/
It also provides a cleaner base on which to manage the map of
aggregates associated with the compute node:
https://review.openstack.org/#/c/407309/
That has inspired work to get the inverse information from the
placement API: get me the resource providers that are associated
with the listed aggregates:
https://review.openstack.org/#/c/407629/
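To make that concrete, a hypothetical sketch of the inverse query;
the member_of parameter and its "in:" syntax come from the
in-progress review and may change before merging:

    import requests

    PLACEMENT = 'http://localhost/placement'   # placeholder endpoint
    HEADERS = {'x-auth-token': 'admin-token'}  # placeholder token
    aggs = ['019e1e81-0000-0000-0000-000000000001',
            '019e1e81-0000-0000-0000-000000000002']

    # "give me the resource providers in any of these aggregates"
    resp = requests.get(PLACEMENT + '/resource_providers',
                        params={'member_of': 'in:' + ','.join(aggs)},
                        headers=HEADERS)
    print(resp.json()['resource_providers'])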
## Filtering Hosts in the Scheduler
The scheduler will use the placement API to retrieve a list of
filtered resource providers that have sufficient VCPU, disk, and RAM
to support a placement request. This will minimize the number of
candidate hosts that need to be evaluated in the current
nova-scheduler. The code to support that in the API is here:
https://review.openstack.org/#/c/392569/
with code for the nova-scheduler side expected in the coming week.
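A sketch of what such a filtering request might look like; the
resources query string format (CLASS:amount pairs) is taken from the
change under review and could still shift:

    import requests

    PLACEMENT = 'http://localhost/placement'   # placeholder endpoint
    HEADERS = {'x-auth-token': 'admin-token'}  # placeholder token

    # "give me providers with room for 2 VCPU, 2G RAM, 20G disk"
    resp = requests.get(
        PLACEMENT + '/resource_providers',
        params={'resources': 'VCPU:2,MEMORY_MB:2048,DISK_GB:20'},
        headers=HEADERS)
    candidates = resp.json()['resource_providers']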
As this work emerges, we'll need to make sure that both the client
and server sides are aware of aggregate associations as "the resource
providers that the placement service returns will either have the
resources requested or will be associated with aggregates that have
providers that match the requested resources."
## Docs
There are three types of docs in progress:
* placement.rst
http://docs.openstack.org/developer/nova/placement.html
* placement-dev.rst
https://review.openstack.org/#/c/408313/
* placement-api-ref
https://review.openstack.org/#/c/409340/
The two reviews are very much WIP at this point, just something to
get the ball rolling. The api-ref borrows useful tooling from the
nova api-ref and adds some machinery to make it hard to add a new
route without also adding documentation.
## Custom Resource Classes
The main bits of this have merged and the active bits right now are
related to integration with the resource tracker and ironic
inventory (above).
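As a reminder of what the merged bits allow, a sketch of creating a
custom resource class over the HTTP API; the microversion header
value is an assumption for illustration, and the CUSTOM_ prefix is
required:

    import requests

    PLACEMENT = 'http://localhost/placement'   # placeholder endpoint
    HEADERS = {'x-auth-token': 'admin-token',  # placeholder token
               'openstack-api-version': 'placement 1.2'}

    # Register a custom class, e.g. for a class of Ironic nodes.
    resp = requests.post(PLACEMENT + '/resource_classes',
                         json={'name': 'CUSTOM_BAREMETAL_GOLD'},
                         headers=HEADERS)
    assert resp.status_code == 201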
## Nested Resource Providers
As before: Percolating and moving forward.
https://review.openstack.org/#/c/377138/
## Resource Provider Traits
There's been some recent activity on the spec for resource provider
traits. These are a way of specifying qualitative resource
requirements (e.g., "I want my disk to be SSD").
https://review.openstack.org/#/c/345138/
I'm not clear on whether this is still targeting Ocata.
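To make the qualitative versus quantitative distinction concrete,
here's a purely illustrative sketch; the spec is still under review,
so this request shape is a guess, not an API:

    # Quantitative: how much of each resource class is needed.
    # Qualitative: traits the chosen provider must have.
    desired = {
        'resources': {'VCPU': 2, 'DISK_GB': 20},
        'traits': ['STORAGE_DISK_SSD'],  # "I want my disk to be SSD"
    }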
# CI using Placement
We'd like to get placement running in all the CI jobs (for branches
that have it). devstack-gate and devstack changes are in progress
for this:
https://review.openstack.org/#/c/409871/
https://review.openstack.org/#/c/411510/
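For local experiments, enabling the service in devstack will
presumably look something like this local.conf fragment; the exact
service name depends on the in-progress devstack change:

    [[local|localrc]]
    # hypothetical until the devstack change merges
    enable_service placement-api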
# Upgrade Checking
See Matt's excellent summary of a discussion about tools to help a
deployment know whether it has turned on all the necessary bits to
upgrade to Ocata. One of those necessary bits is placement.
http://lists.openstack.org/pipermail/openstack-dev/2016-December/109060.html
# Stuff Happening Outside of Nova
\o/
* Neutron IPv4 inventory
https://review.openstack.org/#/c/358658/
* Placement support in puppet-nova
https://review.openstack.org/#/c/406300/
# Pending Pickup Work
Bugs, refactoring and the rest.
* Do not post allocations that are zero
https://review.openstack.org/#/c/407180/
This one was initially found by the testing of the puppet-nova
change above, and then confirmed by the CI additions mentioned
earlier; a sketch of the fix's idea follows this list. That also
led to a change to adjust where and how some exceptions are logged:
https://review.openstack.org/#/c/410128/
* Small improvements to placement.rst
https://review.openstack.org/#/c/403811/
* Update the generic resource pools spec to reflect reality
https://review.openstack.org/#/c/407562/
* [WIP] Placement api: Add json_error_formatter to defaults
https://review.openstack.org/#/c/395194/
This is an effort to avoid boilerplate, but no good solution has
been determined yet (a sketch of the boilerplate in question also
follows this list). Reviewers can help us figure out a good way to
handle things.
* CORS support in placement API:
https://review.openstack.org/#/c/392891/
* Demo inventory update script:
https://review.openstack.org/#/c/382613/
This one might be considered a WIP because how it chooses to do
things (rather simply and dumbly) may not be in line with expectations.
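As promised above, an illustration of the zero-allocations fix's
idea, with hypothetical names rather than the actual report client
code:

    # Strip zero-valued resource amounts before PUTting an allocation.
    def prune_zero_allocations(resources):
        """Drop resource classes whose requested amount is zero."""
        return {rc: amount for rc, amount in resources.items() if amount}

    wanted = {'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 0}
    assert prune_zero_allocations(wanted) == {'VCPU': 1,
                                              'MEMORY_MB': 512}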
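And the json_error_formatter boilerplate mentioned above looks
roughly like this in each handler; the pattern is paraphrased from
the placement handlers and shown out of context:

    import webob.exc

    from nova.api.openstack.placement import util

    def raise_bad_request(message):
        # The repeated json_formatter kwarg is what the review
        # wants to make a default.
        raise webob.exc.HTTPBadRequest(
            message, json_formatter=util.json_error_formatter)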
# End
Thanks to everyone for all the hard work.
--
Chris Dent ¯\_(ツ)_/¯ https://anticdent.org/
freenode: cdent tw: @anticdent