[openstack-dev] [placement][nova] PTG summary
jaypipes at gmail.com
Fri Mar 3 16:49:33 UTC 2017
In Atlanta, there were a lot of discussions involving the new(ish)
placement service. I'd like to summarize the topics of discussion and
highlight what the team aims to get done in the Pike release.
A quick refresher
The placement service's mission is to provide a stable, generic
interface for accounting of resources that are consumed in an OpenStack
deployment. Though the placement service currently resides in the Nova
codebase, our goal is to eventually lift the service out into its
own repository. We do not yet have a date for this forklift, but the
placement code has been written from the start to be decoupled from Nova.
Progress to date
To date, we've made good progress on the quantitative side of the request spec:
* nova-compute workers report inventory records for the resources they
know about, such as vCPU, RAM, and disk
* Admins can create custom resource classes through the placement REST API
* Providers of resources can be associated with each other via aggregates
* nova-scheduler now calls the placement REST API to filter the list of
compute nodes it inspects during scheduling decisions
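As a sketch of what those inventory records look like, here is an illustrative payload in the shape a nova-compute worker sends via PUT /resource_providers/{uuid}/inventories. All values here are made-up examples, not defaults:

```python
import json

# Illustrative inventory payload a nova-compute worker might PUT to
# /resource_providers/{uuid}/inventories. Each resource class entry
# carries capacity plus the knobs the scheduler uses when carving it up.
inventory_payload = {
    "resource_provider_generation": 1,
    "inventories": {
        "VCPU": {"total": 16, "reserved": 0, "min_unit": 1,
                 "max_unit": 16, "step_size": 1, "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 32768, "reserved": 512, "min_unit": 1,
                      "max_unit": 32768, "step_size": 1,
                      "allocation_ratio": 1.5},
        "DISK_GB": {"total": 2000, "reserved": 100, "min_unit": 1,
                    "max_unit": 2000, "step_size": 1,
                    "allocation_ratio": 1.0},
    },
}
print(json.dumps(inventory_payload, indent=2))
```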
We have a patch currently going through the final stages of review that
integrates the Ironic virt driver with the placement API's custom
resource classes. This patch marks an important milestone for both
Nova and Ironic with regard to how Ironic baremetal resources are
accounted for in the system.
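For context, custom resource class names are prefixed with CUSTOM_ and restricted to uppercase letters, digits, and underscores. A small sketch of that naming rule (the class name CUSTOM_BAREMETAL_GOLD below is illustrative, not taken from the patch):

```python
import re

# Custom resource class names must start with CUSTOM_ and contain only
# uppercase letters, digits, and underscores.
CUSTOM_RC_RE = re.compile(r"^CUSTOM_[A-Z0-9_]+$")

def is_valid_custom_rc(name):
    """Return True if name is a well-formed custom resource class."""
    return bool(CUSTOM_RC_RE.match(name))

# An Ironic flavor might map an entire baremetal node to a single unit
# of a class such as CUSTOM_BAREMETAL_GOLD (name is illustrative).
print(is_valid_custom_rc("CUSTOM_BAREMETAL_GOLD"))
print(is_valid_custom_rc("BAREMETAL_GOLD"))
```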
Priorities for Pike
At the PTG, we decided that the following are our highest priority focus
areas (in order):
1) Completion of the shared resource provider modeling and implementation
Shared storage accounting is the primary use case here, along with
Neutron routed networks.
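A rough sketch of the shared-provider idea, assuming the aggregate-based association described above (all uuids and values are placeholders):

```python
# Sketch: a shared storage pool modeled as its own resource provider,
# associated with compute nodes through a common aggregate. Any compute
# node sharing an aggregate with the storage provider can have its
# DISK_GB requests served from the shared pool.
AGG = "agg-uuid-1"

shared_storage = {
    "uuid": "ss-uuid",
    "aggregates": [AGG],
    "inventory": {"DISK_GB": 10000},
}
compute_nodes = [
    {"uuid": "cn1", "aggregates": [AGG]},
    {"uuid": "cn2", "aggregates": [AGG]},
    {"uuid": "cn3", "aggregates": ["agg-uuid-2"]},
]

# Compute nodes eligible to consume from the shared pool.
shares_with = [cn["uuid"] for cn in compute_nodes
               if AGG in cn["aggregates"]]
print(shares_with)
```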
2) Getting the qualitative side of the placement API done
As mentioned above, most work to date has focused on the quantitative
side of the request spec. The other side of the request spec is the
qualitative one, which we're calling "traits". Providers of resources
(compute nodes, Ironic baremetal nodes, SR-IOV NICs, FPGAs, routed
network pools, etc) can be decorated with these string traits to
indicate features/capabilities of the provider.
For example, a compute node might be decorated with the trait
HW_CPU_X86_AVX2 or an SR-IOV NIC might be decorated with a trait
indicating the physical network associated with the NIC.
The placement API will provide REST endpoints for managing these traits
and their association with resource providers.
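To illustrate, a trait association payload would look something like the following, in the shape of a PUT to /resource_providers/{uuid}/traits (the CUSTOM_PHYSNET_PUBLIC trait name is an illustrative stand-in for a physical-network trait):

```python
import json

# Hypothetical body for PUT /resource_providers/{uuid}/traits,
# decorating a provider with one standard trait and one custom trait.
# The generation field guards against concurrent updates.
traits_payload = {
    "resource_provider_generation": 2,
    "traits": ["HW_CPU_X86_AVX2", "CUSTOM_PHYSNET_PUBLIC"],
}
print(json.dumps(traits_payload))
```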
3) Merging support for nested resource providers concepts
Canonical examples of nested resource providers include SR-IOV PFs and
NUMA nodes and sockets.
Much work for this has already been proposed in previous cycles. We
need to push forward with this and get it done.
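A sketch of the nested modeling, assuming the parent_provider_uuid linkage from the proposed spec (names and uuids below are illustrative strings):

```python
# Each child provider points at its parent via parent_provider_uuid,
# forming a tree: compute node -> NUMA node -> SR-IOV PF.
providers = [
    {"uuid": "cn1", "name": "compute-1", "parent_provider_uuid": None},
    {"uuid": "numa0", "name": "compute-1_NUMA0",
     "parent_provider_uuid": "cn1"},
    {"uuid": "pf0", "name": "compute-1_NUMA0_eth0_PF",
     "parent_provider_uuid": "numa0"},
]

def children_of(uuid, providers):
    """Return the direct child providers of the given provider."""
    return [p for p in providers if p["parent_provider_uuid"] == uuid]

print([p["name"] for p in children_of("cn1", providers)])
```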
Discussions at the PTG identified that in order to actually implement
priority #1, however, we would need to complete #2 first :)
And so, we are currently attempting to get the os-traits library in
shape, getting the nova-spec for the placement traits API approved,
and getting the traits implementation out of WIP mode.
Once the traits work is complete, the shared storage providers work can
be resumed.
Once that work is complete, we will move on to the aforementioned nested
resource providers work as well as integration with the nova-scheduler
for traits and shared providers.
We had a nice discussion with folks in the Cinder team about what the
placement service is all about and how Cinder can use it in the future.
We've asked the Cinder team to help us identify block-storage-specific
qualitative traits that can be standardized in the os-traits library.
We're looking forward to helping the Cinder community do storage-aware
scheduling affinity using the placement API in Queens and beyond.
Thanks all for reading!