[openstack-dev] [api] Idea for a simple way to expose compute driver capabilities in the REST API

Matt Riedemann mriedemos at gmail.com
Sat Jan 27 20:15:34 UTC 2018


We've talked about a type of capabilities API within nova and across 
OpenStack for at least a couple of years (the earliest formal 
discussion I remember is a session in BCN).

At the PTG in Denver, I think the general sentiment was that rather 
than never doing anything because we can't come up with the perfect 
design to satisfy all requirements in all projects, we should just do 
our own thing in each project, at least as a start; as long as things 
are well-documented, that's good enough.

Well, I'm not sure why, but I was thinking about this problem today 
and came up with something simple here:

https://review.openstack.org/#/c/538498/

This builds on the change to pass ironic node traits through the 
nova-compute resource tracker and push them into placement on the 
compute node resource provider. Those traits can then be matched 
against required traits in a flavor and used for scheduling.

The patch takes the existing ComputeDriver.capabilities dict that is 
on all compute drivers and exposes each supported capability as a 
CUSTOM_COMPUTE_<capability name> trait on the compute node resource 
provider. So, for example, a compute node backed by the libvirt driver 
with qemu < 2.10 would have a CUSTOM_COMPUTE_SUPPORTS_MULTIATTACH trait.
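In rough terms the mapping would look something like this minimal 
sketch (the helper name is mine, not the patch's; the flags just 
mirror the shape of ComputeDriver.capabilities):

    def capabilities_to_traits(capabilities):
        """Turn supported driver capability flags into trait names."""
        return {'CUSTOM_COMPUTE_%s' % name.upper()
                for name, supported in capabilities.items()
                if supported}

    # e.g. a libvirt driver on qemu < 2.10 advertising multiattach:
    capabilities_to_traits({'supports_multiattach': True,
                            'supports_recreate': False})
    # -> {'CUSTOM_COMPUTE_SUPPORTS_MULTIATTACH'}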

We could then add something to the request spec when booting a server 
from a multiattach volume to say the request requires a compute node 
with that trait. That's one of the gaps we have with multiattach 
support today: there is no connection between the request for a 
multiattach-volume-backed server and the compute host the scheduler 
picks to build that server, which can lead to server create failures 
(which, by the way, are not rescheduled).
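Something along these lines on the request-spec side; a hedged sketch 
only, with illustrative names rather than nova's actual internals:

    MULTIATTACH_TRAIT = 'CUSTOM_COMPUTE_SUPPORTS_MULTIATTACH'

    def require_multiattach(request_spec, bdms):
        """Require the trait if any block device is multiattach."""
        if any(bdm.get('multiattach') for bdm in bdms):
            request_spec.setdefault('required_traits', set()).add(
                MULTIATTACH_TRAIT)
        return request_spec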

Anyway, it's just an idea and I wanted to see what others thought 
about it. Doing this would bake a certain behavior into how things are 
tied to the placement REST API, and I'm not sure whether that's a good 
thing. It also opens up the question of whether these become standard 
traits in the os-traits library.

Alternatively, I've always thought we could do something simple like 
add a "GET /os-hypervisors/{hypervisor_id}/capabilities" API which 
either makes an RPC call to the compute service to get the driver 
capabilities, or pulls them from the compute_nodes table if we store 
them there. Then we could build on that same idea with something like 
"GET /servers/{server_id}/capabilities", which would take into account 
the capabilities of the compute host the instance is running on, its 
flavor, etc. That's a bigger change, though it's more direct than just 
passing things through to placement. I fear it's also something that 
might never happen because it would get bogged down in a design 
committee.
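To make the shape concrete, the hypervisor endpoint might return 
something like this (neither endpoint exists today; the payload is 
purely illustrative):

    # GET /os-hypervisors/{hypervisor_id}/capabilities
    example_response = {
        'capabilities': {
            'supports_multiattach': True,
            'supports_attach_interface': True,
            'supports_device_tagging': False,
        }
    }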

-- 

Thanks,

Matt


