[openstack-dev] [all] [api] API-WG PTG recap
cdent+os at anticdent.org
Fri Mar 3 17:12:21 UTC 2017
The following attempts to be a summary of some of the API-WG related
activity from last week's PTG. Despite some initial lack of
organization we managed to have several lively discussions in a room
that was occasionally standing room only.
I had intended to do a daily summary of my time at the PTG but
completely failed to do so. As with most OpenStack gatherings the
time was very compressed and completely exhausting. Fellow API-WG
core Ed Leafe managed a blog posting which includes some summary of
the API-WG period:
We had initially planned to share a room with the architecture
working group, but at the last minute a room of our own (Georgia 5)
was made available to us. This led to some confusion on where people
were supposed to be when, but through the judicious use of IRC,
signs in the hallway and going and finding people who might help
with discussion, we managed to keep things moving. There also seemed
to be a degree of "I don't know where else to be so I'll hang with
the API-WG". This turned out to be great as it meant we had a lot of
diverse participation. To me that's the whole point of having these
gatherings, so: great success.
Because of sharing rooms with the arch-wg we also shared an initial
etherpad with them:
On there we formed an agenda and then used topic-based etherpads for:
* stability and compatibility guidelines:
* capabilities discovery:
* service catalog and service types:
with some discussion for how/when to raise the minimum microversion
happening on the architecture etherpad.
Sections for each of these below.
# Stability/Compatibility Guidelines
This topic was a discussion of the updates being made to the
guidelines for stability and compatibility in APIs, happening at:
There are plans for this to become the guidance for a voluntary tag
that asserts a service's API is stable. The passionate discussion
throughout the morning and into the afternoon was in large part
reaching some agreement about the similarities and differences in
meaning of the terms "stability", "compatibility" and
"interoperability" and how those meanings might change depending on
whether the person using the term was a developer, deployer or user
of OpenStack services.
In the end the main outcomes were:
* The definitions that matter to the terms above are the ones that
impact the end user and that if we really want stability and
interoperability for that class of people, change of any sort that
is not clearly signalled is bad.
* Though microversions are contentious, they are the tool we have at
  this time that does the job we want. However, care must be taken
  not to allow the presence of microversions to license constant
  change.
* It's accepted and acknowledged that when a service chooses to be
stable it is accepting an increased level of development pain (and
potential duplication and cruft in code) to minimize pain caused
to end users.
* A service should not opt-in to stability until it is actually
stable. A service which is already stable, but wants to experiment
with functionality that it may not keep should put that
functionality on a different endpoint (as in different service).
* People who voiced strong opinions at the meeting should comment on
the review. Not much of this has happened yet.
* Strictness in stability is more important the more "public" the
interface is. A deployer only interface is less public.
* It is considered normal practice for client code to express a
  potentially different microversion with each different URL it
  requests from a service. What should be true is that if you take
  that exact same code and use it against another service that
  supports the same versions, it should "just work" (modulo policy).
* Supporting continuous deployment is part of the OpenStack way.
This increases some of the developer-side pain mentioned above.
* We should document client-side best practices that help ensure
  stability on that side too. For example, evaluating success as 200
  <= resp.status < 300 instead of checking for one specific code
  such as 200, 201, or 202.
* The guideline should document more of the reasoning.
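The 2xx-range check suggested in the list above can be sketched as follows; `is_success` is an illustrative helper name, not an existing library function:

```python
def is_success(status_code):
    """Treat any 2xx response as success, per the client-side best
    practice above, rather than matching one exact status code."""
    return 200 <= status_code < 300


# A 204 No Content or 202 Accepted counts as success just like 200,
# so a later change of exact status code does not break the client.
print(is_success(204))  # True
print(is_success(404))  # False
```

A client written this way stays stable even if a service legitimately changes which specific 2xx code it returns for an operation.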
So: we landed somewhere pretty strict, but that strictness is
optional. A project that wants the tag should follow the guidelines
and a project that eventually wants the tag or wants to at least
strive for interoperability should be aware of the guidelines and
implement those it can.
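For concreteness, the microversion mechanism discussed above is driven by a request header; a minimal sketch of a client pinning the version it was written against, assuming the `OpenStack-API-Version: <service-type> <version>` header form from the API-WG microversion guideline (the token and version values here are placeholders):

```python
def microversion_headers(service_type, version, token):
    """Build request headers that pin a specific microversion so the
    service's behavior stays fixed for this client."""
    return {
        # e.g. "OpenStack-API-Version: compute 2.42"
        "OpenStack-API-Version": "%s %s" % (service_type, version),
        "X-Auth-Token": token,
        "Accept": "application/json",
    }


headers = microversion_headers("compute", "2.42", "PLACEHOLDER_TOKEN")
print(headers["OpenStack-API-Version"])  # compute 2.42
```

Pinning a version like this is what makes the "take that exact same code to another cloud" property above achievable.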
The next steps are:
* People comment on the review.
* I produce a next version of the guidelines integrating the
  feedback.
# Capabilities Discovery
This began as an effort to refine a proposed guideline for
expressing what a cloud can do:
That was modeled as three layers: what a cloud can do, what a type
of resource in that cloud can do, and what this specific instance of
this resource can do.
Discussion was wide-ranging but eventually diverged into two threads:
* Expressing cloud-level capabilities (e.g., does this cloud do floating
ips) at either the deployment or service level. The use of the
URL /capabilities is in the original spec, but since swift already
provides an implementation of an idea like this at /info we should
go with that. It's not clear what the next steps with this are,
other than to iterate the spec. We need volunteers to work on at
least reviewing that, and perhaps picking up the authorship.
* Satisfying the use case that prompted the generic idea above:
  making the right buttons show up in dashboards like horizon that
  indicate whether or not an instance can be snapshotted, and other
  similar per-resource questions.
The next steps on that latter direction are to modify the server
info representation in the compute api to include a new key which
answers the top 5 questions that horizon wants to be able to answer.
Once we see how well that's working, move on.
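Since swift's /info is the prior art mentioned above, here is a hedged sketch of what consuming such a capabilities document might look like. The payload below is illustrative of the /info shape (JSON keyed by feature name), not a real cloud's response:

```python
import json

# Illustrative sample modeled on swift's /info response shape:
# top-level keys name features/middleware, values describe limits.
sample_info = json.loads("""
{
  "swift": {"max_file_size": 5368709122,
            "max_object_name_length": 1024},
  "slo": {"max_manifest_segments": 1000}
}
""")


def has_capability(info, name):
    """True when the named feature appears in the /info-style doc."""
    return name in info


print(has_capability(sample_info, "slo"))          # True
print(has_capability(sample_info, "bulk_delete"))  # False
```

A dashboard could use a check like this to decide which buttons to render, which is exactly the horizon use case in the second thread.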
# Service Catalog and Service Types:
The original goal here was to talk about having:
* consistently unversioned endpoints in the service catalog
* having those endpoints in the service present consistently
structured information that allows version discovery
As is often the case when talking about these topics we wandered
quite broadly and realized that there continues to be a fair bit of
difference in how services use the catalog and how they present
themselves in the catalog. A lot of this needs to be addressed with
increased communication about correct practices when registering
things in the catalog and filtering the catalog. A first step is
cleaning up devstack's creation of its own catalog.
Another step is getting the catalog using consistent service types.
To that end the service types authority is coming back to life,
with the general goal of getting everyone on the same page about
which service types exist and what they are called.
The next step is to make sure that endpoints in the service catalog
are not versioned and that the version document presented at each
endpoint is consistently structured. This is a big TODO.
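To make "consistently structured version document" concrete, here is a sketch of the common pattern several services (e.g. nova) already use at an unversioned endpoint, and how a client would pick the current version from it. The payload is illustrative of that pattern, not a settled standard:

```python
# Illustrative version discovery document: a list of versions, each
# with an id, a status, and (for microversioned APIs) min/max values.
sample_versions = {
    "versions": [
        {"id": "v2.0", "status": "SUPPORTED", "links": []},
        {"id": "v2.1", "status": "CURRENT",
         "min_version": "2.1", "version": "2.42", "links": []},
    ]
}


def current_version(doc):
    """Return the entry marked CURRENT from a version discovery doc,
    or None if the document has no CURRENT entry."""
    for entry in doc["versions"]:
        if entry["status"] == "CURRENT":
            return entry
    return None


print(current_version(sample_versions)["id"])  # v2.1
```

If every service presented this same shape at an unversioned catalog endpoint, one piece of client discovery code would work everywhere.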
# Raising the Minimum Microversion
This was mostly a discussion between sdague and jroll on using
Ironic as the guinea pig for discovering the issues that will occur
when raising the minimum microversion. The main idea here is that
the version discovery document will have a new field that means "the
next minimum microversion" that is saying "soon you'll need to be at
least this or stuff will start to break". jroll is on the hook to
provide a concrete proposal to the API-WG on what this should look
like and how it would work in practice.
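Pending jroll's concrete proposal, a purely speculative sketch of how a client might react to such a field. The field name `next_min_version` is my placeholder, not anything agreed by the WG:

```python
def parse_version(v):
    """Turn a microversion string like '1.10' into a comparable
    (major, minor) tuple, so '1.10' sorts after '1.9'."""
    major, minor = v.split(".")
    return (int(major), int(minor))


def client_at_risk(version_entry, client_version):
    """True if the client's pinned microversion is below the
    announced upcoming minimum and will soon stop working."""
    next_min = version_entry.get("next_min_version")
    if next_min is None:
        return False
    return parse_version(client_version) < parse_version(next_min)


# Hypothetical discovery entry announcing that the floor will rise.
entry = {"min_version": "1.1", "version": "1.31",
         "next_min_version": "1.10"}
print(client_at_risk(entry, "1.9"))   # True: below the coming floor
print(client_at_risk(entry, "1.10"))  # False: already high enough
```

Tuple comparison matters here: a naive string comparison would wrongly rank "1.9" above "1.10".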
I'd like to thank everyone who participated on Monday and Tuesday of
last week. It was really great to see so many people with so much
enthusiasm for improving OpenStack APIs. I hope everyone felt like
they had a chance to have their voice heard. If you didn't, or you
weren't there so didn't have a chance, please speak up. Either here
in response to this message, on the associated reviews, or come to
an api-wg meeting and have a chat. We are making progress.
Chris Dent ¯\_(ツ)_/¯ https://anticdent.org/
freenode: cdent tw: @anticdent