[openstack-dev] The Nova API in Kilo and Beyond

Sean Dague sean at dague.net
Mon Jun 8 11:11:44 UTC 2015


On 06/05/2015 10:56 AM, Neil Jerram wrote:
> On 05/06/15 12:32, Sean Dague wrote:
>> https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/
> 
> This is really informative and useful, thanks.
> 
> A few comments / questions, with bits of your text in quotes:
> 
> "Even figuring out what a cloud could do was pretty terrible. You could
> approximate it by listing the extensions of the API, then having a bunch
> of logic in your code to realize which extensions turned on or off
> certain features, or added new data to payloads."
> 
> I guess that's why the GNU autoconf/configure system has always advised
> testing for particular wanted features, instead of looking at versions
> and then relying on carnal knowledge to know what those versions imply.
>  Is that feature-testing-based approach impractical for OpenStack?

It shouldn't be carnal knowledge. If we are talking about building an
ecosystem, we need to be really explicit about "this version gives you
this contract". We have protocol versioning all through our internal RPC
mechanisms, because if we didn't, you'd need to write an order of
magnitude more code to have an application work.

You can always give your user a terrible contract, and make them do a
ton of extra work on their side to figure out what's available. See...
present day. But the firm belief is we should do better than that if we
want to encourage an application ecosystem.
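
To make "this version gives you this contract" concrete, here's a rough
sketch in Python of how a client can ask a cloud up front which range of
the contract it speaks. The requests library and the example URL are my
own choices for illustration; the "version" / "min_version" fields are
what the v2.1 version document exposes.

    import requests

    def microversion_range(compute_root):
        # GET on the unversioned compute root lists the API versions the
        # cloud exposes; the v2.1 entry carries the min and max
        # microversion it will honor.
        doc = requests.get(compute_root).json()
        for ver in doc["versions"]:
            if ver["id"] == "v2.1":
                return ver.get("min_version"), ver.get("version")
        return None, None

    # e.g. ("2.1", "2.3") on a Kilo cloud, (None, None) on an older one
    print(microversion_range("http://cloud.example.com:8774/"))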

> "Then she runs her code at against another cloud, which runs a version
> of Nova that predates this change. She's now effectively gone back in
> time. Her code now returns thousands of records instead of 1, and she's
> terribly confused why. She also has no way to figure out if random cloud
> Z is going to support this feature or not. So the only safe thing to do
> is implement the filtering client side instead, which means the server
> side filtering actually gained her very little. It's not something she
> can ever determine will work ahead of time. It's an API that is
> untrustworthy, so it's something that's best avoided."
> 
> Except that she still has to do all this anyway - i.e. write the
> client-side filtering, and figure out when to use it instead of
> server-side - even if there was an API version change accompanying the
> filtering feature.  Doesn't she?

Not if she's comfortable with a minimum supported version in her code.
People abandon old systems all the time. No one is still writing
IE6-compatible JavaScript. The important thing is that an untrustworthy
API, i.e. one that could seemingly return different results at random,
is terrible. It makes developers pull out their hair, curse your name,
and start figuring out whether they can get off your platform entirely.
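
Being comfortable with a minimum supported version can literally be a
couple of lines of defensive code. A rough sketch (the floor value and
helper names here are made up for illustration):

    MIN_MICROVERSION = (2, 3)   # hypothetical floor this app requires

    def parse_microversion(mv):
        # "2.10" parses as (2, 10): the minor part is a counter, not a
        # decimal fraction.
        major, minor = mv.split(".")
        return int(major), int(minor)

    def cloud_is_usable(max_microversion):
        # Refuse up front rather than silently falling back to doing all
        # the filtering client side on clouds that predate the feature.
        return parse_microversion(max_microversion) >= MIN_MICROVERSION

Run that against the range the cloud advertises and she knows, before
issuing any real API calls, whether server-side filtering is trustworthy
there.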

Really stable contracts are a key feature in building an ecosystem above
you. It's why your house has a concrete slab under it, not just a pile
of sand.

> The difference is just between making the switch based on a version
> number, and making it based on detected feature support.
> 
> "If you want features in the 2.3 microversion, ..."
> 
> I especially appreciate this part, as I've been seeing all the chat
> about microversions go past, and not really understanding it.
> 
> FWIW, though - and maybe this is just me - when I hear "microversion",
> I'm thinking of the "Z" in an "X.Y.Z" version number.  (With X = major
> and Y = minor.)  So it's counterintuitive for me that "2.3" is a
> microversion; it just sounds like a perfectly normal major/minor version
> number.  Are 2.0 and 2.1 microversions too?

So, we used the word "microversion" in contrast to previous versioning
in OpenStack which required a new endpoint. The reality of the
implementation is we're working with a monotonically increasing counter.
Y is going to increase forever.

This *is not* semver. There is no semantic meaning / value judgement
attached to each change. It is better thought of as a sequence number.
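
In practice a client just states the sequence number it was written
against on every request. A minimal sketch, assuming the Kilo-era header
name and a made-up endpoint (auth headers omitted):

    import requests

    # Ask for the 2.3 contract explicitly; a cloud that can't honor it
    # should refuse (406) instead of silently behaving differently.
    headers = {"X-OpenStack-Nova-API-Version": "2.3"}
    resp = requests.get("http://cloud.example.com:8774/v2.1/servers",
                        headers=headers)
    resp.raise_for_status()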

> But this is just bikeshedding really, so feel free to ignore...
> 
> "without building a giant autoconf-like system"
> 
> Aha, so you probably did consider that option, then. :-)

Just to walk through the thought exercise of an autoconf-like system for
the API: in order to fully explore an API you are looking at basically
running something like tempest against it. That means on the order of
20K API calls, building and destroying about 100 guests, volumes,
objects, etc. That kind of run would probably take a couple of hours.
And when you are building against a public cloud, every API call has a
cost.

Also, it's a service. It can be upgraded at any time. Without something
clear like a microversion declaration, you'd never know the cloud had
updated and started behaving differently; your application would just
stop working.

Also, find me more than a handful of application developers that like
writing autoconf tests. :)

	-Sean

-- 
Sean Dague
http://dague.net


