Reposting Sean's response with my answers
what i really don't want to see going forward is that an api change to nova
would also require the openapi schema to be updated by submitting a patch to
a separate repo.
so i'm fine with having schemas in the nova repo, but i'm not ok with
having a https://opendev.org/openstack/openstack-openapi/ repo
Please do not mix jsonschemas in services and the resulting OpenAPI spec
file.
if we create an openapi spec file i would like to use it for api validation
in the services, and i would like to ensure that the two never get out of
sync, which is why i want the OpenAPI spec file to be part of the
project.
From your comments further down in the message I see where you are heading.
From all the tooling and docs around OpenAPI you can see there are two
different use cases for an OpenAPI spec:
1. generation of the spec from current sources giving you freedom to use
whichever underlying technology you want (Stephen's nova-spec change is also
explicitly going this way)
2. using spec in the sources in the runtime
a) query parameters + body validation
b) request routing
These two actually contradict each other. We need to align on what exactly we
are talking about and what the scope of the change is.
Your direction feels like (2.): having devs write a standalone spec and
letting the service consume it to implement the API at runtime (2b). This
requires that either all services switch to a single framework or we implement
OpenStack-flavoured OpenAPI support in all of the currently used frameworks
(or, of course, we are again stuck with services doing absolutely different things).
My proposal (1.) is to have the spec generated, supporting the current zoo of
frameworks used by services for request routing, without devs needing to deal
with OpenAPI directly. That requires us to ensure the current jsonschemas and
docstrings in the code are complete. Currently I see a tendency in other
open-source projects that the bigger projects go for (1.) (with
Kubernetes and GitHub being examples here) while smaller ones tend to choose
(2.). This can be explained by the additional complexity you get from
maintaining the spec manually for huge services (my current compute OpenAPI
spec is 46511 lines). Last, but not least, there is the evolution of the
OpenAPI standard itself, forcing you to rewrite the spec from time to time.
This approach does not let you consume the OpenAPI spec to route requests, but
it still allows you to have a test suite that validates that your service
behaves as the spec describes.
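As a rough illustration of approach (1.), a generator can wrap the request-body jsonschemas that already live in the service code into OpenAPI operation objects. Everything below (the sample schema, `operation_from_schema`, the title and version strings) is a hypothetical sketch, not actual Nova code:

```python
import json

# Hypothetical stand-in for a request-body jsonschema that already lives in
# the service source tree (real Nova schemas are far larger).
create_server_schema = {
    "type": "object",
    "properties": {
        "server": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        }
    },
    "required": ["server"],
}


def operation_from_schema(schema, description):
    """Wrap an in-code jsonschema into an OpenAPI operation object."""
    return {
        "description": description,
        "requestBody": {
            "required": True,
            "content": {"application/json": {"schema": schema}},
        },
        "responses": {"202": {"description": "Accepted"}},
    }


# Assemble a minimal spec document from the in-code schemas.
spec = {
    "openapi": "3.1.0",
    "info": {"title": "Compute API", "version": "2.95"},
    "paths": {
        "/servers": {
            "post": operation_from_schema(create_server_schema,
                                          "Create a server"),
        }
    },
}
print(json.dumps(spec["paths"]["/servers"]["post"], indent=2))
```

The point of the sketch is that devs keep editing only the jsonschemas; the surrounding OpenAPI scaffolding is produced mechanically at build time.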
If we are at that level of discussion I really would like to see the TC get
more actively involved, so that we all agree and commit to a single way.
Anyway, I was thinking about such an approach as well. One issue, though not
really critical for me, is that having schemas in various repos will make it
harder for the user to actually find them, unless we build a single page
with links to all specs.
In addition to that, with the spec generated by a different tool we are
going to have a workflow issue: the schema is a build artifact and
needs to be hosted somewhere permanently.
yep, we can host it on the project docs site,
besides the rendered api-ref.
we can also commit it to the git repo if we want to.
i feel like not making it the output of a build process also has advantages.
You would have a problem with an API-affecting change which generates the
spec during merge, with that spec then having to be submitted into the repo
as well.
i was not proposing any generation of the schema; instead, modify it
by hand when we are modifying the api, to ensure the two
correlate and capture the intent of the api change.
Ok, but this is exactly what I am proposing: generate the spec from the
source code of services.
My analysis of the source code of services showed that there are many more
things and bugs exposed than devs know/expect/want.
Maintaining OpenAPI by hand is something people struggle with, and it is very
error-prone with respect to OpenStack API nuances.
Forcing devs to manually invoke the generator and include the spec in the
change is definitely a very strong no-go for me; this would be a disaster.
Pushing rendered specs to some artifacts server is necessary.
well, having external schema files is also a strong no-go for me, but i'm
trying not to draw red lines and to find a way forward.
there are three things i want to prevent:
- i never want an external schema definition to result in a client
rejecting an api call that should have been valid for a given cloud.
- i don't want to maintain two different validation schemas for the apis (one
in the project and a second external).
- i don't want to have to do a separate release of a lib just to update the
schema in order to merge a new api microversion, or the reverse: to have to
update an external repo after every api microversion.
if we have the openapi schema defs in nova we should not need to update the
openstack sdk when we have an api update, and would only need to update
openstackclient to use that new api.
Agreed; as said, I am ok with not keeping the rendered spec as part of a
different repository, and was by no means proposing a "lib"-like repo with
releases and so on. A 3rd-party "doc" repo can be used to link specs of
different services together.
One more time clarifying, since I still feel we are not in sync about one
technical aspect: jsonschemas of request bodies are and must be in the service
repository; the service router exposes the list of supported routes/methods
with the corresponding body jsonschemas. The OpenAPI spec combines information
about those exposed routes with their corresponding bodies and descriptions,
and applies a standard for documenting microversions and actions (your router
just shows that there is a "/servers/{server_id}/action" url, while you need
to further define all the different actions and microversions).
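To illustrate the actions point, here is a minimal sketch of how per-action body schemas could be merged into a single `oneOf` request body for the shared action URL. The action schemas and the `action_request_body` helper are simplified placeholders for illustration, not real Nova code:

```python
# Sketch: /servers/{server_id}/action accepts many different request bodies
# (one per action), so a generated spec must merge the per-action jsonschemas
# into a single oneOf. Schemas here are simplified placeholders.
reboot_schema = {
    "type": "object",
    "properties": {
        "reboot": {
            "type": "object",
            "properties": {"type": {"enum": ["HARD", "SOFT"]}},
        }
    },
    "required": ["reboot"],
    "additionalProperties": False,
}
resize_schema = {
    "type": "object",
    "properties": {
        "resize": {
            "type": "object",
            "properties": {"flavorRef": {"type": "string"}},
        }
    },
    "required": ["resize"],
    "additionalProperties": False,
}


def action_request_body(*action_schemas):
    """Combine individual action body schemas into one OpenAPI requestBody."""
    return {
        "required": True,
        "content": {
            "application/json": {"schema": {"oneOf": list(action_schemas)}}
        },
    }


body = action_request_body(reboot_schema, resize_schema)
```

Microversions would add a second dimension on top of this (selecting which schema variant applies at which version), which is exactly the part the router alone cannot express.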
In any case, what is required is a sphinx extension that allows us to
generate docs for the schema, and this repo is primarily about exactly
that (I have no issue renaming it to better indicate that fact).
Just as a matter of temporary demonstration it also hosts the rendered
specs.
# My "ideal" workflow would look like that:
- service developer creates change like above that changes the API
so in my expected workflow the openapi spec file would be a file in the repo
that is updated and reviewed as part of the normal review process
- Zuul job generates the OpenAPI spec for the service respecting the change
and publishes it together with the job artifacts (just like docs right now)
- Once a change is merged the OpenAPI spec artifact is published as a
deliverable (just like the doc job publishes built docs). Maybe the api-ref
job can be modified to publish the openapi spec and rendered HTML together
i also think we should have a job to publish the spec file, but i'm not
sure we need any generation step outside of what we already do for
the api-ref.
- [from here on, absolutely speculative]
- another job kicks in and automatically proposes changes to projects that
consume this OpenAPI spec (e.g. a change toward the SDK is automatically
opened that applies the corresponding change)
this gets a little dicey in terms of licensing, as we currently have a policy
in openstack that all patches must be authored by a human who has signed
the icla, with the exception of dependency-bump patches, so we would have to
make sure that any such patches don't fall foul of the guidance that
maintainers should not merge patches from AI bots.
in general, any generated code not submitted by a human is problematic from a
copyright point of view and an icla point of view, which is partly why i would
prefer the spec update to be in the original patch to the project, not
autogenerated via a job.
This is not AI-generated code. It is similar to the tooling that the RM is
using to open a bunch of changes to the projects (the only difference is who
triggers it: a human or Zuul as a result of another job). Here it is about
rendering pre-defined templates with new values and automatically proposing
that as a change. We control both the templates and the values.
I really do not see the problem with, for example, a job completely under
your control doing a periodic check for newly available dependency versions,
following your update strategy, and proposing an update once all tests are
green. But I definitely do not want to move the discussion into a different
aspect. It is not about that.
Anyway, I hope we can address this aspect by adapting our policies where
required, rather than stopping being productive out of fear of legal issues.
- dependent project developers review this automatic change and
merge/modify it as necessary
=== openstack-rs
https://gtema.github.io/openstack/
This is a Rust monorepo project with a few items:
- SDK for OpenStack built from the rendered OpenAPI specs
- experimental CLI following a very different approach compared to
  python-openstackclient. It is compiled into a single binary (which helps
  those not willing to pull in all of python with its deps and the insanity
  we have experienced lately, or those who build docker containers with only
  OSC inside to bypass the mentioned hell) and is purely auto-generated from
  the specs
- [future] openstack-tui (similar to k9s for Kubernetes)
i was actually playing with rust and pyo3 to try to generate rust data
types from the jsonschema definitions of nova's oslo versioned objects:
https://github.com/SeanMooney/nova-oxide. that's mainly to play with rust
and rust<>python interaction in particular.
i was toying with the idea of writing a daemon that could communicate with
nova components over the rpc bus, effectively a rust implementation of a
nova-compute agent, as a way to learn rust. basically nova-compute would
need effectively a rewrite to remove our use of eventlet anyway, so i was
toying with the idea of: if we are rewriting, should we rewrite in something
other than python.
have you considered using something like pyo3 to provide python bindings
around a rust implementation of the sdk to support both languages from a
single codebase? https://pyo3.rs/v0.20.3/
Yes, it is clearly possible, and I am toying with the idea of integrating
generated code into the openstacksdk (at least in some parts like auth
caching).
I am really thinking of a new openstacksdk2 that is either also generated
from the specs or uses the rust sdk under the hood.
Changing the current OpenStackSDK would require huge breaking changes, so
this is not in scope right now.
i think if we were to add rust to the supported languages for openstack
projects then creating an openstack-sdk-rs project would make sense. however,
rust is not an approved language for use in official projects; go, javascript
and python are. so for this to proceed in the openstack namespace you would
need to formally request that rust be added via
https://github.com/openstack/governance/blob/master/reference/new-language-requirements.rst
and we would have to define a new rust pti, likely targeting the rust stable
release. https://github.com/openstack/governance/tree/master/reference/pti
Ouch, I completely missed when "Go" was added into that list.
Right, and thus I am mentioning it here. At least now I see you are also
thinking about the use of Rust, so there are now at least 2 people interested
and this can be triggered.
If Rust is not accepted then my project will not be an official OpenStack
deliverable. The real question is who wins in that case.
Here it is all really oriented only toward user-facing tooling. I am
very convinced of this project and will continue working on it independent
of the outcome.
Any objections/ideas/comments?
yep, see above.
mainly just decoupling the schemas from the implementation,
i.e. whether we keep the schemas with the python code in the same repo or in
a lib repo provided by each service project.
I do not really get the point here.
- Ideally every service extends the schemas already present in the code
and adds descriptions directly into the code.
- From the schemas in the service code we are able to generate OpenAPI.
- The OpenAPI spec can be used to render the API docs of the service.
- OpenAPI specs can be used to generate client-facing (and possibly also
server-binding) tooling (sdk, cli, etc).
ideally, from my point of view, we would remove our custom jsonschema
validation and replace it with an openapi spec file, then use something like
https://github.com/python-openapi/openapi-core to load the spec file from
disk and use it to add server-side request validation. i would really like
the nova api to be able to do something like
```
from openapi_core import OpenAPI

openapi = OpenAPI.from_file_path('openapi.json')
try:
    result = openapi.unmarshal_request(request)
    handle_request(request)
except ValidationError:
    ...
```
the current json schema validation we have is a custom integration, and i'm
inclined to consider it tech debt and to look to replace it with a better
supported, purpose-built project that we don't have to maintain long term...
This is an absolutely valid approach (as also pointed out above), but in my
eyes it would require much more work and would not work for all OpenStack
services together without enforcing an OpenStack flavor of OpenAPI (just as
you can't enforce all services to use the same wsgi framework as of now, or
to follow the accepted API guidelines).
As described, my proposal goes a less invasive way: it allows us to have
"auto-documented" code, which could be standardized across services while
keeping each service's freedom to use whichever technology it prefers.
That does not block any service from implementing the service relying on the
spec, but it puts certain requirements on the form of the spec.
N.B: what really surprises and disappoints me is the general lack of any
reaction to this thread, as well as to all previous attempts.
Is it so unimportant ?
Is it a fear of debate or just understanding that this is going to be another
heated discussion without consensus?
Is it something too big to do and thus let us not even start doing that?
Has the community "died out" shortly before the elections?
Is it just waiting for somebody to break the "governance/policies/rules" wall
we built between each other?
Or am I the only one who thinks he sees this issue? But that would not match
what others tell me directly.
P.S. I re-read my message and assume some may feel offended by the tone. It is
not really intended, so sorry in advance.
Regards,
Artem