On Mon, 2025-08-18 at 13:28 +0200, Niklas Schwarz wrote:
Hey there
I have observed that the keystone logs are flooded with error messages and stack traces saying that resources were not found. When I investigated the issue I noticed that the resources do exist, but that the behavior of the openstack-client is a bit confusing. Let me describe how the openstack-client interacts with the keystone API. The openstack CLI, which uses the openstack-client, can show a user either by passing the ID or the name of the user. When the command is executed with the name of the user, the following requests are made:

1. GET /v3/users/<name_or_id>
2. GET /v3/users?name=<name_or_id>
The first request goes to the endpoint that expects the route parameter to be a user ID, so keystone runs a database query with the name as the ID, finds no user, and that is what produces the error message and the stack trace in the keystone logs. Because no user was found, the openstack-client issues a second request with the name as a query parameter instead of a route parameter, which makes keystone run the database query by username. On some deployments this puts additional load on the database and the API, load that could be avoided simply by sending the correct request to the API in the first place instead of "guessing", trying both variants and hoping one of them answers.
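To make that concrete, here is a minimal sketch (not the actual openstack-client code) of the two-step lookup; the keystone endpoint and the token are placeholders:

    import requests

    KEYSTONE = "https://keystone.example.com/v3"   # placeholder endpoint
    HEADERS = {"X-Auth-Token": "<token>"}          # placeholder token

    def show_user(name_or_id):
        # 1. Try the value as an ID: GET /v3/users/<name_or_id>. If the value
        #    is actually a name, keystone answers 404 and logs the "not found"
        #    error (plus the stack trace) on the server side.
        resp = requests.get(f"{KEYSTONE}/users/{name_or_id}", headers=HEADERS)
        if resp.status_code == 200:
            return resp.json()["user"]

        # 2. Fall back to a filtered list: GET /v3/users?name=<name_or_id>.
        resp = requests.get(f"{KEYSTONE}/users", headers=HEADERS,
                            params={"name": name_or_id})
        resp.raise_for_status()
        users = resp.json()["users"]
        if len(users) != 1:
            raise LookupError(f"user {name_or_id!r} not found or ambiguous")
        return users[0]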
Keystone is also hit indirectly: the openstack-client issues requests to the keystone API when listing other resources for a specific project, e.g. openstack network list --project <name_or_id>. The client first has to resolve the project, and in the worst case that alone is two requests.
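In terms of openstacksdk calls, the flow is roughly the following (the cloud name is a placeholder); the find_project() call is where the extra keystone request(s) come from:

    import openstack

    conn = openstack.connect(cloud="mycloud")  # placeholder cloud name

    # find_project() tries GET /v3/projects/<value> first and, if that 404s,
    # falls back to listing by name, so passing a name can mean two keystone
    # requests before neutron is contacted at all.
    project = conn.identity.find_project("my-project", ignore_missing=False)

    # Only then can the networks be listed, filtered by the resolved ID.
    for network in conn.network.networks(project_id=project.id):
        print(network.name)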
In my opinion these multiple requests to the keystone API (or any other API) can be avoided, either by checking on the client side whether the passed parameter is an ID, or by handling the passed route parameter accordingly in the keystone API and executing the correct database query there.
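To illustrate the client-side option, here is a rough sketch (an illustration only, not the actual change [1]) that only tries the "by ID" request when the value actually looks like an ID, here a UUID, and otherwise goes straight to the "by name" query. The caveats discussed further down (not every service uses UUIDs, and a UUID can also be a name) still apply:

    import uuid

    def looks_like_uuid(value):
        """Return True when value parses as a UUID (with or without dashes)."""
        try:
            uuid.UUID(value)
            return True
        except ValueError:
            return False

    def user_lookup_url(name_or_id, base="https://keystone.example.com/v3"):
        """Pick a single request instead of trying both."""
        if looks_like_uuid(name_or_id):
            return f"{base}/users/{name_or_id}"    # treat the value as an ID
        return f"{base}/users?name={name_or_id}"   # treat the value as a name

    print(user_lookup_url("2f1b1a3c6cde4f35b9a4a79f1a1a2b3c"))  # .../users/<id>
    print(user_lookup_url("alice"))                             # .../users?name=alice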
I decided to implement a change [1] on the client side and have adjusted the openstack-client accordingly.
As Artem pointed out in the review, in some services there are places where an ID cannot reliably be recognized as such, so I wanted to ask / was asked to bring the issue to the mailing list, since multiple projects might be involved in a change that reduces the calls to the API (when handled on the client side) or to the database (when handled on the server side) for lookups by ID or name.
Not an answer, but rather some additional context/opinions.

Firstly, I don't believe keystone should be emitting tracebacks for invalid tokens. I'm sure there are historical reasons for this, but it's not an _exceptional_ case, and other services don't raise exceptions for failed validation. For example, if I try to create a flavor with ram=0, I see the following in the nova-api logs:

  DEBUG nova.api.openstack.wsgi [None req-b860d230-75f2-400b-9cce-0eca9e842609 admin admin] Calling method '<bound method VersionsV2.index of <nova.api.openstack.compute.versions.VersionsV2 object at 0x7f3dda6bb1a0>>' {{(pid=2891882) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:552}}
  INFO nova.api.openstack.requestlog [None req-b860d230-75f2-400b-9cce-0eca9e842609 admin admin] 10.45.225.11 "GET /compute/v2.1" status: 200 len: 390 microversion: 2.1 time: 0.002868
  [pid: 2891882|app: 0|req: 54/108] 10.45.225.11 () {58 vars in 1003 bytes} [Mon Aug 18 16:06:07 2025] GET /compute/v2.1 => generated 390 bytes in 3 msecs (HTTP/1.1 200) 9 headers in 357 bytes (1 switches on core 0)
  DEBUG nova.api.openstack.wsgi [None req-044fa668-13b8-4992-839f-67e884e540f2 admin admin] Action: 'create', calling method: <bound method FlavorsController.create of <nova.api.openstack.compute.flavors.FlavorsController object at 0x7f3dd9f47c20>>, body: {"flavor": {"name": "test-flavor", "OS-FLV-EXT-DATA:ephemeral": 0, "id": null, "disk": 0, "rxtx_factor": 1.0, "vcpus": 0, "os-flavor-access:is_public": true, "ram": 0, "swap": 0}} {{(pid=2891879) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:550}}
  DEBUG nova.api.openstack.wsgi [None req-044fa668-13b8-4992-839f-67e884e540f2 admin admin] Returning 400 to user: Invalid input for field/attribute ram. Value: 0. 0 is less than the minimum of 1 {{(pid=2891879) __call__ /opt/stack/nova/nova/api/openstack/wsgi.py:909}}
  INFO nova.api.openstack.requestlog [None req-044fa668-13b8-4992-839f-67e884e540f2 admin admin] 10.45.225.11 "POST /compute/v2.1/flavors" status: 400 len: 124 microversion: 2.61 time: 0.021014
  [pid: 2891879|app: 0|req: 55/109] 10.45.225.11 () {68 vars in 1355 bytes} [Mon Aug 18 16:06:07 2025] POST /compute/v2.1/flavors => generated 124 bytes in 21 msecs (HTTP/1.1 400) 9 headers in 383 bytes (1 switches on core 0)

IMO this is enough to debug failed requests. I'm sure there are good reasons why keystone logs tracebacks for these exceptions, but if not, removing them could be an option to take some noise out of your logs.

Secondly, the "find_*" pattern as implemented has two big issues: the one you've described, and the fact that non-admin users can't list resources from other projects by default. The latter manifests itself in things like adding members to an image in glance, where you shouldn't need access to the project (and by default you won't, unless you're an admin).

Short-cutting obvious UUIDs seems like the natural fix and could be done in the SDK by e.g. providing a Resource.id_is_uuid attribute and a corresponding change to the Proxy._find method to respect it. However, as we've pointed out, (a) not all projects insist on UUIDs for resource IDs, meaning the `id_is_uuid` attribute would need to be set on a resource-by-resource basis, and (b) you can't be sure that a UUID is actually the ID rather than the name, since a UUID can also be used as a name.

I'd like to see if anyone has a clever suggestion for how to avoid this.

Cheers,
Stephen
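For what it's worth, a very rough sketch of what the id_is_uuid idea could look like; note that id_is_uuid does not exist in the SDK today, and the real Proxy._find is considerably more involved than this:

    import uuid

    class Resource:
        # Opt-in flag a resource class could set when its service is known to
        # always use UUIDs for IDs (caveats (a) and (b) above still apply).
        id_is_uuid = False

    class User(Resource):
        id_is_uuid = True

    def _is_uuid(value):
        try:
            uuid.UUID(value)
            return True
        except ValueError:
            return False

    def find(resource_cls, name_or_id, fetch_by_id, list_by_name):
        """Simplified stand-in for a find() that respects id_is_uuid.

        fetch_by_id and list_by_name are callables doing the actual requests.
        """
        if resource_cls.id_is_uuid and not _is_uuid(name_or_id):
            # The value cannot be an ID, so skip the GET-by-ID attempt.
            return list_by_name(name_or_id)
        result = fetch_by_id(name_or_id)
        if result is not None:
            return result
        return list_by_name(name_or_id)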
Are there any plans to tackle this issue, or any other options and opinions? The post by Marc [2] might also be a result of the openstack-client issuing multiple requests to the API and polluting the logs, so in some cases this is a real issue.
Best regards
Niklas
[1] https://review.opendev.org/c/openstack/python-openstackclient/+/933644
[2] https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack....