[openstack-dev] [keystone] keystoneauth version auto discovery for internal endpoints in queens

Monty Taylor mordred at inaugust.com
Sat May 12 15:00:54 UTC 2018


On 05/11/2018 03:37 PM, Vlad Gusev wrote:
> Hello.
> 
> We faced a bug in keystoneauth which didn't exist before Queens.

Sorry about that.

> In our OpenStack deployments we use urls like http://controller:5000/v3 
> for internal and admin endpoints and urls like 
> https://api.example.org/identity/v3 for public endpoints.

Thank you for using suburl deployment for your public interface and not 
the silly ports!!!

> We set option 
> public_endpoint in [default] section of the 
> keystone.conf/nova.conf/cinder.conf/glance.conf/neutron.conf. For 
> example, for keystone it is 
> 'public_endpoint=https://api.example.org/identity/'.
> 
> Since keystoneauth 3.2.0 or commit 
> https://github.com/openstack/keystoneauth/commit/8b8ff830e89923ca6862362a5d16e496a0c0093c 
> all internal client requests to the internal endpoints (for example, 
> openstack server list from controller node) fail with 404 error, because 
> it tries to do auto discovery at the http://controller:5000/v3. It gets 
> {"href": "https://api.example.org/identity/v3/", "rel": "self"} because 
> of the public_endpoint option, and then in function 
> _combine_relative_url() (keystoneauth1/discover.py:405) keystoneauth 
> combines http://controller:5000/ with the path from public href. So 
> after auto discovery attempt it goes to the wrong path 
> http://controller:5000/identity/v3/

Ok. I'm going to argue that there are bugs on both the server side AND 
in keystoneauth. I believe I know how to fix the keystoneauth one, but 
let me describe why I think the server is broken as well, and then we 
can figure out how to fix that. I'm going to describe it in slightly 
excruciating detail, just to make sure we're all on the same page about 
the mechanics that may be going on behind the scenes.

The user has said:

   I want the internal interface of v3 of the identity service

First, the identity service has to be found in the catalog. Looking in 
the catalog, we find this:

       {
         "endpoints": [
           {
             "id": "4deb4d0504a044a395d4480741ba628c",
             "interface": "public",
             "region": "RegionOne",
             "url": "https://api.example.com/identity"
           },
           {
             "id": "012322eeedcd459edabb4933021112bc",
             "interface": "internal",
             "region": "RegionOne",
             "url": "http://controller:5000/v3"
           }
         ],
         "name": "keystone",
         "type": "identity"
       },

We've found the entry for 'identity' service, and looking at the 
endpoints we see that the internal endpoint is:

   http://controller:5000/v3

The next step is version discovery, because the user wants version 3 of 
the api. (I'm intentionally skipping possible optimizations that can be 
applied here.)

To do version discovery, one does a GET on the endpoint found in the 
catalog, so GET http://controller:5000/v3. That returns:

{
   "versions": {
     "values": [
       {
         "status": "stable",
         "updated": "2016-04-04T00:00:00Z",
         "media-types": [
           {
             "base": "application/json",
             "type": "application/vnd.openstack.identity-v3+json"
           }
         ],
         "id": "v3.6",
         "links": [
           {
             "href": "https://api.example.com/identity/v3/",
             "rel": "self"
           }
         ]
       }
     ]
   }
}
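As a hedged illustration (this is not keystoneauth's implementation, just a 
sketch of what "following" the document means), a client extracts the "self" 
href for the major version it wants from a document shaped like the one above:

```python
import json

# Illustrative sketch (not keystoneauth's actual code): pull the "self"
# href for a requested major version out of a Keystone-style version
# discovery document.
def find_self_href(doc, major="v3"):
    for version in doc["versions"]["values"]:
        # "id" is e.g. "v3.6"; match on the major version prefix.
        if version["id"] == major or version["id"].startswith(major + "."):
            for link in version["links"]:
                if link["rel"] == "self":
                    return link["href"]
    return None

discovery_doc = json.loads("""
{"versions": {"values": [{"status": "stable",
  "id": "v3.6",
  "links": [{"href": "https://api.example.com/identity/v3/",
             "rel": "self"}]}]}}
""")

print(find_self_href(discovery_doc))
# https://api.example.com/identity/v3/
```

The href returned here is exactly the value the client then trusts, which is 
why the server putting the public url in the internal document is a problem.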

Here is the server-side bug. A GET on the discovery document on the 
internal endpoint returned an endpoint for the public interface. That is 
incorrect information. GET http://controller:5000/v3 should return either:

{
   "versions": {
     "values": [
       {
         "status": "stable",
         "updated": "2016-04-04T00:00:00Z",
         "media-types": [
           {
             "base": "application/json",
             "type": "application/vnd.openstack.identity-v3+json"
           }
         ],
         "id": "v3.6",
         "links": [
           {
             "href": "http://controller:5000/v3/",
             "rel": "self"
           }
         ]
       }
     ]
   }
}

or

{
   "versions": {
     "values": [
       {
         "status": "stable",
         "updated": "2016-04-04T00:00:00Z",
         "media-types": [
           {
             "base": "application/json",
             "type": "application/vnd.openstack.identity-v3+json"
           }
         ],
         "id": "v3.6",
         "links": [
           {
             "href": "/v3/",
             "rel": "self"
           }
         ]
       }
     ]
   }
}

That's because the discovery documents are maps to what the user wants. 
The user needs to be able to follow them automatically.

NOW - there is also a keystoneauth bug in play here that, combined with 
this server-side bug, has produced the issue you are seeing.

That is in the way we do the catalog / discovery URL join.

First of all - we do the catalog / discovery URL join because of a 
frequently occurring deployment bug in the other direction. That is, it 
is an EXTREMELY common misconfiguration for the discovery url to return 
the internal url (this is what happens if public_endpoint is not set).

In order to deal with that, we take the url from the catalog (which we 
know is valid for the given interface) and do the url join you reported 
between it and the url from the discovery document to produce a working url.
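To sketch the failure mode (an illustration with urllib, not the actual 
_combine_relative_url() code): taking the scheme and host from the catalog 
url but the path from the discovery document's self link produces a path 
that only exists on the public interface:

```python
from urllib.parse import urljoin, urlsplit

# Illustration only, not keystoneauth's actual code: keep the catalog
# entry's scheme/host, but take the path from the discovery document.
catalog_url = "http://controller:5000/"
discovery_href = "https://api.example.com/identity/v3/"

# Join the discovery path onto the catalog base. The public path
# segment "identity" survives, but it does not exist on the internal host.
combined = urljoin(catalog_url, urlsplit(discovery_href).path)
print(combined)  # http://controller:5000/identity/v3/
```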

This is, as you can see, not doing the correct thing if the catalog url 
and the discovery url have different paths.

I believe we can fix this to be more robust and handle both deployment 
issues if, instead of doing url joining as we do now, we use the logic 
we have elsewhere to pop project_id and version from a url and then put 
them back in place. If we apply that here, what we'd do is the 
following:

catalog_url is http://controller:5000/v3/
discovery_url is https://api.example.com/identity/v3/

decompose catalog url:
   catalog_project_id is None
   catalog_version_segment is "v3"
   catalog_base_url is "http://controller:5000"

decompose discovery url:
   discovery_project_id is None
   discovery_version_segment is "v3"
   discovery_base_url is "https://api.example.com/identity"

combine catalog and discovery url for discovered versioned endpoint

   if catalog_project_id:
     {catalog_base_url}/{discovery_version_segment}/{catalog_project_id}/
   else:
     {catalog_base_url}/{discovery_version_segment}/

which would produce http://controller:5000/v3/

That may seem like a lot to go through to wind up back at the catalog 
url, but the process itself needs to work even when the url in the 
catalog is not the url the user should use. For instance, if you put 
the unversioned endpoint in your catalog for the internal interface, 
the same as you do for the public one:

catalog_url is http://controller:5000/
discovery_url is https://api.example.com/identity/v3/

decompose catalog url:
   catalog_project_id is None
   catalog_version_segment is None
   catalog_base_url is "http://controller:5000"

decompose discovery url:
   discovery_project_id is None
   discovery_version_segment is "v3"
   discovery_base_url is "https://api.example.com/identity"

combine catalog and discovery url for discovered versioned endpoint

   if catalog_project_id:
     {catalog_base_url}/{discovery_version_segment}/{catalog_project_id}/
   else:
     {catalog_base_url}/{discovery_version_segment}/

which would produce http://controller:5000/v3/

The process in general is also required for services like cinder that 
still require project_id in the urls and put that in the catalog, 
because otherwise the discovery endpoint is not actually usable:

catalog_url is http://cinder:5000/v2/123456/
discovery_url is https://api.example.com/block-storage/v3/

decompose catalog url:
   catalog_project_id is "123456"
   catalog_version_segment is "v2"
   catalog_base_url is "http://cinder:5000"

decompose discovery url:
   discovery_project_id is None
   discovery_version_segment is "v3"
   discovery_base_url is "https://api.example.com/block-storage"

combine catalog and discovery url for discovered versioned endpoint

   if catalog_project_id:
     {catalog_base_url}/{discovery_version_segment}/{catalog_project_id}/
   else:
     {catalog_base_url}/{discovery_version_segment}/

which would produce http://cinder:5000/v3/123456/
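The three walkthroughs above can be sketched in Python. This is a hedged 
sketch of the proposed logic, with illustrative names (decompose, combine) 
rather than keystoneauth's actual functions, and it assumes the project_id 
segment always directly follows the version segment:

```python
import re
from urllib.parse import urlsplit, urlunsplit

# Hedged sketch of the proposed fix; decompose() and combine() are
# illustrative names, not keystoneauth's API.
VERSION_RE = re.compile(r"^v\d+(\.\d+)?$")

def decompose(url):
    """Split a url into (base_url, version_segment, project_id)."""
    parts = urlsplit(url)
    version = None
    project_id = None
    base_segments = []
    for seg in parts.path.split("/"):
        if not seg:
            continue
        if version is None and VERSION_RE.match(seg):
            version = seg
        elif version is not None and project_id is None:
            # Assumption: the project id directly follows the version.
            project_id = seg
        else:
            base_segments.append(seg)
    base_path = "/" + "/".join(base_segments) if base_segments else ""
    return (urlunsplit((parts.scheme, parts.netloc, base_path, "", "")),
            version, project_id)

def combine(catalog_url, discovery_url):
    """Rebuild the endpoint from the catalog base plus discovered version."""
    catalog_base, _, catalog_project_id = decompose(catalog_url)
    _, discovery_version, _ = decompose(discovery_url)
    if catalog_project_id:
        return f"{catalog_base}/{discovery_version}/{catalog_project_id}/"
    return f"{catalog_base}/{discovery_version}/"

print(combine("http://cinder:5000/v2/123456/",
              "https://api.example.com/block-storage/v3/"))
# http://cinder:5000/v3/123456/
```

Run against all three cases above, this produces http://controller:5000/v3/ 
for both identity examples and http://cinder:5000/v3/123456/ for cinder.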

In any case - if you haven't given up reading this email by now ... I 
believe we can fix the issue you're seeing in keystoneauth - and I'm 
sorry for our invalid assumption about matching paths.

I do think that we should think a bit more systemically about how to 
have discovery documents return the correct information in the first 
place so that client-side hacks such as these are not needed.

> Before this commit openstackclient made auth request to the 
> https://api.example.org/identity/v3/auth/tokens (and it worked, because 
> in our deployment internal services and console clients can access this 
> public url). At best, we expect openstackclient to always go to 
> http://controller:5000/v3/
> 
> This problem could partially be solved by explicitly passing the public 
> --os-auth-url https://api.example.org/identity/v3 to the 
> console clients even if we want to use internal endpoints.
> 
> I found a similar bug in launchpad, but it hasn't received any 
> attention: https://bugs.launchpad.net/keystoneauth/+bug/1733052
> 
> What could be done with this behavior of keystoneauth auto discovery?
> 
> - Vlad
> 
> 



