[Openstack] [Grizzly] VMs not authorized by metadata server

Michaël Van de Borne michael.vandeborne at cetic.be
Sun Apr 28 17:45:05 UTC 2013


I think I'm getting closer here. Whenever a VM requests metadata, the 
quantum-metadata-agent tries to authenticate to keystone. The correct 
credentials for my config are:
admin_tenant_name = service
admin_user = quantum
admin_password = grizzly


*BUT*
in the keystone log, I can see this

2013-04-28 19:36:33    DEBUG [keystone.common.wsgi] ******************** 
REQUEST BODY ********************
2013-04-28 19:36:33    DEBUG [keystone.common.wsgi] {"auth": 
{"tenantName": "service", "passwordCredentials": {"username": "quantum", 
"password": "*password*"}}}
2013-04-28 19:36:33    DEBUG [keystone.common.wsgi]
2013-04-28 19:36:33    DEBUG [keystone.common.wsgi] arg_dict: {}
2013-04-28 19:36:33  WARNING [keystone.common.wsgi] Authorization 
failed. Invalid user / password from 192.168.203.103


This means that whatever password I configure in 
/etc/quantum/metadata_agent.ini, the one actually sent to keystone is 
"password".
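
Out of curiosity, the agent's token request can be reproduced by hand, to see whether keystone accepts the configured credentials at all. A minimal sketch (not part of the original setup; endpoint and credentials are the ones from this thread, so adjust to taste):

```python
# Hypothetical reproduction of the agent's token request. auth_body()
# builds the same JSON shape that keystone logged above; the commented
# curl line shows how to POST it directly to keystone.
import json

def auth_body(tenant, user, password):
    # v2.0 passwordCredentials body, as seen in the keystone debug log
    return {"auth": {"tenantName": tenant,
                     "passwordCredentials": {"username": user,
                                             "password": password}}}

payload = json.dumps(auth_body("service", "quantum", "grizzly"))
print(payload)

# From a node that can reach keystone:
#   curl -s -H 'Content-Type: application/json' -d "$payload" \
#        http://192.168.203.103:35357/v2.0/tokens
# A 401 here would mean the credentials themselves are bad; a token back
# would mean the problem is in what the agent sends, not the credentials.
```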

How can that be? Is it a bug? Has the password been stored persistently 
in the DB? And if so, how can I change it?
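
For reference, the x-instance-id-signature header used in the curl test quoted below can be recomputed from the shared secret: as I understand it, nova expects an HMAC-SHA256 of the instance id keyed with metadata_proxy_shared_secret. A quick sketch, with the values from this thread:

```python
# Recompute the instance-id signature the metadata proxy would send.
# Assumption: HMAC-SHA256 of the instance id, keyed with
# metadata_proxy_shared_secret.
import hashlib
import hmac

shared_secret = "grizzly"
instance_id = "05141f81-04cc-4493-86da-d2c05fd8a2f9"

sig = hmac.new(shared_secret.encode(), instance_id.encode(),
               hashlib.sha256).hexdigest()
print(sig)  # compare with the x-instance-id-signature header being sent
```

If the computed value differs from the header, the two sides are not using the same secret.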

thanks,

m.


Michaël Van de Borne
R&D Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi

Le 28/04/2013 10:35, Michaël Van de Borne a écrit :
> Hi,
>
> 1. yes.
> 2. yes. Moreover, I have to kill it manually, delete the pid file, 
> and then restart the l3-agent, because otherwise it stays alive. There 
> is no error in its log file.
> 3. yes. Here are the corresponding keys for this shared secret:
>
> # on the controller node
> root@leonard:~# cat /etc/nova/nova.conf | grep secret
> quantum_metadata_proxy_shared_secret=grizzly
> # on the network node
> root@rajesh:/var/log/quantum# cat /etc/quantum/metadata_agent.ini | grep secret
> metadata_proxy_shared_secret=grizzly
>
> By the way, I deliberately mismatched the secrets, and I got an error 
> saying that the secrets did not match. So I guess the Unauthorized 
> error I'm getting isn't related to the shared secret.
>
> any other idea?
>
> thanks
>
>
>
> Le 28/04/2013 07:28, Gary Kotton a écrit :
>> On 04/27/2013 12:44 PM, Michaël Van de Borne wrote:
>>> Does anybody have an idea why the nova metadata server rejects the 
>>> VM requests?
>>
>> Hi,
>> Just a few questions:
>> 1. Can you please check /etc/quantum/metadata_agent.ini to see that 
>> you have the correct quantum keystone credentials configured?
>> 2. Can you please make sure that you are running the quantum metadata 
>> proxy?
>> 3. In nova.conf, can you please check that 
>> "service_quantum_metadata_proxy = True" is set?
>> Thanks
>> Gary
>>
>>>
>>>
>>>
>>> Le 26/04/2013 15:58, Michaël Van de Borne a écrit :
>>>> Hi there,
>>>>
>>>> I've installed Grizzly on 3 servers:
>>>> compute (howard)
>>>> controller (leonard)
>>>> network (rajesh)
>>>>
>>>> Namespaces are ON
>>>> Overlapping IPs are ON
>>>>
>>>> When booting, my VMs can reach the metadata server (on the 
>>>> controller node), but it responds with a "500 Internal Server Error".
>>>>
>>>> *Here is the error from the log of nova-api:*
>>>> 2013-04-26 15:35:28.149 19902 INFO nova.metadata.wsgi.server [-] 
>>>> (19902) accepted ('192.168.202.105', 54871)
>>>>
>>>> 2013-04-26 15:35:28.346 ERROR nova.network.quantumv2 
>>>> [req-52ffc3ae-a15e-4bf4-813c-6596618eb430 None None] 
>>>> _get_auth_token() failed
>>>> 2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 
>>>> Traceback (most recent call last):
>>>> 2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 File 
>>>> "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py", 
>>>> line 40, in _get_auth_token
>>>> 2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2     
>>>> httpclient.authenticate()
>>>> 2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 File 
>>>> "/usr/lib/python2.7/dist-packages/quantumclient/client.py", line 
>>>> 193, in authenticate
>>>> 2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2     
>>>> content_type="application/json")
>>>> 2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 File 
>>>> "/usr/lib/python2.7/dist-packages/quantumclient/client.py", line 
>>>> 131, in _cs_request
>>>> 2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2     
>>>> raise exceptions.Unauthorized(message=body)
>>>> 2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 
>>>> Unauthorized: {"error": {"message": "The request you have made 
>>>> requires authentication.", "code": 401, "title": "Not Authorized"}}
>>>> 2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2
>>>> 2013-04-26 15:35:28.347 ERROR nova.api.metadata.handler 
>>>> [req-52ffc3ae-a15e-4bf4-813c-6596618eb430 None None] Failed to get 
>>>> metadata for instance id: 05141f81-04cc-4493-86da-d2c05fd8a2f9
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
>>>> Traceback (most recent call last):
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File 
>>>> "/usr/lib/python2.7/dist-packages/nova/api/metadata/handler.py", 
>>>> line 179, in _handle_instance_id_request
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> remote_address)
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File 
>>>> "/usr/lib/python2.7/dist-packages/nova/api/metadata/handler.py", 
>>>> line 90, in get_metadata_by_instance_id
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> instance_id, address)
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File "/usr/lib/python2.7/dist-packages/nova/api/metadata/base.py", 
>>>> line 417, in get_metadata_by_instance_id
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> return InstanceMetadata(instance, address)
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File "/usr/lib/python2.7/dist-packages/nova/api/metadata/base.py", 
>>>> line 143, in __init__
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> conductor_api=capi)
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File 
>>>> "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", 
>>>> line 359, in get_instance_nw_info
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> result = self._get_instance_nw_info(context, instance, networks)
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File 
>>>> "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", 
>>>> line 367, in _get_instance_nw_info
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> nw_info = self._build_network_info_model(context, instance, networks)
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File 
>>>> "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", 
>>>> line 777, in _build_network_info_model
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> client = quantumv2.get_client(context, admin=True)
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File 
>>>> "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py", 
>>>> line 67, in get_client
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> return _get_client(token=token)
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File 
>>>> "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py", 
>>>> line 49, in _get_client
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> token = _get_auth_token()
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File 
>>>> "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py", 
>>>> line 43, in _get_auth_token
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
>>>> LOG.exception(_("_get_auth_token() failed"))
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> self.gen.next()
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File 
>>>> "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py", 
>>>> line 40, in _get_auth_token
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> httpclient.authenticate()
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File "/usr/lib/python2.7/dist-packages/quantumclient/client.py", 
>>>> line 193, in authenticate
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
>>>> content_type="application/json")
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   
>>>> File "/usr/lib/python2.7/dist-packages/quantumclient/client.py", 
>>>> line 131, in _cs_request
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler     
>>>> raise exceptions.Unauthorized(message=body)
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
>>>> Unauthorized: {"error": {"message": "The request you have made 
>>>> requires authentication.", "code": 401, "title": "Not Authorized"}}
>>>> 2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler
>>>> 2013-04-26 15:35:28.349 19902 INFO nova.api.ec2 [-] 0.198106s 
>>>> 192.168.202.105 GET /2009-04-04/meta-data/instance-id None:None 500 
>>>> [Python-httplib2/0.7.2 (gzip)] text/plain text/plain
>>>> 2013-04-26 15:35:28.349 19902 INFO nova.metadata.wsgi.server [-] 
>>>> 10.0.0.4,192.168.202.105 "GET /2009-04-04/meta-data/instance-id 
>>>> HTTP/1.1" status: 500 len: 229 time: 0.1988521
>>>>
>>>>
>>>> *On the network node, here is the config file for metadata agent:*
>>>> root@rajesh:/var/log/quantum# cat /etc/quantum/metadata_agent.ini
>>>> [DEFAULT]
>>>> debug = True
>>>> auth_url = http://192.168.203.103:35357/v2.0
>>>> auth_region = RegionOne
>>>> admin_tenant_name = service
>>>> admin_user = quantum
>>>> admin_password = grizzly
>>>> nova_metadata_ip = 192.168.202.103
>>>> nova_metadata_port = 8775
>>>> metadata_proxy_shared_secret = grizzly
>>>>
>>>>
>>>> *Here are the metadata keys from the nova.conf of the controller node:*
>>>> service_quantum_metadata_proxy=true
>>>> quantum_metadata_proxy_shared_secret=grizzly
>>>>
>>>>
>>>> *I tried to curl the controller node like this:*
>>>> root@leonard:~# curl -H "x-instance-id: 
>>>> 05141f81-04cc-4493-86da-d2c05fd8a2f9" -H "x-instance-id-signature: 
>>>> 1de544a5fc4c1b8d5fb37441bf4c1360ab63336b58dfb3f4b78d290c5268b4e5" 
>>>> http://192.168.202.103:8775/2009-04-04/meta-data/instance-id
>>>> <html>
>>>>  <head>
>>>>   <title>500 Internal Server Error</title>
>>>>  </head>
>>>>  <body>
>>>>   <h1>500 Internal Server Error</h1>
>>>>   An unknown error has occurred. Please try your request again.<br 
>>>> /><br />
>>>>
>>>>
>>>>
>>>> *I should add that the quantum-ns-proxy log file on the network 
>>>> node remains empty.*
>>>>
>>>>
>>>>
>>>> *Here is the metadata agent log:*
>>>> 2013-04-26 15:37:16  WARNING [quantum.agent.metadata.agent] Remote 
>>>> metadata server experienced an internal server error.
>>>>
>>>>
>>>> Any clue why the request to the metadata server cannot be authorized?
>>>>
>>>>
>>>> thanks,
>>>>
>>>> yours,
>>>>
>>>> mike
>>>>
>>>>
>>>> -- 
>>>> Michaël Van de Borne
>>>> R&D Engineer, SOA team, CETIC
>>>> Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
>>>> www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi
>>>>
>>>>
>>>> _______________________________________________
>>>> Mailing list: https://launchpad.net/~openstack
>>>> Post to     : openstack at lists.launchpad.net
>>>> Unsubscribe : https://launchpad.net/~openstack
>>>> More help   : https://help.launchpad.net/ListHelp
>>>
>>>
>>>
>>
>>
>>
>
>
>
