[Openstack] metadata service not working for VMs

Xin Zhao xzhao at bnl.gov
Fri Feb 7 19:14:44 UTC 2014


One more piece of information:

The metadata-agent.log file on the network node shows these error
messages:

2014-02-07 10:59:28    ERROR [quantum.agent.metadata.agent] Unexpected 
error.
Traceback (most recent call last):
   File 
"/usr/lib/python2.6/site-packages/quantum/agent/metadata/agent.py", line 
86, in __call__
     instance_id = self._get_instance_id(req)
   File 
"/usr/lib/python2.6/site-packages/quantum/agent/metadata/agent.py", line 
116, in _get_instance_id
     fixed_ips=['ip_address=%s' % remote_address])['ports']
   File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", 
line 107, in with_params
     ret = self.function(instance, *args, **kwargs)
   File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", 
line 255, in list_ports
     **_params)
   File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", 
line 996, in list
     for r in self._pagination(collection, path, **params):
   File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", 
line 1009, in _pagination
     res = self.get(path, params=params)
   File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", 
line 982, in get
     headers=headers, params=params)
   File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", 
line 967, in retry_request
     headers=headers, params=params)
   File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", 
line 904, in do_request
     resp, replybody = self.httpclient.do_request(action, method, body=body)
   File "/usr/lib/python2.6/site-packages/quantumclient/client.py", line 
137, in do_request
     self.authenticate()
   File "/usr/lib/python2.6/site-packages/quantumclient/client.py", line 
193, in authenticate
     raise exceptions.Unauthorized(message=body)
Unauthorized: [Errno 111] ECONNREFUSED
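
A note on the last line of that traceback: the Unauthorized/ECONNREFUSED
is raised from quantumclient's authenticate() call, i.e. the metadata
agent could not connect to keystone at the auth_url configured in
/etc/quantum/metadata_agent.ini. A quick check from the network node
(example only; use whatever host auth_url actually points at):

   curl -sv http://<ip of controller on the management network>:35357/v2.0/

If that connection is refused while the same URL works from the
controller, the agent is pointed at a host where keystone isn't
listening (for example localhost on the network node).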

Thanks,
Xin

On 2/7/2014 11:57 AM, Xin Zhao wrote:
> Hello,
>
> I have an issue with accessing metadata from instances.
>
> I am running a Grizzly testbed using quantum/OVS networking. There is
> one controller, one network node, and several compute nodes. I don't
> have any HA setup in this testbed.
>
> From the VM instance, I cannot access the metadata service; below is
> the output:
>
> [root at host-172-16-0-15 ~]# curl -v http://169.254.169.254
> * About to connect() to 169.254.169.254 port 80 (#0)
> *   Trying 169.254.169.254... connected
> * Connected to 169.254.169.254 (169.254.169.254) port 80 (#0)
> > GET / HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 
> NSS/3.12.9.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
> > Host: 169.254.169.254
> > Accept: */*
> >
> < HTTP/1.1 500 Internal Server Error
> < Content-Length: 206
> < Content-Type: text/html; charset=UTF-8
> < Date: Fri, 07 Feb 2014 15:59:28 GMT
> <
> <html>
>  <head>
>   <title>500 Internal Server Error</title>
>  </head>
>  <body>
>   <h1>500 Internal Server Error</h1>
>   Remote metadata server experienced an internal server error.<br /><br />
>
>
> From the instance, I can telnet to 169.254.169.254:80 just fine.
>
> From the controller node, I see the following error messages from 
> /var/log/nova/metadata-api.log:
>
> 2014-01-28 15:57:47.246 12119 INFO nova.network.driver [-] Loading 
> network driver 'nova.network.linux_net'
> 2014-01-28 15:57:47.307 12119 CRITICAL nova [-] Cannot resolve 
> relative uri 'config:api-paste.ini'; no relative_to keyword argument given
> 2014-01-28 15:57:47.307 12119 TRACE nova Traceback (most recent call 
> last):
> 2014-01-28 15:57:47.307 12119 TRACE nova   File 
> "/usr/bin/nova-api-metadata", line 44, in <module>
> 2014-01-28 15:57:47.307 12119 TRACE nova     server = 
> service.WSGIService('metadata')
> 2014-01-28 15:57:47.307 12119 TRACE nova   File 
> "/usr/lib/python2.6/site-packages/nova/service.py", line 598, in __init__
> 2014-01-28 15:57:47.307 12119 TRACE nova     self.app = 
> self.loader.load_app(name)
> 2014-01-28 15:57:47.307 12119 TRACE nova   File 
> "/usr/lib/python2.6/site-packages/nova/wsgi.py", line 482, in load_app
> 2014-01-28 15:57:47.307 12119 TRACE nova     return 
> deploy.loadapp("config:%s" % self.config_path, name=name)
> 2014-01-28 15:57:47.307 12119 TRACE nova   File 
> "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", 
> line 247, in loadapp
> 2014-01-28 15:57:47.307 12119 TRACE nova     return loadobj(APP, uri, 
> name=name, **kw)
> 2014-01-28 15:57:47.307 12119 TRACE nova   File 
> "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", 
> line 271, in loadobj
> 2014-01-28 15:57:47.307 12119 TRACE nova global_conf=global_conf)
> 2014-01-28 15:57:47.307 12119 TRACE nova   File 
> "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", 
> line 296, in loadcontext
> 2014-01-28 15:57:47.307 12119 TRACE nova global_conf=global_conf)
> 2014-01-28 15:57:47.307 12119 TRACE nova   File 
> "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", 
> line 308, in _loadconfig
> 2014-01-28 15:57:47.307 12119 TRACE nova     "argument given" % uri)
> 2014-01-28 15:57:47.307 12119 TRACE nova ValueError: Cannot resolve 
> relative uri 'config:api-paste.ini'; no relative_to keyword argument given
> 2014-01-28 15:57:47.307 12119 TRACE nova
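>
> (For reference: this ValueError usually means nova-api-metadata
> resolved api_paste_config as a relative path. A minimal nova.conf
> sketch, assuming the usual RDO layout; adjust the path if
> api-paste.ini lives elsewhere:
>
>   [DEFAULT]
>   api_paste_config = /etc/nova/api-paste.ini
> )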
>
> Any wisdom on what the problem could be?
>
> Thanks,
> Xin
>
>
>
>
> On 11/20/2013 9:06 PM, Paul Robert Marino wrote:
>> Well there are several ways to set up the nova metadata service.
>>
>> By default the API service provides the metadata service, but it can
>> be broken out separately, in a somewhat counterintuitive way. Usually
>> the nova metadata service runs on the controller node.
>> However, in Folsom (and this may still be the case in Grizzly and
>> Havana) you could only have one instance of the metadata service
>> running at a time. My current config in Grizzly still assumes this
>> limitation, although I haven't checked whether it's still the case.
>> So if you are running redundant controller nodes, you need to disable
>> the metadata service in the nova.conf file on each controller, run
>> the API service on both controllers, and then run the metadata
>> service on only one of them, using an external method to handle
>> failover such as Red Hat clustering HA tools, keepalived, or custom
>> scripts controlled by your monitoring system. A sketch of the
>> nova.conf change is below.
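>>
>> A minimal sketch of what I mean, using the Grizzly nova.conf option
>> names (an example to adapt, not a drop-in config):
>>
>>   # nova.conf on controllers that should NOT serve metadata
>>   enabled_apis = ec2,osapi_compute
>>
>>   # nova.conf on the single node that does run the metadata service
>>   enabled_apis = ec2,osapi_compute,metadata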
>> In my case I'm using keepalived to manage a VIP which is used as the
>> keystone endpoint for nova, so I integrated starting and stopping the
>> nova metadata service into the scripts keepalived calls on a state
>> change, with further assistance from an external check script,
>> executed by Nagios, that attempts an auto-recovery on failure.
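>>
>> For illustration only (the VIP address and the script name are made
>> up), the keepalived side looks roughly like this:
>>
>>   vrrp_instance nova_vip {
>>       state BACKUP
>>       interface eth0
>>       virtual_router_id 51
>>       priority 100
>>       virtual_ipaddress {
>>           192.0.2.10      # example VIP used as the endpoint
>>       }
>>       # start/stop the metadata service as the VIP moves
>>       notify_master "/usr/local/bin/nova-metadata-failover.sh start"
>>       notify_backup "/usr/local/bin/nova-metadata-failover.sh stop"
>>       notify_fault  "/usr/local/bin/nova-metadata-failover.sh stop"
>>   }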
>>
>>
>> -- Sent from my HP Pre3
>>
>> ------------------------------------------------------------------------
>> On Nov 20, 2013 18:06, Xin Zhao <xzhao at bnl.gov> wrote:
>>
>> Some more info:
>>
>> from the router namespace, I can see the metadata service is listening
>> on port 9697, and an NAT rule for it:
>>
>> [root at cldnet01 quantum(keystone_admin)]# ip netns exec qrouter-183f4dda-cb26-4822-af6d-941b4b0831b4 netstat -lpnt
>> Active Internet connections (only servers)
>> Proto Recv-Q Send-Q Local Address    Foreign Address    State     PID/Program name
>> tcp        0      0 0.0.0.0:9697     0.0.0.0:*          LISTEN    2703/python
>>
>> [root at cldnet01 quantum(keystone_admin)]# ip netns exec qrouter-183f4dda-cb26-4822-af6d-941b4b0831b4 iptables -L -t nat
>> ......
>> Chain quantum-l3-agent-PREROUTING (1 references)
>> target     prot opt source     destination
>> REDIRECT   tcp  --  anywhere   169.254.169.254    tcp dpt:http redir ports 9697
>> ......
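>>
>> One more check that can be run (reusing the router UUID from above):
>> hit the proxy from inside the namespace, for example
>>
>>   ip netns exec qrouter-183f4dda-cb26-4822-af6d-941b4b0831b4 \
>>       curl -sv http://127.0.0.1:9697/
>>
>> This won't return real metadata (the proxy normally adds headers when
>> forwarding), but it at least confirms the namespace proxy accepts
>> connections.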
>>
>>
>>
>>
>> On 11/20/2013 5:48 PM, Xin Zhao wrote:
>> > Hello,
>> >
>> > I am installing grizzly with quantum/OVS using
>> > kernel-2.6.32-358.123.2.openstack.el6.x86_64 and
>> > openstack-XXX-2013.1.4-3.
>> > From inside the VM, I can ping 169.254.169.254 (it's available in the
>> > routing table), but curl commands fail with the following errors:
>> >
>> > $>curl http://169.254.169.254
>> > About to connect to 169.254.169.254 port 80 ...
>> > Connection refused
>> >
>> > Does the metadata service run on the controller node or the network
>> > node, and on which port and in which namespace? The VMs can only
>> > talk to the network host via the physical VM network; they don't
>> > have access to the management network.
>> >
>> > Below is the relevant configuration information. One other note: I
>> > still have a DNS issue for the VMs; external DNS and internal DNS
>> > can't work at the same time. If I assign public DNS servers to the
>> > VM virtual subnets, the VMs can resolve external hostnames but not
>> > the names of other VMs inside the same subnet, and if I use the
>> > default internal DNS, the VMs can't resolve external hostnames but
>> > can resolve names within the same VM subnet. I am not sure whether
>> > this is related to the metadata issue; I would think not, since the
>> > metadata command above uses the IP directly...
>> >
>> > Thanks,
>> > Xin
>> >
>> >
>> > on controller node:
>> > nova.conf:
>> > service_neutron_metadata_proxy=true
>> > quantum_metadata_proxy_shared_secret=
>> >
>> > On network node:
>> > dhcp_agent.ini:
>> > enable_isolated_metadata = True
>> > metadata_agent.ini:
>> > [DEFAULT]
>> > auth_url = http://localhost:35357/v2.0
>> > auth_region = RegionOne
>> > admin_tenant_name = %SERVICE_TENANT_NAME%
>> > admin_user = %SERVICE_USER%
>> > admin_password = %SERVICE_PASSWORD%
>> > auth_strategy = keystone
>> >
>> > metadata_proxy_shared_secret =
>> > [keystone_authtoken]
>> > auth_host = <ip of controller on the management network>
>> > admin_tenant_name = services
>> > admin_user = quantum
>> > admin_password = <pwd>
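>> >
>> > For comparison, a fully filled-in metadata_agent.ini would look
>> > something like the following (values are placeholders; in Grizzly
>> > the agent reads these auth settings from [DEFAULT], and
>> > metadata_proxy_shared_secret must match
>> > quantum_metadata_proxy_shared_secret in nova.conf on the controller):
>> >
>> >   [DEFAULT]
>> >   auth_url = http://<ip of controller on the management network>:35357/v2.0
>> >   auth_region = RegionOne
>> >   admin_tenant_name = services
>> >   admin_user = quantum
>> >   admin_password = <pwd>
>> >   nova_metadata_ip = <ip of controller on the management network>
>> >   nova_metadata_port = 8775
>> >   metadata_proxy_shared_secret = <same secret as in nova.conf>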
>> >
>> > The VM internal subnet info:
>> >
>> > +------------------+--------------------------------------------+
>> > | Field            | Value                                      |
>> > +------------------+--------------------------------------------+
>> > | allocation_pools | {"start": "10.0.1.2", "end": "10.0.1.254"} |
>> > | cidr             | 10.0.1.0/24                                |
>> > | dns_nameservers  | 8.8.4.4                                    |
>> > |                  | 8.8.8.8                                    |
>> > | enable_dhcp      | True                                       |
>> > | gateway_ip       | 10.0.1.1                                   |
>> > | host_routes      |                                            |
>> > | id               | 505949ed-30bb-4c5e-8d1b-9ef2745f9455       |
>> > | ip_version       | 4                                          |
>> > | name             |                                            |
>> > | network_id       | 31f9d39b-012f-4447-92a4-1a3b5514b37d       |
>> > | tenant_id        | 22b1956ec62a49e88fb93b53a4f10337           |
>> > +------------------+--------------------------------------------+
>> >
>> >
>
