<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Hello,<br>
<br>
I have an issue with accessing metadata from instances. <br>
<br>
I am running a Grizzly testbed using quantum/OVS networking: one
controller, one network node, and several compute nodes. There is
no HA setup in this testbed. <br>
<br>
From a VM instance, I cannot access the metadata service; below is
the curl output: <br>
<br>
[root@host-172-16-0-15 ~]# curl -v <a class="moz-txt-link-freetext" href="http://169.254.169.254">http://169.254.169.254</a><br>
* About to connect() to 169.254.169.254 port 80 (#0)<br>
* Trying 169.254.169.254... connected<br>
* Connected to 169.254.169.254 (169.254.169.254) port 80 (#0)<br>
> GET / HTTP/1.1<br>
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu)
libcurl/7.19.7 NSS/3.12.9.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2<br>
> Host: 169.254.169.254<br>
> Accept: */*<br>
> <br>
< HTTP/1.1 500 Internal Server Error<br>
< Content-Length: 206<br>
< Content-Type: text/html; charset=UTF-8<br>
< Date: Fri, 07 Feb 2014 15:59:28 GMT<br>
< <br>
<html><br>
<head><br>
<title>500 Internal Server Error</title><br>
</head><br>
<body><br>
<h1>500 Internal Server Error</h1><br>
Remote metadata server experienced an internal server
error.<br /><br /><br>
<br>
<br>
From the instance, I can telnet to 169.254.169.254:80 just
fine. <br>
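<br>
Since the TCP connection succeeds and only the HTTP response is a 500, a
check that should narrow down which hop fails is to query the nova metadata
API directly from the network node, bypassing the quantum proxy. A minimal
sketch, assuming the default nova metadata port 8775 and with a placeholder
for the controller's management address:<br>
<br>
# run on the network node; <controller-mgmt-ip> is a placeholder<br>
curl -v http://<controller-mgmt-ip>:8775/<br>
<br>
A "connection refused" there would point at nova-api-metadata itself not
running, while any HTTP reply would point back at the quantum proxy/agent
path.<br>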
<br>
On the controller node, I see the following errors in
/var/log/nova/metadata-api.log:<br>
<br>
2014-01-28 15:57:47.246 12119 INFO nova.network.driver [-] Loading
network driver 'nova.network.linux_net'<br>
2014-01-28 15:57:47.307 12119 CRITICAL nova [-] Cannot resolve
relative uri 'config:api-paste.ini'; no relative_to keyword
argument given<br>
2014-01-28 15:57:47.307 12119 TRACE nova Traceback (most recent
call last):<br>
2014-01-28 15:57:47.307 12119 TRACE nova File
"/usr/bin/nova-api-metadata", line 44, in <module><br>
2014-01-28 15:57:47.307 12119 TRACE nova server =
service.WSGIService('metadata')<br>
2014-01-28 15:57:47.307 12119 TRACE nova File
"/usr/lib/python2.6/site-packages/nova/service.py", line 598, in
__init__<br>
2014-01-28 15:57:47.307 12119 TRACE nova self.app =
self.loader.load_app(name)<br>
2014-01-28 15:57:47.307 12119 TRACE nova File
"/usr/lib/python2.6/site-packages/nova/wsgi.py", line 482, in
load_app<br>
2014-01-28 15:57:47.307 12119 TRACE nova return
deploy.loadapp("config:%s" % self.config_path, name=name)<br>
2014-01-28 15:57:47.307 12119 TRACE nova File
"/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py",
line 247, in loadapp<br>
2014-01-28 15:57:47.307 12119 TRACE nova return loadobj(APP,
uri, name=name, **kw)<br>
2014-01-28 15:57:47.307 12119 TRACE nova File
"/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py",
line 271, in loadobj<br>
2014-01-28 15:57:47.307 12119 TRACE nova
global_conf=global_conf)<br>
2014-01-28 15:57:47.307 12119 TRACE nova File
"/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py",
line 296, in loadcontext<br>
2014-01-28 15:57:47.307 12119 TRACE nova
global_conf=global_conf)<br>
2014-01-28 15:57:47.307 12119 TRACE nova File
"/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py",
line 308, in _loadconfig<br>
2014-01-28 15:57:47.307 12119 TRACE nova "argument given" %
uri)<br>
2014-01-28 15:57:47.307 12119 TRACE nova ValueError: Cannot
resolve relative uri 'config:api-paste.ini'; no relative_to
keyword argument given<br>
2014-01-28 15:57:47.307 12119 TRACE nova <br>
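<br>
If I read the traceback right, PasteDeploy is being handed the relative URI
'config:api-paste.ini', i.e. nova never resolved api-paste.ini to an
absolute path. My guess is that either /etc/nova/api-paste.ini is missing or
unreadable on this node, or api_paste_config needs to point at the full
path, roughly like this in nova.conf (path assumed from the stock layout,
not verified):<br>
<br>
# /etc/nova/nova.conf on the controller (sketch; adjust to wherever api-paste.ini lives)<br>
[DEFAULT]<br>
api_paste_config = /etc/nova/api-paste.ini<br>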
<br>
Any wisdom on what the problem could be? <br>
<br>
Thanks,<br>
Xin<br>
<br>
<br>
<br>
<br>
On 11/20/2013 9:06 PM, Paul Robert Marino wrote:<br>
</div>
<blockquote cite="mid:528d6a99.8a10e00a.5eb7.494d@mx.google.com"
type="cite">Well there are several ways to set up the nova
metadata service.<br>
<br>
By default the API service provides the metadata service. But can
be broken out in a counterintuitive way. Usually the nova metadata
data service runs on the controller node. <br>
However, in Folsom (and this may still be the case in Grizzly and
Havana) you could only have one instance of the metadata service
running at a time. My current Grizzly config still assumes this
limitation, although I haven't checked whether it is still the
case. So if you are running redundant controller nodes, you need
to disable the metadata service in the nova.conf file on each
controller node, run the API service on both controllers, and run
the metadata service on only one of them, using an external method
to handle failover such as the Red Hat clustering HA tools,
keepalived, or custom scripts controlled by your monitoring
system. <br>
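<br>
The nova.conf side of that split looks roughly like this; treat it as a
sketch, since the exact list depends on what else runs on those nodes:<br>
<br>
# on every redundant controller: keep the API but drop the built-in metadata handler<br>
enabled_apis = ec2,osapi_compute<br>
# then run the standalone metadata service (/usr/bin/nova-api-metadata, i.e.<br>
# openstack-nova-metadata-api in the RDO packaging, if I remember the service<br>
# name right) on only one node at a time<br>
<br>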
In my case I'm using keepalived to manage a VIP which is used as
the keystone endpoint for nova, so I integrated starting and
stopping the nova metadata service into the scripts keepalived
calls on a state change, with further assistance from an external
check script, executed by Nagios, which attempts an auto recovery
on failure. <br>
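<br>
The keepalived piece is just the standard notify hooks; something along
these lines, where the VIP, interface, and script path are made-up
placeholders:<br>
<br>
vrrp_instance nova_vip {<br>
    state BACKUP<br>
    interface eth0<br>
    virtual_router_id 51<br>
    priority 100<br>
    virtual_ipaddress {<br>
        192.0.2.10<br>
    }<br>
    # start/stop the metadata service as the VIP moves<br>
    notify_master "/usr/local/bin/nova-metadata-failover.sh start"<br>
    notify_backup "/usr/local/bin/nova-metadata-failover.sh stop"<br>
    notify_fault  "/usr/local/bin/nova-metadata-failover.sh stop"<br>
}<br>
<br>
The failover script is little more than a wrapper around "service
openstack-nova-metadata-api $1" (or whatever the metadata init service is
called in your packaging), and the Nagios check restarts it if it dies on
the active node.<br>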
<span style="font-family:Prelude, Verdana, san-serif;"><br>
<br>
</span><span id="signature">
<div style="font-family: arial, sans-serif; font-size:
12px;color: #999999;">-- Sent from my HP Pre3</div>
<br>
</span><span style="color:navy; font-family:Prelude, Verdana,
san-serif; ">
<hr style="width:75%" align="left">On Nov 20, 2013 18:06, Xin
Zhao <a class="moz-txt-link-rfc2396E" href="mailto:xzhao@bnl.gov"><xzhao@bnl.gov></a> wrote: <br>
<br>
</span>Some more info:
<br>
<br>
From the router namespace, I can see the metadata proxy is
listening <br>
on port 9697, and there is a NAT rule for it:
<br>
<br>
[root@cldnet01 quantum(keystone_admin)]# ip netns exec qrouter-183f4dda-cb26-4822-af6d-941b4b0831b4 netstat -lpnt
<br>
Active Internet connections (only servers)
<br>
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
<br>
tcp        0      0 0.0.0.0:9697     0.0.0.0:*          LISTEN   2703/python
<br>
<br>
[root@cldnet01 quantum(keystone_admin)]# ip netns exec qrouter-183f4dda-cb26-4822-af6d-941b4b0831b4 iptables -L -t nat
<br>
......
<br>
Chain quantum-l3-agent-PREROUTING (1 references)
<br>
target     prot opt source      destination
<br>
REDIRECT   tcp  --  anywhere    169.254.169.254    tcp dpt:http redir ports 9697
<br>
......
<br>
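To poke at the proxy itself, one can also curl it from inside the same
namespace, reusing the router ID and port from the output above. An error
reply is expected here, since 127.0.0.1 is not a known instance address,
but getting any HTTP response at all shows the proxy is alive:
<br>
<br>
ip netns exec qrouter-183f4dda-cb26-4822-af6d-941b4b0831b4 curl -sv http://127.0.0.1:9697/
<br>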
<br>
<br>
<br>
<br>
On 11/20/2013 5:48 PM, Xin Zhao wrote:
<br>
> Hello,
<br>
>
<br>
> I am installing grizzly with quantum/OVS using <br>
> kernel-2.6.32-358.123.2.openstack.el6.x86_64 and <br>
> openstack-XXX-2013.1.4-3.
<br>
> From inside the VM, I can ping 169.254.169.254 (it's available in the <br>
> routing table), but curl commands fail with the following errors:
<br>
>
<br>
> $>curl <a class="moz-txt-link-freetext" href="http://169.254.169.254">http://169.254.169.254</a>
<br>
> About to connect to 169.254.169.254 port 80 ...
<br>
> Connection refused
<br>
>
<br>
> Does the metadata service run on the controller node or the network <br>
> node, on which port, and in which namespace? The VMs can only talk to <br>
> the network host via the physical VM network; they don't have access <br>
> to the management network.
<br>
>
<br>
> Below is the relevant configuration information. One more note: I <br>
> still have a DNS issue for the VMs, in that external DNS and internal <br>
> DNS can't work at the same time. If I assign public DNS servers to the <br>
> VM virtual subnets, VMs can resolve external hostnames but not other <br>
> VMs inside the same subnet; if I use the default internal DNS, VMs <br>
> can't resolve external hostnames but they can resolve names within the <br>
> same VM subnet. I am not sure if this is related to the metadata issue <br>
> or not; I would think not, as the metadata command above uses the IP <br>
> directly...
<br>
>
<br>
> Thanks,
<br>
> Xin
<br>
>
<br>
>
<br>
> on controller node:
<br>
> nova.conf:
<br>
> service_neutron_metadata_proxy=true
<br>
> quantum_metadata_proxy_shared_secret=
<br>
>
<br>
> On network node:
<br>
> dhcp_agent.ini:
<br>
> enable_isolated_metadata = True
<br>
> metadata_agent.ini:
<br>
> [DEFAULT]
<br>
> auth_url = <a class="moz-txt-link-freetext" href="http://localhost:35357/v2.0">http://localhost:35357/v2.0</a>
<br>
> auth_region = RegionOne
<br>
> admin_tenant_name = %SERVICE_TENANT_NAME%
<br>
> admin_user = %SERVICE_USER%
<br>
> admin_password = %SERVICE_PASSWORD%
<br>
> auth_strategy = keystone
<br>
>
<br>
> metadata_proxy_shared_secret =
<br>
> [keystone_authtoken]
<br>
> auth_host = <ip of controller on the management network>
<br>
> admin_tenant_name = services
<br>
> admin_user = quantum
<br>
> admin_password = <pwd>
<br>
>
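> (One thing worth double-checking across these two files: the shared
<br>
> secret has to be the same string on both sides, or nova rejects the
<br>
> proxied requests with a signature error. Sketch of the matching pair:)
<br>
> #  nova.conf on the controller:            quantum_metadata_proxy_shared_secret = <same value>
<br>
> #  metadata_agent.ini on the network node: metadata_proxy_shared_secret = <same value>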
<br>
> The VM internal subnet info:
<br>
>
<br>
> +------------------+--------------------------------------------+
<br>
> | Field            | Value                                      |
<br>
> +------------------+--------------------------------------------+
<br>
> | allocation_pools | {"start": "10.0.1.2", "end": "10.0.1.254"} |
<br>
> | cidr             | 10.0.1.0/24                                |
<br>
> | dns_nameservers  | 8.8.4.4                                    |
<br>
> |                  | 8.8.8.8                                    |
<br>
> | enable_dhcp      | True                                       |
<br>
> | gateway_ip       | 10.0.1.1                                   |
<br>
> | host_routes      |                                            |
<br>
> | id               | 505949ed-30bb-4c5e-8d1b-9ef2745f9455       |
<br>
> | ip_version       | 4                                          |
<br>
> | name             |                                            |
<br>
> | network_id       | 31f9d39b-012f-4447-92a4-1a3b5514b37d       |
<br>
> | tenant_id        | 22b1956ec62a49e88fb93b53a4f10337           |
<br>
> +------------------+--------------------------------------------+
<br>
>
<br>
>
<br>
</blockquote>
<br>
</body>
</html>