[Openstack] Instances lost connectivity with metadata service.

Jorge Luiz Correa correajl at gmail.com
Thu Mar 15 12:08:12 UTC 2018


Sending feedback on all the answers below.

I've solved the problem with a configuration change in the neutron DHCP
agent (dhcp_agent.ini). I had been using only the option
"enable_isolated_metadata = True", so that I could use DHCP on networks
that didn't have a router. Now I have also enabled "force_metadata = true".
With this, DHCP pushes a route to each new instance telling it that
169.254.169.254 is reachable through the DHCP host/port, and the instance
sends its metadata requests directly there, so the iptables rule previously
used is no longer needed. After that configuration change I did not see the
problem anymore.
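For reference, the relevant part of my dhcp_agent.ini now looks like this
(a sketch of the two options discussed above; section placement is the
usual [DEFAULT] one):

```ini
[DEFAULT]
# Serve metadata on isolated networks (networks with no router attached).
enable_isolated_metadata = True
# Always push a host route for 169.254.169.254 via the DHCP port and proxy
# metadata from the qdhcp namespace, even when the network has a router.
force_metadata = true
```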

Thanks for all the help!
:)

----
On Mon, Feb 26, 2018 at 9:44 AM, Itxaka Serrano Garcia <igarcia at suse.com>
wrote:

> Did you check if port 80 is listening inside the dhcp namespace with "ip
> netns exec NAMESPACE netstat -punta"?
>
> We recently hit something similar in which the ns-proxy was up and the
> metadata-agent as well but the port 80 was missing inside the namespace, a
> restart fixed it but there was no logs of a failure anywhere so it may be
> similar.
>
>
I've checked, but I was not looking for port 80. I have just checked all my
namespaces now and none of them has port 80 open. All have:

"tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      <PID>/python".

And, as I said, all namespaces have an iptables rule that redirects all
traffic for 169.254.169.254:80 to this port 9697:

ip netns exec qrouter-HASH_ID iptables -n -L -t nat

Chain neutron-l3-agent-PREROUTING (1 references)
target     prot opt source               destination
REDIRECT   tcp  --  0.0.0.0/0            169.254.169.254      tcp dpt:80 redir ports 9697
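As a quick sanity check for that rule, something like the following can be
used (a sketch, not an official tool: the helper name and the grep pattern
are mine, matching the `iptables -n -L -t nat` output format shown above):

```shell
# Succeeds if the metadata REDIRECT rule (169.254.169.254:80 -> 9697)
# appears in the iptables listing read from stdin.
check_metadata_redirect() {
  grep -q 'REDIRECT.*169\.254\.169\.254.*dpt:80.*redir ports 9697'
}

# On a live node (replace HASH_ID with a real router ID):
#   ip netns exec qrouter-HASH_ID iptables -n -L -t nat \
#     | check_metadata_redirect \
#     && echo "redirect rule present" || echo "redirect rule missing"
```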

So, is a listening port 80 really necessary? I can see in the logs that the
requests do arrive at this service on the compute nodes.

---
On Tue, Feb 27, 2018 at 5:43 AM, Tobias Urdin <tobias.urdin at crystone.com>
wrote:

> Did some troubleshooting on this myself just some days ago.
>
> You want to check out the neutron-metadata-agent log in
> /var/log/neutron/neutron-metadata-agent.log
>
> neutron-metadata-agent in turn connects to your nova keystone endpoint to
> talk to nova metadata api (nova api port 8775) to get instance information.
>
>
> I had an issue with connectivity between neutron-metadata-agent and nova
> metadata api causing the issue for me.
>
> Should probably check the nova metadata api logs as well.
>
OK, I've verified that the requests do arrive at the metadata proxy on the
compute node, but on the controller I cannot see them arriving at the nova
metadata API. So I suspected a connectivity problem inside Open vSwitch
(regular network connectivity was OK), but I couldn't identify why this
traffic was not getting through.
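One thing worth checking on the controller is whether the nova metadata API
(port 8775, as Tobias mentioned) is listening at all. A small sketch (the
helper name is mine; it reads `netstat -lnt`-style output on stdin so the
check itself can be fed from any node):

```shell
# Succeeds if something is LISTENing on the nova metadata API port (8775).
nova_metadata_listening() {
  grep -Eq ':8775 +.*LISTEN'
}

# On the controller:
#   netstat -lnt | nova_metadata_listening \
#     && echo "nova metadata api listening" || echo "port 8775 not listening"
```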

---
On Tue, Feb 27, 2018 at 12:26 PM, Paras pradhan <pradhanparas at gmail.com>
wrote:

> If this is project specific, I usually run a router-update and that fixes
> the problem.
>
> /usr/bin/neutron router-update --admin-state-up False $routerid
> /usr/bin/neutron router-update --admin-state-up True $routerid
>

I'd tried that, but it didn't solve the problem.
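For what it's worth, with the unified openstack client the same
admin-state toggle can be written as (a sketch; $routerid as in Paras'
example above):

```shell
# Bounce the router's admin state (assumes python-openstackclient).
openstack router set --disable $routerid
openstack router set --enable $routerid
```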
