[Openstack-operators] metadata-api 500 errors
Alex Leonhardt
aleonhardt.py at gmail.com
Thu Jan 15 17:03:18 UTC 2015
Hi Edgar,
that's the crazy thing - all the GRE tunnels are up, I can see them in
Open vSwitch and can also see that there are some OpenFlow rules applied.
I've created VMs on every hypervisor (including the controller, as it's a
test install) on network1 (192.168.1.0); every VM started there works just
fine and gets the metadata as expected (and that is still the case now),
and the same goes for network2 (192.168.2.0).
the issue only appeared after I created network3 (192.168.3.0): VMs there
(tried again on all 3 hypervisors) get a 500 error instead of the expected
metadata files/JSON. The same goes for all other networks I created after
(network4 and network5).
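Since each network gets its own DHCP namespace (and, depending on the
setup, its own neutron-ns-metadata-proxy in the qdhcp- or qrouter-
namespace), something like this should show whether network3's proxy is
actually up - a sketch, where <net3-uuid> is a placeholder for the
network's UUID:

# on the network node: list the namespaces
ip netns list

# check a proxy is listening inside network3's namespace
ip netns exec qdhcp-<net3-uuid> netstat -lnp | grep ':80'

# and that the proxy processes exist at all
ps aux | grep neutron-ns-metadata-proxy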
On the VM all I can see is this:
2015-01-15 17:02:57,310 - url_helper.py[WARNING]: Calling
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
[0/120s]: bad status code [500]
2015-01-15 17:02:58,509 - url_helper.py[WARNING]: Calling
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
[1/120s]: bad status code [500]
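To get a bit more detail than cloud-init's one-liner, the request can be
reproduced by hand from inside an affected VM, and the metadata agent log
on the controller usually has the traceback behind the 500 (log path may
vary by distro) - a sketch:

# from inside a network3 VM; -v prints the response headers and body
curl -v http://169.254.169.254/2009-04-04/meta-data/instance-id

# on the controller, while repeating the request
tail -f /var/log/neutron/metadata-agent.log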
Alex
On Thu Jan 15 2015 at 16:53:31 Edgar Magana <edgar.magana at workday.com>
wrote:
> Alex,
>
> Did you follow the networking recommendations:
>
> http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html
>
> It will help if you draw out your own topology and complete a packet
> trace to find out the issue.
> Make sure all tunnels are established between your three nodes.
>
> Thanks,
>
> Edgar
>
> From: Alex Leonhardt <aleonhardt.py at gmail.com>
> Date: Thursday, January 15, 2015 at 7:45 AM
> To: openstack-operators <openstack-operators at lists.openstack.org>
> Subject: [Openstack-operators] metadata-api 500 errors
>
> hi,
>
> i've got a test OpenStack install with 3 nodes, using GRE tunnelling --
>
> initially it all worked fine, but after creating more than 2 networks,
> VMs in networks 3, 4 and 5 do not seem to get the metadata as the
> requests fail with 500 errors. whilst this is happening, VMs in networks
> 1 and 2 are still working fine and can be provisioned OK.
>
> has anyone seen something similar, or any ideas on how to go about
> troubleshooting this? I got a tcpdump from the VM, but as the request
> does get to the metadata API, I am not sure where the issue is
> (especially since VMs in other networks work just fine)
>
> any ideas?
>
> Alex
>
>