[Openstack] Problems with Neutron

Sergey Motovilovets motovilovets.sergey at gmail.com
Sat Jun 14 17:44:59 UTC 2014


This can probably be useful too:

From the network node:

# ip netns ls
qdhcp-1b982b98-62db-4c87-867b-0490bac8fb52
qrouter-c7e7ea00-a362-4f4f-9a1c-a54ac86eb3be
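
Both namespaces are there (qdhcp for the tenant network, qrouter for the
router). If I understand the metadata path correctly, a VM's request to
169.254.169.254:80 should land in the qrouter namespace and get redirected
by iptables to the proxy on port 9697, so something like this should show
the redirect rule:

# ip netns exec qrouter-c7e7ea00-a362-4f4f-9a1c-a54ac86eb3be iptables -t nat -S | grep 169.254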

# ps -x | grep metadata
 1469 ?        S      0:00 /usr/bin/python
/usr/bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/6c91714c-c1aa-41d7-88ba-249df3a8368c.pid
--metadata_proxy_socket=/var/lib/neutron/metadata_proxy
--network_id=6c91714c-c1aa-41d7-88ba-249df3a8368c
--state_path=/var/lib/neutron --metadata_port=80
--log-file=neutron-ns-metadata-proxy-6c91714c-c1aa-41d7-88ba-249df3a8368c.log
--log-dir=/var/log/neutron
 5937 ?        S      0:00 /usr/bin/python
/usr/bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/fa72c69b-2ca2-4d4d-ab8c-d6fa6e8e72d6.pid
--metadata_proxy_socket=/var/lib/neutron/metadata_proxy
--network_id=fa72c69b-2ca2-4d4d-ab8c-d6fa6e8e72d6
--state_path=/var/lib/neutron --metadata_port=80
--log-file=neutron-ns-metadata-proxy-fa72c69b-2ca2-4d4d-ab8c-d6fa6e8e72d6.log
--log-dir=/var/log/neutron
 8108 ?        S      0:00 /usr/bin/python
/usr/bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/c7e7ea00-a362-4f4f-9a1c-a54ac86eb3be.pid
--metadata_proxy_socket=/var/lib/neutron/metadata_proxy
--router_id=c7e7ea00-a362-4f4f-9a1c-a54ac86eb3be
--state_path=/var/lib/neutron --metadata_port=9697
--log-file=neutron-ns-metadata-proxy-c7e7ea00-a362-4f4f-9a1c-a54ac86eb3be.log
--log-dir=/var/log/neutron
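
The last proxy (--router_id=c7e7ea00-a362-4f4f-9a1c-a54ac86eb3be,
--metadata_port=9697) should be the one serving my router, so I'd expect it
to be listening inside the qrouter namespace, which something like this
should confirm:

# ip netns exec qrouter-c7e7ea00-a362-4f4f-9a1c-a54ac86eb3be netstat -lnpt | grep 9697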



2014-06-14 20:38 GMT+03:00 Sergey Motovilovets <
motovilovets.sergey at gmail.com>:

> Hi, Mike.
>
> There are no routes in my VMs except for the default one. The private
> subnet I'm using is 192.168.0.0/24, with the Neutron router at 192.168.0.1.
>
> # route -n
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> 0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
> 192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
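>
> There's no link-local route for 169.254.169.254, so requests to the
> metadata IP should just follow the default route to the Neutron router,
> where (as far as I understand) the redirect to the proxy happens. From
> inside the VM, this should show which path is actually taken:
>
> # ip route get 169.254.169.254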
>
> Below is part of the cloud-init output from my Fedora image. The instance
> tries to get its metadata from 169.254.169.254 and then falls back to
> 192.168.0.2, which is the interface with the dnsmasq instance injected by
> Neutron. I guess the metadata service is supposed to listen on
> 192.168.0.2, but it's not.
>
> [   61.409140] cloud-init[512]: 2014-06-14 17:23:53,183 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request timed out. (timeout=50.0)]
>
> [  112.463197] cloud-init[512]: 2014-06-14 17:24:44,237 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request timed out. (timeout=50.0)]
> [  130.489916] cloud-init[512]: 2014-06-14 17:25:02,264 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request timed out. (timeout=17.0)]
> [  131.494462] cloud-init[512]: 2014-06-14 17:25:03,266 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds
> [  131.502647] cloud-init[512]: 2014-06-14 17:25:03,273 - url_helper.py[WARNING]: Calling 'http://192.168.0.2//latest/meta-data/instance-id' failed [0/120s]: request error [HTTPConnectionPool(host='192.168.0.2', port=80): Max retries exceeded with url: //latest/meta-data/instance-id (Caused by <class 'socket.error'>: [Errno 111] Connection refused)]
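>
> As far as I can tell, the 'Connection refused' on 192.168.0.2 is actually
> expected: the DHCP agent only runs a metadata proxy of its own in the
> qdhcp namespace when this is set in dhcp_agent.ini (it defaults to False,
> and I believe it only applies to networks without a router anyway):
>
> # /etc/neutron/dhcp_agent.ini
> enable_isolated_metadata = True
>
> So the real problem seems to be the timeout on 169.254.169.254 itself.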
>
>
>
>
>
> 2014-06-14 20:04 GMT+03:00 Mike Spreitzer <mspreitz at us.ibm.com>:
>
>> Sergey Motovilovets <motovilovets.sergey at gmail.com> wrote on 06/14/2014 11:00:09 AM:
>> ...
>>
> > Another problem is the metadata service. I've tried just about
> > everything I found regarding neutron<->metadata configuration, without
> > any success. I just can't connect to 169.254.169.254 from virtual
> > machines, though they get configured by DHCP, can ping each other in
> > their subnet, and I can allocate floating IPs to them.
>>
>> > ...
>>
>> Did you look to see if there is a wrong route in your VM?  Sometimes I
>> find the metadata service is messed up by a bogus entry in the VM's
>> routing table.
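>>
>> For example, something like this inside the VM will show which route
>> traffic to the metadata IP actually takes:
>>
>> # ip route get 169.254.169.254
>>
>> If that resolves to anything other than your default gateway, a stale
>> or bogus route is probably the culprit.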
>>
>> Regards,
>> Mike
>
>
>