[Openstack-operators] cloud-init: 169.254.169.254 timing out / refusing connections

Christian Parpart trapni at gmail.com
Wed May 30 07:14:38 UTC 2012


We should improve the docs regarding multi-host setups and this flag, to
state explicitly that metadata_host defaults to the local host and must be
overridden when nova-api runs elsewhere.

I found the solution by accident, out of curiosity. :-)

Regards,
Christian Parpart.
On 30.05.2012 at 02:06, "Dan Wendlandt" <dan at nicira.com> wrote:

> The flag metadata_host (defined in nova/flags.py) defaults to the IP
> address of the local host, so nova-network will DNAT to its own IP unless
> you override metadata_host in your nova.conf.
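>
> For example, on the nova-network host you would put something like the
> following in nova.conf (the IP here is the controller address from
> Christian's setup below; adjust it to your own, and note that older nova
> releases use the gflags-style "--metadata_host=..." form instead):
>
>     # /etc/nova/nova.conf on the nova-network node
>     metadata_host=10.10.30.190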
>
> Dan
>
> On Tue, May 29, 2012 at 4:28 PM, Christian Parpart <trapni at gmail.com> wrote:
>
>> On Tue, May 29, 2012 at 2:47 PM, Christian Parpart <trapni at gmail.com> wrote:
>>
>>> Hey all,
>>>
>>> This 169.254.169.254 address is driving me crazy. I have already read a
>>> few things about that suspicious IP address; however, I either get a few
>>> of these:
>>>
>>> 2012-05-29 12:22:40,831 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: url error [timed out]
>>>
>>> or I'll get tons of:
>>>
>>> 2012-05-29 12:19:38,049 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [113/120s]: url error [[Errno 111] Connection refused]
>>>
>>> when instantiating a new VM.
>>>
>>> My setup is as follows:
>>>
>>> "production" network: 10.10.40.0/21
>>>  management network (physical nodes, switches, PDUs, ...) 10.10.0.0/19
>>>
>>> nova-network: (we're not in multi_host mode)
>>> - eth0: 10.10.30.4
>>>
>>> controller (api, scheduler, etc, also compute-1 node):
>>> - eth0: 10.10.30.190
>>>
>>> compute-2:
>>> - eth0: 10.10.30.191
>>>
>>> compute-3:
>>> - eth0: 10.10.30.192
>>>
>>> Now, since 169.254.169.254 is just an artificial IP that is meant to be
>>> DNAT'ed to the right host via iptables, I did a quick check: tcp/80 on
>>> that address is redirected to the nova-api service at port 8775.
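>>>
>>> (A quick way to check this from inside an affected VM, assuming curl is
>>> available in the image, is to fetch the same URL cloud-init retries:)
>>>
>>>     curl -m 5 http://169.254.169.254/2009-04-04/meta-data/instance-id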
>>>
>>> So here's my question:
>>> On which physical nodes is this iptables rule expected? Just on the
>>> nova-network node, or on every compute node? (And how do I fix the
>>> situation described above?)
>>>
>>> I'm asking because I found the DNAT rule on the dedicated network node
>>> and also on the compute-1 node (which is also the controller node, with
>>> api, scheduler, etc.), but not on compute-2 or compute-3. Regardless of
>>> my issue, this doesn't feel right.
>>>
>>
>> Hey,
>>
>> For the latter case (ECONNREFUSED) I believe I have an answer, though not
>> an explanation of why it is set up this way:
>>
>> root at nova-network-node:/etc/nova# iptables -t nat -L -vn | grep -n3 169.254.169.254
>> 26-
>> 27-Chain nova-network-PREROUTING (1 references)
>> 28- pkts bytes target  prot opt in  out  source     destination
>> 29:   33  1980 DNAT    tcp  --  *   *   0.0.0.0/0  169.254.169.254  tcp dpt:80 to:10.10.40.1:8775
>> 30-    0     0 DNAT    udp  --  *   *   0.0.0.0/0  10.10.40.1       udp dpt:1000 to:10.10.40.2:1194
>> 31-
>> This shows that the suspicious IP address is DNAT'ed to 10.10.40.1:8775,
>> where this IP is the host itself and not the nova-api node's IP.
>>
>> AFAIK nova-api is meant to be installed on a single node only, namely the
>> controller node, so I wonder why nova-network creates a DNAT rule for
>> nova-api pointing at its own host instead of at the cloud controller's IP.
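>>
>> In other words, I would have expected a rule pointing at the controller
>> instead, something like this (hypothetical, using the cc_host IP):
>>
>>     DNAT  tcp  --  *  *  0.0.0.0/0  169.254.169.254  tcp dpt:80 to:10.10.30.190:8775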
>>
>> I checked my nova.conf, and while there is no direct entry for which IP
>> nova-api should be reached at, I at least see that cc_host is set to the
>> proper IP (10.10.30.190).
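>>
>> (For reference, the host-related settings can be listed with a quick grep
>> along these lines:)
>>
>>     grep -i host /etc/nova/nova.conf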
>>
>> So long,
>> Christian Parpart.
>>
>>
>
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
>