[Openstack] Instances can't reach metadata server in network HA mode

Gui Maluf guimalufb at gmail.com
Thu Dec 20 13:07:34 UTC 2012


Vish, if you could help: I realized that the default route of all my VMs
points to the cloud controller. If I change the default route to the node's
address, everything works perfectly.
How can I make the node IP the default route?
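(For reference: in multi_host mode the gateway the guests receive comes from
the dnsmasq that nova-network starts on each compute node. A quick check I can
run on a node -- a sketch, and the file path may vary by release:

  ps aux | grep dnsmasq    # --listen-address should be the node's IP on br100
  cat /var/lib/nova/networks/nova-br100.conf    # dnsmasq hosts file nova writes

If dnsmasq listens on the controller's address instead, the guests will keep
getting the controller as their default route.)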

Thanks for all the help!


On Wed, Dec 19, 2012 at 2:34 PM, Gui Maluf <guimalufb at gmail.com> wrote:

> Yes, it's in multi_host=true. In nova.conf and in the database, multi_host
> is set to True. 10.5.5.32 isn't the gateway; it's the private network address.
>
> LoL
>
> Out of nowhere, my instances can now reach the metadata server. But when I
> log in and ping www.google.com, the VM can resolve the name but no answer
> comes back; all packets are lost. I've also attached a floating IP to two
> VMs on different nodes, and they don't even ping back on the same node.
>
> This is so confusing! I'll do some tcpdump to check what is happening!
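>
> (The tcpdump I have in mind, roughly -- assuming VM traffic crosses br100
> and leaves via eth0 on the node:
>
>   sudo tcpdump -ni br100 icmp    # do the echo requests leave the guest?
>   sudo tcpdump -ni eth0 icmp     # are they NATed out, and do replies return?
>
> That should show whether packets die on the way out or on the way back.)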
>
>
>
>
>
> On Wed, Dec 19, 2012 at 2:05 PM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
>
>> Are you sure your network has multi_host = True? It seems like it isn't,
>> since the gateway listed by the guest is 10.5.5.32
>>
>> In multi_host mode each node should be getting an ip from the fixed range
>> and the guest should be using that as the gateway.
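>>
>> (A quick way to verify on each compute node, assuming br100 is the flat
>> bridge:
>>
>>   ip addr show br100    # the node should hold an address from 10.5.5.32/27
>>
>> That address is what dnsmasq hands the local guests as their gateway.)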
>>
>> Vish
>>
>>
>>
>>
>> On Wed, Dec 19, 2012 at 1:13 PM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
>>
>>> There should be a redirect in iptables from 169.254.169.254:80 to $my_ip:8775 (where nova-api-metadata is running)
>>>
>>> So:
>>>
>>> a) can you
>>>
>>>   curl $my_ip:8775 (should 404)
>>>
>> Cloud controller and nodes answer in the same way:
>> 1.0
>> 2007-01-19
>> 2007-03-01
>> 2007-08-29
>> 2007-10-10
>> 2007-12-15
>> 2008-02-01
>> 2008-09-01
>> 2009-04-04
>>
>>
>>>
>>> b) if you do
>>>
>>>   sudo iptables -t nat -L -nv
>>>
>>> do you see the forward rule? Is it getting hit properly?
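>>>
>>> (Concretely, something like this -- the pkts/bytes counters are the first
>>> two columns:
>>>
>>>   sudo iptables -t nat -L -nv | grep 169.254.169.254
>>>
>>> If the counter stays at 0 while a guest retries the metadata URL, the DNAT
>>> never matches.)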
>>>
>>
>> The correct rules are there, but they never get hit:
>>
>> controller:
>>     0     0 DNAT  tcp  --  *  *  0.0.0.0/0  169.254.169.254  tcp dpt:80 to:200.131.6.250:8775
>>
>> nodes:
>>     0     0 DNAT  tcp  --  *  *  0.0.0.0/0  169.254.169.254  tcp dpt:80 to:200.131.6.248:8775
>>     0     0 DNAT  tcp  --  *  *  0.0.0.0/0  169.254.169.254  tcp dpt:80 to:200.131.6.249:8775
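>>
>> (I'll watch the counters while a guest keeps retrying, something like:
>>
>>   watch -n1 'iptables -t nat -L -nv | grep 169.254.169.254'
>>
>> If pkts stays at 0, the requests never even reach the nat PREROUTING hook
>> on this host.)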
>>
>>
>> Thanks for showing up, Vish! I was hoping for your help!
>>
>>>
>>> Vish
>>>
>>> On Dec 19, 2012, at 6:39 AM, Gui Maluf <guimalufb at gmail.com> wrote:
>>>
>>> My setup is nova-network HA (http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html),
>>> so each of my nodes runs nova-{api-metadata,network,compute,volume}; the
>>> controller runs all of this plus everything else it should run.
>>> Each of my nodes is the gateway for its own instances. They all have
>>> the same network config, with ip_forwarding enabled.
>>>
>>> The main issue is that I can't telnet to the nodes on port 80, which should
>>> redirect to the metadata server. The metadata IP is correctly set on eth0,
>>> but port 80 is not open.
>>> My question is: should I create an endpoint for each node's api-metadata
>>> service? Should I install Apache on the nodes?
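>>>
>>> (If the standard setup applies, no Apache is needed: the DNAT rule is what
>>> answers on port 80, as long as nova-api-metadata itself is listening on
>>> 8775. A check I can run on each node:
>>>
>>>   sudo netstat -tlnp | grep 8775    # nova-api-metadata should be bound here
>>>   curl http://localhost:8775/       # should print the version list
>>> )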
>>>
>>> I really don't know what to do anymore. This only happens on the nodes; on
>>> the cloud controller all instances run smoothly: they get the floating IP,
>>> the metadata service, etc.
>>>
>>> Thanks in advance!
>>>
>>>
>>> I'll put as much info as I can here.
>>>
>>> root@oxala:~# nova-manage service list
>>>
>>> Binary           Host    Zone  Status   State  Updated_At
>>> nova-compute     xango   nova  enabled  :-)    2012-12-18 20:34:21
>>> nova-network     xango   nova  enabled  :-)    2012-12-18 20:34:20
>>> nova-compute     oxossi  nova  enabled  :-)    2012-12-18 20:34:15
>>> nova-network     oxossi  nova  enabled  :-)    2012-12-18 20:34:20
>>> nova-volume      oxossi  nova  enabled  :-)    2012-12-18 20:34:18
>>> nova-volume      xango   nova  enabled  :-)    2012-12-18 20:34:19
>>> nova-consoleauth oxala   nova  enabled  :-)    2012-12-18 20:34:24
>>> nova-scheduler   oxala   nova  enabled  :-)    2012-12-18 20:34:25
>>> nova-cert        oxala   nova  enabled  :-)    2012-12-18 20:34:25
>>> nova-volume      oxala   nova  enabled  :-)    2012-12-18 20:34:25
>>> nova-network     oxala   nova  enabled  :-)    2012-12-18 20:34:17
>>> nova-compute     oxala   nova  enabled  :-)    2012-12-18 20:34:10
>>>
>>> *controller nova.conf*
>>> #NETWORK
>>> --allow_same_net_traffic=true
>>> --network_manager=nova.network.manager.FlatDHCPManager
>>> --firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
>>> --public_interface=eth0
>>> --flat_interface=eth1
>>> --flat_network_bridge=br100
>>> --fixed_range=10.5.5.32/27
>>> --network_size=32
>>> --flat_network_dhcp_start=10.5.5.33
>>> --my_ip=200.131.6.250
>>> --multi_host=True
>>> #--enabled_apis=ec2,osapi_compute,osapi_volume,metadata
>>> --dhcpbridge_flagfile=/etc/nova/nova.conf
>>> --dhcpbridge=/usr/bin/nova-dhcpbridge
>>> --force_dhcp_release
>>> --ec2_private_dns_show
>>> --routing_source_ip=$my_ip
>>>
>>> *nodes nova.conf*
>>> {same network configs}
>>> --my_ip=200.131.6.248
>>> --multi_host=True
>>> --enabled_apis=ec2,osapi_compute,osapi_volume,metadata
>>> --routing_source_ip=$my_ip
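>>>
>>> (For the record, my understanding is that the 169.254.169.254 DNAT target
>>> is built from my_ip, so each node must carry its own address here; the
>>> second node, for example, would differ only in:
>>>
>>> --my_ip=200.131.6.249
>>> )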
>>>
>>> *controller iptables -L -vn && iptables -L -vn -t nat*
>>> http://paste.openstack.org/show/mkWZTYI6cKHR4qUWbOUz/
>>>
>>> *node iptables -L -vn && iptables -L -vn -t nat*
>>> http://paste.openstack.org/show/28384/
>>>
>>> *controller ip a*
>>> http://paste.openstack.org/show/W2vrVtost2EP2u62iZwp/
>>> root@oxala:~# route
>>> Kernel IP routing table
>>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>>> default         200.131.6.129   0.0.0.0         UG    100    0        0 eth0
>>> 10.5.5.32       *               255.255.255.224 U     0      0        0 br100
>>> 200.131.6.128   *               255.255.255.128 U     0      0        0 eth0
>>>
>>> *node ip a*
>>> http://paste.openstack.org/show/S44TL3sznIztNCO3s8p2/
>>>
>>> root@oxossi:~# route
>>> Kernel IP routing table
>>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>>> default         200.131.6.129   0.0.0.0         UG    100    0        0 eth0
>>> 10.5.5.32       *               255.255.255.224 U     0      0        0 br100
>>> 200.131.6.128   *               255.255.255.128 U     0      0        0 eth0
>>>
>>> *And finally, the error thrown by the VM when running on the nodes:*
>>>
>>> ci-info: lo    : 1 127.0.0.1       255.0.0.0       .
>>>
>>> ci-info: eth0  : 1 10.5.5.53       255.255.255.224 fa:16:3e:69:cb:d2
>>>
>>> ci-info: route-0: 0.0.0.0         10.5.5.35       0.0.0.0         eth0   UG
>>>
>>> ci-info: route-1: 10.5.5.32       0.0.0.0         255.255.255.224 eth0   U
>>>
>>> cloud-init start running: Tue, 18 Dec 2012 20:34:09 +0000. up 4.02 seconds
>>>
>>> 2012-12-18 20:34:15,967 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [6/120s]: url error [[Errno 113] No route to host]
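>>>
>>> (From inside such a guest, the failure could be narrowed down like this --
>>> a sketch, assuming the image ships route and curl:
>>>
>>>   route -n              # which gateway did DHCP hand out?
>>>   ping -c1 10.5.5.35    # is that gateway reachable at all?
>>>   curl -v http://169.254.169.254/2009-04-04/meta-data/instance-id
>>>
>>> "No route to host" usually means the guest cannot even ARP its gateway.)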
>>>
>>>
>>>
>>> --
>>> *guilherme* \n
>>> \t *maluf*
>>>
>>>
>>>
>>
>>
>> --
>> *guilherme* \n
>> \t *maluf*
>>
>>
>>
>
>
> --
> *guilherme* \n
> \t *maluf*
>



-- 
*guilherme* \n
\t *maluf*