[Openstack] Grizzly: Does the metadata service work when overlapping IPs are enabled?

Aaron Rosen arosen at nicira.com
Wed Apr 24 18:02:35 UTC 2013


Can you show us the output of quantum subnet-show for the subnet your VM has an IP on?
Is it possible that you added a host_route to the subnet for 169.254.0.0/16?
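
For reference (the subnet ID below is just a placeholder), something like
this would show any host_routes configured on the subnet:

    $ quantum subnet-list
    $ quantum subnet-show <subnet-id>

A host_routes entry covering 169.254.0.0/16 there would explain the route
showing up in the guest.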

Or could you try this image:
http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img


On Wed, Apr 24, 2013 at 1:06 AM, Balamurugan V G <balamuruganvg at gmail.com> wrote:

> I booted an Ubuntu image in which I had made sure that there was no
> pre-existing route for 169.254.0.0/16, but it still gets the route from DHCP
> once it boots up. So it is the DHCP server that is sending this route to
> the VM.
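>
> A sketch of how I verified this inside the guest (the interface name is an
> assumption):
>
>   $ ip route show dev eth0 | grep ^169.254
>   $ sudo dhclient -r eth0 && sudo dhclient eth0   # release and renew the lease
>   $ ip route show dev eth0 | grep ^169.254        # the route comes right back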
>
> Regards,
> Balu
>
>
> On Wed, Apr 24, 2013 at 12:47 PM, Balamurugan V G <balamuruganvg at gmail.com> wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the response. I do not have enable_isolated_metadata_proxy
>> anywhere under /etc/quantum and /etc/nova. The closest I see is
>> 'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
>> commented out. What do you mean by link-local address?
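>>
>> For the record, this is how I looked (a sketch; it only turns up the
>> commented-out enable_isolated_metadata default in dhcp_agent.ini):
>>
>>   $ grep -rn isolated_metadata /etc/quantum /etc/nova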
>>
>> Like you said, I suspect that the image has the route. This image was a
>> snapshot taken in a Folsom setup, so it is possible that Folsom injected
>> this route and, when I took the snapshot, it became part of the snapshot. I
>> then copied this snapshot over to a new Grizzly setup. Let me check the
>> image and remove the route if it is there. Thanks for the hint again.
>>
>> Regards,
>> Balu
>>
>>
>>
>> On Wed, Apr 24, 2013 at 12:38 PM, Salvatore Orlando <sorlando at nicira.com> wrote:
>>
>>> The DHCP agent will set a route to 169.254.0.0/16 if
>>> enable_isolated_metadata_proxy=True.
>>> In that case the DHCP port's IP will be the nexthop for that route.
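>>>
>>> (If you want to see what the DHCP agent actually hands out, a sketch
>>> assuming the default dnsmasq state directory, with a placeholder network
>>> ID:
>>>
>>>   $ cat /var/lib/quantum/dhcp/<network-id>/opts
>>>
>>> Any option:classless-static-route entry there is pushed to guests as DHCP
>>> option 121.)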
>>>
>>> Otherwise, your image might have a 'built-in' route to that CIDR.
>>> What is your nexthop for the link-local address?
>>>
>>> Salvatore
>>>
>>>
>>> On 24 April 2013 08:00, Balamurugan V G <balamuruganvg at gmail.com> wrote:
>>>
>>>> Thanks for the hint, Aaron. When I deleted the route for 169.254.0.0/16
>>>> from the VM's routing table, I could access the metadata service!
>>>>
>>>> The route for 169.254.0.0/16 is added automatically when the instance
>>>> boots up, so I assume it is coming from DHCP. Any idea how this can be
>>>> suppressed?
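>>>>
>>>> (A sketch of what I plan to try, assuming the route is baked into the
>>>> image rather than pushed by DHCP: drop it at runtime, then look for
>>>> whatever re-adds it before re-snapshotting:
>>>>
>>>>   $ sudo ip route del 169.254.0.0/16
>>>>   $ grep -rn 169.254 /etc/network/   # static config or if-up.d hooks
>>>>
>>>> On stock Ubuntu, /etc/network/if-up.d/avahi-autoipd can add a link-local
>>>> route as a fallback, so that seems worth checking too.)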
>>>>
>>>> Strangely though, I do not see this route in a Windows XP VM booted in
>>>> the same network as the earlier Ubuntu VM, and the Windows VM can reach the
>>>> metadata service without me doing anything. The issue is only with the
>>>> Ubuntu VM.
>>>>
>>>> Thanks,
>>>> Balu
>>>>
>>>>
>>>>
>>>> On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen <arosen at nicira.com> wrote:
>>>>
>>>>> The VM should not have a routing table entry for 169.254.0.0/16. If
>>>>> it does, I'm not sure how it got there unless it was added by something
>>>>> other than DHCP. It seems like that is your problem, as the VM is ARPing
>>>>> directly for that address rather than for the default gateway.
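>>>>>
>>>>> (Sketch of the healthy case: with no link-local route, the guest should
>>>>> resolve the metadata address via its default gateway, e.g.
>>>>>
>>>>>   $ ip route get 169.254.169.254
>>>>>   169.254.169.254 via 192.168.2.1 dev eth0  src 192.168.2.3
>>>>>
>>>>> using the addresses from your tcpdump below, so the ARP request goes to
>>>>> the router rather than to 169.254.169.254 itself.)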
>>>>>
>>>>>
>>>>> On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G <
>>>>> balamuruganvg at gmail.com> wrote:
>>>>>
>>>>>> Thanks Aaron.
>>>>>>
>>>>>> I am perhaps not configuring it right then. I am using an Ubuntu 12.04
>>>>>> host, and my guest (VM) is also Ubuntu 12.04, but metadata is not working.
>>>>>> I see that the VM's routing table has an entry for 169.254.0.0/16 but I
>>>>>> can't ping 169.254.169.254 from the VM. I am using a single-node setup with
>>>>>> two NICs: 10.5.12.20 is the public IP and 10.5.3.230 is the management IP.
>>>>>>
>>>>>> These are my metadata related configurations.
>>>>>>
>>>>>> */etc/nova/nova.conf*
>>>>>> metadata_host = 10.5.12.20
>>>>>> metadata_listen = 127.0.0.1
>>>>>> metadata_listen_port = 8775
>>>>>> metadata_manager=nova.api.manager.MetadataManager
>>>>>> service_quantum_metadata_proxy = true
>>>>>> quantum_metadata_proxy_shared_secret = metasecret123
>>>>>>
>>>>>> */etc/quantum/quantum.conf*
>>>>>> allow_overlapping_ips = True
>>>>>>
>>>>>> */etc/quantum/l3_agent.ini*
>>>>>> use_namespaces = True
>>>>>> auth_url = http://10.5.3.230:35357/v2.0
>>>>>> auth_region = RegionOne
>>>>>> admin_tenant_name = service
>>>>>> admin_user = quantum
>>>>>> admin_password = service_pass
>>>>>> metadata_ip = 10.5.12.20
>>>>>>
>>>>>> */etc/quantum/metadata_agent.ini*
>>>>>> auth_url = http://10.5.3.230:35357/v2.0
>>>>>> auth_region = RegionOne
>>>>>> admin_tenant_name = service
>>>>>> admin_user = quantum
>>>>>> admin_password = service_pass
>>>>>> nova_metadata_ip = 127.0.0.1
>>>>>> nova_metadata_port = 8775
>>>>>> metadata_proxy_shared_secret = metasecret123
>>>>>>
>>>>>>
>>>>>> I see that the /usr/bin/quantum-ns-metadata-proxy process is running.
>>>>>> When I ping 169.254.169.254 from the VM, I see the ARP request in the
>>>>>> host's router namespace, but no response.
>>>>>>
>>>>>> root at openstack-dev:~# ip netns exec qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
>>>>>> Kernel IP routing table
>>>>>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>>>>>> 0.0.0.0         10.5.12.1       0.0.0.0         UG    0      0        0 qg-193bb8ee-f5
>>>>>> 10.5.12.0       0.0.0.0         255.255.255.0   U     0      0        0 qg-193bb8ee-f5
>>>>>> 192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 qr-59e69986-6e
>>>>>> root at openstack-dev:~# ip netns exec qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>>>>>> listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size 65535 bytes
>>>>>> ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1, length 28
>>>>>> 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui Unknown), length 28
>>>>>> 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3, length 28
>>>>>> 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3, length 28
>>>>>> 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3, length 28
>>>>>> 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3, length 28
>>>>>>
>>>>>> 6 packets captured
>>>>>> 6 packets received by filter
>>>>>> 0 packets dropped by kernel
>>>>>> root at openstack-dev:~#
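>>>>>>
>>>>>> (One more data point, in case it is useful: a sketch of checking whether
>>>>>> the metadata redirect is installed in the router namespace:
>>>>>>
>>>>>>   $ ip netns exec qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 \
>>>>>>       iptables -t nat -S | grep 169.254
>>>>>>
>>>>>> With the namespace proxy enabled there should be a REDIRECT rule sending
>>>>>> 169.254.169.254:80 to the local quantum-ns-metadata-proxy port.)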
>>>>>>
>>>>>>
>>>>>> Any help will be greatly appreciated.
>>>>>>
>>>>>> Thanks,
>>>>>> Balu
>>>>>>
>>>>>>
>>>>>> On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen <arosen at nicira.com> wrote:
>>>>>>
>>>>>>> Yup, if your host supports namespaces this can be done via the
>>>>>>> quantum-metadata-agent. The following setting is also required in your
>>>>>>> nova.conf: service_quantum_metadata_proxy=True
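>>>>>>>
>>>>>>> (i.e., something like this in nova.conf, where the shared secret is a
>>>>>>> placeholder that must match the one in metadata_agent.ini:
>>>>>>>
>>>>>>>   [DEFAULT]
>>>>>>>   service_quantum_metadata_proxy = True
>>>>>>>   quantum_metadata_proxy_shared_secret = <shared-secret>
>>>>>>> )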
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G <
>>>>>>> balamuruganvg at gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> In Grizzly, when using Quantum and overlapping IPs, does the metadata
>>>>>>>> service work? This wasn't working in Folsom.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Balu
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Mailing list: https://launchpad.net/~openstack
>>>>>>>> Post to     : openstack at lists.launchpad.net
>>>>>>>> Unsubscribe : https://launchpad.net/~openstack
>>>>>>>> More help   : https://help.launchpad.net/ListHelp
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>