[Openstack] [Quantum] Metadata service route from a VM
Sylvain Bauza
sylvain.bauza at digimind.com
Tue Feb 26 08:20:01 UTC 2013
Hi Dan,
Thanks for your clear answer. I confirm that the 169.254.0.0/16 route was
working with my nova-network setup (FlatDHCP).
Since you mention Grizzly pushing a route to VMs, I guess it would be
possible to backport that feature to Folsom.
Do you have any idea which changes would be needed for it?
I'll take a look at dnsmasq and see if I can hardcode this.
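For reference, here is a minimal sketch of what such a hardcoded option could look like in dnsmasq's configuration. The addresses are assumptions for illustration (a DHCP port at 10.0.0.2 and a router at 10.0.0.1 on a hypothetical 10.0.0.0/24 subnet), so adjust them to your setup:

```conf
# Push a classless static route (DHCP option 121, RFC 3442) so VMs reach
# 169.254.169.254 via the DHCP server IP instead of a link-local route.
# Clients that honour option 121 ignore option 3 (router), so the
# default route must be repeated here as 0.0.0.0/0.
dhcp-option=121,169.254.169.254/32,10.0.0.2,0.0.0.0/0,10.0.0.1
```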
-Sylvain
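PS: until a proper backport exists, the image-side workaround I describe below in the thread could at least be automated at boot. A sketch, assuming a Debian-style guest using /etc/network/interfaces with a single eth0:

```conf
# Drop the link-local route installed from the stale DHCP lease;
# "|| true" keeps ifup from failing when the route is already absent.
auto eth0
iface eth0 inet dhcp
    post-up route del -net 169.254.0.0/16 dev eth0 || true
```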
On 26/02/2013 06:37, Dan Wendlandt wrote:
> Hi Sylvain,
>
> The answer here is that "it depends".
>
> If you are using Folsom + Quantum, the only supported mechanism for
> reaching the metadata server is via your default gateway, so VMs
> should not have specific routes to reach the metadata subnet (I
> believe this is also the case for nova-network, so I'm a bit surprised
> by your original comments in this thread about using the direct route
> with nova-network).
>
> In Grizzly, Quantum will support two different mechanisms for reaching
> metadata: one via the router (as before) and another via the DHCP
> server IP (with a route for 169.254.169.254/32 injected into the VM
> via DHCP). The latter supports metadata on networks that do not have
> a router provided by Quantum.
>
> Dan
>
> On Mon, Feb 25, 2013 at 8:36 AM, Sylvain Bauza
> <sylvain.bauza at digimind.com> wrote:
>
> Still no reply?
>
> I did the hack: I removed the 169.254.0.0/16 route from my
> images, but it is quite an ugly one.
> Could someone with an Open vSwitch/GRE setup please confirm that there
> is no route to create for metadata?
>
> Thanks,
> -Sylvain
>
> On 21/02/2013 11:33, Sylvain Bauza wrote:
>
> Anyone?
> I found out why a 'quantum-dhcp-agent restart' fixes the
> route: the lease is DHCPNAK'd at the next client refresh, and
> the VM then gets a fresh configuration excluding the
> 169.254.0.0/16 route.
>
> Community, I beg you to confirm that the 169.254.0.0/16 route
> should *not* be pushed to VMs, and that 169.254.169.254/32
> should be reached through the default route (i.e. the provider
> router's internal IP).
> If that's the case, I'll update all my images to remove that
> route. If not, something is wrong with my Quantum setup that I
> should fix.
>
> Thanks,
> -Sylvain
>
> On 20/02/2013 15:55, Sylvain Bauza wrote:
>
> Hi,
>
> Previously, using nova-network, all my VMs had:
>
> # route -n
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> 10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
> 169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
> 0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0
>
> Now, with Quantum, this setup seems incorrect: because of that
> route, the VM tries to resolve 169.254.169.254 directly via ARP
> (captured here on the network node):
>
> [root at toto ~]# curl http://169.254.169.254/
> curl: (7) couldn't connect to host
>
> sylvain at folsom02:~$ sudo tcpdump -i qr-f76e4668-fa -nn not ip6 and not udp and host 169.254.169.254 -e
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on qr-f76e4668-fa, link-type EN10MB (Ethernet), capture size 65535 bytes
> 15:47:46.009548 fa:16:3e:bf:0b:f6 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 169.254.169.254 tell 10.0.0.5, length 28
> 15:47:47.009076 fa:16:3e:bf:0b:f6 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 169.254.169.254 tell 10.0.0.5, length 28
>
> The only way I found to fix it is to remove the 169.254.0.0/16
> route on the VM (or, for a reason I don't understand, to
> restart quantum-dhcp-agent on the network node); then L3
> routing works correctly:
>
> [root at toto ~]# route del -net 169.254.0.0/16
> [root at toto ~]# curl http://169.254.169.254/
> 1.0
> 2007-01-19
> 2007-03-01
> 2007-08-29
> 2007-10-10
> 2007-12-15
> 2008-02-01
> 2008-09-01
> 2009-04-04
>
> sylvain at folsom02:~$ sudo tcpdump -i qg-f2397006-20 -nn not ip6 and not udp and host 10.0.0.5 and not port 22 -e
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on qg-f2397006-20, link-type EN10MB (Ethernet), capture size 65535 bytes
> 15:52:58.479234 fa:16:3e:e1:95:20 > e0:46:9a:2c:f4:7d, ethertype IPv4 (0x0800), length 74: 10.0.0.5.55428 > 192.168.1.71.8775: Flags [S], seq 3032859044, win 14600, options [mss 1460,sackOK,TS val 2548891 ecr 0,nop,wscale 5], length 0
> 15:52:58.480987 e0:46:9a:2c:f4:7d > fa:16:3e:e1:95:20, ethertype IPv4 (0x0800), length 74: 192.168.1.71.8775 > 10.0.0.5.55428: Flags [S.], seq 3888257357, ack 3032859045, win 14480, options [mss 1460,sackOK,TS val 16404712 ecr 2548891,nop,wscale 7], length 0
> 15:52:58.482211 fa:16:3e:e1:95:20 > e0:46:9a:2c:f4:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.5.55428 > 192.168.1.71.8775: Flags [.], ack 1, win 457, options [nop,nop,TS val 2548895 ecr 16404712], length 0
>
>
> I can't understand what's wrong with my setup. Could you
> help me? Otherwise I will have to add a post-up statement to
> all my images... :(
>
> Thanks,
> -Sylvain
>
>
>
>
>
>
>
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~