IPv6 deployment on OpenStack
Hello. We are progressively adding support for IPv6 at my company. We decided to use SLAAC only for laptops, phones, … since DHCPv6 isn't supported on Android, and RDNSS support keeps improving. We are now planning our deployment on OpenStack. We already know that we'll rely only on Neutron, but we haven't yet decided between DHCPv6 and SLAAC. Do you have any arguments for one of these for VMs? Thanks, Marc-Antoine.
Hi, On Monday, 7 March 2022 02:36:24 CET Marc-Antoine Godde wrote:
With SLAAC you need to have your network connected to a router in Neutron, and you can only configure the IP address on the VM. With DHCPv6 you can configure other things as well, like static routes, etc. Neutron supports DHCPv6 in both the stateful and stateless variants. With stateless, you use RA for address configuration and a DHCP server for the other configuration. Please see [1] for more details. [1] https://docs.openstack.org/neutron/latest/admin/config-ipv6.html#address-mod... -- Slawek Kaplonski Principal Software Engineer Red Hat
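As a rough sketch, these modes are selected per subnet with the openstack CLI via ipv6_ra_mode and ipv6_address_mode; the network name, subnet names and documentation prefixes below are placeholders, not values from the thread:

    # SLAAC: radvd in the Neutron router advertises the prefix; no DHCP options.
    openstack subnet create --ip-version 6 \
        --ipv6-ra-mode slaac --ipv6-address-mode slaac \
        --network demo-net --subnet-range 2001:db8:1::/64 demo-slaac

    # DHCPv6 stateless: address from the RA, extra options (DNS, routes, ...)
    # from the DHCP server run by the DHCP agent.
    openstack subnet create --ip-version 6 \
        --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless \
        --network demo-net --subnet-range 2001:db8:2::/64 demo-stateless

    # DHCPv6 stateful: both the address and the options come from the DHCP server.
    openstack subnet create --ip-version 6 \
        --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful \
        --network demo-net --subnet-range 2001:db8:3::/64 demo-stateful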
Hello, Thanks for your answer. If I'm correct, we can just use a virtual router with SLAAC, since radvd can handle RS and emit RA (with support for RFC 6106), right? More generally, aren't we supposed to have a virtual router every time, even with DHCPv6 (stateless and stateful), to answer RS? I have to admit that I'm not very familiar at the moment with the implementation of these RFCs in OpenStack. Currently, we prefer the idea of adding IPv6 through SLAAC to have a uniform network. If we do so, we'd like to avoid sending RAs from our physical router to limit its load. Yet, we don't have any other arguments to support this choice. Do you have any recommendations on what to do in the latest versions of OpenStack? What is usually done? Thanks, Marc-Antoine
Hi, On Monday, 7 March 2022 10:36:30 CET Marc-Antoine Godde wrote:
Yes, a virtual router created in Neutron is enough there. It will spawn radvd in the qrouter namespace and send RAs to the VMs. Please note that Neutron doesn't support privacy extensions [1], so you will need to make sure they are disabled on your VMs.
TBH I don't have such experience. That's more a question for operators of OpenStack.
[1] https://datatracker.ietf.org/doc/html/rfc4941 -- Slawek Kaplonski Principal Software Engineer Red Hat
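A minimal sketch of disabling the privacy extensions mentioned above on a Linux guest, assuming a distribution that reads sysctl.d drop-ins (some cloud images ship their own defaults, so check for conflicting files):

    # Disable RFC 4941 temporary addresses so the guest keeps the SLAAC
    # address that Neutron has allocated for its port (run inside the VM).
    cat <<'EOF' | sudo tee /etc/sysctl.d/90-no-ipv6-privacy.conf
    net.ipv6.conf.all.use_tempaddr = 0
    net.ipv6.conf.default.use_tempaddr = 0
    EOF
    sudo sysctl --system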
Hi, Here’s what we’ve done. We created a network:

  Name: ipv6-testing-network
  ID: 9d5ca309-1861-4422-bcff-8818f9762a6f
  Project ID: 653f5a2e60d34768a8629e5d4fca0738
  Status: Active
  Admin State: UP
  Shared: Yes
  External Network: Yes
  MTU: 1500
  Provider Network: Network Type: vlan, Physical Network: vlan, Segmentation ID: 51

We created a subnet:

  Name: ipv6-testing-v6
  ID: 763771d4-b9d7-419a-ba04-97ce3abaf152
  Project ID: 653f5a2e60d34768a8629e5d4fca0738
  Network Name: ipv6-testing-network
  Network ID: 9d5ca309-1861-4422-bcff-8818f9762a6f
  Subnet Pool: None
  IP Version: IPv6
  CIDR: xxxx:xxxx:2f1:aaaa::/64
  IP Allocation Pools: xxxx:xxxx:2f1:aaaa::2 - xxxx:xxxx:2f1:aaaa:ffff:ffff:ffff:ffff
  Gateway IP: xxxx:xxxx:2f1:aaaa::1
  DHCP Enabled: Yes
  IPv6 Address Configuration Mode: SLAAC: Address discovered from OpenStack Router
  Additional Routes: None
  DNS Name Servers: None

We created Ubuntu and Debian instances. According to Horizon, the instance IPv6 address is xxxx:xxxx:2f1:aaaa:f816:3eff:fe6d:c41a, yet the instance only has a link-local address, fe80::f816:3eff:fe6d:c41a/64. tcpdump shows no Router Advertisements. We tried with and without adding a router to the network in Horizon. ICMPv6 is allowed in ingress from ::/0. We checked on the controllers, the computes and in the Neutron containers: systemctl showed no instance of radvd. Maybe we checked incorrectly... Do you have any suggestions? I should add that we are running OpenStack Ussuri deployed with OpenStack-Ansible. Thanks, Marc-Antoine
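For reference, a rough openstack CLI equivalent of the setup described above, assuming the same names, VLAN ID and (masked) prefix; "SLAAC: Address discovered from OpenStack Router" in Horizon corresponds to setting both ipv6_ra_mode and ipv6_address_mode to slaac:

    # External, shared provider network on VLAN 51.
    openstack network create --external --share \
        --provider-network-type vlan --provider-physical-network vlan \
        --provider-segment 51 ipv6-testing-network

    # IPv6 subnet where a Neutron router is expected to advertise the prefix.
    openstack subnet create --ip-version 6 \
        --ipv6-ra-mode slaac --ipv6-address-mode slaac \
        --network ipv6-testing-network \
        --subnet-range xxxx:xxxx:2f1:aaaa::/64 \
        --gateway xxxx:xxxx:2f1:aaaa::1 ipv6-testing-v6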
Hi Marc-Antoine, See inline... On 3/8/22 11:18, Marc-Antoine Godde wrote:
So this is an external provider network connected to your datacenter network, correct? In the case Slawek was describing I believe he was talking about an internal private network, which, when a neutron router is attached, will trigger radvd to be spawned, etc. In this case VMs booted on this network should be seeing RAs from your datacenter router, if it's sending them. If it's not, that would explain why they only have a link-local IPv6 address, since the neutron router will not spawn radvd to run on the external network. BTW, I'm trying to compare this to my local setup, but since I'm not running Horizon I'm just using 'openstack network show...' and 'openstack subnet show...' output, which is slightly different but looks to match what you're doing. Is your plan to have private IPv6 subnets that are then routed to your external network, or is this just a test? -Brian
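This also explains why systemctl showed nothing earlier: the L3 agent forks radvd itself inside the qrouter namespace, it is not a systemd service. A quick check, assuming access to the node (or, with OpenStack-Ansible, the container) running the L3 agent; the router ID and config path below are placeholders that may vary by deployment:

    # Confirm the router namespace exists.
    ip netns list | grep qrouter

    # radvd is a child of neutron-l3-agent, so look for the process directly;
    # each instance points at a per-router config, something like
    # /var/lib/neutron/ra/<router-id>.radvd.conf (path may differ).
    ps -ef | grep [r]advd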
Hello, Indeed, this network is connected to our physical network (VLAN 51 for testing); xxxx:xxxx:2f1:aaaa::1 is an interface on our physical router. Finally, we successfully started radvd by adding a network interface on the subnet to a virtual router in OpenStack. This gave IPs to the VMs, and they were able to communicate with each other. Obviously, this network topology doesn't make any sense, since we can't route traffic outside; it was just for testing. Now the goal is to route the traffic of the VMs. I see two paradigms. In the first one, we use our physical router to send RAs directly to the VMs. In the second one, we use a private subnet (xxxx:xxxx:2f1:bbbb::/64 for instance) on a non-external network in OpenStack; we add a virtual router to that subnet, so we now have radvd, and we use that router to route traffic to an external network in OpenStack. Which is best? Marc-Antoine
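A rough sketch of that second paradigm with the openstack CLI; the network, subnet and router names and the documentation prefix are placeholders, and whether the private prefix is actually reachable from outside then depends on how it is routed (for example via prefix delegation, which comes up later in the thread):

    # Internal (non-external) network with a SLAAC subnet; attaching it to a
    # router is what makes the L3 agent spawn radvd for it.
    openstack network create ipv6-private-net
    openstack subnet create --ip-version 6 \
        --ipv6-ra-mode slaac --ipv6-address-mode slaac \
        --network ipv6-private-net \
        --subnet-range 2001:db8:bbbb::/64 ipv6-private-subnet

    # Virtual router with its gateway on the external network and an
    # interface on the private subnet.
    openstack router create ipv6-router
    openstack router set --external-gateway ipv6-testing-network ipv6-router
    openstack router add subnet ipv6-router ipv6-private-subnet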
That really depends on whether you want OpenStack to play an active role. If the VMs are only connected to the provider/external network on VLAN 51, then SLAAC should happen without OpenStack being involved. If you tcpdump on those VMs, you should have seen the RAs or some kind of traffic arrive. If the VMs are getting the RAs from the qrouter in OpenStack, I do believe that your gateway might become the link-local address of that qrouter. This means that the qrouter is now processing the off-subnet traffic for those VMs, which might not be the expected flow. On Tue, Mar 8, 2022 at 9:14 PM Marc-Antoine Godde <marc-antoine.godde@viarezo.fr> wrote:
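One way to see which router the VMs are actually listening to is to look at their default route: with RAs coming from the qrouter, the next hop is that namespace's link-local address rather than the datacenter router's. A small check inside a guest; the address and interface name in the example output are illustrative only:

    # IPv6 default route learned from RAs.
    ip -6 route show default
    # e.g.  default via fe80::f816:3eff:fe12:3456 dev ens3 proto ra ...

    # Compare that link-local next hop with the Neutron router port's MAC
    # (and the fe80:: address derived from it) or with the datacenter
    # router's interface address to tell the two cases apart.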
Hello, Thanks for this information. Indeed, we were getting a link-local address for the gateway with the OpenStack router. Even if it's contrary to the RFC, do you know if it's possible to send RAs from OpenStack on behalf of our datacenter router? That way, we could limit the load on our datacenter router. Thanks, Marc-Antoine.
Hi, On 3/9/22 10:01, Marc-Antoine Godde wrote:
From my memory of the RFCs, the link-local address of the sender is important in the RA message, but I could be remembering it wrong, or there could be an option to override that. I can't say I've seen it done like this. Is the load really that much to send periodic RAs and respond to RSs? Most routers have supported this for 15+ years by now, so they should be able to handle it. -Brian
Hi, On 3/8/22 21:08, Marc-Antoine Godde wrote:
This is going to be the quickest and easiest way to do this - having the VMs directly attached to your infrastructure and having them create a SLAAC address based on the RAs from that router.
This is possible, but requires using prefix delegation so that the private network gets an IPv6 prefix that is routable in your datacenter. This is described on the docs page at [0], but it does require that your infrastructure supports IPv6 PD. -Brian [0] https://docs.openstack.org/neutron/latest/admin/config-ipv6.html
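A hedged sketch of what the prefix-delegation variant can look like, based on [0]; the option and flag names below should be double-checked against the deployed Neutron and client releases, and the neutron.conf location depends on the deployment tool:

    # neutron.conf (server / L3 agent side), per the prefix delegation
    # section of [0]:
    #   [DEFAULT]
    #   ipv6_pd_enabled = True

    # Subnet created without a fixed prefix; Neutron requests one from the
    # upstream delegating router once the subnet is attached to a router.
    openstack subnet create --ip-version 6 \
        --ipv6-ra-mode slaac --ipv6-address-mode slaac \
        --use-prefix-delegation --network ipv6-private-net pd-subnet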
Hello, Here is a little feedback on what we did. We kept our VMs connected to a VLAN network and activated SLAAC on the corresponding IRB on our router. We then configured the subnet with "SLAAC for external router": the VMs receive RAs (and RDNSS). They now have an IP, and the gateway is a link-local address of our datacenter router. Everything is working fine. I have to add that we had some issues: we couldn't ping the VMs from our router, for instance. We added two rules to the default security group (ingress and egress, all ICMP on ::/0), and now it is working fine. As you said, this paradigm is the simplest one and works just fine. We'll continue our testing with this. For everything, thank you. Marc-Antoine
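For completeness, a rough CLI equivalent of that final configuration; the names mirror the thread, "SLAAC for external router" corresponds to setting only the address mode (so Neutron itself does not advertise the prefix), and the two rules are the ICMPv6 allow rules mentioned above (use the security group ID if the name "default" is ambiguous across projects):

    # Addresses come from the datacenter router's RAs: ipv6_address_mode is
    # set to slaac, ipv6_ra_mode is left unset.
    openstack subnet create --ip-version 6 \
        --ipv6-address-mode slaac \
        --network ipv6-testing-network \
        --subnet-range xxxx:xxxx:2f1:aaaa::/64 \
        --gateway xxxx:xxxx:2f1:aaaa::1 ipv6-testing-v6

    # Allow ICMPv6 in both directions in the default security group.
    openstack security group rule create --ethertype IPv6 \
        --protocol ipv6-icmp --ingress --remote-ip ::/0 default
    openstack security group rule create --ethertype IPv6 \
        --protocol ipv6-icmp --egress --remote-ip ::/0 default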
participants (4)
- Brian Haley
- Laurent Dumont
- Marc-Antoine Godde
- Slawek Kaplonski