Delegating and routing an IPv6 prefix to an instance
Hello OpenStack-Discuss,

I have a use case in which an instance/VM hosts, for example, an OpenVPN gateway which is to do some routing/networking of its own. For that purpose one would like a globally unique IPv6 prefix delegated and routed to it, which it can in turn hand out to its VPN clients. This prefix cannot and should not be carved out of the on-link network that is provided by Neutron and used to connect the instance itself. https://community.openvpn.net/openvpn/wiki/IPv6 has a section *Details: IPv6 routed block* explaining exactly this as one intended approach.

I am now wondering whether the existing DHCPv6 prefix delegation implemented in OpenStack is capable of providing such a prefix to an instance. Digging a little into what can be found online, I ran into this Etherpad doc: https://etherpad.opendev.org/p/neutron-kilo-prefix-delegation (linked from https://wiki.openstack.org/wiki/Neutron/IPv6/PrefixDelegation). There is a list of use cases, the second one being exactly what I described above:
[...]
Use cases:
We need to allocate addresses to ports from an external or provider network, and route them via Neutron routers.
We wish to allocate whole prefixes to devices (and their specific neutron port) on demand. A port must be authorised via the API for a prefix. The prefix could be issued to the device via PD (since the device has to discover the prefix it's been given).
[...]
But to my understanding, the spec used to implement the current IPv6 networking and prefix delegation mechanism also mentions this use case as a "limitation and future enhancement" - see: https://specs.openstack.org/openstack/neutron-specs/specs/liberty/ipv6-prefi...

Does anyone have any thoughts on this matter of dedicating a prefix and routing its traffic to a VM, not just to a subnet?

Regards

Christian
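For reference, the routed-block setup from that OpenVPN wiki page boils down to a server configuration along these lines (a sketch only; addresses are taken from the 2001:db8::/32 documentation range, not from any real deployment):

```
# server.conf fragment (illustrative addresses)
server-ipv6 2001:db8:0:1::/64      # on-link pool for VPN clients
push "route-ipv6 2000::/3"         # send clients' global IPv6 traffic through the VPN
# The *routed* block (e.g. 2001:db8:0:2::/64) must additionally be routed
# to this gateway by the upstream network - which is the part in question here.
```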
On 6/26/20 11:41 AM, Christian Rohmann wrote:
Hello OpenStack-Discuss,
Hi Christian,
I have a use case in which an instance/VM hosts, for example, an OpenVPN gateway which is to do some routing/networking of its own. For that purpose one would like a globally unique IPv6 prefix delegated and routed to it, which it can in turn hand out to its VPN clients. This prefix cannot and should not be carved out of the on-link network that is provided by Neutron and used to connect the instance itself.
https://community.openvpn.net/openvpn/wiki/IPv6 has a section *Details: IPv6 routed block* explaining exactly this as one intended approach.
I am now wondering whether the existing DHCPv6 prefix delegation implemented in OpenStack is capable of providing such a prefix to an instance. Digging a little into what can be found online, I ran into this Etherpad doc: https://etherpad.opendev.org/p/neutron-kilo-prefix-delegation (linked from https://wiki.openstack.org/wiki/Neutron/IPv6/PrefixDelegation).
The Neutron implementation of IPv6 PD doesn't support the use case you're describing, allocating an entire /64 to a device/Neutron port. The Neutron router can only do PD, then advertise the /64 it received on a downstream IPv6 subnet. While this does give the instance an IPv6 address that is globally unique, it's just the single address.

There is a neutron-vpnaas project, https://docs.openstack.org/neutron-vpnaas/latest/ and I've cc'd Dongcan Ye; he would know more about VPNaaS setup related to Neutron, I'm just not that familiar with it myself.

-Brian
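To make that distinction concrete (illustrative addresses only, from the 2001:db8::/32 documentation range): the downstream subnet gets the whole delegated /64, while the instance's port only gets a single /128 inside it:

```python
import ipaddress

# Illustrative delegated prefix and instance address (documentation range).
delegated = ipaddress.IPv6Network("2001:db8:1234:1::/64")  # /64 the router got via PD
instance = ipaddress.IPv6Address("2001:db8:1234:1::c0fe")  # single address on the port

# The instance's address lies inside the delegated /64, but the instance
# itself is not handed the prefix - only this one /128.
print(instance in delegated)  # True
print(ipaddress.IPv6Network(f"{instance}/128").subnet_of(delegated))  # True
```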
There is a list of use cases, the second one being exactly what I described above:
[...]
Use cases:
We need to allocate addresses to ports from an external or provider network, and route them via Neutron routers.
We wish to allocate whole prefixes to devices (and their specific neutron port) on demand. A port must be authorised via the API for a prefix. The prefix could be issued to the device via PD (since the device has to discover the prefix it's been given).
[...]
But to my understanding, the spec used to implement the current IPv6 networking and prefix delegation mechanism also mentions this use case as a "limitation and future enhancement" - see: https://specs.openstack.org/openstack/neutron-specs/specs/liberty/ipv6-prefi...
Does anyone have any thoughts on this matter of dedicating a prefix and routing its traffic to a VM, not just to a subnet?
Regards
Christian
Hi Christian, this is not a fully OpenStack-compliant implementation, but you might make it work as follows: create a port with several IPv6 addresses (so they are reserved in OSP) and with port security disabled for that port or for its network/subnet (this allows your VM to be reached on that port without the "spoofed"-traffic filtering that would otherwise drop packets addressed with IPs which are expected to come only FROM your port but not into it). Create the port and then assign some IP addresses to it, then again some, then again... 5 in a go to be safe, then sleep 60 sec and run "openstack port set --fixed-ip" again, or something like that... run it in a cycle :) Then that VM will have the exact IP addresses "by accident"; they might even form a smaller subnet :)
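A sketch of what that batching could look like (hypothetical: the /120 prefix, the port name, and the batch size are made up, and the printed `openstack port set` command is only indicative of the CLI call you'd run per batch):

```python
import ipaddress
import itertools

def fixed_ip_batches(prefix: str, batch_size: int = 5):
    """Yield batches of host addresses from `prefix`, e.g. to feed into
    repeated `openstack port set --fixed-ip ip-address=...` calls."""
    hosts = ipaddress.IPv6Network(prefix).hosts()
    while True:
        batch = [str(a) for a in itertools.islice(hosts, batch_size)]
        if not batch:
            break
        yield batch

# A /120 has 255 usable host addresses -> 51 batches of up to 5.
batches = list(fixed_ip_batches("2001:db8::/120"))
for batch in batches[:1]:  # one batch shown; a real loop would sleep between calls
    args = " ".join(f"--fixed-ip ip-address={a}" for a in batch)
    print(f"openstack port set {args} my-port")
```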
On 2020-06-26 23:14:11 +0300 (+0300), Ruslanas Gžibovskis wrote: [...]
Create port and later assign some ip addresses to it then again some, then again... 5 in a go to be safe, then sleep 60 sec and again openstack port set fixed-ip or smth like that... set it in a cycle :) [...]
Not to break out the math, but I'm gonna hafta break out the math. At that rate, you could assign a full ipv6 /64 network in roughly 117 *billion* (okay, US billion not UK billion, but still) years. Not exactly after the estimated heat death of the Universe, but let's just say you're going to need to move your servers out of this solar system pretty early in the process. -- Jeremy Stanley
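For what it's worth, that figure checks out as a back-of-the-envelope estimate if one assumes 5 assignments per second over a full /64 (an assumption on my part; the 5-per-60-seconds pace from the thread would be far slower still):

```python
# Rough check of the "117 billion years" figure: 2^64 addresses,
# assuming 5 assignments per second (the 60 s sleep would make it worse).
SECONDS_PER_YEAR = 365.25 * 24 * 3600
years = 2 ** 64 / 5 / SECONDS_PER_YEAR
print(f"{years:.3e} years")  # on the order of 1.17e+11, i.e. ~117 billion years
```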
Well, it can be a /120 :) not all of a /64.

On Fri, 26 Jun 2020, 23:29 Jeremy Stanley, <fungi@yuggoth.org> wrote:
[...]
Also, if it is a private cloud, maybe just dedicate a VLAN as a provider net and disable port security on that port; then there is no need to reserve IP addresses :)

On Sat, 27 Jun 2020, 00:20 Ruslanas Gžibovskis, <ruslanas@lpic.lt> wrote:
[...]
On 2020-06-26 20:16, Brian Haley wrote:
The Neutron implementation of IPv6 PD doesn't support the use case you're describing, allocating an entire /64 to a device/neutron port.
The Neutron router can only do PD, then advertise the /64 it received on a downstream IPv6 subnet. While this does give the instance an IPv6 address that is globally unique, it's just the single address.
Thanks for your reply and for taking the time to look into this!

It might be a bit of a leap, but would this feature not be similar to a floating public IP which I can assign to an instance in the IPv4 world? Certainly, with virtually unlimited globally unique IPv6 space, there is no need for this just to make the IPv6 prefixes an instance receives reachable - they could all have globally routed and reachable addresses if need be. But some sort of pool of "additional" <s>public</s> globally unique prefixes that can be requested, bound and routed to an instance would be nice - kind of like https://www.terraform.io/docs/providers/openstack/r/compute_floatingip_v2.ht... .

What would be the process of getting such a feature into a working specification, in case one was interested in implementing this?

Regards

Christian
On 6/29/20 11:10 AM, Christian Rohmann wrote:
On 2020-06-26 20:16, Brian Haley wrote:
The Neutron implementation of IPv6 PD doesn't support the use case you're describing, allocating an entire /64 to a device/neutron port.
The Neutron router can only do PD, then advertise the /64 it received on a downstream IPv6 subnet. While this does give the instance an IPv6 address that is globally unique, it's just the single address.
Thanks for your reply and the time to look into this!
It might be a bit of a leap, but would this feature not be similar to a floating public IP which I can assign to an instance in the IPv4 world? Certainly, with virtually unlimited globally unique IPv6 space, there is no need for this just to make the IPv6 prefixes an instance receives reachable - they could all have globally routed and reachable addresses if need be.
But some sort of pool of "additional" <s>public</s> globally unique prefixes that can be requested, bound and routed to an instance would be nice - kinda like https://www.terraform.io/docs/providers/openstack/r/compute_floatingip_v2.ht... .
We made a conscious decision to not support floating IPv6 in the base reference implementation, although it is available in one of the third-party drivers (Midonet?). Making the tenant IPv6 subnets reachable directly was instead the goal, which happens when PD is used.
What would be the process of getting such a feature into a working specification in case one was interested in implementing this?
It's outlined in the neutron docs, but basically it starts with a bug describing the feature you'd like to add; it's then discussed at the weekly meeting each Friday. https://docs.openstack.org/neutron/ussuri/contributor/policies/blueprints.ht...

-Brian
On 2020-06-29 19:55, Brian Haley wrote:
But some sort of pool of "additional" <s>public</s> globally unique prefixes that can be requested, bound and routed to an instance would be nice - kinda like https://www.terraform.io/docs/providers/openstack/r/compute_floatingip_v2.ht... .
We made a conscious decision to not support floating IPv6 in the base reference implementation, although it is available in one of the third-party drivers (Midonet?). Making the tenant IPv6 subnets reachable directly was instead the goal, which happens when PD is used.
I suppose you are talking about the fip64 feature - https://docs.midonet.org/docs/latest-en/operations-guide/content/fip64.html // https://docs.openstack.org/networking-midonet/latest/user/features.html ?

Just in case there might be a misunderstanding in what I described: a "floating IP" in the IPv4 world would translate to a "floating prefix" in my suggested case - taken out of a pool, with a limit on the number of prefixes a tenant might use/request, and with routing sending the traffic for that prefix to an interface. Certainly no single IPv6 /128 addresses should be "floating" around or be mapped to a fixed IPv4 address to make things even more complicated. It's just about being able to provide an instance with a global prefix of its own to do networky things.
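For illustration, the "floating prefix" allocation part of that idea might look like the sketch below (entirely hypothetical names and pool; nothing here is an existing Neutron API, and the hard part - routing the prefix to the instance's port - is exactly what is missing today):

```python
import ipaddress

# Hypothetical "floating prefix" pool: /64s handed out of a provider /48,
# analogous to floating IPs being allocated from an external network.
POOL = ipaddress.IPv6Network("2001:db8:100::/48")
_free = POOL.subnets(new_prefix=64)  # generator over 65536 candidate /64s

def allocate_floating_prefix() -> ipaddress.IPv6Network:
    """Hand out the next unused /64; routing it to the instance's port
    would be the part Neutron does not implement today."""
    return next(_free)

p1 = allocate_floating_prefix()
p2 = allocate_floating_prefix()
print(p1, p2)  # two distinct /64s from the pool
```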
What would be the process of getting such a feature into a working specification in case one was interested in implementing this?
It's outlined in the neutron docs, but basically it starts with a bug describing the feature you'd like to add, it's then discussed at a weekly meeting each Friday.
https://docs.openstack.org/neutron/ussuri/contributor/policies/blueprints.ht...
-Brian
Thanks for that immediate pointer. I shall discuss this with some folks here and might get back to writing one of those blueprints.

Regards

Christian
participants (4)

- Brian Haley
- Christian Rohmann
- Jeremy Stanley
- Ruslanas Gžibovskis