[magnum] [neutron] [ovn] No inter-node pod-to-pod communication due to missing ACLs in OVN
Hi,

This email is a follow-up to a discussion I've opened on the ovs-discuss ML[1] regarding a lack of TCP/UDP connectivity between pods deployed on a magnum-managed k8s cluster with the calico CNI and IPIP tunneling disabled (the calico_ipv4pool_ipip label set to its default value of Off).

As a short introduction: while testing magnum in an ussuri deployment with the ml2/ovn neutron driver, I noticed a lack of communication between pods deployed on different nodes when calico is configured to *not* encapsulate traffic in an IPIP tunnel, but route it directly between nodes. In theory, magnum adds the defined pod network to the k8s nodes' ports' allowed_address_pairs[2], and a security group is then created allowing ICMP and TCP/UDP traffic between ports belonging to that security group[3]. This doesn't work with ml2/ovn, as TCP/UDP traffic between IP addresses in the pod network does not match the ACLs defined in OVN.

I can't verify this behaviour under ml2/ovs for the next couple of weeks, as I'm taking them off for holidays, but perhaps someone knows whether that specific use case (security group rules with remote groups used together with allowed address pairs) is supposed to work, or whether magnum should use the pod network CIDR to allow traffic between nodes instead.

[1] https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html
[2] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb0...
[3] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb0...

-- Best Regards, - Chris
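For concreteness, here is a minimal sketch of the setup described above, written with openstacksdk; the cloud name, port name, and the explicit loop over protocols are illustrative assumptions rather than Magnum's actual code from [2]/[3]:

import openstack

conn = openstack.connect(cloud="mycloud")  # assumes a matching clouds.yaml entry
POD_CIDR = "10.100.0.0/16"                 # Magnum's default pod network

# Allow the whole pod network on a k8s node port, roughly what [2] does per node.
node_port = conn.network.find_port("k8s-node-0")
conn.network.update_port(
    node_port,
    allowed_address_pairs=[{"ip_address": POD_CIDR}],
)

# Security group allowing ICMP/TCP/UDP between ports that are members of the
# same group (remote_group_id pointing at the group itself), roughly what [3] does.
sg = conn.network.create_security_group(name="k8s-cluster-sg")
for proto in ("icmp", "tcp", "udp"):
    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction="ingress",
        ethertype="IPv4",
        protocol=proto,
        remote_group_id=sg.id,
    )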
Hi, On Tue, Dec 15, 2020 at 04:11:25PM +0100, Krzysztof Klimonda wrote:
I can't verify this behaviour under ml2/ovs for the next couple of weeks, as I'm taking them off for holidays, but perhaps someone knows whether that specific use case (security group rules with remote groups used together with allowed address pairs) is supposed to work, or whether magnum should use the pod network CIDR to allow traffic between nodes instead.
Security group rules with remote groups should work with allowed address pairs for ML2/OVS. Because of that we even have a note in our docs that you shouldn't add e.g. 0.0.0.0/0 as an allowed address pair for one port, as it would effectively open all your traffic to all your ports which are using the same SG. On the other hand, we have known scalability issues with security groups that use remote group IDs as references in ML2/OVS. If you have many ports which are using such a group, every time a new port is added all other ports have to be updated to add the new IP address to the ipset (or OpenFlow rules), and that may take a long time. So using e.g. CIDRs in SG rules works better for sure.
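To illustrate the fan-out described above with a toy model (not Neutron code): with a remote_group rule, the firewall state of every existing member port references the IPs of all other members, so adding one port forces a refresh on each of them, while a plain CIDR rule leaves existing ports untouched.

def remote_group_updates(existing_ports: int) -> int:
    """Ports whose ipset/OpenFlow state must be refreshed when one new port
    joins a security group that rules reference via remote_group_id."""
    return existing_ports  # every existing member has to learn the new IP

def cidr_rule_updates(existing_ports: int) -> int:
    """With a remote_ip_prefix (CIDR) rule nothing existing changes; only the
    new port gets its own rules programmed."""
    return 0

for n in (10, 100, 1000):
    print(f"{n} existing ports: remote_group={remote_group_updates(n)} refreshes, "
          f"cidr={cidr_rule_updates(n)}")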
-- Slawek Kaplonski Principal Software Engineer Red Hat
Hi Chris, thanks for moving this here. On Tue, Dec 15, 2020 at 4:22 PM Krzysztof Klimonda <kklimonda@syntaxhighlighted.com> wrote:
As a short introduction: while testing magnum in an ussuri deployment with the ml2/ovn neutron driver, I noticed a lack of communication between pods deployed on different nodes when calico is configured to *not* encapsulate traffic in an IPIP tunnel, but route it directly between nodes. In theory, magnum adds the defined pod network to the k8s nodes' ports' allowed_address_pairs[2], and a security group is then created allowing ICMP and TCP/UDP traffic between ports belonging to that security group[3]. This doesn't work with ml2/ovn, as TCP/UDP traffic between IP addresses in the pod network does not match the ACLs defined in OVN.
I can't verify this behaviour under ml2/ovs for the next couple of weeks, as I'm taking them off for holidays, but perhaps someone knows whether that specific use case (security group rules with remote groups used together with allowed address pairs) is supposed to work, or whether magnum should use the pod network CIDR to allow traffic between nodes instead.
In ML2/OVN we're adding the allowed address pairs to the 'addresses' field only when the MAC address of the pair is the same as the port MAC [0]. I think that we can change the code to accomplish what you want (if it matches ML2/OVS, which I think it does) by adding all IP-MAC pairs of the allowed-address pairs to the 'addresses' column, e.g.:

addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ]

(right now it's just addresses = [ MAC1 IP1 ]); the port_security column will be kept as it is today.

This way, when ovn-northd generates the Address_Set in the SB database for the corresponding remote group, the allowed-address pairs' IP addresses will be added to it and honored by the security groups.

+Numan Siddique <nusiddiq@redhat.com> to confirm that this doesn't have any unwanted side effects.

[0] https://opendev.org/openstack/neutron/src/commit/6a8fa65302b45f32958e7fc2b73...
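To make the proposal concrete, here is an illustrative sketch, in plain Python rather than the actual Neutron/OVN driver code behind [0], of how the Logical_Switch_Port 'addresses' entries could be assembled:

def build_lsp_addresses(port):
    """Build the OVN NB Logical_Switch_Port 'addresses' value for a Neutron
    port dict, including allowed-address pairs as proposed above."""
    # Current behaviour: only the port's own MAC and fixed IPs.
    entries = [" ".join([port["mac_address"]] +
                        [ip["ip_address"] for ip in port["fixed_ips"]])]
    # Proposed: also add every allowed-address pair as "<mac> <ip>", falling
    # back to the port MAC when the pair carries no explicit MAC.
    for pair in port.get("allowed_address_pairs", []):
        mac = pair.get("mac_address") or port["mac_address"]
        entries.append(f"{mac} {pair['ip_address']}")
    return entries

port = {
    "mac_address": "fa:16:3e:aa:bb:cc",
    "fixed_ips": [{"ip_address": "192.0.2.10"}],
    "allowed_address_pairs": [{"ip_address": "192.0.2.50"}],
}
print(build_lsp_addresses(port))
# ['fa:16:3e:aa:bb:cc 192.0.2.10', 'fa:16:3e:aa:bb:cc 192.0.2.50']

Whether CIDR-valued pairs such as 10.100.0.0/16 can be expressed in 'addresses' at all is a separate question, raised later in the thread.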
On Tue, Dec 15, 2020 at 4:59 PM Daniel Alvarez Sanchez <dalvarez@redhat.com> wrote:
This way, when ovn-northd generates the Address_Set in the SB database for the corresponding remote group, the allowed-address pairs' IP addresses will be added to it and honored by the security groups.
+Numan Siddique <nusiddiq@redhat.com> to confirm that this doesn't have any unwanted side effects.
On top of this, I'd say that if the behavior with ML2/OVN is different from ML2/OVS, we'll also need to add testing coverage in Neutron for allowed address pairs and remote SGs used simultaneously.
On Tue, Dec 15, 2020, at 16:59, Daniel Alvarez Sanchez wrote:
In ML2/OVN we're adding the allowed address pairs to the 'addresses' field only when the MAC address of the pair is the same as the port MAC [0]. I think that we can change the code to accomplish what you want (if it matches ML2/OVS, which I think it does) by adding all IP-MAC pairs of the allowed-address pairs to the 'addresses' column, e.g.:
addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ] (right now it's just addresses = [ MAC1 IP1 ]); the port_security column will be kept as it is today.
How does [AP_MAC1 AP_IP1 AP_MAC2 AP_IP2] scale with the number of IP addresses set in allowed_address_pairs? Given that the default pod network is 10.100.0.0/16, will that generate 65k flows in OVS, or is it not a 1:1 mapping?

If ml2/ovs also has scaling issues when remote groups are used, perhaps magnum should switch to defining remote-ip in its security groups instead, even if the underlying issue in ml2/ovn is fixed?
-- Best Regards, - Chris
Hi, On Tue, Dec 15, 2020 at 05:14:29PM +0100, Krzysztof Klimonda wrote:
If ml2/ovs also has scaling issues when remote groups are used, perhaps magnum should switch to defining remote-ip in its security groups instead, even if the underlying issue in ml2/ovn is fixed?
IIRC Kuryr already moved to such a solution, as they had problems with scaling on ML2/OVS when remote_group IDs were used.
-- Slawek Kaplonski Principal Software Engineer Red Hat
On 15 Dec 2020, at 22:18, Slawek Kaplonski <skaplons@redhat.com> wrote:
How does [AP_MAC1 AP_IP1 AP_MAC2 AP_IP2] scale with the number of IP addresses set in allowed_address_pairs? Given that the default pod network is 10.100.0.0/16, will that generate 65k flows in OVS, or is it not a 1:1 mapping?
It will use conjunctive flows, but yes, it will be huge no matter what. If we follow the approach of adding match conditions to the ACLs for each address pair, it is going to be even worse when expanded by ovn-controller.
IIRC Kuryr already moved to such a solution, as they had problems with scaling on ML2/OVS when remote_group IDs were used.
That's right. Remote groups are expensive in any case. Mind opening a Launchpad bug for OVN, though? Thanks!
On Tue, Dec 15, 2020 at 10:57 PM Daniel Alvarez <dalvarez@redhat.com> wrote:
@Slaweq, ML2/OVS accounts for allowed address pairs for remote security groups but not for FIPs, right? I wonder why the distinction. The documentation is not clear, but I'm certain that FIPs are not accounted for by remote groups.

If we decide to go ahead and implement this in ML2/OVN, the same thing can be applied for FIPs by adding the FIP to the 'addresses' field, but there might be scaling issues.
Hi, On Wed, Dec 16, 2020 at 12:23:02PM +0100, Daniel Alvarez Sanchez wrote:
@Slaweq, ML2/OVS accounts for allowed address pairs for remote security groups but not for FIPs, right? I wonder why the distinction. The documentation is not clear, but I'm certain that FIPs are not accounted for by remote groups.
Right. FIPs aren't added to the list of allowed IPs in the ipset.
-- Slawek Kaplonski Principal Software Engineer Red Hat
Hello,

I have the same issue as Krzysztof. I did some more digging and it seems that:
- Magnum adds a CIDR network to allowed_address_pairs (10.100.0.0/16 by default)
- OVN does not support adding a CIDR to the OVN NB LSP addresses field (which makes it at least harder to reach feature parity with ML2/OVS in that sense)

I've been able to work around this by changing the Magnum code to add additional SG rules that pass traffic with remote_ip 10.100.0.0/16 ( https://review.opendev.org/c/openstack/magnum/+/773923/1/magnum/drivers/k8s_... ); a sketch of the equivalent API calls is shown after this message. Unfortunately, disabling allowed_address_pairs (which I wanted to propose in the same change) results in 10.100.0.0/16 not being added to the OVN NB LSP port_security field, and then it stops working. Are there some additional SG entries needed that might allow that traffic (to facilitate disabling allowed_address_pairs and improve scalability)?

I'll post another thread on ovs-discuss to discuss whether adding CIDRs to the addresses field as a feature is technically feasible.

Michal
-- Michał Nasiadka mnasiadka@gmail.com
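For illustration, a minimal sketch with openstacksdk of the remote_ip_prefix-based rules the workaround above relies on; the security group name and cloud name are assumptions, not taken from the linked Magnum change:

import openstack

conn = openstack.connect(cloud="mycloud")
POD_CIDR = "10.100.0.0/16"

sg = conn.network.find_security_group("k8s-cluster-sg")
for proto in ("icmp", "tcp", "udp"):
    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction="ingress",
        ethertype="IPv4",
        protocol=proto,
        remote_ip_prefix=POD_CIDR,  # pod network CIDR instead of remote_group_id
    )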
participants (5)
- Daniel Alvarez
- Daniel Alvarez Sanchez
- Krzysztof Klimonda
- Michał Nasiadka
- Slawek Kaplonski