Re: [magnum] [neutron] [ovn] No inter-node pod-to-pod communication due to missing ACLs in OVN

Krzysztof Klimonda kklimonda at syntaxhighlighted.com
Tue Dec 15 16:14:29 UTC 2020


On Tue, Dec 15, 2020, at 16:59, Daniel Alvarez Sanchez wrote:
> Hi Chris, thanks for moving this here.
> 
> On Tue, Dec 15, 2020 at 4:22 PM Krzysztof Klimonda <kklimonda at syntaxhighlighted.com> wrote:
>> Hi,
>> 
>> This email is a follow-up to a discussion I've opened on the ovs-discuss ML[1] regarding the lack of TCP/UDP connectivity between pods deployed on a magnum-managed k8s cluster with the Calico CNI and IPIP tunneling disabled (the calico_ipv4pool_ipip label set to its default value of Off).
>> 
>> As a short introduction: during magnum testing in an Ussuri deployment with the ml2/ovn neutron driver, I noticed a lack of communication between pods deployed on different nodes when Calico is configured to *not* encapsulate traffic in an IPIP tunnel, but to route it directly between nodes. In theory this should work: magnum adds the defined pod network to the k8s nodes' ports' allowed_address_pairs[2], and then creates a security group allowing ICMP and TCP/UDP traffic between ports belonging to that security group[3]. This doesn't work with ml2/ovn, as TCP/UDP traffic between IP addresses in the pod network does not match the ACLs defined in OVN.
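
For reference, the rules magnum creates boil down to roughly the
following (a sketch of the CLI equivalent of [3], not the exact
template output; $SG stands for the cluster security group):

    openstack security group rule create --protocol icmp --remote-group $SG $SG
    openstack security group rule create --protocol tcp  --remote-group $SG $SG
    openstack security group rule create --protocol udp  --remote-group $SG $SG
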
>> 
>> I can't verify this behaviour under ml2/ovs for the next couple of weeks, as I'm taking time off for the holidays, but perhaps someone knows whether that specific use case (security group rules with remote groups, used together with allowed address pairs) is supposed to work, or whether magnum should use the pod network CIDR to allow traffic between nodes instead.
> 
> In ML2/OVN we're adding the allowed address pairs to the 'addresses' field only when the MAC address of the pair is the same as the port MAC [0].
> I think we can change the code to accomplish what you want (if it matches ML2/OVS, which I think it does) by adding all IP-MAC pairs of the allowed-address pairs to the 'addresses' column. E.g.:
> 
> addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ]    (right now it's just  addresses = [ MAC1 IP1 ])
> The port_security column will be kept as it is today.
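
If I'm reading [0] right, the change would look roughly like this (a
sketch with assumed variable and field names, not an actual patch):

    def build_addresses(port, ip_addresses):
        # Today, only pairs whose MAC equals the port's own MAC
        # contribute their IP to 'addresses'. The proposal: append
        # every allowed address pair as its own "MAC IP" entry, so
        # ovn-northd picks the pair IPs up when it builds the
        # Address_Set for the remote group.
        addresses = ['%s %s' % (port['mac_address'],
                                ' '.join(ip_addresses))]
        for pair in port.get('allowed_address_pairs', []):
            addresses.append('%s %s' % (pair['mac_address'],
                                        pair['ip_address']))
        return addresses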

How do the [ AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ] entries scale with the number of IP addresses set in allowed_address_pairs? Given that the default pod network is 10.100.0.0/16, will that generate 65k flows in OVS, or is it not a 1:1 mapping?

If ml2/ovs also has scaling issues when remote groups are used, perhaps magnum should switch to defining remote-ip rules in its security groups instead, even once the underlying issue in ml2/ovn is fixed?
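
i.e. something along these lines (a sketch; 10.100.0.0/16 being the
default pod CIDR, $SG the cluster security group):

    openstack security group rule create --protocol tcp \
        --remote-ip 10.100.0.0/16 $SG

(and the same for udp and icmp.)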

> 
> This way, when ovn-northd generates the Address_Set in the SB database for the corresponding remote group, the allowed-address pairs IP addresses will be added to it and honored by the security groups.
>  
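(For anyone following along: I assume this is the set visible via
'ovn-sbctl list Address_Set', and that with the change the pair
addresses would show up in its 'addresses' column next to the ports'
fixed IPs. Names and values below are hypothetical:)

    $ ovn-sbctl list Address_Set
    name      : "as_ip4_<security-group-uuid>"
    addresses : ["192.168.1.10", "10.100.0.0/16"]
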
> +Numan Siddique (nusiddiq at redhat.com) to confirm that this doesn't have any unwanted side effects.
> 
> [0] https://opendev.org/openstack/neutron/src/commit/6a8fa65302b45f32958e7fc2b73614715780b997/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L122-L125 
>> 
>> [1] https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html
>> [2] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml
>> [3] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038
>> 
>> -- 
>> Best Regards,
>>   - Chris
>> 