[magnum] [neutron] [ovn] No inter-node pod-to-pod communication due to missing ACLs in OVN

Slawek Kaplonski skaplons at redhat.com
Wed Dec 16 11:57:36 UTC 2020


Hi,

On Wed, Dec 16, 2020 at 12:23:02PM +0100, Daniel Alvarez Sanchez wrote:
> On Tue, Dec 15, 2020 at 10:57 PM Daniel Alvarez <dalvarez at redhat.com> wrote:
> 
> >
> >
> >
> > > On 15 Dec 2020, at 22:18, Slawek Kaplonski <skaplons at redhat.com> wrote:
> > >
> > > Hi,
> > >
> > >> On Tue, Dec 15, 2020 at 05:14:29PM +0100, Krzysztof Klimonda wrote:
> > >>> On Tue, Dec 15, 2020, at 16:59, Daniel Alvarez Sanchez wrote:
> > >>> Hi Chris, thanks for moving this here.
> > >>>
> > >>> On Tue, Dec 15, 2020 at 4:22 PM Krzysztof Klimonda <
> > kklimonda at syntaxhighlighted.com> wrote:
> > >>>> Hi,
> > >>>>
> > >>>> This email is a follow-up to a discussion I opened on the
> > ovs-discuss ML[1] regarding the lack of TCP/UDP connectivity between pods
> > deployed on a magnum-managed k8s cluster with the calico CNI and IPIP
> > tunneling disabled (the calico_ipv4pool_ipip label set to its default
> > value of Off).
> > >>>>
> > >>>> As a short introduction: during magnum testing in an ussuri
> > deployment with the ml2/ovn neutron driver, I noticed a lack of
> > communication between pods deployed on different nodes of a magnum
> > deployment, with calico configured to *not* encapsulate traffic in an
> > IPIP tunnel but to route it directly between nodes. In theory, magnum
> > adds the defined pod network to the k8s nodes' ports'
> > allowed_address_pairs[2] and then creates a security group allowing
> > ICMP and TCP/UDP traffic between ports belonging to that security
> > group[3]. This doesn't work with ml2/ovn, as TCP/UDP traffic between IP
> > addresses in the pod network does not match the ACLs defined in OVN.
> > >>>>
> > >>>> I can't verify this behaviour under ml2/ovs for the next couple of
> > weeks, as I'm taking them off for the holidays, but perhaps someone
> > knows if that specific use case (security group rules with remote
> > groups used together with allowed address pairs, sketched below) is
> > supposed to work, or whether magnum should instead use the pod network
> > CIDR to allow traffic between nodes.
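[To make that use case concrete, here is a minimal openstacksdk sketch of
roughly what the magnum templates in [2][3] set up; the cloud entry, port
name, security group name and pod CIDR are all hypothetical, and this only
approximates the heat templates, not magnum's actual code.]

    # Minimal sketch (openstacksdk) of the magnum-style setup; names,
    # IDs and the pod CIDR are hypothetical.
    import openstack

    conn = openstack.connect(cloud='mycloud')  # hypothetical cloud entry

    # Allow the whole pod network on the k8s node port via
    # allowed_address_pairs, roughly as the kubeminion template in [2] does.
    port = conn.network.find_port('k8s-minion-0-eth0')  # hypothetical name
    conn.network.update_port(
        port, allowed_address_pairs=[{'ip_address': '10.100.0.0/16'}])

    # Allow traffic between members of the same security group by using
    # the group itself as the remote group, roughly as in [3].
    sg = conn.network.find_security_group('k8s-cluster-sg')  # hypothetical
    for protocol in ('icmp', 'tcp', 'udp'):
        conn.network.create_security_group_rule(
            security_group_id=sg.id, direction='ingress',
            ethertype='IPv4', protocol=protocol, remote_group_id=sg.id)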
> > >>>
> > >>> In ML2/OVN we're adding the allowed address pairs to the 'addresses'
> > field only when the MAC address of the pair is the same as the port MAC [0].
> > >>> I think that we can change the code to accomplish what you want (if
> > it matches ML2/OVS, which I think it does) by adding all IP-MAC pairs of
> > the allowed-address pairs to the 'addresses' column. E.g.:
> > >>>
> > >>> addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ]    (right
> > now it's just  addresses = [ MAC1 IP1 ])
> > >>> The port_security column will be kept as it is today.
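[A minimal sketch of what that change could look like, loosely modelled on
the code referenced in [0]; the function name and exact field handling are
illustrative only, not the actual Neutron implementation:]

    # Illustrative sketch of the proposal, not the code at [0]: build the
    # OVN Logical_Switch_Port 'addresses' column from the port's own
    # MAC/IPs *plus* one entry per allowed-address pair.
    def get_ovn_port_addresses(port):
        # Base entry: "MAC1 IP1 IP2 ..." for the port's MAC and fixed IPs.
        entry = [port['mac_address']]
        entry += [ip['ip_address'] for ip in port['fixed_ips']]
        addresses = [' '.join(entry)]

        # Proposed: add "AP_MAC AP_IP" for every pair (falling back to the
        # port MAC when the pair has none), so ovn-northd also puts the
        # pair IPs into the Address_Set of any remote group the port is in.
        for pair in port.get('allowed_address_pairs', []):
            mac = pair.get('mac_address') or port['mac_address']
            addresses.append('%s %s' % (mac, pair['ip_address']))
        return addresses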
> > >>
> > >> How does [AP_MAC1 AP_IP1 AP_MAC2 AP_IP2] scale with the number of IP
> > addresses set in allowed_address_pairs? Given that the default pod
> > network is 10.100.0.0/16, will that generate 65k flows in ovs, or is it
> > not a 1:1 mapping?
> >
> > It will use conjunctive flows, but yes, it will be huge no matter what.
> > If we follow the approach of adding match conditions to the ACLs for
> > each address pair, it is going to be even worse when expanded by
> > ovn-controller.
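[Back-of-the-envelope only, with assumed numbers rather than measurements,
of why conjunctive flows help but don't make the problem small:]

    # Assumed numbers, not measurements: conjunctive matches keep the
    # OpenFlow table roughly additive across match dimensions instead of
    # multiplicative, but a /16 worth of remote addresses is still huge.
    remote_ips = 2 ** 16   # every address in a 10.100.0.0/16 pod network
    rules = 4              # e.g. a few ICMP/TCP/UDP rules in the group

    naive = remote_ips * rules        # one flow per (remote IP, rule)
    conjunctive = remote_ips + rules  # one flow per set member + verdicts

    print(naive, conjunctive)         # 262144 vs 65540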
> > >>
> > >> If ml2/ovs is also having scaling issues when remote groups are used,
> > perhaps magnum should switch to defining remote-ip in its security groups
> > instead, even if the underlying issue on ml2/ovn is fixed?
> > >
> > > IIRC Kuryr already moved to such a solution (sketched below), as they
> > > had problems with scaling on ML2/OVS when remote_group IDs were used.
> >
> 
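[For reference, a minimal sketch of that remote-ip alternative; it mirrors
the earlier openstacksdk sketch, with remote_ip_prefix swapped in for
remote_group_id, and the names are again hypothetical.]

    # Hypothetical names; remote_ip_prefix rules match the pod CIDR
    # directly, so no remote-group Address_Set has to track member ports.
    import openstack

    conn = openstack.connect(cloud='mycloud')  # hypothetical cloud entry
    sg = conn.network.find_security_group('k8s-cluster-sg')  # hypothetical
    for protocol in ('icmp', 'tcp', 'udp'):
        conn.network.create_security_group_rule(
            security_group_id=sg.id, direction='ingress',
            ethertype='IPv4', protocol=protocol,
            remote_ip_prefix='10.100.0.0/16')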
> @Slaweq, ML2/OVS accounts for allowed address pairs in remote security
> groups but not for FIPs, right? I wonder why the distinction. The
> documentation is not clear, but I'm certain that FIPs are not accounted
> for by remote groups.

Right. FIPs aren't added to the list of allowed IPs in the ipset.

> 
> If we decide to go ahead and implement this in ML2/OVN, the same thing
> can be applied for FIPs by adding the FIP to the 'addresses' field, but
> there might be scaling issues.
> 
> 
> > That’s right. Remote groups are expensive in any case.
> >
> > Mind opening a launchpad bug for OVN though?
> >
> > Thanks!
> > >
> > >>
> > >>>
> > >>> This way, when ovn-northd generates the Address_Set in the SB
> > database for the corresponding remote group, the allowed-address pairs'
> > IP addresses will be added to it and honored by the security groups.
> > >>>
> > >>> +Numan Siddique (nusiddiq at redhat.com) to confirm that this
> > doesn't have any unwanted side effects.
> > >>>
> > >>> [0]
> > https://opendev.org/openstack/neutron/src/commit/6a8fa65302b45f32958e7fc2b73614715780b997/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L122-L125
> > >>>>
> > >>>> [1]
> > https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html
> > >>>> [2]
> > https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml
> > >>>> [3]
> > https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038
> > >>>>
> > >>>> --
> > >>>> Best Regards,
> > >>>>  - Chris
> > >>>>
> > >
> > > --
> > > Slawek Kaplonski
> > > Principal Software Engineer
> > > Red Hat
> > >
> >

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat
