[VICTORIA] Not working SNAT over VXLAN?
laurentfdumont at gmail.com
Wed Mar 9 02:20:08 UTC 2022
Any overlay inside the VXLAN overlay should work. Since it encapsulates the
actual pod/service traffic, OpenStack has no clue and happily forwards the
traffic, since it matches the spec (MAC + IP) of the port.
Using only security groups will not let you spoof traffic or act as a router.
- If you want to keep using security groups, you will need allowed address pairs.
- You can use a subnet as a wildcard, or use the exact IP address.
- If you don't mind very permissive rules, you can disable port-security, which
will require you to first remove security groups + allowed address pairs.
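As a concrete sketch, the two approaches look roughly like this with the
openstack CLI (the port ID and the whitelisted subnet are placeholders; a NAT
router forwarding arbitrary return traffic may need a wider pair than a single
subnet):

```shell
# Option 1: keep security groups, and whitelist the addresses the bastion
# routes on its MGMT-side port with an allowed address pair (subnet wildcard).
openstack port set --allowed-address ip-address=172.16.31.0/24 <mgmt-port-id>

# Option 2: disable port-security entirely on the port. Neutron refuses to do
# this while security groups or allowed address pairs are still attached, so
# clear the security groups first.
openstack port set --no-security-group <mgmt-port-id>
openstack port set --disable-port-security <mgmt-port-id>
```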
On Tue, Mar 8, 2022 at 4:59 PM Gaël THEROND <gael.therond at bitswalk.com> wrote:
> Interesting, I had thought about that, especially because of the anti-spoofing
> features, but I had the feeling that security groups, if assigned to the port,
> would have allowed it.
> Additionally, what's really weird is that one of our k8s clusters using
> flannel and/or calico (depending on the user's needs) actually puts those
> exact rules (SNAT + FORWARD rules) in place automatically for pod services
> ingress access, but I'm wondering if it works because the traffic is on a
> tunnel that the host can't see.
> I'll have a look at your idea by disabling the port security and check if it
> works like that. If it works, then I'll re-enable it and work with the
> allowed address pairs.
> Many thanks!
> On Tue, Mar 8, 2022 at 10:13 PM Laurent Dumont <laurentfdumont at gmail.com>
> wrote:
>> You might want to look at port-security on the bastion host VM. If it's
>> enabled, it means that OpenStack will drop any outgoing packets that are not
>> sourced using the IP address + MAC and/or the allowed_address_pairs defined
>> on the port.
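A quick way to check what is actually enforced on a given port (the port ID is
a placeholder; column names as displayed by the openstack CLI):

```shell
# Shows whether anti-spoofing is active, which extra addresses are allowed,
# and which security groups are attached to the port.
openstack port show <port-id> \
  -c port_security_enabled -c allowed_address_pairs -c security_group_ids
```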
>> On Tue, Mar 8, 2022 at 3:07 PM Gaël THEROND <gael.therond at bitswalk.com> wrote:
>>> Hi everyone!
>>> I'm facing a weird situation on a tenant of one of our OpenStack clusters
>>> based on Victoria.
>>> On this tenant, the network topology is as follows:
>>> One DMZ network (192.168.0.0/24) linked to our public network through a
>>> neutron router, where there is a VM acting as a bastion/router for the MGMT
>>> network.
>>> One MGMT network (172.16.31.0/24) to which all VMs are attached.
>>> On the DMZ network there is a Debian 11 Linux VM, let's call it VM-A, with
>>> a floating IP from the public pool. This VM is attached both to the DMZ
>>> network (ens3 / 192.168.0.12) AND to the MGMT network (ens4 / 172.16.31.23).
>>> All other VMs, let's call them VM-X, are exclusively attached to the MGMT
>>> network (ens4).
>>> I've set up VM-A with kernel IP forwarding enabled and the following
>>> iptables rule:
>>> # iptables -t nat -A POSTROUTING -o ens3 -j SNAT --to-source 192.168.0.12
>>> My VM-X are in turn set up with a default gateway via VM-A:
>>> # ip route add default via 172.16.31.23
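Put together, the router-VM side of this setup would look roughly like the
sketch below (interface names and addresses as in the message; the FORWARD
rules are an assumption, needed only if the filter table's FORWARD policy is
DROP):

```shell
# Enable IPv4 forwarding (a sysctl, persisted via /etc/sysctl.d/ if needed)
sysctl -w net.ipv4.ip_forward=1

# SNAT everything leaving through the DMZ interface to VM-A's DMZ address
iptables -t nat -A POSTROUTING -o ens3 -j SNAT --to-source 192.168.0.12

# Assumption: only required when the default FORWARD policy is DROP
iptables -A FORWARD -i ens4 -o ens3 -j ACCEPT
iptables -A FORWARD -i ens3 -o ens4 \
  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```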
>>> The setup seems to be working: if I don't add the iptables rule and enable
>>> kernel forwarding, I can't see any packets from VM-X on my DMZ interface
>>> (ens3) on VM-A.
>>> Ok, so now that you get the whole schema, let's dive into the issue.
>>> So when all rules, modules, and the gateway are set, I can fully see my
>>> VM-X traffic (an ICMP ping to a DNS server) going from VM-X (ens4) to VM-A
>>> (ens4), then forwarded to VM-A (ens3), and finally going out toward the
>>> targeted public IP.
>>> What's not working, however, is the response: it never reaches back to VM-X.
>>> I've tcpdumped the whole traffic from VM-X to VM-A at each point of the
>>> path: from inside the VM-X NIC, on the tap device, on the qbr bridge, on
>>> the qvb veth, and on the qvo side of the veth pair going through the OVS
>>> bridges.
>>> However, the response packets aren't getting back any further than the
>>> VM-A qvo veth.
>>> Once it exits VM-A, the traffic never reaches VM-X.
>>> What's really suspicious here is that a direct ping from VM-X
>>> (172.16.31.54) to VM-A (172.16.31.23) comes back correctly, so it looks
>>> as if OVS decided that the response in the SNAT case isn't legit, or
>>> something similar.
>>> Is anyone able to get such a setup working?
>>> Here is some additional information:
>>> Hosts run on CentOS 8.5, latest update.
>>> Our platform is OpenStack Victoria, deployed using kolla-ansible.
>>> We are using an OVS-based deployment.
>>> Our tunnels are VXLAN.
>>> All VMs have a fully open secgroup applied, and all ports have it (I
>>> checked it twice, and even in the host iptables).
>>> If you ever need additional information feel free to let me know !
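For what it's worth, the anti-spoofing theory is straightforward to test from
the hypervisor hosting VM-A: after de-SNAT, the reply leaves VM-A's MGMT port
with the remote server's source IP, which is exactly what port-security drops.
A sketch, with placeholder device names (the real tap/qvo names come from the
port ID), assuming the iptables_hybrid firewall driver:

```shell
# Watch for the reply on VM-A's MGMT-side qvo device (placeholder name);
# if it appears here but never on the VM-X side, it is being dropped in between.
tcpdump -nni qvoXXXXXXXX icmp

# Inspect the per-port anti-spoofing chains; with the iptables_hybrid driver
# the source checks live in neutron-openvswi-sXXXXXXXX chains, and their
# packet counters reveal drops.
iptables -L -v -n | grep -A5 "neutron-openvswi-s"
```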