SNAT failure with OVN under Antelope

Gary Molenkamp molenkam at uwo.ca
Tue Jun 27 17:20:39 UTC 2023



On 2023-06-27 11:18, Roberto Bartzen Acosta wrote:
> Hi Gary,
>
> On Tue, Jun 27, 2023 at 11:47 AM, Yatin Karel <ykarel at redhat.com> 
> wrote:
>
>     Hi Gary,
>
>     On top what Rodolfo said
>     On Tue, Jun 27, 2023 at 5:15 PM Gary Molenkamp <molenkam at uwo.ca>
>     wrote:
>
>         Good morning, I'm having a problem with SNAT routing under
>         OVN, but I'm not sure whether something is misconfigured or
>         my understanding of how OVN is architected is wrong.
>
>         I've built a Zed cloud, since upgraded to Antelope, using
>         the Neutron manual install method here:
>         https://docs.openstack.org/neutron/latest/install/ovn/manual_install.html
>         I'm using a multi-tenant configuration with Geneve, and the
>         flat provider network is present on each hypervisor. Each
>         hypervisor is connected to the physical provider network,
>         along with the tenant network, and is tagged as an external
>         chassis under OVN.
>                  br-int exists, as does br-provider
>                  ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw
>
>
>     Any specific reason to enable the gateway on compute nodes?
>     Generally it's recommended to use controller/network nodes as
>     gateways. What's your environment (number of controller,
>     network, and compute nodes)?
>
>
> Wouldn't it make sense to keep enable-chassis-as-gw on the compute 
> nodes in case you want to use DVR? If that's the case, you need to 
> map the external bridge (ovs-vsctl set open . 
> external-ids:ovn-bridge-mappings=...). With Ansible this is created 
> automatically, but in the manual installation I didn't see any 
> mention of it.

Our intention was to distribute routing across our OVN cloud to take 
advantage of DVR, as our provider network is just a tagged VLAN in our 
physical infrastructure.  This avoids requiring dedicated network 
node(s) and reduces bottlenecks.  I had not set up any 
ovn-bridge-mappings as it was not mentioned in the manual install; I 
will look into it.
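
For reference, here is a minimal sketch of what I understand the 
mapping to look like on each chassis (the physnet name "physnet1" and 
bridge name "br-provider" are my assumptions; they must match the 
provider network name and external bridge configured for Neutron):

     # Map the provider physnet to the external bridge on this chassis
     ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-provider
     # Verify the mapping took effect
     ovs-vsctl get open . external-ids:ovn-bridge-mappings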


> The problem is basically that the OVN LRP port may not be on the 
> same chassis as the VM that failed (since the CR-LRP will be bound 
> on the chassis where the first VM of that network was created). The 
> suggestion is to remove enable-chassis-as-gw from the compute nodes 
> to allow the VM to forward traffic via the Geneve tunnel to the 
> chassis where the LRP resides.
>

I forced a similar VM onto the same chassis as the working VM, and it 
was able to communicate out.  If we do want to keep multiple chassis 
as gateways, would that be addressed with the ovn-bridge-mappings?
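
As a sanity check, this is a sketch of how I could inspect the gateway 
binding (the port name is a placeholder for our actual router port; 
assumes the ovn-nbctl/ovn-sbctl tools are available on a controller):

     # Which chassis are assigned as gateways for the router's external port
     ovn-nbctl lrp-get-gateway-chassis lrp-<router-port-uuid>
     # Confirm the chassis where the cr-lrp port is actually bound
     ovn-sbctl show | grep -B5 cr-lrp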




> ovs-vsctl remove open . external-ids ovn-cms-options="enable-chassis-as-gw"
> ovs-vsctl remove open . external-ids ovn-bridge-mappings
> ip link set br-provider-name down
> ovs-vsctl del-br br-provider-name
> systemctl restart ovn-controller
> systemctl restart openvswitch-switch
>



-- 
Gary Molenkamp			Science Technology Services
Systems Administrator		University of Western Ontario
molenkam at uwo.ca                  http://sts.sci.uwo.ca
(519) 661-2111 x86882		(519) 661-3566

