SNAT failure with OVN under Antelope

Gary Molenkamp molenkam at uwo.ca
Tue Jun 27 18:13:47 UTC 2023


Thanks for the pointers, it looks like I'm starting to narrow it down.
Something is still confusing me, though.

>
>         I've built a Zed cloud, since upgraded to Antelope, using the
>         Neutron
>         Manual install method here:
>         https://docs.openstack.org/neutron/latest/install/ovn/manual_install.html
>         I'm using a multi-tenant configuration with Geneve, and the flat
>         provider network is present on each hypervisor. Each
>         hypervisor is
>         connected to the physical provider network, along with the tenant
>         network, and is tagged as an external chassis under OVN.
>                  br-int exists, as does br-provider
>                  ovs-vsctl set open .
>         external-ids:ovn-cms-options=enable-chassis-as-gw
>
>
>     Any specific reason to enable gateway on compute nodes? Generally
>     it's recommended to use controller/network nodes as gateway.
>     What's your env(number of controllers, network, compute nodes)?
>
>
> Wouldn't it be interesting to enable-chassis-as-gw on the compute 
> nodes, just in case you want to use DVR? If that's the case, you need 
> to map the external bridge (ovs-vsctl set open . 
> external-ids:ovn-bridge-mappings=...); with Ansible this is created 
> automatically, but in the manual installation I didn't see any mention 
> of it.
> The problem is basically that the OVN LRP port may not be on 
> the same chassis as the VM that failed (the CR-LRP will be on the 
> chassis where the first VM of that network was created). The suggestion 
> is to remove enable-chassis-as-gw from the compute nodes so that the VM 
> can forward traffic via tunneling/Geneve to the chassis where the LRP 
> resides.
>
> ovs-vsctl remove open . external-ids ovn-cms-options="enable-chassis-as-gw"
> ovs-vsctl remove open . external-ids ovn-bridge-mappings
> ip link set br-provider-name down
> ovs-vsctl del-br br-provider-name
> systemctl restart ovn-controller
> systemctl restart openvswitch-switch
>

How does one support both use-case types?

If I want to use DVR via each compute node, then I must create the
br-provider bridge, set the chassis as a gateway, and map the bridge.
This seems to be breaking forwarding to the OVN LRP: the hypervisor
hosting the working LRP passes traffic, but any other hypervisor is not
tunneling via Geneve.
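For reference, the per-hypervisor DVR setup I'm describing looks roughly
like the sketch below (the physnet name "provider" and the NIC name
"eth1" are placeholders, not the actual names in my environment):

```shell
# Sketch of the per-compute-node DVR configuration, assuming a provider
# bridge named br-provider, a physnet label "provider", and a physical
# NIC "eth1" (placeholder names).

# Create the provider bridge and attach the physical interface.
ovs-vsctl --may-exist add-br br-provider
ovs-vsctl --may-exist add-port br-provider eth1

# Map the provider network label to the bridge.
ovs-vsctl set open . external-ids:ovn-bridge-mappings=provider:br-provider

# Advertise this chassis as a gateway candidate.
ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw

# Pick up the new settings.
systemctl restart ovn-controller
```

This matches the manual install guide's steps; what I can't tell is why
doing it on every compute node leaves only the LRP's chassis forwarding.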

Thanks as always, this is very informative.

Gary


-- 
Gary Molenkamp			Science Technology Services
Systems Administrator		University of Western Ontario
molenkam at uwo.ca                  http://sts.sci.uwo.ca
(519) 661-2111 x86882		(519) 661-3566

