I've built a Zed cloud, since upgraded to Antelope, using the Neutron manual install method here:
https://docs.openstack.org/neutron/latest/install/ovn/manual_install.html

I'm using a multi-tenant configuration with Geneve, and the flat provider network is present on each hypervisor. Each hypervisor is connected to the physical provider network along with the tenant network, and is tagged as an external chassis under OVN.
br-int exists, as does br-provider, and each chassis is set with:

ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw
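(To double-check that, the setting should show up in the external-ids on each node, and the registered chassis with their cms options are visible from the southbound side:)

ovs-vsctl get open . external-ids
ovn-sbctl list chassis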
Any specific reason to enable the gateway on compute nodes? Generally it's recommended to use controller/network nodes as gateways. What's your environment (number of controller, network, and compute nodes)?
Setting enable-chassis-as-gw on the compute nodes is really only interesting if you want to use DVR. If that's the case, you also need to map the external bridge (ovs-vsctl set open . external-ids:ovn-bridge-mappings=...). With the Ansible deployment this is created automatically, but I didn't see any mention of it in the manual installation guide.
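For illustration, assuming a flat provider network that Neutron knows as "provider" attached to br-provider (substitute your own physnet and bridge names), the mapping looks like:

ovs-vsctl set open . external-ids:ovn-bridge-mappings=provider:br-provider

The name on the left has to match the provider:physical_network of the flat network in Neutron, otherwise OVN won't bind the external traffic to that bridge.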
The problem is basically that the OVN LRP port may not be on the same chassis as the VM that failed (since the CR-LRP gets bound to the chassis where the first VM of that network is created). The suggestion is to remove enable-chassis-as-gw from the compute nodes, so that the VM forwards traffic via tunneling (Geneve) to the chassis where the LRP resides.
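Before changing anything, you can confirm where the CR-LRP is actually bound; run from wherever the southbound DB is reachable, ovn-sbctl show lists each chassis with its bound ports, and the gateway port appears as a cr-lrp-* entry:

ovn-sbctl show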
To remove it, on each compute node (replace br-provider-name with the actual name of your provider bridge):

ovs-vsctl remove open . external-ids ovn-cms-options="enable-chassis-as-gw"
ovs-vsctl remove open . external-ids ovn-bridge-mappings
ip link set br-provider-name down
ovs-vsctl del-br br-provider-name
systemctl restart ovn-controller
systemctl restart openvswitch-switch
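As a quick sanity check afterwards, the ovn-cms-options and ovn-bridge-mappings keys should be gone from the external-ids on each compute node, and the cr-lrp-* ports should only be bound on the intended gateway chassis:

ovs-vsctl get open . external-ids
ovn-sbctl show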