Hi Roger,
I believe there is an expectation that if a compute node will host a router or an instance connected to a VLAN (provider or tenant network), it should have the provider network interface plumbed to it (and mapped accordingly). On the compute node, you can look at the external_ids field of the ‘ovs-vsctl list open_vswitch’ output and see ovn-bridge-mappings populated. If it’s also a gateway node, you’d see ‘ovn-cms-options=enable-chassis-as-gw’. The consensus among those I’ve talked to in the past is that the network nodes should be the gateway nodes, rather than enabling the compute nodes to also act as gateway nodes. Others might feel differently.
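For example, on a node that is both plumbed for the provider network and acting as a gateway, that check would look roughly like this (the hostname, physnet/bridge names, and encap IP below are placeholders, and the output is illustrative only):

    # show the OVN-related settings for this chassis
    ovs-vsctl get open_vswitch . external_ids
    {hostname=net01, ovn-bridge-mappings="physnet1:br-provider",
     ovn-cms-options=enable-chassis-as-gw, ovn-encap-ip="172.16.0.11",
     ovn-encap-type=geneve}

On a compute node without the provider interface, ovn-bridge-mappings would typically be absent and ovn-cms-options would not include enable-chassis-as-gw.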
There are some things you can do with the neutron provider setup in OSA to treat the network/gateway nodes differently from the compute nodes from a plumbing point of view; i.e. a heterogeneous vs. homogeneous network and bridge configuration.
This doc,
https://docs.openstack.org/openstack-ansible/latest/user/prod/provnet_groups.html, might help – but don’t hesitate to ask for more help if that’s what you’re looking for.
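As a rough sketch only (the group name neutron_ovn_gateway and the interface ens2 are assumptions on my part; please verify the exact keys and group names against that doc and your env.d layout), binding the provider network only to the hosts that actually have the external interface would look something like this in openstack_user_config.yml:

    - network:
        container_bridge: "br-provider"
        container_type: "veth"
        type: "vlan"
        range: "100:200"
        net_name: "physnet1"
        host_bind_override: "ens2"
        group_binds:
          - neutron_ovn_gateway   # network/gateway hosts only, not the computes

The idea is that the compute hosts never receive a physnet mapping they cannot back with a physical interface.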
James Denton
Principal Architect
Rackspace Private Cloud - OpenStack
james.denton@rackspace.com
From: Dmitriy Rabotyagov <noonedeadpunk@gmail.com>
Date: Thursday, September 7, 2023 at 12:46 PM
To: Roger Rivera <roger.riverac@gmail.com>
Cc: openstack-discuss <openstack-discuss@lists.openstack.org>
Subject: Re: [openstack-ansible] Dedicated gateway hosts not working with OVN
I'm not a huge expert in OVN, but I believe this specific part works in pretty much the same way for OVS and LXB.
We have exactly the same use case as you do, but with OVS for now. The only way to get external connectivity is to create a neutron router, which is used as the gateway to public networks, and from what I know that router should be scheduled on the OVN gateway nodes. So your VMs only ever have a Geneve network, which is plugged into the router, and the router is connected to the external network on the gateway nodes.
A floating IP is essentially a 1-to-1 NAT on the router, which allows you to reach your VM through the external network (via the router).
Attaching the public network to a VM directly should not be possible in your scenario, by design.
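For what it's worth, the usual workflow on our side looks roughly like this (the network, subnet, router, and server names and the floating IP below are placeholders):

    # router whose gateway sits on the external/provider network
    openstack router create ext-router
    openstack router set --external-gateway public ext-router
    openstack router add subnet ext-router tenant-subnet

    # floating IP = 1-to-1 NAT on that router towards a VM on the Geneve network
    openstack floating ip create public
    openstack server add floating ip my-vm 203.0.113.10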
Feel free to join us on #openstack-ansible channel on OFTC IRC network and we will be glad to answer your questions.
Thanks again for your help. Unfortunately, we've tried everything that's been suggested, to no avail. It seems plausible that external connectivity cannot be achieved from the compute nodes if no bridges on those hosts are mapped to the external network. Keep in mind that these compute hosts do not have the ens2 physical interface to bind the ext-br or br-flat bridges to.
Having said that, we would have loved to see a complete OVN scenario reference configuration with dedicated networking/gateway nodes.
The documentation we have reviewed assumes that compute nodes act as gateways and that bridges can be set up on them, which is not our case. We rely entirely on a single L3 interface on the compute nodes, with GENEVE as the tunneling protocol, and it is thanks to GENEVE that private east/west traffic works without a problem.
Only the networking nodes have the second ens2 network interface that physically connects to the external network, hence the need to designate those chassis as gateway nodes.
Again, our setup has the following configuration:
- Compute nodes with 1x L3 NIC (with an IP).
- Network/gateway nodes with 1x L3 NIC and 1x L2 NIC connected to the external network.
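For completeness, this is roughly what we would expect to see in the OVN southbound database once only the network nodes act as gateway chassis (the hostnames and mappings below are placeholders, and depending on the OVN release these options appear under other_config or external_ids):

    ovn-sbctl list chassis
    hostname     : net01
    other_config : {ovn-bridge-mappings="physnet1:br-provider", ovn-cms-options=enable-chassis-as-gw, ...}

    hostname     : compute01
    other_config : {...}   # no bridge mappings, no enable-chassis-as-gw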