Hey,

Actually, I think we do have some netplan examples as well:
https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/netplan

Moreover, you can try to leverage our systemd-networkd role (networkd is used as a backend by netplan anyway, so they don't conflict); some reference can be seen here:
https://docs.openstack.org/openstack-ansible/latest/user/network-arch/example.html#configuring-network-interfaces
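
For reference, a minimal netplan sketch of what the private side of Infra1 could look like, based on the addressing in your mail below (untested; the bridge names just follow the usual OSA conventions, adjust to your actual design):

    # hypothetical /etc/netplan/99-osa.yaml on Infra1 - addresses taken
    # from your mail below, bridge names per common OSA conventions
    network:
      version: 2
      ethernets:
        enp11s0f0: {}
      vlans:
        vlan10:
          id: 10
          link: enp11s0f0
        vlan30:
          id: 30
          link: enp11s0f0
      bridges:
        br-mgmt:
          interfaces: [vlan10]
          addresses: [172.29.236.11/22]
        br-vxlan:
          interfaces: [vlan30]
          addresses: [172.29.240.11/22]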

One thing I've spotted right away is that you should generally use /32 for the haproxy_keepalived VIP CIDRs, as they are going to be added as aliases to the "main" interfaces.
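
E.g., based on the values from your user_variables.yml below (leaving the addresses themselves as you have them), that would be:

    haproxy_keepalived_external_vip_cidr: "10.2.46.70/32"
    haproxy_keepalived_internal_vip_cidr: "172.29.236.11/32"
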
Another one is that you'd be better off using an FQDN for internal_lb_vip_address/external_lb_vip_address, and then defining haproxy_bind_internal_lb_vip_address/haproxy_bind_external_lb_vip_address as the corresponding IP addresses.
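
For example (the FQDNs here are placeholders, the bind addresses reuse your current VIPs):

    # openstack_user_config.yml
    internal_lb_vip_address: internal.cloud.example.com
    external_lb_vip_address: cloud.example.com

    # user_variables.yml
    haproxy_bind_internal_lb_vip_address: 172.29.236.11
    haproxy_bind_external_lb_vip_address: 10.2.46.70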

So the main question I have regarding networking is actually the single host having access to the "uplink". As then, I assume, only this one host should be acting as the HAProxy host (since the others won't have a working external VIP on them). Also, by default we do not install keepalived in case there's a single host in the haproxy group, as it makes limited sense. You can override this by setting `haproxy_use_keepalived: true` in user_variables.
But then - what are your expectations for VMs accessing the internet? Is it planned for them to reach the world through Infra1 via geneve (tunnel) networks? In general it is expected to have some kind of public subnet from which IPs can be allocated, even though traffic will be SRC/DST NAT-ed through the "net" node. In other words, you can have that traffic flow, but then for the "public" network in Neutron you'd need some tagged VLAN with a public subnet, to be used by the L3 routers in the SDN. You can do some nasty hacks, like we do in the AIO, where the public network is "fake" and another SRC NAT happens through the node's default route, but that's not how you should build a production setup.
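
Just to illustrate what that could look like once such a tagged VLAN with a public subnet exists (the VLAN ID, physnet name and subnet range below are made-up placeholders):

    # hypothetical public provider network on VLAN 100, physnet1
    openstack network create public \
        --external \
        --provider-network-type vlan \
        --provider-physical-network physnet1 \
        --provider-segment 100
    openstack subnet create public-subnet \
        --network public \
        --subnet-range 203.0.113.0/24 \
        --no-dhcp \
        --allocation-pool start=203.0.113.10,end=203.0.113.200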

Mon, Oct 6, 2025 at 04:37, <holywine@outlook.com>:
Hello,

I am trying to build a minimal usable production-level cloud with OpenStack Ansible, following the online reference.
The idea is that it should be able to scale up without radical changes afterwards.


Online reference: https://docs.openstack.org/project-deploy-guide/openstack-ansible/2025.1/
openstack-ansible version: stable/2025.1

Target hosts (Ubuntu 24.04 LTS):
1) Infra1: Controller 1 + networking
2) Infra2: Controller 2
3) Compute 1: Compute 1 + storage 1
4) Compute 2: Compute 2 + storage 2

Deploy host: dev-host (Ubuntu 24.04 LTS)


Network:
Each server has two network adapters, say enp11s0f0 and enp11s0f1 (different servers may have different port names; en0 and en1 for short).

Public IP (en1):
Infra1 - 10.2.46.70/24, fixed IP permanently assigned by the provider.
Infra2, Compute 1, Compute 2 - 10.2.46.XX/24, dynamic IPs temporarily assigned for SW installation

Private IP (en0):
Infra1:
    172.29.236.11/22, VLAN 10
    172.29.240.11/22, VLAN 30
Infra2:
    172.29.236.12/22, VLAN 10
    172.29.240.12/22, VLAN 30
Compute 1:
    172.29.236.13/22, VLAN 10
    172.29.240.13/22, VLAN 30
    172.29.244.13/22, VLAN 20
Compute 2:
    172.29.236.14/22, VLAN 10
    172.29.240.14/22, VLAN 30
    172.29.244.14/22, VLAN 20

VLAN 10 - management network
VLAN 30 - tunnel network
VLAN 20 - storage network

Target:
Since only Infra1:en1 is available with a fixed public IP, it would be the only Internet connection, without any port bonding.

What has been done:
1) All ordinary IP and VLAN settings (as in the Network part above)
     - The settings are verified and working (checked with ping and curl)

2) user_variables.yml
    haproxy_keepalived_external_vip_cidr: "10.2.46.70/24"
    haproxy_keepalived_internal_vip_cidr: "172.29.236.11/22"
    haproxy_keepalived_external_interface: enp11s0f1
    haproxy_keepalived_internal_interface: br-mgmt

3) openstack_user_config.yml
  internal_lb_vip_address: 172.29.236.11
  external_lb_vip_address: 10.2.46.70

Based on my previous experience as a network engineer and a lot of reading from books and online guidance, esp. the OpenStack Ansible deployment reference, I understand there should/could be something extra to be done.

Can anybody please give me a clue about what else needs to be done, or whether this is already enough to make it work?

Sorry for any disturbance, but networking seems to be the most complicated part of an OpenStack deployment.
The online examples are hard to comprehend and customize.
One small example:
Ubuntu seems to be promoting netplan, while the examples all use the /etc/network/interfaces file (difficult to adapt to Ubuntu 24.04).

Thanks a lot for help in advance!