Hey,
I think we should actually include some netplan examples as well:
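For instance, a minimal netplan sketch for a management bridge could look like this (interface name, VLAN ID, and addresses are placeholders, not from any real deployment):

```yaml
# /etc/netplan/30-osa.yaml - minimal sketch; eno1, VLAN 236 and the
# 172.29.236.0/22 range are illustrative placeholders.
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
  vlans:
    vlan236:
      id: 236
      link: eno1
  bridges:
    br-mgmt:
      interfaces: [vlan236]
      addresses: [172.29.236.11/22]
```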
Moreover, you can try to leverage our systemd-networkd role (networkd is used as the backend by netplan anyway, so they don't conflict); a reference can be seen here:
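Roughly, it would be driven like this (I'm going from memory of the role's README here, so treat the exact variable names as an assumption to verify against the role docs):

```yaml
# Sketch of calling the systemd_networkd role directly; variable names
# (systemd_netdevs/systemd_networks) are assumed from the role's README,
# and all values are placeholders.
- hosts: localhost
  roles:
    - role: systemd_networkd
      systemd_netdevs:
        - NetDev:
            Name: br-mgmt
            Kind: bridge
      systemd_networks:
        - interface: "br-mgmt"
          address: "172.29.236.11"
          netmask: "255.255.252.0"
```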
One thing I've spotted right away is that you should generally be using a /32 for the haproxy_keepalived VIP CIDRs, as they are going to be added as aliases to the "main" interfaces.
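E.g. in user_variables.yml (addresses and interface names here are placeholders for your environment):

```yaml
# user_variables.yml - note the /32 masks on both VIP CIDRs;
# the IPs and interface names are illustrative only.
haproxy_keepalived_external_vip_cidr: "203.0.113.10/32"
haproxy_keepalived_internal_vip_cidr: "172.29.236.9/32"
haproxy_keepalived_external_interface: bond0
haproxy_keepalived_internal_interface: br-mgmt
```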
Another one is that you'd be better off using an FQDN for internal_lb_vip_address/external_lb_vip_address and then defining haproxy_bind_internal_lb_vip_address/haproxy_bind_external_lb_vip_address as the IP addresses those names resolve to.
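So something like this (hostnames and IPs are made up for illustration):

```yaml
# openstack_user_config.yml - FQDNs for the VIPs
global_overrides:
  internal_lb_vip_address: internal.cloud.example.com
  external_lb_vip_address: cloud.example.com

# user_variables.yml - the IPs those names resolve to, used for HAProxy binds
haproxy_bind_internal_lb_vip_address: 172.29.236.9
haproxy_bind_external_lb_vip_address: 203.0.113.10
```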
So the main question I have regarding networking is actually that only a single host has access to the "uplink". I assume that means only this one host can act as the HAProxy host (since the external VIP won't be functional on the others). Also, by default, we do not install keepalived when there's only a single host in the haproxy group, as it makes limited sense there. You can override that by setting `haproxy_use_keepalived: true` in user_variables.
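That override is just:

```yaml
# user_variables.yml - force keepalived even with a single haproxy host
haproxy_use_keepalived: true
```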
But then, what are your expectations for VMs accessing the internet? Is the plan for them to reach the world through Infra1 via the geneve (tunnel) networks? In general it is expected that you have some kind of public subnet from which IPs can be allocated, even though traffic will be SRC/DST NAT-ed through the "net" node. In other words, you can have that traffic flow, but then for the "public" network in Neutron you'd need some tagged VLAN with a public subnet, to be used by the L3 routers in the SDN. You can do some nasty hacks, like we do in the AIO, where the public network is "fake" and another SRC NAT happens through the node's default route, but that's not how you should build a production setup.
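For reference, this is the kind of provider_networks entry I mean for a tagged public VLAN (the bridge name, VLAN range, net_name, and group_binds are placeholders and depend on your network driver, e.g. neutron_openvswitch_agent vs neutron_ovn_controller):

```yaml
# openstack_user_config.yml - sketch of a tagged-VLAN provider network
# entry for the Neutron "public" network; all values are illustrative.
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        type: "vlan"
        range: "200:200"
        net_name: "physnet1"
        group_binds:
          - neutron_openvswitch_agent
```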