Me again, with another request for help.

I have isolated the problem.
Here is my globals.yml without vpnaas:

workaround_ansible_issue_8743: yes
kolla_base_distro: "ubuntu"
openstack_release: "2025.2"
kolla_internal_vip_address: "192.168.1.100"
kolla_external_vip_address: "172.16.62.65"
kolla_external_fqdn: "cloud.dom1.loc"
docker_registry: "iut1r-registry.univ-grenoble-alpes.fr/openstack"
docker_registry_insecure: "no"
network_interface: "enp2s0f0"
kolla_external_vip_interface: "eno8303"
dns_interface: "eno8303"
neutron_external_interface: "eno8403"
kolla_enable_tls_external: "yes"
openstack_cacert: "/etc/ssl/certs/ca-certificates.crt"
kolla_copy_ca_into_containers: "yes"
enable_cinder: "yes"
enable_cinder_backup: "no"
enable_cinder_backend_lvm: "yes"
enable_designate: "yes"
neutron_dns_domain: "dom1.loc."
enable_heat: "no"
enable_horizon_designate: "yes"
enable_neutron_provider_networks: "yes"
cinder_volume_group: "cinder-volumes"
designate_backend: "no"
designate_backend_external: "bind9"
designate_backend_external_bind9_nameservers: "172.16.63.100"
designate_ns_record: "ordi2.dom1.loc"
nova_compute_virt_type: "kvm"


After `kolla-ansible deploy`, all is good:
(venv) user1@ordi1:~$ openstack network agent list
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host    | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+
| 6d7355cd-1fda-4d80-a6b1-11b7e5e1279f | L3 agent           | server1 | nova              | :-)   | UP    | neutron-l3-agent          |
| 81c8eb80-0191-4a9e-b17f-868c5df6da6f | Open vSwitch agent | server2 | None              | :-)   | UP    | neutron-openvswitch-agent |
| c4de67a0-b094-49cb-b7c0-6f3ad588a37b | Metadata agent     | server1 | None              | :-)   | UP    | neutron-metadata-agent    |
| cfbbe8fb-83a6-4e0d-9a28-57ebad89fe40 | DHCP agent         | server1 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| edadba96-713f-4988-9849-b7e3f947153b | Open vSwitch agent | server1 | None              | :-)   | UP    | neutron-openvswitch-agent |
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+

Now the same globals.yml with vpnaas enabled:

workaround_ansible_issue_8743: yes
kolla_base_distro: "ubuntu"
openstack_release: "2025.2"
kolla_internal_vip_address: "192.168.1.100"
kolla_external_vip_address: "172.16.62.65"
kolla_external_fqdn: "cloud.dom1.loc"
docker_registry: "iut1r-registry.univ-grenoble-alpes.fr/openstack"
docker_registry_insecure: "no"
network_interface: "enp2s0f0"
kolla_external_vip_interface: "eno8303"
dns_interface: "eno8303"
neutron_external_interface: "eno8403"
kolla_enable_tls_external: "yes"
openstack_cacert: "/etc/ssl/certs/ca-certificates.crt"
kolla_copy_ca_into_containers: "yes"
enable_cinder: "yes"
enable_cinder_backup: "no"
enable_cinder_backend_lvm: "yes"
enable_designate: "yes"
neutron_dns_domain: "dom1.loc."
enable_heat: "no"
enable_horizon_designate: "yes"
enable_neutron_provider_networks: "yes"
enable_horizon_neutron_vpnaas: "yes"
enable_neutron_vpnaas: "yes"
cinder_volume_group: "cinder-volumes"
designate_backend: "no"
designate_backend_external: "bind9"
designate_backend_external_bind9_nameservers: "172.16.63.100"
designate_ns_record: "ordi2.dom1.loc"
nova_compute_virt_type: "kvm"


Agents show State UP but Alive = XXX:
(venv) user1@ordi1:~$ openstack network agent list
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host    | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+
| 6d7355cd-1fda-4d80-a6b1-11b7e5e1279f | L3 agent           | server1 | nova              | XXX   | UP    | neutron-l3-agent          |
| 81c8eb80-0191-4a9e-b17f-868c5df6da6f | Open vSwitch agent | server2 | None              | XXX   | UP    | neutron-openvswitch-agent |
| c4de67a0-b094-49cb-b7c0-6f3ad588a37b | Metadata agent     | server1 | None              | XXX   | UP    | neutron-metadata-agent    |
| cfbbe8fb-83a6-4e0d-9a28-57ebad89fe40 | DHCP agent         | server1 | nova              | XXX   | UP    | neutron-dhcp-agent        |
| edadba96-713f-4988-9849-b7e3f947153b | Open vSwitch agent | server1 | None              | XXX   | UP    | neutron-openvswitch-agent |
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+

And a lot of problems follow: impossible to create an instance, for example.

Is there a problem with vpnaas in a Flamingo/kolla-ansible deployment?
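For what it's worth, Alive = XXX usually means the agents have stopped reporting heartbeats to neutron-server (often a RabbitMQ or agent-crash issue). A first-pass diagnostic sketch on the network node; container names and log paths assume kolla-ansible defaults and may differ in your deployment:

```shell
# Check whether the neutron agent containers are restarting or crash-looping
docker ps -a --filter name=neutron

# Look for errors in the L3 agent log (kolla writes logs under /var/log/kolla)
tail -n 50 /var/log/kolla/neutron/neutron-l3-agent.log

# Look for messaging/heartbeat errors in the container output
docker logs --tail 50 neutron_l3_agent 2>&1 | grep -iE 'error|amqp|heartbeat'
```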


Franck VEDEL

On 6 March 2026 at 09:10, Franck VEDEL (UGA) <franck.vedel@univ-grenoble-alpes.fr> wrote:

Hello and thanks a lot for your help.

I don't use OVN; perhaps I should. But it's a student project (20 hours, 2 servers by students), updated annually, and each version update brings big surprises. I want to do relatively simple things (classic OpenStack with external Active Directory (LDAP) accounts, VPNaaS, Designate, Ceph), so OVS suits me.

But I can't seem to resolve this issue with instances lacking an AZ (even though, until now, instances created in Horizon have nova as their AZ, and there are no port binding problems). I had doubts about my installation, so I reformatted my servers, installed Ubuntu 24.04, redeployed Flamingo with kolla-ansible, and then, just to test, I ran the `init-runonce` script and tried the command given at the end of the script (`openstack server create .... demo1`).

Logs in nova_compute:
ERROR: nova.compute.manager .......... _ensure_no_port_binding_failure(port)

The problem seems more serious: `openstack network agent list` returns no value.

Well, I'll look into it... thanks anyway.

Franck VEDEL

On 6 March 2026 at 00:38, Joel McLean <joel.mclean@micron21.com> wrote:

Hello Franck,
 
You may not have had to ever set this up, if you only had a single AZ in the past, however in Flamingo you might need to define:
 
neutron_ovn_availability_zones: ["your_az_name1", "your_az_name2", "etc"]
 
in globals.yml
 
And then create your networks with `--availability-zone-hint` arguments.
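For example, a minimal sketch with the OpenStack CLI (az1 and the resource names are placeholders):

```shell
# Pin a network to a specific availability zone at creation time
openstack network create --availability-zone-hint az1 demo-net

# Routers take their own hint; subnets follow their parent network
openstack router create --availability-zone-hint az1 demo-router
```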
 
 
We experienced a similar issue when testing Caracal: if this setting was not configured, routers could be deployed to any availability zone and therefore could not communicate properly outside their AZ. It might be that Flamingo is more explicit and requires this configuration where it was optional in the past.
 
Kind Regards,

 

Joel McLean
Cyber Security and Product Development Manager
Australia’s First Tier IV Data Centre

 
 
 
From: Franck VEDEL (UGA) <franck.vedel@univ-grenoble-alpes.fr>
Sent: Friday, 6 March 2026 12:03 AM
To: OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: [kolla-ansible][flamingo][horizon] Network creation problem
 

Hello everyone,

I have a production OpenStack cluster for ESM students running Epoxy, deployed with Kolla-Ansible, and everything is working perfectly. I’m very satisfied with this setup.

To prepare for upgrades, I set up a new test cluster in parallel using Kolla-Ansible, but this time with Flamingo.

Everything seemed fine until I tried to create a network and its subnet in Horizon. The network is created, but no availability zone is associated with it. As a result, I cannot launch instances on this network.

Could this be a change in the new OpenStack installation process that I missed? Or is this a bug? This is the first time I’ve encountered this issue. Is there a parameter that needs to be added in the Horizon override file (_9999-custom-settings.py) to handle this?

Thanks in advance for any guidance.

 

Franck VEDEL