Hi Satish,

I've pushed a PR with what should be a fix:

https://github.com/vexxhost/magnum-cluster-api/pull/313

In the same patch, we've also taken the opportunity to let you create node groups in specific availability zones, as well as fixing the control plane AZ.
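For reference, once the patch lands, pinning the control plane and a node group to an AZ could look something like the following. The `availability_zone` label name follows the existing Magnum convention and is an assumption on my part; check the PR for the exact interface.

```shell
# Assumed interface: pin the control plane (and default node group)
# to the "general" AZ via a cluster label.
openstack coe cluster create my-cluster \
  --cluster-template my-template \
  --labels availability_zone=general

# Assumed interface: place an additional node group in a specific AZ.
openstack coe nodegroup create my-cluster sriov-workers \
  --node-count 2 \
  --labels availability_zone=sriov
```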

Thanks!
Mohammed

From: Mohammed Naser <mnaser@vexxhost.com>
Sent: February 27, 2024 10:00 AM
To: Satish Patel <satish.txt@gmail.com>; OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: Re: [magnum][nova] AZ or Host aggregates not working as expected
 
Hi Satish,

Right now this seems to be a small outstanding issue:

https://github.com/vexxhost/magnum-cluster-api/issues/257

I think you've already started discussing this with other users of the driver who've hit a similar issue. I'll leave some more details in the issue shortly.

Thanks,
Mohammed

From: Satish Patel <satish.txt@gmail.com>
Sent: February 26, 2024 10:21 PM
To: OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: Re: [magnum][nova] AZ or Host aggregates not working as expected
 
Update:

After doing a bunch of testing, I found that only multi-master clusters fail to respect host aggregate or AZ rules. Magnum tries to schedule the masters across two different AZs (how do I tell Magnum not to do that?).

If I build with a single master, everything works and lands in the proper AZ.

On Mon, Feb 26, 2024 at 9:42 PM Satish Patel <satish.txt@gmail.com> wrote:
Folks,

I am running the kolla-ansible 2023.1 release of OpenStack and have deployed Magnum with Cluster API. Things are working as expected except for AZs.

I have two AZs, mapped in flavor properties accordingly:
1. General 
2. SRIOV 

When I create a k8s cluster from Horizon, I select the "General" AZ to run my cluster in, but somehow some nodes go to the General compute pool and some go to the SRIOV pool. This breaks things because the two pools have different networking.

For testing, when I launch VMs manually, they land in the desired AZ (or host aggregate pool); only Magnum/k8s does not respect the AZ. I am clueless as to what is going on here.
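For context, the aggregate/flavor pairing I set up looks roughly like this (the host name is a placeholder):

```shell
# Create the aggregate, expose it as an AZ, and tag it.
openstack aggregate create --zone general general
openstack aggregate add host general compute-01
openstack aggregate set --property general=true general

# Match the flavor to the aggregate via extra specs so the
# AggregateInstanceExtraSpecsFilter pins instances to those hosts.
openstack flavor set gen.c4-m8-d40 \
  --property aggregate_instance_extra_specs:general=true
```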

# openstack flavor show gen.c4-m8-d40
+----------------------------+-----------------------------------------------+
| Field                      | Value                                         |
+----------------------------+-----------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                         |
| OS-FLV-EXT-DATA:ephemeral  | 0                                             |
| access_project_ids         | None                                          |
| description                | None                                          |
| disk                       | 40                                            |
| id                         | c8088b3f-1e92-405d-b310-a50c25e7040d          |
| name                       | gen.c4-m8-d40                                 |
| os-flavor-access:is_public | True                                          |
| properties                 | aggregate_instance_extra_specs:general='true' |
| ram                        | 8000                                          |
| rxtx_factor                | 1.0                                           |
| swap                       | 0                                             |
| vcpus                      | 4                                             |
+----------------------------+-----------------------------------------------+

I did set the property general='true' on the AZ's aggregate.
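To double-check, the aggregate metadata and AZ mapping can be verified with:

```shell
# Confirm the aggregate carries the general=true property
# and is mapped to the expected availability zone.
openstack aggregate show general -c properties -c availability_zone
```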

# openstack availability zone list
+-----------+-------------+
| Zone Name | Zone Status |
+-----------+-------------+
| general   | available   |
| internal  | available   |
| sriov     | available   |
+-----------+-------------+