Hi Satish,

I haven't tested this locally yet, so I'm waiting for CI to go through. Also, the label will be the one matching Magnum, so it will be availability_zone.

Thanks,
Mohammed

________________________________
From: Satish Patel <satish.txt@gmail.com>
Sent: February 27, 2024 10:35 AM
To: Mohammed Naser <mnaser@vexxhost.com>
Cc: OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: Re: [magnum][nova] AZ or Host aggregates not working as expected

Thank you Mohammed,

Can I just apply the patch [1] manually and give it a try? Assuming that after the patch I can pass labels to the Magnum template using controlPlaneAvailabilityZone=foo1, right?

1. https://github.com/vexxhost/magnum-cluster-api/pull/313/files

On Tue, Feb 27, 2024 at 10:25 AM Mohammed Naser <mnaser@vexxhost.com> wrote:

Hi Satish,

I've pushed a PR with what should be a fix: https://github.com/vexxhost/magnum-cluster-api/pull/313

In the same patch we've also taken advantage of this to let you create node groups in specific availability zones, as well as fixing the control plane AZ.

Thanks!
Mohammed

________________________________
From: Mohammed Naser <mnaser@vexxhost.com>
Sent: February 27, 2024 10:00 AM
To: Satish Patel <satish.txt@gmail.com>; OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: Re: [magnum][nova] AZ or Host aggregates not working as expected

Hi Satish,

Right now this seems to be a small outstanding issue: https://github.com/vexxhost/magnum-cluster-api/issues/257

I think you've started discussing with other users of the driver who've faced a similar issue. I'll leave some more details in the issue shortly.

Thanks,
Mohammed

________________________________
From: Satish Patel <satish.txt@gmail.com>
Sent: February 26, 2024 10:21 PM
To: OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: Re: [magnum][nova] AZ or Host aggregates not working as expected

Update: After a bunch of testing I found that only multi-master clusters fail to respect the host aggregate or AZ rules. Magnum tries to schedule the masters across two different AZs (how do I tell Magnum not to do that?). If I build with a single master, everything works and lands on the proper AZ.

On Mon, Feb 26, 2024 at 9:42 PM Satish Patel <satish.txt@gmail.com> wrote:

Folks,

I am running the kolla-ansible 2023.1 release of OpenStack and I have deployed Magnum with the Cluster API driver. Things are working as expected except for AZs. I have two AZs and I have mapped them to flavor properties accordingly:

1. General
2. SRIOV

When I create a k8s cluster from Horizon I select the "General" AZ so that the cluster runs in the general AZ, but somehow some nodes land in the General compute pool and some land in the SRIOV pool. That breaks things because the networking is different in the two pools. For testing, when I launch VMs manually they land on their desired AZ (or host aggregate pool); it's only Magnum/k8s that does not respect the AZ. I am clueless and not sure what is going on here.
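For reference, once the PR above merges, passing the AZ through Magnum labels should look roughly like this (the label name availability_zone comes from Mohammed's note at the top of the thread; the template name, image and counts below are just placeholders, not something I've tested):

# template that pins the cluster to the "general" AZ via the label
openstack coe cluster template create k8s-general \
    --coe kubernetes \
    --image <your-k8s-image> \
    --external-network <your-external-net> \
    --flavor gen.c4-m8-d40 \
    --master-flavor gen.c4-m8-d40 \
    --labels availability_zone=general

# multi-master cluster built from that template
openstack coe cluster create my-cluster \
    --cluster-template k8s-general \
    --master-count 3 \
    --node-count 3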
# openstack flavor show gen.c4-m8-d40
+----------------------------+-----------------------------------------------+
| Field                      | Value                                         |
+----------------------------+-----------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                         |
| OS-FLV-EXT-DATA:ephemeral  | 0                                             |
| access_project_ids         | None                                          |
| description                | None                                          |
| disk                       | 40                                            |
| id                         | c8088b3f-1e92-405d-b310-a50c25e7040d          |
| name                       | gen.c4-m8-d40                                 |
| os-flavor-access:is_public | True                                          |
| properties                 | aggregate_instance_extra_specs:general='true' |
| ram                        | 8000                                          |
| rxtx_factor                | 1.0                                           |
| swap                       | 0                                             |
| vcpus                      | 4                                             |
+----------------------------+-----------------------------------------------+

I did set the property general='true' on the AZ (host aggregate):

# openstack availability zone list
+-----------+-------------+
| Zone Name | Zone Status |
+-----------+-------------+
| general   | available   |
| internal  | available   |
| sriov     | available   |
+-----------+-------------+
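For completeness, the aggregate-based pinning in that flavor only takes effect if the host aggregate carries matching metadata and the scheduler has the AggregateInstanceExtraSpecsFilter enabled. Roughly like this (the aggregate name is illustrative, and the filter must be added to whatever filters you already have in nova.conf):

# host aggregate metadata matching the flavor extra spec
openstack aggregate set --property general=true general

# flavor extra spec (already set above, shown here for completeness)
openstack flavor set gen.c4-m8-d40 \
    --property aggregate_instance_extra_specs:general='true'

# nova.conf on the scheduler hosts (keep your existing filters in the list)
[filter_scheduler]
enabled_filters = <your existing filters>,AggregateInstanceExtraSpecsFilter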