Good evening,

we are currently deploying OpenStack in a two-AZ setup. In this setup we have two different SAN FC boxes, each reachable only from its corresponding zone. That means we run two Cinder instances, each configured with a backend service towards its storage. In cinder.conf, the [storage] section has backend_availability_zone set to the correct zone name.

If I create two bootable volumes, Cinder selects the right cinder-scheduler with the right storage:

(venv-kolla)# openstack volume create --size 10 --image cirros --availability-zone DUS --bootable DUS_1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | DUS                                  |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2024-01-23T22:22:21.979063           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 217943c3-44dd-4a8c-aad7-2ab33b17bab3 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | DUS_1                                |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 10                                   |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | __DEFAULT__                          |
| updated_at          | None                                 |
| user_id             | 47f5bd8b0f154da69bb2375fcc3a3baf     |
+---------------------+--------------------------------------+

(venv-kolla)# openstack volume create --size 10 --image cirros --availability-zone LEV --bootable LEV_1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | LEV                                  |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2024-01-23T22:23:12.327244           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 7850da99-43d0-4b58-9134-a7e7bc930cf0 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | LEV_1                                |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 10                                   |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | __DEFAULT__                          |
| updated_at          | None                                 |
| user_id             | 47f5bd8b0f154da69bb2375fcc3a3baf     |
+---------------------+--------------------------------------+

If I create servers with boot-from-volume and an availability zone, the job fails:

(venv-kolla)# openstack server create --image cirros --boot-from-volume 10 --network fw --flavor m1.tiny --availability-zone DUS DUS1
+-------------------------------------+--------------------------------------+
| Field                               | Value                                |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-AZ:availability_zone         | DUS                                  |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       |                                      |
| OS-EXT-STS:power_state              | NOSTATE                              |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| OS-SRV-USG:launched_at              | None                                 |
| OS-SRV-USG:terminated_at            | None                                 |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| addresses                           |                                      |
| adminPass                           | gKsNQTfC6vYe                         |
| config_drive                        |                                      |
| created                             | 2024-01-23T22:29:47Z                 |
| flavor                              | m1.tiny (1)                          |
| hostId                              |                                      |
| id                                  | a838b078-cda0-47de-bf5d-0b6cc632425c |
| image                               | N/A (booted from volume)             |
| key_name                            | None                                 |
| name                                | DUS1                                 |
| progress                            | 0                                    |
| project_id                          | e435c2936cbf4eb9abc38a48181ab9bf     |
| properties                          |                                      |
| security_groups                     | name='default'                       |
| status                              | BUILD                                |
| updated                             | 2024-01-23T22:29:47Z                 |
| user_id                             | 47f5bd8b0f154da69bb2375fcc3a3baf     |
| volumes_attached                    |                                      |
+-------------------------------------+--------------------------------------+

(venv-kolla)# openstack server create --image cirros --boot-from-volume 10 --network fw --flavor m1.tiny --availability-zone LEV LEV1
+-------------------------------------+--------------------------------------+
| Field                               | Value                                |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-AZ:availability_zone         | LEV                                  |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       |                                      |
| OS-EXT-STS:power_state              | NOSTATE                              |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| OS-SRV-USG:launched_at              | None                                 |
| OS-SRV-USG:terminated_at            | None                                 |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| addresses                           |                                      |
| adminPass                           | L6vDvCoevTBC                         |
| config_drive                        |                                      |
| created                             | 2024-01-23T22:30:11Z                 |
| flavor                              | m1.tiny (1)                          |
| hostId                              |                                      |
| id                                  | 2434f84f-08f3-4e8d-8c8d-67a6b78f2cbe |
| image                               | N/A (booted from volume)             |
| key_name                            | None                                 |
| name                                | LEV1                                 |
| progress                            | 0                                    |
| project_id                          | e435c2936cbf4eb9abc38a48181ab9bf     |
| properties                          |                                      |
| security_groups                     | name='default'                       |
| status                              | BUILD                                |
| updated                             | 2024-01-23T22:30:11Z                 |
| user_id                             | 47f5bd8b0f154da69bb2375fcc3a3baf     |
| volumes_attached                    |                                      |
+-------------------------------------+--------------------------------------+

In cinder-scheduler.log I see the following for the server create:

Task 'cinder.scheduler.flows.create_volume.ExtractSchedulerSpecTask;volume:create' (93b2aa0f-5cdb-4f13-abdf-416c2b226917) transitioned into state 'SUCCESS' from state 'RUNNING' with result '{'request_spec': RequestSpec(CG_backend=<?>,availability_zones=['nova'] ...

The direct creation of a bootable volume results in the following log entry:

Task 'cinder.scheduler.flows.create_volume.ExtractSchedulerSpecTask;volume:create' (7d6130e8-f9d9-4d3b-bca1-e899690e7855) transitioned into state 'SUCCESS' from state 'RUNNING' with result '{'request_spec': RequestSpec(CG_backend=<?>,availability_zones=['LEV']...

So, from my understanding, the create request towards Nova results in AvailabilityZone=nova, which the Cinder AvailabilityZoneFilter can't handle. Does someone have an idea how I can fix this behaviour?

Thanks in advance!

--
Kind regards
Henrik Hansen
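[Editor's note] To illustrate the diagnosis above: the scheduler's AvailabilityZoneFilter in effect performs a membership check of each backend's configured AZ against the request spec's availability_zones list. A minimal sketch of that check (a simplification; the helper name is mine, not Cinder's actual code) shows why a request spec carrying ['nova'] can never match a backend zoned 'DUS' or 'LEV':

```python
# Simplified model of Cinder's AvailabilityZoneFilter: a backend passes
# only if its configured backend_availability_zone appears in the
# request spec's availability_zones list.
# NOTE: backend_passes is a hypothetical helper for illustration.

def backend_passes(backend_az: str, requested_azs: list[str]) -> bool:
    return backend_az in requested_azs

# Direct "openstack volume create --availability-zone LEV":
# the request spec carries ['LEV'], so the LEV backend passes.
print(backend_passes("LEV", ["LEV"]))   # True

# Boot-from-volume: the volume-create request arrives with ['nova'],
# so neither FC backend passes and the volume cannot be scheduled.
print(backend_passes("DUS", ["nova"]))  # False
print(backend_passes("LEV", ["nova"]))  # False
```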