The way we do it is with an override file for cinder-volume.conf:

$ cat cinder-volume.conf
[DEFAULT]
enabled_backends = pure-az1,pure-az2
debug = True
cross_az_attach = False

[pure-az1]
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
san_ip = {{ pure_az1_vip }}
pure_api_token = {{ pure_az1_token }}
volume_backend_name = high-performance
report_discard_supported = True
suppress_requests_ssl_warnings = True
backend_availability_zone = {{ availability_zone_1 }}
backend_host = pure

[pure-az2]
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
san_ip = {{ pure_az2_vip }}
pure_api_token = {{ pure_az2_token }}
volume_backend_name = high-performance
report_discard_supported = True
suppress_requests_ssl_warnings = True
backend_availability_zone = {{ availability_zone_2 }}
backend_host = pure
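
For kolla-ansible to pick the override up, it goes in the service's custom config directory and the cinder service is then reconfigured. A sketch of the steps, assuming the default node_custom_config path (/etc/kolla/config), an inventory file named multinode, and a volume type name of our choosing:

```shell
# Place the override where kolla-ansible's config merging picks it up
# (node_custom_config defaults to /etc/kolla/config)
mkdir -p /etc/kolla/config/cinder
cp cinder-volume.conf /etc/kolla/config/cinder/cinder-volume.conf

# Regenerate config and restart only the cinder containers
kolla-ansible -i multinode reconfigure --tags cinder

# Map a volume type to the shared volume_backend_name so the
# scheduler can place volumes on either backend
openstack volume type create high-performance
openstack volume type set high-performance \
  --property volume_backend_name=high-performance
```

Because both backends report the same volume_backend_name, the scheduler picks between them; the backend_availability_zone settings keep volumes in the right AZ.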

With multiple Ceph clusters it becomes slightly more complicated unless you are running the Bobcat (2023.2) release of kolla-ansible, which is the first release that officially supports disparate Ceph backends: https://github.com/openstack/kolla-ansible/blob/stable/2023.2/ansible/roles/cinder/tasks/external_ceph.yml#L19
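
On older releases the same multi-backend pattern can still be written by hand in the cinder-volume.conf override, along the lines of the Pure example above. The section names, pool, user, and ceph.conf paths below are hypothetical placeholders; note that the awkward part, which Bobcat automates, is getting each cluster's ceph.conf and keyring into the cinder-volume container yourself:

```ini
[DEFAULT]
enabled_backends = ceph-az1,ceph-az2

[ceph-az1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph-az1.conf
rbd_user = cinder
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
backend_availability_zone = {{ availability_zone_1 }}

[ceph-az2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph-az2.conf
rbd_user = cinder
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
backend_availability_zone = {{ availability_zone_2 }}
```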

From: garcetto <garcetto@gmail.com>
Sent: 29 January 2024 08:05
To: OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: [kolla-ansible] [cinder] how to add 2nd storage?
Good morning,
  how can I add a second storage backend (for example another Ceph cluster, or NFS) to my OpenStack kolla-ansible cluster?

thank you.