Good evening everyone!
Guys, I'm having trouble adding a second Ceph cluster for Nova/Cinder to consume.
I'm using the following configuration:
```yaml
cinder_backends:
  ceph1:
    volume_driver: cinder.volume.drivers.rbd.RBDDriver
    rbd_pool: ceph1_vol
    rbd_ceph_conf: /etc/ceph/ceph1.conf
    rbd_store_chunk_size: 8
    volume_backend_name: ceph1
    rbd_user: ceph1_vol
    rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
    report_discard_supported: true
  ceph2:
    volume_driver: cinder.volume.drivers.rbd.RBDDriver
    rbd_pool: ceph2_vol
    rbd_ceph_conf: /etc/ceph/ceph2.conf
    rbd_store_chunk_size: 8
    volume_backend_name: ceph2
    rbd_user: ceph2_vol
    rbd_secret_uuid: "{{ cinder_ceph_client_uuid2 }}"
    report_discard_supported: true

ceph_extra_confs:
  - src: /etc/openstack_deploy/ceph/ceph1.conf
    dest: /etc/ceph/ceph1.conf
    client_name: ceph1_vol
    keyring_src: /etc/openstack_deploy/ceph/ceph1_vol.keyring
    keyring_dest: /etc/ceph/ceph1.client.ceph1_vol.keyring
    secret_uuid: '{{ cinder_ceph_client_uuid }}'
  - src: /etc/openstack_deploy/ceph/ceph2.conf
    dest: /etc/ceph/ceph2.conf
    client_name: ceph2_vol
    keyring_src: /etc/openstack_deploy/ceph/ceph2_vol.keyring
    keyring_dest: /etc/ceph/ceph2.client.ceph2_vol.keyring
    secret_uuid: '{{ cinder_ceph_client_uuid2 }}'
```
But when I run `virsh secret-list`, it only shows the UUID from `cinder_ceph_client_uuid`.
Both `cinder_ceph_client_uuid` and `cinder_ceph_client_uuid2` are defined in `user_secrets.yml`.
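From what I understand, each `ceph_extra_confs` entry with a `secret_uuid` should end up as its own libvirt secret on the compute hosts, i.e. something like the following XML for the second cluster (just a sketch of what I expected to see; the UUID would be the value of `cinder_ceph_client_uuid2`):

```xml
<secret ephemeral='no' private='no'>
  <uuid>{{ cinder_ceph_client_uuid2 }}</uuid>
  <usage type='ceph'>
    <name>client.ceph2_vol secret</name>
  </usage>
</secret>
```

So I expected `virsh secret-list` to show both UUIDs, one per cluster.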
I have the impression that I missed some configuration step, but I can't tell what: I didn't find anything else to be done in the documentation [1], or it went unnoticed by me.
Thanks in advance!