[OSA] CEPH libvirt secrets
Murilo Morais <murilo@evocorp.com.br> wrote on Fri, Aug 4, 2023 at 19:39:

Good evening everyone!

Guys, I'm having trouble adding a second Ceph cluster for Nova/Cinder to consume. I'm using the following configuration:

cinder_backends:
  ceph1:
    volume_driver: cinder.volume.drivers.rbd.RBDDriver
    rbd_pool: ceph1_vol
    rbd_ceph_conf: /etc/ceph/ceph1.conf
    rbd_store_chunk_size: 8
    volume_backend_name: ceph1
    rbd_user: ceph1_vol
    rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
    report_discard_supported: true
  ceph2:
    volume_driver: cinder.volume.drivers.rbd.RBDDriver
    rbd_pool: ceph2_vol
    rbd_ceph_conf: /etc/ceph/ceph2.conf
    rbd_store_chunk_size: 8
    volume_backend_name: ceph2
    rbd_user: ceph2_vol
    rbd_secret_uuid: "{{ cinder_ceph_client_uuid2 }}"
    report_discard_supported: true

ceph_extra_confs:
  - src: /etc/openstack_deploy/ceph/ceph1.conf
    dest: /etc/ceph/ceph1.conf
    client_name: ceph1_vol
    keyring_src: /etc/openstack_deploy/ceph/ceph1_vol.keyring
    keyring_dest: /etc/ceph/ceph1.client.ceph1_vol.keyring
    secret_uuid: '{{ cinder_ceph_client_uuid }}'
  - src: /etc/openstack_deploy/ceph/ceph2.conf
    dest: /etc/ceph/ceph2.conf
    client_name: ceph2_vol
    keyring_src: /etc/openstack_deploy/ceph/ceph2_vol.keyring
    keyring_dest: /etc/ceph/ceph2.client.ceph2_vol.keyring
    secret_uuid: '{{ cinder_ceph_client_uuid2 }}'

But when executing the `virsh secret-list` command, it only shows the UUID of "cinder_ceph_client_uuid". Both "cinder_ceph_client_uuid" and "cinder_ceph_client_uuid2" are defined in "user_secrets.yml".

I have the impression that I missed some configuration, but I don't know what: I didn't find anything else that needs to be done according to the documentation [1], or it went unnoticed by me.

[1] https://docs.openstack.org/openstack-ansible-ceph_client/latest/configure-ce...

Thanks in advance!
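For context, the two UUID variables referenced above are expected in /etc/openstack_deploy/user_secrets.yml and end up as libvirt secret UUIDs on the compute hosts. A minimal sketch of the relevant entries (the UUID values below are illustrative placeholders; in OSA such values are normally generated by running scripts/pw-token-gen.py against the file):

cinder_ceph_client_uuid: 457eb676-33da-42ec-9194-0f29eefc3c47
cinder_ceph_client_uuid2: 5e884b1f-1d06-4e8b-9a14-3a3d2a3f77de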
Murilo Morais <murilo@evocorp.com.br> wrote on Sat, Aug 5, 2023 at 15:46:

Apparently the "mon_host" parameter is mandatory for creating the secrets [1], but setting this parameter also makes the role SSH into the MONs [2], which I would like to avoid. Is this statement true?

[1] https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/stabl...
[2] https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/stabl...
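For reference, the mon_host setting under discussion is a per-entry key of ceph_extra_confs. Allowing monitor access for the second cluster would presumably look roughly like this (a sketch only; the monitor address is a hypothetical placeholder):

ceph_extra_confs:
  - src: /etc/openstack_deploy/ceph/ceph2.conf
    dest: /etc/ceph/ceph2.conf
    mon_host: 192.0.2.10
    client_name: ceph2_vol
    keyring_src: /etc/openstack_deploy/ceph/ceph2_vol.keyring
    keyring_dest: /etc/ceph/ceph2.client.ceph2_vol.keyring
    secret_uuid: '{{ cinder_ceph_client_uuid2 }}'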
Dmitriy Rabotyagov <noonedeadpunk@gmail.com> wrote on Sat, Aug 5, 2023 at 11:11:

Hey Murilo,

I'm not sure that the ceph_client role supports multiple secrets right now; I will be able to look deeper into this on Monday.

But there's yet another place where we set secrets [1], so it shouldn't be required to have mon_hosts defined. That said, having mon_hosts would require SSH access to them to fetch ceph.conf and the cephx keys.

[1] https://opendev.org/openstack/openstack-ansible-ceph_client/src/commit/05e3c...
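Until the role covers this case, the missing secret can also be checked or defined by hand on a compute host with plain libvirt tooling. A minimal sketch, assuming the keyring path from the configuration above; the usage name is arbitrary, and the UUID placeholder stands in for the actual value of cinder_ceph_client_uuid2:

# Define the second Ceph secret in libvirt by hand (sketch; UUID is a placeholder).
cat > ceph2-secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>REPLACE_WITH_CINDER_CEPH_CLIENT_UUID2</uuid>
  <usage type='ceph'>
    <name>client.ceph2_vol secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file ceph2-secret.xml
# Load the cephx key from the deployed keyring into the secret.
virsh secret-set-value \
  --secret REPLACE_WITH_CINDER_CEPH_CLIENT_UUID2 \
  --base64 "$(awk '/key/ {print $3}' /etc/ceph/ceph2.client.ceph2_vol.keyring)"
# Both UUIDs should now be listed.
virsh secret-list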
Murilo Morais <murilo@evocorp.com.br> wrote on Tue, Aug 8, 2023 at 22:37:

Dmitriy, hello!

I have to be honest, I can't properly understand how to apply this. Do I just have to set "nova_ceph_client_uuid"?
Dmitriy Rabotyagov <noonedeadpunk@gmail.com> wrote on Fri, Aug 11, 2023 at 10:42:

Hi,

Sorry for the delayed reply - it was quite a busy week.

What you are describing is a proper bug that has been fixed in 2023.1 (Antelope) but was not backported to Zed. You can find the patch that fixes the behaviour via the link [1].

So indeed, with the current code you either need to use the same secret UUID for both clusters or allow access to the monitors.

Alternatively, you can use the ceph_client role from 2023.1. For that, create the file /etc/openstack_deploy/user-role-requirements.yml with content like this:

---
- name: ansible-hardening
  scm: git
  src: https://opendev.org/openstack/openstack-ansible-ceph_client
  version: 3bc73a8ab6d33fbee81aefb13b25b67e0ca42324
  shallow_since: '2023-06-26'

In the meanwhile we will try to backport the mentioned fix to the previous branches.

[1] https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/86797...
Dmitriy Rabotyagov <noonedeadpunk@gmail.com> wrote:

* Sorry, the content of user-role-requirements.yml should of course be:

---
- name: ceph_client
  scm: git
  src: https://opendev.org/openstack/openstack-ansible-ceph_client
  version: 3bc73a8ab6d33fbee81aefb13b25b67e0ca42324
  shallow_since: '2023-06-26'
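After the override file is in place, the pinned role still has to be re-cloned and the affected services reconfigured. A rough sketch of the follow-up steps, assuming a standard OSA checkout under /opt/openstack-ansible:

# Re-clone the Ansible roles so the ceph_client override takes effect.
cd /opt/openstack-ansible
./scripts/bootstrap-ansible.sh
# Re-run the playbooks for the services that consume the Ceph secrets.
cd playbooks
openstack-ansible os-cinder-install.yml os-nova-install.yml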