[OSA] CEPH libvirt secrets

Murilo Morais murilo at evocorp.com.br
Tue Aug 8 20:37:04 UTC 2023


Dmitriy, hello!

To be honest, I can't quite work out how to apply this. Do I just have
to set "nova_ceph_client_uuid"?
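
Just so I'm sure I follow, is it roughly something like this? (A sketch of
my understanding only; I'm assuming "nova_ceph_client_uuid" sits next to the
Cinder UUIDs in user_secrets.yml, and the values below are placeholders, not
the real ones.)

  # /etc/openstack_deploy/user_secrets.yml
  cinder_ceph_client_uuid: 11111111-1111-4111-8111-111111111111
  cinder_ceph_client_uuid2: 22222222-2222-4222-8222-222222222222
  nova_ceph_client_uuid: 33333333-3333-4333-8333-333333333333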

On Sat, Aug 5, 2023 at 11:11, Dmitriy Rabotyagov <noonedeadpunk at gmail.com> wrote:

> Hey Murilo,
>
> I'm not sure that the ceph_client role supports multiple secrets right
> now; I will be able to look deeper into this on Monday.
>
> But there's yet another place where we set secrets [1], so it shouldn't be
> required to have mon_hosts defined. But yes, having mon_hosts would require
> SSH access to them to fetch ceph.conf and the cephx keys.
>
>
> [1]
> https://opendev.org/openstack/openstack-ansible-ceph_client/src/commit/05e3c0f18394e5f23d79bff08280e9c09af7b5ca/tasks/ceph_auth.yml#L67
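>
> To illustrate what "setting the secret" means on the compute side, the end
> result is roughly the following (a simplified sketch, not the role's actual
> tasks; the playbook layout, temp file path and key extraction are just
> placeholders):
>
>   - name: Define an extra libvirt secret for ceph2 (illustrative sketch)
>     hosts: nova_compute
>     tasks:
>       - name: Write the libvirt secret definition
>         ansible.builtin.copy:
>           dest: /tmp/ceph2-secret.xml
>           content: |
>             <secret ephemeral='no' private='no'>
>               <uuid>{{ cinder_ceph_client_uuid2 }}</uuid>
>               <usage type='ceph'>
>                 <name>client.ceph2_vol secret</name>
>               </usage>
>             </secret>
>
>       - name: Register the secret and its cephx key with libvirt
>         ansible.builtin.shell: |
>           virsh secret-define /tmp/ceph2-secret.xml
>           virsh secret-set-value --secret {{ cinder_ceph_client_uuid2 }} \
>             --base64 "$(awk '$1 == "key" {print $3}' /etc/ceph/ceph2.client.ceph2_vol.keyring)"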
>
> On Sat, Aug 5, 2023, 15:46 Murilo Morais <murilo at evocorp.com.br> wrote:
>
>> Apparently the "mon_host" parameter is mandatory for creating secrets [1],
>> but setting this parameter also makes the role SSH into the MON [2], which
>> I would like to avoid. Is that correct?
>>
>> [1]
>> https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/stable/zed/tasks/ceph_auth_extra_compute.yml#L92
>> [2]
>> https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/stable/zed/tasks/ceph_config_extra.yml#L23
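>>
>> For clarity, this is the kind of entry I mean (just a sketch based on my
>> reading of [1]; I haven't verified the exact form of the mon_host key, and
>> the address below is a placeholder):
>>
>>   ceph_extra_confs:
>>     - src: /etc/openstack_deploy/ceph/ceph2.conf
>>       dest: /etc/ceph/ceph2.conf
>>       client_name: ceph2_vol
>>       keyring_src: /etc/openstack_deploy/ceph/ceph2_vol.keyring
>>       keyring_dest: /etc/ceph/ceph2.client.ceph2_vol.keyring
>>       secret_uuid: '{{ cinder_ceph_client_uuid2 }}'
>>       mon_host: 192.0.2.10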
>>
>> On Fri, Aug 4, 2023 at 19:39, Murilo Morais <murilo at evocorp.com.br>
>> wrote:
>>
>>> Good evening everyone!
>>>
>>> Guys, I'm having trouble adding a second Ceph cluster for Nova/Cinder to
>>> consume.
>>>
>>> I'm using the following configuration:
>>>
>>> cinder_backends:
>>>   ceph1:
>>>     volume_driver: cinder.volume.drivers.rbd.RBDDriver
>>>     rbd_pool: ceph1_vol
>>>     rbd_ceph_conf: /etc/ceph/ceph1.conf
>>>     rbd_store_chunk_size: 8
>>>     volume_backend_name: ceph1
>>>     rbd_user: ceph1_vol
>>>     rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
>>>     report_discard_supported: true
>>>
>>>   ceph2:
>>>     volume_driver: cinder.volume.drivers.rbd.RBDDriver
>>>     rbd_pool: ceph2_vol
>>>     rbd_ceph_conf: /etc/ceph/ceph2.conf
>>>     rbd_store_chunk_size: 8
>>>     volume_backend_name: ceph2
>>>     rbd_user: ceph2_vol
>>>     rbd_secret_uuid: "{{ cinder_ceph_client_uuid2 }}"
>>>     report_discard_supported: true
>>>
>>> ceph_extra_confs:
>>>   - src: /etc/openstack_deploy/ceph/ceph1.conf
>>>     dest: /etc/ceph/ceph1.conf
>>>     client_name: ceph1_vol
>>>     keyring_src: /etc/openstack_deploy/ceph/ceph1_vol.keyring
>>>     keyring_dest: /etc/ceph/ceph1.client.ceph1_vol.keyring
>>>     secret_uuid: '{{ cinder_ceph_client_uuid }}'
>>>   - src: /etc/openstack_deploy/ceph/ceph2.conf
>>>     dest: /etc/ceph/ceph2.conf
>>>     client_name: ceph2_vol
>>>     keyring_src: /etc/openstack_deploy/ceph/ceph2_vol.keyring
>>>     keyring_dest: /etc/ceph/ceph2.client.ceph2_vol.keyring
>>>     secret_uuid: '{{ cinder_ceph_client_uuid2 }}'
>>>
>>> But when I run the `virsh secret-list` command, it only shows the UUID
>>> from "cinder_ceph_client_uuid".
>>>
>>> Both "cinder_ceph_client_uuid" and "cinder_ceph_client_uuid2" are
>>> defined in "user_secrets.yml".
>>>
>>> I have the impression that I missed some configuration, but I can't tell
>>> what: I didn't find anything else that needs to be done according to the
>>> documentation [1], or it went unnoticed by me.
>>>
>>> [1]
>>> https://docs.openstack.org/openstack-ansible-ceph_client/latest/configure-ceph.html#extra-client-configuration-files
>>>
>>> Thanks in advance!
>>>
>>