Thanks for your help.
rbd -m ceph1 -n client.cinder -k /home/user1/ceph.client.cinder.keyring ls images
I can see:
test
So I think access to the images pool is OK.
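For reference, the caps themselves can be inspected with:

ceph auth get client.cinder

For copy-on-write clones from Glance images, the osd caps would typically need to include the images pool, along these lines (a sketch, with the key redacted; pool names depend on the deployment):

[client.cinder]
        key = <redacted>
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images"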
Logs from cinder-scheduler:
2023-11-15 08:36:30.965 7 ERROR cinder.scheduler.flows.create_volume [req-8d476977-656b-48da-b03e-87ad193fe73d req-e0109262-976c-4afc-b0d3-b7d272f1a15b 9078f3f29a36400c8b217585b71b4e07 6bf25257fa744028b808f2aa5d261e7d - - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. Exceeded max scheduling attempts 3 for resource 6b7c2adb-9c05-4697-8768-5871b22e636b: cinder.exception.NoValidBackend: No valid backend was found. Exceeded max scheduling attempts 3 for resource 6b7c2adb-9c05-4697-8768-5871b22e636b
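The scheduler's "No valid backend" message is usually only the symptom; the underlying cause normally appears in the cinder-volume log around the same request. A quick way to look, assuming kolla's default log location on the host:

grep -i error /var/log/kolla/cinder/cinder-volume.log | tail -n 20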
Logs from cinder-volume:
2023-11-15 08:11:14.354 1018 INFO cinder.volume.manager [None req-c295a329-03af-492a-b5b7-9ac2fa913667 - - - - - -] Initializing RPC dependent components of volume driver RBDDriver (1.3.0)
2023-11-15 08:11:14.411 1018 INFO cinder.volume.manager [None req-c295a329-03af-492a-b5b7-9ac2fa913667 - - - - - -] Driver post RPC initialization completed successfully.
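Since the driver initializes cleanly, the backend should also report as up, which can be confirmed with the standard CLI:

openstack volume service list

If the RBD backend were failing to start, its cinder-volume entry (host@backend) would show as down there.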
tree /etc/kolla/config
/etc/kolla/config
├── cinder
│ ├── ceph.client.cinder.keyring
│ ├── ceph.conf
│ ├── cinder-backup
│ │ └── ceph.client.cinder.keyring
│ └── cinder-volume
│ ├── ceph.client.cinder.keyring
│ └── ceph.conf
├── glance
│ ├── ceph.client.glance.keyring
│ └── ceph.conf
└── nova
├── ceph.client.cinder.keyring
├── ceph.client.nova.keyring
└── ceph.conf
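For comparison, the RBD section that kolla-ansible renders into cinder.conf should look roughly like this (a sketch; rbd-1 is kolla's default backend name and the secret UUID is deployment-specific):

[DEFAULT]
enabled_backends = rbd-1

[rbd-1]
volume_backend_name = rbd-1
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_pool = volumes
rbd_secret_uuid = <uuid>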
ceph.conf:
[global]
fsid = 7bcf95a2-7eef-11ee-a183-080027950478
mon_host = [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] [v2:192.168.1.2:3300/0,v1:192.168.1.2:6789/0] [v2:192.168.1.3:3300/0,v1:192.168.1.3:6789/0]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
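The same access test can also be run from inside the container, using the files kolla copied in (assuming the ceph-common tools are present in the cinder_volume image):

docker exec cinder_volume rbd -n client.cinder -k /etc/ceph/ceph.client.cinder.keyring ls volumes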
I'm probably forgetting something, but I don't see what.
On 15 Nov 2023, at 09:54, Pierre Riteau <pierre@stackhpc.com> wrote:
Hi Franck,
It would help if you could share more details about the error (check both cinder-scheduler and cinder-volume logs).
Does your client.cinder user have capabilities to access the images pool (presumably the one used by Glance)?
Best wishes,
Pierre Riteau (priteau)
Good morning,
I'm coming back to you for help; thanks in advance.
I am testing an OpenStack 2023.1/Ceph Pacific environment.
The cluster works: from Horizon, for example, if I create a simple volume, it works normally and is placed in the "volumes" pool of the Ceph cluster, exactly as expected.
But if I create a volume from an image, cinder-scheduler reports an ERROR.
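In CLI terms, the two operations are (image name here is hypothetical):

openstack volume create --size 1 test                          # works
openstack volume create --image cirros --size 1 test-from-img  # NoValidBackend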
The cinder_volume and cinder_scheduler containers have the same settings in cinder.conf.
Before digging further into the error, could there be a compatibility problem between Antelope (2023.1) and Ceph Pacific? (I'm on Ubuntu 22.04.)