[kolla-ansible][cinder-schedule] Problem between Ceph and cinder-schedule
Good morning, I'm coming back for help, thanks in advance. I am testing an OpenStack 2023.1 / Ceph Pacific environment. The cluster works: from Horizon, for example, if I create a simple volume it works normally and is placed in the "volumes" pool in the Ceph cluster, exactly as expected. But if I create a volume from an image, I get an ERROR from cinder-scheduler. The cinder_volume and cinder_scheduler containers have the same settings in cinder.conf.

Before continuing to track down the error, could there be a compatibility problem between Antelope (2023.1) and Ceph Pacific? (I'm on Ubuntu 22.04.)

Franck
Hi Franck,

It would help if you could share more details about the error (check both the cinder-scheduler and cinder-volume logs).

Does your client.cinder user have the capabilities to access the images pool (presumably the one used by Glance)?

Best wishes,
Pierre Riteau (priteau)
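A minimal sketch of how one might check this on the Ceph side, assuming the default kolla-ansible pool names (volumes, vms, images) and the client.cinder user; adjust the pool list to your cluster:

  # Show the current capabilities of the cinder key
  ceph auth get client.cinder

  # Example caps that let cinder use its own pools and clone Glance images
  ceph auth caps client.cinder \
      mon 'profile rbd' \
      osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'

If client.cinder cannot read the images pool, creating an empty volume still works while creating a volume from an image fails.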
Hi Pierre, thanks for your help. Some information about my lab:

If I try this from the OpenStack side:

rbd -m ceph1 -n cinder.glance -k /home/user1/ceph.client.cinder.keyring create images/test --size 1G

and then:

rbd -m ceph1 -n client.cinder -k /home/user1/ceph.client.cinder.keyring ls images

I can see: test. I think it's OK.

Logs from cinder-scheduler:

2023-11-15 08:36:30.965 7 ERROR cinder.scheduler.flows.create_volume [req-8d476977-656b-48da-b03e-87ad193fe73d req-e0109262-976c-4afc-b0d3-b7d272f1a15b 9078f3f29a36400c8b217585b71b4e07 6bf25257fa744028b808f2aa5d261e7d - - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. Exceeded max scheduling attempts 3 for resource 6b7c2adb-9c05-4697-8768-5871b22e636b: cinder.exception.NoValidBackend: No valid backend was found. Exceeded max scheduling attempts 3 for resource 6b7c2adb-9c05-4697-8768-5871b22e636b

Logs from cinder_volume:

2023-11-15 08:11:14.354 1018 INFO cinder.volume.manager [None req-c295a329-03af-492a-b5b7-9ac2fa913667 - - - - - -] Initializing RPC dependent components of volume driver RBDDriver (1.3.0)
2023-11-15 08:11:14.411 1018 INFO cinder.volume.manager [None req-c295a329-03af-492a-b5b7-9ac2fa913667 - - - - - -] Driver post RPC initialization completed successfully.

tree /etc/kolla/config
/etc/kolla/config
├── cinder
│   ├── ceph.client.cinder.keyring
│   ├── ceph.conf
│   ├── cinder-backup
│   │   └── ceph.client.cinder.keyring
│   └── cinder-volume
│       ├── ceph.client.cinder.keyring
│       └── ceph.conf
├── glance
│   ├── ceph.client.glance.keyring
│   └── ceph.conf
└── nova
    ├── ceph.client.cinder.keyring
    ├── ceph.client.nova.keyring
    └── ceph.conf

ceph.conf:
[global]
fsid = 7bcf95a2-7eef-11ee-a183-080027950478
mon_host = [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] [v2:192.168.1.2:3300/0,v1:192.168.1.2:6789/0] [v2:192.168.1.3:3300/0,v1:192.168.1.3:6789/0]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

I'm probably forgetting something, but I don't see what.

Franck
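One quick thing to check when the scheduler reports NoValidBackend is whether the cinder-volume backend is registered and up. A minimal sketch, assuming the admin credentials file generated by kolla-ansible post-deploy:

  # Load admin credentials (path used by kolla-ansible post-deploy)
  . /etc/kolla/admin-openrc.sh

  # The RBD backend should show Binary cinder-volume with State "up";
  # if it is "down" or missing, every scheduling attempt will fail
  openstack volume service list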
Dear Franck,

For me, I needed a custom /etc/kolla/config/cinder.conf for Ceph and Cinder to work together. Since I am using Kayobe and two named backends, this may not match your requirements, but here are my cinder.conf files.

My /etc/kolla/config/cinder.conf is:

cat /etc/kolla/config/cinder.conf
# Ansible managed

#######################
# Extra configuration
#######################

[DEFAULT]
enabled_backends = ceph-ssd, ceph-hdd
glance_api_version = 2

[ceph-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-ssd
rbd_pool = volumes-ssd
rbd_ceph_conf = /etc/ceph/ceph.conf
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = <use cinder_rbd_secret_uuid value from /etc/kolla/passwords.yml or /etc/kayobe/kolla/passwords.yml>
report_discard_supported = True

[ceph-hdd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-hdd
rbd_pool = volumes-hdd
rbd_ceph_conf = /etc/ceph/ceph.conf
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = <use cinder_rbd_secret_uuid value from /etc/kolla/passwords.yml or /etc/kayobe/kolla/passwords.yml>
report_discard_supported = True

and my /etc/kayobe/kolla/config/cinder.conf is:

cat /etc/kayobe/kolla/config/cinder.conf
[DEFAULT]
enabled_backends = ceph-ssd, ceph-hdd
glance_api_version = 2

[ceph-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-ssd
rbd_pool = volumes-ssd
rbd_ceph_conf = /etc/ceph/ceph.conf
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = <use cinder_rbd_secret_uuid value from /etc/kolla/passwords.yml or /etc/kayobe/kolla/passwords.yml>
report_discard_supported = True

[ceph-hdd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-hdd
rbd_pool = volumes-hdd
rbd_ceph_conf = /etc/ceph/ceph.conf
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = <use cinder_rbd_secret_uuid value from /etc/kolla/passwords.yml or /etc/kayobe/kolla/passwords.yml>
report_discard_supported = True

Hope this helps.
--
Buddhika Sanjeewa Godakuru
Systems Analyst/Programmer, Deputy Webmaster
Information and Communication Technology Centre (ICTC), University of Kelaniya, Sri Lanka
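For a single Ceph backend like Franck's (pool "volumes", user "cinder"), a reduced variant of the above would look roughly like this. This is a sketch only; kolla-ansible normally generates an equivalent backend section on its own (typically named rbd-1 in recent releases) when cinder_backend_ceph is enabled:

  cat /etc/kolla/config/cinder.conf
  [DEFAULT]
  enabled_backends = rbd-1

  [rbd-1]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = rbd-1
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <cinder_rbd_secret_uuid from /etc/kolla/passwords.yml>
  report_discard_supported = True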
Hi, I don't know Kayobe. I'm going to use this configuration to see what's wrong with mine. Thank you so much!

Franck
Hi Pierre,

Maybe I found something: in the « cinder_volume » container there is a file /etc/ceph/ceph.conf, but there is no /etc/ceph/ceph.conf file in the « cinder_scheduler » container. I think that's my problem!! But why? What is the option in globals.yml? Or maybe there is something to add in /etc/kolla/config/cinder/.

I have this (the last try):

enable_cinder: "yes"
enable_cinder_backup: "no"
enable_cinder_backend_lvm: "no"
external_ceph_cephx_enabled: "yes"

# Glance
ceph_glance_keyring: "ceph.client.glance.keyring"
ceph_glance_user: "glance"
ceph_glance_pool_name: "images"

# Cinder
ceph_cinder_keyring: "ceph.client.cinder.keyring"
ceph_cinder_user: "cinder"
ceph_cinder_pool_name: "volumes"
#ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring"
#ceph_cinder_backup_keyring: "ceph.client.cinder.keyring"
#ceph_cinder_backup_user: "cinder"
#ceph_cinder_backup_user: "cinder-backup"
#ceph_cinder_backup_pool_name: "backups"

# Nova
#ceph_nova_keyring: "{{ ceph_cinder_keyring }}"
ceph_nova_keyring: "ceph.client.nova.keyring"
ceph_nova_user: "nova"
ceph_nova_pool_name: "vms"

# Configure image backend.
glance_backend_ceph: "yes"

# Enable / disable Cinder backends
cinder_backend_ceph: "yes"

# Nova - Compute Options
########################
nova_backend_ceph: "yes"

Thanks a lot,

Franck
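To confirm what kolla-ansible actually rendered from these settings, it can help to compare the generated cinder.conf for each service. A sketch, assuming the standard kolla layout (a per-service directory on the controller host, /etc/cinder/cinder.conf inside the containers):

  # Config as rendered on the host by kolla-ansible
  grep -A12 enabled_backends /etc/kolla/cinder-volume/cinder.conf

  # What the running containers actually see
  docker exec cinder_volume cat /etc/cinder/cinder.conf
  docker exec cinder_scheduler cat /etc/cinder/cinder.conf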
Sorry, but the scheduler does not require access to the /etc/ceph files. I think you need to review the scheduler logs to see why it concluded there are no available c-vol backends. Since you are able to create a volume, I assume the Ceph backend is "up", but it would also be good to verify that it remains up, especially while you are trying to create a volume from an image.

Alan
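Two concrete ways to follow this advice; a sketch, using the usual kolla-ansible file locations (adjust paths and the inventory name if yours differ):

  # 1) Watch the backend state while the volume-from-image request runs;
  #    the cinder-volume entry must stay "up" the whole time
  watch -n 5 'openstack volume service list'

  # 2) Enable debug logging (add "debug = True" under [DEFAULT] in
  #    /etc/kolla/config/cinder.conf), push it out, then see why the
  #    scheduler filters rejected the backend
  kolla-ansible -i multinode reconfigure --tags cinder
  grep -i filter /var/log/kolla/cinder/cinder-scheduler.log | tail -50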
Hi Alan, thanks for your help. Since I don't see the problem, since I can't find the solution in the cinder-scheduler logs, and since it's a lab with physical servers (OpenStack) and virtual machines (Ceph), I'm going to start all over again. I'm a beginner with Ceph; maybe there is something I did wrong. I will also change the version of Ceph. Thanks again.

Franck
Alan! Thanks a lot… I had doubts about the last config because I only wanted to use the Ceph cluster, but cinder-scheduler was looking for a backend it could not find. Since the cluster was functional, I did:

kolla-ansible -i multinode stop … destroy … deploy … post-deploy … init-runonce …

et voilà. This reset all configurations, additions and rollbacks to zero. And it works! Thanks!!

Franck
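For anyone repeating this reset, the sequence corresponds roughly to the following; a sketch only: the destroy step wipes all containers and data, and the location of the init-runonce script depends on how kolla-ansible was installed:

  kolla-ansible -i multinode stop
  kolla-ansible -i multinode destroy --yes-i-really-really-mean-it
  kolla-ansible -i multinode deploy
  kolla-ansible -i multinode post-deploy
  . /etc/kolla/admin-openrc.sh
  ./kolla-ansible/tools/init-runonce   # path depends on your installation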
participants (4)
- Alan Bishop
- Buddhika S. Godakuru - University of Kelaniya
- Franck VEDEL
- Pierre Riteau