RE: openstack-discuss Digest, Vol 61, Issue 61 - Nova Compute Service Issue
Hello OpenStack Team,

I have deployed the OpenStack Zed release as a multinode cluster through kolla-ansible (controller plus 1x storage and 1x compute node). The nova-compute service failed to register itself on the compute host (see attached), and as a result instances fail to launch because no availability zone is available. Can anyone guide me on how to resolve this nova service issue? (A rough diagnostic sketch is included after the first quoted message below.)

-----Original Message-----
From: openstack-discuss-request@lists.openstack.org <openstack-discuss-request@lists.openstack.org>
Sent: Thursday, November 16, 2023 2:15 AM
To: openstack-discuss@lists.openstack.org
Subject: openstack-discuss Digest, Vol 61, Issue 61

Today's Topics:

1. Re: [kolla-ansible][cinder-schedule] Problem between Ceph and cinder-schedule (Franck VEDEL)
2. Re: Re: [keystone] temporary workaround for deprecated _member_ role? (Michael Knox)
3. openstack: aborted: Failed to allocate the network(s), not rescheduling.]. (kjme001@gmail.com)

----------------------------------------------------------------------

Message: 1
Date: Wed, 15 Nov 2023 21:33:15 +0100
From: Franck VEDEL <franck.vedel@univ-grenoble-alpes.fr>
Subject: Re: [kolla-ansible][cinder-schedule] Problem between Ceph and cinder-schedule
To: Alan Bishop <abishop@redhat.com>
Cc: openstack-discuss@lists.openstack.org

Alan! Thanks a lot…. I had doubts about the last config because I only wanted to use the Ceph cluster, but cinder-scheduler was looking for a backend it could not find. The cluster was still functional, so:

kolla-ansible -i multinode stop ….. destroy …. deploy …. post-deploy …. init-runonce ….

Et voilà. This reset every configuration, all additions and changes, back to zero. And it works! Thanks!!

Franck
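Regarding the nova-compute registration issue at the top of this message, here is a first diagnostic sketch, assuming a Docker-based kolla-ansible deployment with the default container names and log paths (adjust to your environment):

    # On the controller: has the compute host registered at all, and is it up?
    openstack compute service list --service nova-compute

    # On the compute node: is the container running, and what does its log say?
    docker ps --filter name=nova_compute
    tail -n 100 /var/log/kolla/nova/nova-compute.log

    # If the service is registered but the host is not mapped to a cell,
    # run host discovery from the nova_api container on the controller:
    docker exec nova_api nova-manage cell_v2 discover_hosts --verbose

In kolla deployments this kind of failure is often a connectivity problem between the compute node and RabbitMQ or the database on the controller; the nova-compute log usually makes the cause clear.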
On 15 Nov 2023, at 15:43, Alan Bishop <abishop@redhat.com> wrote:
On Wed, Nov 15, 2023 at 6:32 AM Franck VEDEL <franck.vedel@univ-grenoble-alpes.fr> wrote:
Hi Pierre,

Maybe I found something: in the "cinder_volume" container there is an /etc/ceph/ceph.conf file, but there is no /etc/ceph/ceph.conf file in the "cinder_scheduler" container. I think that's my problem!
Sorry, but the scheduler does not require access to /etc/ceph files. I think you need to review the scheduler logs to see why it concluded there are no available c-vol backends. Since you are able to create a volume, I assume the ceph backend is "up," but it would also be good to verify it remains up, especially during the time when you're trying to create a volume from an image.
Alan
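A quick way to verify this, assuming the default kolla container names and log locations:

    # Is the ceph backend of cinder-volume reported as enabled and up?
    openstack volume service list

    # Follow the scheduler log while retrying the volume-from-image creation
    tail -f /var/log/kolla/cinder/cinder-scheduler.log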
But why? What is the option in globals.yml? Or maybe there is something to add in /etc/kolla/config/cinder/?
Here is what I have (my latest attempt):
enable_cinder: "yes" enable_cinder_backup: "no" enable_cinder_backend_lvm: "no" external_ceph_cephx_enabled: "yes"
# Glance ceph_glance_keyring: "ceph.client.glance.keyring" ceph_glance_user: "glance" ceph_glance_pool_name: "images" # Cinder ceph_cinder_keyring: "ceph.client.cinder.keyring" ceph_cinder_user: "cinder" ceph_cinder_pool_name: "volumes"
#ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring" #ceph_cinder_backup_keyring: "ceph.client.cinder.keyring" #ceph_cinder_backup_user: "cinder" #ceph_cinder_backup_user: "cinder-backup" #ceph_cinder_backup_pool_name: "backups" # Nova #ceph_nova_keyring: "{{ ceph_cinder_keyring }}" ceph_nova_keyring: "ceph.client.nova.keyring" ceph_nova_user: "nova" ceph_nova_pool_name: "vms"
# Configure image backend. glance_backend_ceph: "yes"
# Enable / disable Cinder backends cinder_backend_ceph: "yes"
# Nova - Compute Options ######################## nova_backend_ceph: "yes"
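For reference, with cinder_backend_ceph enabled I understand kolla-ansible to render an RBD backend section in the generated cinder.conf along these lines (a sketch only; the backend name and exact values may differ and can be checked inside the cinder_volume container):

    [DEFAULT]
    enabled_backends = rbd-1

    [rbd-1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = rbd-1
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf

If I understand the override mechanism correctly, anything placed under /etc/kolla/config/cinder/ (for example a cinder-volume.conf) is merged over this generated file on the next deploy or reconfigure.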
Thanks a lot
Franck
On 15 Nov 2023, at 09:54, Pierre Riteau <pierre@stackhpc.com> wrote:
Hi Franck,
It would help if you could share more details about the error (check both cinder-scheduler and cinder-volume logs).
Does your client.cinder user have the capabilities to access the images pool (presumably the pool used by Glance)?
Best wishes, Pierre Riteau (priteau)
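A quick way to check this, assuming the cephx user is named client.cinder, is to inspect its capabilities on the Ceph side:

    # Show the caps granted to the cinder user
    ceph auth get client.cinder

    # For creating volumes from images, the OSD caps typically need read access
    # to the images pool in addition to the volumes pool, for example:
    #   caps osd = "profile rbd pool=volumes, profile rbd-read-only pool=images"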
On Wed, 15 Nov 2023 at 09:50, Franck VEDEL <franck.vedel@univ-grenoble-alpes.fr> wrote:
Good morning, I'm back asking for help, thanks in advance. I am testing an OpenStack 2023.1/Ceph Pacific environment. The cluster works: from Horizon, for example, if I create a simple volume it works normally and is placed in the "volumes" pool in the Ceph cluster, exactly as expected.
But if I create a volume from an image, I get an ERROR from cinder-scheduler. The cinder_volume and cinder_scheduler containers have the same settings in cinder.conf.
Before digging further into the error: could there be a compatibility problem between Antelope (2023.1) and Ceph Pacific? (I'm on Ubuntu 22.04.)
Franck