G’day Max,

 

At risk of the blind leading the blind: we have a kolla-ansible deployment with multiple Ceph backends, and we haven’t run into the issue you’re describing, which suggests you might be missing some config, or some context.

 

 

I can confirm that we have Cinder and Glance with 3 different Ceph backends and 5 volume types, which happily allow volume-to-image conversion in both Horizon and the OpenStack client without having to specify image_service:store_id. We did have some teething issues, which were mostly down to missing cinder.conf settings and to correctly setting up projects and user policy. For example, you might have everything configured correctly, but policy is preventing your user from performing the requested task.
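For what it’s worth, where we do want a volume type pinned to a particular Glance store, it’s just the image_service:store_id extra spec on the type. A quick sketch (the store name "ceph-a" and type name "fast-volumes" are placeholders, not our real names):

```shell
# Pin image uploads from volumes of this type to one Glance store
# ("ceph-a" and "fast-volumes" are placeholder names).
openstack volume type set --property image_service:store_id=ceph-a fast-volumes

# Check the extra spec took effect:
openstack volume type show fast-volumes -c properties
```

If the property is absent, uploads should land in Glance’s default store, as you describe.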

 

Hopefully rubber-ducking some of these things will lead you to the answers you seek.

 

Kind Regards,

 

Joel McLean – Micron21 Pty Ltd

 

From: Maximilian Stinsky-Damke <Maximilian.Stinsky-Damke@wiit.cloud>
Sent: Friday, 6 June 2025 6:43 PM
To: openstack-discuss@lists.openstack.org
Subject: [cinder] Volume to Image Upload with Multi-Store Backends (Cinder + Glance)

 

Greetings,

 

I'm in the process of setting up a new OpenStack deployment where both Cinder and Glance are configured with multiple Ceph-based backends using the multi-store functionality.
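For context, the Glance side follows the standard multi-store layout, along these lines (store names, pools, and paths below are placeholders, not my actual values):

```ini
# glance-api.conf -- illustrative multi-store layout (all names are examples)
[DEFAULT]
enabled_backends = ceph-a:rbd, ceph-b:rbd

[glance_store]
default_backend = ceph-a

[ceph-a]
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph-a.conf

[ceph-b]
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph-b.conf
```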

Basic operations (create, delete, snapshot, upload, etc.) work fine for each store in both services. However, I'm encountering issues when trying to upload a Cinder volume to Glance as an image in this setup.

My understanding is that when the volume type does not include image_service:store_id, Cinder should upload the image to Glance's default store. However, in my case, the upload fails with the following error:

ERROR oslo_messaging.rpc.server FileNotFoundError: [Errno 2] No such file or directory: '/etc/ceph/ceph.client.cinder.keyring'

 

I'm having trouble understanding why Cinder is trying to access this particular keyring. Each of my Cinder backends has its own Ceph config and keyring pointing to the appropriate cluster, and everything else works correctly.
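For reference, each backend is wired up roughly like this (cluster names, pools, and UUIDs here are placeholders), with the keyring path set via the keyring option inside each cluster's ceph.conf rather than the default /etc/ceph location:

```ini
# cinder.conf -- per-backend Ceph wiring (illustrative names only)
[DEFAULT]
enabled_backends = ceph-a,ceph-b

[ceph-a]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-a
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph-a.conf
rbd_secret_uuid = <libvirt-secret-uuid-for-ceph-a>

[ceph-b]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-b
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph-b.conf
rbd_secret_uuid = <libvirt-secret-uuid-for-ceph-b>
```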

Here are my specific questions:

  1. Shouldn't Cinder upload the volume to Glance via the API, without needing direct access to the Glance Ceph store?
  2. If Cinder is expected to access the Glance backend directly (e.g., to optimize data movement), how does it determine which Ceph config and keyring to use?
  3. Is there any documentation or configuration guidance available on how to tell Cinder which credentials to use when interacting with Glance stores?

Any insights or pointers would be greatly appreciated.

 

Best regards,
Max