Greetings,

Thanks for the answer, and sorry for the late response; unfortunately I was sick for the last few weeks.
Sadly, I have not been able to get any further on this topic so far.

Just for reference I have the following config in cinder:
[DEFAULT]
enabled_backends = rbd-az1,rbd-az2,rbd-az3,rbd-sz1

Each of these cinder backends has its own backend section pointing to its own ceph conf and ceph keyring.
I am able to create volumes in each backend, attach them to nova and so on; that all works fine.
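For illustration, each backend section looks roughly like this (pool names, users and paths are placeholders, not my exact values):

[rbd-az1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-az1
rbd_pool = volumes
rbd_user = cinder-az1
rbd_ceph_conf = /etc/ceph/ceph-az1.conf
rbd_secret_uuid = <libvirt secret uuid for az1>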

In glance I have more or less the same:
[DEFAULT]
enabled_backends = rbd-az1:rbd, rbd-az2:rbd, rbd-az3:rbd, rbd-sz1:rbd

[glance_store]
default_backend = rbd-sz1

And each of those backends again points to its own rbd_store_ceph_conf.
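Just as an example (placeholder values again), such a backend section looks roughly like:

[rbd-az1]
rbd_store_ceph_conf = /etc/ceph/ceph-az1.conf
rbd_store_user = glance-az1
rbd_store_pool = images
store_description = "ceph cluster in az1"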

Now the confusing part concerns the error message I posted:
(ERROR oslo_messaging.rpc.server FileNotFoundError: [Errno 2] No such file or directory: '/etc/ceph/ceph.client.cinder.keyring')

This message comes from cinder-volume when I run: openstack image create --volume $volume_id $image_name
None of my backends, however, have /etc/ceph/ceph.client.cinder.keyring configured as their keyring in cinder.
Therefore I think cinder-volume falls back to a default keyring when trying to connect to ceph.

But here is the main question: why is cinder trying to connect to ceph at all when doing an image upload?
Should cinder not just use the glance API for that?
And how would cinder know which ceph backend is behind glance's default backend?

The option image_upload_use_cinder_backend defaults to false and I do not set it in my config, so I would expect cinder not to connect to ceph to do an RBD clone into glance.
But maybe I fundamentally misunderstand something about the volume-to-image upload and why cinder tries to connect directly to some ceph cluster while doing so.
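For reference, if I were to set it explicitly, I assume it would go into each backend section, roughly like:

[rbd-az1]
image_upload_use_cinder_backend = false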

Thanks again for any help.

Best regards
Max



From: Joel McLean <joel.mclean@micron21.com>
Sent: 11 June 2025 09:44
To: Dmitriy Rabotyagov <noonedeadpunk@gmail.com>
Cc: Maximilian Stinsky-Damke <Maximilian.Stinsky-Damke@wiit.cloud>; openstack-discuss <openstack-discuss@lists.openstack.org>
Subject: RE: [cinder] Volume to Image Upload with Multi-Store Backends (Cinder + Glance)
 

In writing this all out the first time, I had to delete it all, because I realised that in our config we only use one of our 3 ceph clusters for glance: all 3 for cinder, more or less following the documentation, but only a single backend for glance, which follows a similar setup except that it isn't a named group. This is the default supported arrangement for kolla-ansible deployments.

 

So, it turns out this was a case of the blind leading the blind.

 

That said, the configuration at https://docs.openstack.org/kolla-ansible/2024.1/reference/storage/external-ceph-guide.html is identical to how our configuration is set up; we just don't have the other two ceph backends configured for glance, only cinder.

 

Reading this documentation, it's not clear to me what would happen if I were to run the same volume-to-image upload: I'm not sure how OpenStack would decide which glance storage backend it should use, and the OpenStack client does not appear to offer any argument for specifying one.

Perhaps it can only be specified via metadata/properties when creating the image, based on the information at https://docs.openstack.org/glance/latest/admin/multistores.html

Specifically, that document says that glance-api.conf must have enabled_backends defined and a default backend set:

[DEFAULT]
enabled_backends = ceph:rbd

[glance_store]
default_backend = ceph

On that page there's a reference suggesting that image metadata might be used to help select which store an image should live on, but it's not explicitly documented.
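If I read the multistore documentation correctly, a store can at least be chosen explicitly at import time via the image import API ("stores" in the request body, or "all_stores": true). Something along these lines (an untested sketch; the store name is just an assumption):

POST /v2/images/{image_id}/import
{
    "method": {"name": "glance-direct"},
    "stores": ["ceph"]
}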

 

Specific to kolla-ansible, backends for multistore are defined: https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/glance/templates/glance-api.conf.j2

So, if you haven't set them, you can probably set them via the appropriate override file, which appears to be /etc/kolla/config/glance-api.conf.
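For example (an untested sketch, backend names are only an assumption), something like this in /etc/kolla/config/glance-api.conf should get merged over the generated config:

[DEFAULT]
enabled_backends = rbd-az1:rbd, rbd-az2:rbd

[glance_store]
default_backend = rbd-az1

[rbd-az2]
rbd_store_ceph_conf = /etc/ceph/ceph-az2.conf
rbd_store_user = glance
rbd_store_pool = images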

 

The oversight with kolla-ansible out of the box is that it defaults to your first configured ceph backend:

/glance/defaults/main.yml:glance_default_backend: "{% if glance_backend_vmware | bool %}vmware{% elif glance_backend_ceph | bool %}{{ glance_ceph_backends[0].name }}{% elif glance_backend_swift | bool %}swift{% elif glance_backend_s3 | bool %}s3{% else %}file{% endif %}"

 

… and the glance ceph backend default template only has one backend, called "rbd", which is effectively hard-coded. This could be worked around in kolla-ansible by modifying the template, but that's where things will fall off the rails for most users.

 

Still, hopefully this points you in the right direction to get this working?

 

Kind Regards,

 

Joel McLean – Micron21 Pty Ltd

 

From: Dmitriy Rabotyagov <noonedeadpunk@gmail.com>
Sent: Wednesday, 11 June 2025 4:31 PM
To: Joel McLean <joel.mclean@micron21.com>
Cc: Maximilian Stinsky-Damke <Maximilian.Stinsky-Damke@wiit.cloud>; openstack-discuss <openstack-discuss@lists.openstack.org>
Subject: Re: [cinder] Volume to Image Upload with Multi-Store Backends (Cinder + Glance)

 

 

 

I can confirm that we have cinder and glance with 3 different ceph backends 

I hope you don't mind some questions about this part of the setup, as I have never quite understood how to make such a setup work.

I have not found how a user or service can tell which glance backend to use when spawning an instance or a magnum cluster, or something like that.

 

Also, I was wondering: do you have network reachability between the ceph clusters, or are they isolated in different network segments and unable to reach each other? Network-isolated clusters (different availability zones) were the use case I was looking into, while keeping the same image set available for all of them, and I got stuck on understanding how multi-store glance is actually used with import to multiple stores, and how cinder would select a storage backend which is reachable, other than trying them randomly and waiting for connection timeouts until it finds one that works.

 

I think I have missed something in my logic, so if you can share some insights on the topic, it would be really appreciated.

 

 

