[cinder] Volume to Image Upload with Multi-Store Backends (Cinder + Glance)
Greetings,

I'm in the process of setting up a new OpenStack deployment where both Cinder and Glance are configured with multiple Ceph-based backends using the multi-store functionality. Basic operations (create, delete, snapshot, upload, etc.) work fine for each store in both services.

However, I'm encountering issues when trying to upload a Cinder volume to Glance as an image in this setup. My understanding is that when the volume type does not include image_service:store_id, Cinder should upload the image to the default Glance store. In my case, though, the upload fails with the following error:

    ERROR oslo_messaging.rpc.server FileNotFoundError: [Errno 2] No such file or directory: '/etc/ceph/ceph.client.cinder.keyring'

I'm having trouble understanding why Cinder is trying to access this particular keyring. Each of my Cinder backends has its own Ceph config and keyring pointing to the appropriate cluster, and everything else works correctly.

Here are my specific questions:

1. Shouldn't Cinder upload the volume to Glance via the API, without needing direct access to the Glance Ceph store?
2. If Cinder is expected to access the Glance backend directly (e.g., to optimize data movement), how does it determine which Ceph config and keyring to use?
3. Is there any documentation or configuration guidance available on how to tell Cinder which credentials to use when interacting with Glance stores?

Any insights or pointers would be greatly appreciated.

Best regards,
Max
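For reference, that extra spec is set on a volume type with something roughly like the following; the volume type name "rbd-az1" and store id "store-az1" are just placeholders, not names from this deployment:

    # Volumes of this type would then be uploaded to the named Glance store;
    # without the property, Cinder targets Glance's default_backend.
    openstack volume type set --property image_service:store_id=store-az1 rbd-az1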
G'day Max,

At risk of the blind leading the blind: we have a kolla-ansible deployment with multiple Ceph backends and we haven't run into the issue you're describing, which suggests you might be missing some config, or some context.

* How have you deployed your OpenStack?
* What do the ceph.conf, cinder.conf and glance.conf files look like?
* In our kolla-ansible deployment:
  * the cinder.conf specifies a default_volume_type and a few other minor settings
  * the cinder-volume config directory looks like:

    joel.mclean@jblin-kil-dcl-050:/etc/kolla$ ls config/cinder/cinder-volume/
    ceph-geographic.client.cinder.keyring  ceph-hybrid.client.cinder.keyring  ceph-nvme.client.cinder.keyring
    ceph-geographic.conf                   ceph-hybrid.conf                   ceph-nvme.conf

* Is ceph.client.cinder.keyring a valid storage backend? I.e. do you have a backend just called "ceph"?
* How did you attempt the conversion of the volume to the image?
  * We've found that actions taken through Horizon don't always give a meaningful error; Horizon can also pick up incorrect context (such as the default volume type) if it isn't set up properly, and it may not be visible to the member.
* The error indicates that the keyring wasn't found, but which process was searching for it?
  * Review the cinder and glance logs to determine what they were actually trying to do at that point in time.

I can confirm that we have Cinder and Glance with 3 different Ceph backends and 5 volume types which happily allow volume-to-image conversion in both Horizon and the OpenStack client without having to specify image_service:store_id; but we had some teething issues to work through, mostly to do with missing cinder.conf settings and correctly setting up projects and user policy. For example, you might have everything configured correctly, but policy is preventing your user from performing the requested task.

Hopefully rubber-ducking some of these things will lead you to the answers you seek.

Kind Regards,
Joel McLean - Micron21 Pty Ltd
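For comparison, the matching cinder.conf for a layout like the one listed above usually ends up looking roughly like the sketch below; the section names and paths are illustrative only, with each backend pointing at its own cluster's ceph.conf (which in turn should name its keyring):

    [DEFAULT]
    enabled_backends = ceph-nvme,ceph-hybrid,ceph-geographic

    [ceph-nvme]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-nvme
    rbd_pool = volumes
    rbd_user = cinder
    rbd_cluster_name = ceph-nvme
    rbd_ceph_conf = /etc/ceph/ceph-nvme.conf

    # ...one section per backend, e.g. [ceph-hybrid] and [ceph-geographic],
    # each with its own rbd_cluster_name and rbd_ceph_conf.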
From: Dmitriy Rabotyagov <noonedeadpunk@gmail.com>
Sent: Wednesday, 11 June 2025 4:31 PM
Subject: Re: [cinder] Volume to Image Upload with Multi-Store Backends (Cinder + Glance)

> I can confirm that we have cinder and glance with 3 different ceph backends
I hope you don't mind some questions about this part of the setup, as I've never quite understood how to make such a setup work. I haven't found how a user or service can tell Glance which backend to use when spawning an instance or a Magnum cluster, or something like that.

Also, I was wondering: do you have network reachability between the Ceph clusters, or are they isolated in different network segments and unable to reach each other? Network-isolated clusters (different availability zones) were the use case I was looking into, while keeping the same image set available to all of them. I'm still stuck on understanding how multi-store Glance is actually used with import to multiple stores, and how Cinder would select a store that is reachable, other than trying them at random and waiting for connection timeouts until it finds one.

I think I have missed something in my logic, so if you can share some insights on the topic it would be really appreciated.
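For what it's worth, the Glance multi-store docs do describe picking a store at upload/import time; a rough CLI sketch (store names, image names and the file are made up) would be:

    # Upload to one specific store (the client passes the store to glance-api):
    glance image-create --name cirros-az1 --disk-format qcow2 \
        --container-format bare --file cirros.img --store rbd-az1

    # Or use the interoperable import flow to land the image in several stores:
    glance image-create-via-import --name cirros-multi --disk-format qcow2 \
        --container-format bare --file cirros.img --stores rbd-az1,rbd-az2
    # (there is also an --all-stores option to target every enabled store)

    # Afterwards the image's "stores" property shows where the data ended up:
    openstack image show cirros-multi

That only controls where the image data is written, though; how Nova or Cinder later picks a reachable store when consuming the image is a separate question.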
In writing this all out the first time I had to delete it all, because I realised that in my config we are only using one of our 3 Ceph clusters for Glance: 3 for Cinder, more or less following the documentation, but only a single backend for Glance, which follows a similar setup except that it isn't a named group. This is the default supported arrangement for kolla-ansible deployments. So, it turns out, this was a case of the blind leading the blind.

That said, the configuration at https://docs.openstack.org/kolla-ansible/2024.1/reference/storage/external-ceph-guide.html is identical to how our configuration is set up – we just don't have the other two Ceph backends configured for Glance, only Cinder.

Reading this documentation, it's not clear to me, if I were to run

    openstack image create --volume my-volume my-new-image-name

how OpenStack would decide which Glance storage backend it should use; the OpenStack client does not appear to offer an argument for specifying it. Perhaps it can only be specified by metadata/properties when creating the image, based on information from https://docs.openstack.org/glance/latest/admin/multistores.html

Specifically, this document says that glance-api.conf must have enabled_backends defined and a default backend set:

    [DEFAULT]
    enabled_backends = ceph:rbd

    [glance_store]
    default_backend = ceph

On that page there's a reference to some metadata suggesting that image metadata might be used to help select which store the image should live on, but it's not explicitly documented.

Specific to kolla-ansible, backends for multistore are defined in:
https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/glance/templates/glance-api.conf.j2

So, if you haven't set them, you can probably set them via the appropriate override, which appears to be /etc/kolla/config/glance-api.conf

The oversight with kolla-ansible out of the box is that it defaults to your first configured Ceph backend:

    /glance/defaults/main.yml:glance_default_backend: "{% if glance_backend_vmware | bool %}vmware{% elif glance_backend_ceph | bool %}{{ glance_ceph_backends[0].name }}{% elif glance_backend_swift | bool %}swift{% elif glance_backend_s3 | bool %}s3{% else %}file{% endif %}"

... and the Glance Ceph backend default template only has one backend, called "rbd", which is effectively hard coded. This could be worked around in kolla-ansible by modifying the template, but that's where things will fall off the rails for most users.

Still, hopefully this points you in the right direction to get this working?

Kind Regards,
Joel McLean – Micron21 Pty Ltd
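To make that override concrete, a minimal multi-store glance-api.conf for two Ceph-backed stores might look roughly like the sketch below; the store names, pools and paths are placeholders rather than anything kolla-ansible generates by default:

    [DEFAULT]
    enabled_backends = rbd-az1:rbd, rbd-az2:rbd

    [glance_store]
    default_backend = rbd-az1

    [rbd-az1]
    rbd_store_ceph_conf = /etc/ceph/ceph-az1.conf
    rbd_store_user = glance
    rbd_store_pool = images
    store_description = "Ceph cluster in AZ1"

    [rbd-az2]
    rbd_store_ceph_conf = /etc/ceph/ceph-az2.conf
    rbd_store_user = glance
    rbd_store_pool = images
    store_description = "Ceph cluster in AZ2"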
Greetings,

Thanks for the answer, and sorry for the late response; I was sadly sick for the last few weeks. Unfortunately I have not been able to get any further on this topic so far.

Just for reference, I have the following config in cinder:

    [DEFAULT]
    enabled_backends = rbd-az1,rbd-az2,rbd-az3,rbd-sz1

Each of these Cinder backends has its own backend config pointing to its own ceph.conf and Ceph keyring. I am able to create volumes in each backend, attach them to Nova instances, and so on; that all works fine.

In Glance I have more or less the same:

    [DEFAULT]
    enabled_backends = rbd-az1:rbd, rbd-az2:rbd, rbd-az3:rbd, rbd-sz1:rbd

    [glance_store]
    default_backend = rbd-sz1

And each of those backends again points to its own rbd_store_ceph_conf.

Now the confusing part is the error message I posted:

    ERROR oslo_messaging.rpc.server FileNotFoundError: [Errno 2] No such file or directory: '/etc/ceph/ceph.client.cinder.keyring'

This message comes from cinder-volume when I run

    openstack image create --volume $volume_id $image_name

even though none of my backends have /etc/ceph/ceph.client.cinder.keyring configured as their keyring in Cinder. So I think cinder-volume falls back to a default keyring when trying to connect to Ceph.

But here is the main question: why is Cinder trying to connect to Ceph at all when doing an image upload? Shouldn't Cinder just use the Glance API for that? How would Cinder know which Ceph backend sits behind Glance's default backend? The option image_upload_use_cinder_backend defaults to false and I don't set it in my config, so I would expect Cinder not to try to connect to Ceph to do an RBD clone into Glance. But maybe I fundamentally misunderstand something about the volume-to-image upload and why Cinder connects directly to a Ceph cluster while doing so.

Thanks again for any help.

Best regards,
Max
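One thing worth ruling out here, offered as a hedge rather than a diagnosis: Ceph's client libraries fall back to defaults when not told otherwise, and the default keyring search path includes /etc/ceph/$cluster.$name.keyring, so a connection opened with the default cluster name "ceph" and user "cinder" would look for exactly /etc/ceph/ceph.client.cinder.keyring. Pointing each per-backend ceph.conf explicitly at its keyring at least removes the librados side of that fallback; the values below are placeholders:

    # /etc/ceph/ceph-az1.conf, i.e. the file referenced by rbd_ceph_conf /
    # rbd_store_ceph_conf for the rbd-az1 backend (placeholder values)
    [global]
    fsid = <az1 cluster fsid>
    mon_host = <az1 monitor addresses>

    [client.cinder]
    keyring = /etc/ceph/ceph-az1.client.cinder.keyring

    [client.glance]
    keyring = /etc/ceph/ceph-az1.client.glance.keyring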