In writing this all out the first time, I had to delete it all, because I realised that in our config we are only using one of our 3 ceph clusters for glance: all 3 backends are configured for cinder, more or less following the documentation, but glance has only a single backend, set up the same way except that it isn't a named group. This is the default supported arrangement for kolla-ansible deployments.
So, as it turns out, this was a case of the blind leading the blind.
That said, the configuration at
https://docs.openstack.org/kolla-ansible/2024.1/reference/storage/external-ceph-guide.html is identical to how our configuration is set up – we just don’t have the other two ceph back ends configured for glance, only cinder.
Reading this documentation, it's not clear to me how OpenStack would decide which glance storage backend it should use if I were to run a volume-to-image upload; the openstack client does not appear to offer any argument for specifying one.
Perhaps it can only be specified via metadata/properties when creating the image, based on information from
https://docs.openstack.org/glance/latest/admin/multistores.html
Specifically, this document states that glance-api.conf must have enabled_backends defined, and that a default backend must be set:
[DEFAULT]
enabled_backends = ceph:rbd
[glance_store]
default_backend = ceph
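Extrapolating from that document, a two-backend glance-api.conf would presumably look something like the following (untested sketch; the "ceph-secondary" name, ceph.conf paths and pools are hypothetical, and each per-store section name must match an entry in enabled_backends):
[DEFAULT]
enabled_backends = ceph:rbd, ceph-secondary:rbd
[glance_store]
default_backend = ceph
[ceph]
# Primary cluster; standard glance_store rbd driver options
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
store_description = Primary ceph cluster
[ceph-secondary]
# Hypothetical second cluster, pointed at its own ceph.conf and keyring
rbd_store_ceph_conf = /etc/ceph/ceph2.conf
rbd_store_user = glance
rbd_store_pool = images
store_description = Secondary ceph cluster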
On that page there's a reference suggesting that image metadata might be used to help glance select which store an image should live on, but it's not explicitly documented.
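That said, python-glanceclient (as opposed to the unified openstack client) does appear to let you target a store directly; a hedged example, untested on our end and with a hypothetical store name matching the enabled_backends sketch above:
# Ask glance-api which stores it advertises (GET /v2/info/stores)
glance stores-info
# Upload straight to a named store rather than the default
glance image-create --name test-image --disk-format qcow2 \
    --container-format bare --file ./test.qcow2 --store ceph-secondary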
Specific to kolla-ansible, the backends for multistore are defined in the glance-api.conf template:
https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/glance/templates/glance-api.conf.j2
So, if you haven't set them, you can probably set them via the appropriate override, which appears to be /etc/kolla/config/glance-api.conf.
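In other words, assuming kolla-ansible's usual config-merge behaviour applies to glance the way it does for other services (I haven't verified the exact path), something like:
# /etc/kolla/config/glance-api.conf - merged over the generated config
[DEFAULT]
enabled_backends = ceph:rbd, ceph-secondary:rbd
[glance_store]
default_backend = ceph
… followed by a reconfigure of just the glance role:
kolla-ansible -i <inventory> reconfigure --tags glance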
The oversight with kolla-ansible out of the box is that it defaults to your first configured ceph backend:
ansible/roles/glance/defaults/main.yml:
glance_default_backend: "{% if glance_backend_vmware | bool %}vmware{% elif glance_backend_ceph | bool %}{{ glance_ceph_backends[0].name }}{% elif glance_backend_swift | bool %}swift{% elif glance_backend_s3 | bool %}s3{% else %}file{% endif %}"
… and the glance ceph backend default template only has one, called "rbd", which is effectively hard-coded. This could be worked around in kolla-ansible by modifying the template, but that's where things will fall off the rails for most users.
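On current master, where glance_ceph_backends is a list variable, you might be able to avoid template surgery entirely by overriding the list in globals.yml instead – an untested sketch, with hypothetical cluster names and the keys guessed from the role's defaults/main.yml:
glance_ceph_backends:
  - name: "rbd"
    type: "rbd"
    cluster: "ceph"
    enabled: "{{ glance_backend_ceph | bool }}"
  - name: "rbd-secondary"
    type: "rbd"
    cluster: "ceph2"
    enabled: "{{ glance_backend_ceph | bool }}"
With that, glance_default_backend would still resolve to glance_ceph_backends[0].name ("rbd") unless you also set glance_default_backend explicitly.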
Still, hopefully this points you in the right direction to get this working?
Kind Regards,
Joel McLean – Micron21 Pty Ltd
From: Dmitriy Rabotyagov <noonedeadpunk@gmail.com>
Sent: Wednesday, 11 June 2025 4:31 PM
To: Joel McLean <joel.mclean@micron21.com>
Cc: Maximilian Stinsky-Damke <Maximilian.Stinsky-Damke@wiit.cloud>; openstack-discuss <openstack-discuss@lists.openstack.org>
Subject: Re: [cinder] Volume to Image Upload with Multi-Store Backends (Cinder + Glance)
I can confirm that we have cinder and glance with 3 different ceph backends
I hope you don't mind some questions about this part of the setup, as I've never quite understood how to make such a setup work.
For one, I haven't found how a user or service can tell which glance backend to use when spawning an instance or a magnum cluster, or something like that.
I was also wondering: do you have network reachability between the ceph clusters, or are they isolated in different network segments and unable to reach each other? Network-isolated clusters (different availability zones) was the use case I was looking into, while keeping the same image set available to all of them, and I'm somewhat stuck on understanding how multi-store glance is actually used with import to multiple stores, and how cinder would select a backend that is reachable, short of checking them at random and waiting out connection timeouts until it finds a reachable one.
I think I have missed something in my logic, so if you can share some insights on the topic, it would be really appreciated.