Hi,

I am able to use tiny pools in my (virtual) test cluster without such an error. At which point do you get that error? Can you show the entire log entry? Maybe turn on debug mode to see more details. Does the cinder keyring have access to both the metadata and the data pool? Do you specify an rbd data pool in the ceph.conf of the cinder host (a short example of what I mean is at the very bottom of this mail)? If you had 1.5 TB of free space, small volumes shouldn't be a problem, I would assume. So can you create any volumes at all, or just not big ones?

Quoting "Yuta Kambe (Fujitsu)" <yuta.kambe@fujitsu.com>:
Hi everyone,
I'm using a Ceph Erasure Coded pool as the backend for OpenStack Cinder. I've created a replicated pool (volumes_meta) for metadata and an erasure-coded pool (volumes_data) for data, following the documentation here: https://docs.ceph.com/en/latest/rados/operations/erasure-code/
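For reference, the pool setup that documentation describes looks roughly like this (the PG counts and the default erasure-code profile here are just placeholders, not necessarily what I used):

$ ceph osd pool create volumes_meta 32 32 replicated
$ ceph osd pool create volumes_data 32 32 erasure
$ ceph osd pool set volumes_data allow_ec_overwrites true
$ rbd pool init volumes_meta
$ rbd pool init volumes_data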
When I set rbd_pool to volumes_meta in cinder.conf, I receive an "Insufficient free virtual space" error, even though volumes_data has ample free space. This is because Cinder is calculating free space based on the metadata pool.
When rbd_pool is set to volumes_data, Cinder correctly calculates free space based on the data pool. However, I then encounter an "RBD operation not supported" error during volume creation.
I need Cinder to recognize the free space available in volumes_data while avoiding the "RBD operation not supported" error. I'm unsure how to configure this correctly and would appreciate any advice.
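For illustration, the layout I am aiming for is what the rbd CLI produces with --data-pool, i.e. the image header and metadata live in the replicated pool while the data objects go to the erasure-coded pool (test-image is just an example name):

$ rbd create --size 1G --data-pool volumes_data volumes_meta/test-image
$ rbd info volumes_meta/test-image    # should report "data_pool: volumes_data"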
The following is supplemental information.
cinder.conf is:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes_meta
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = <uuid>
For example, if the pool usage is as follows, Cinder gets 1.6 TiB as the pool free space:
$ ceph df
--- POOLS ---
POOL          ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
volumes_meta   4   32   40 KiB       24  189 KiB      0    1.6 TiB
volumes_data   7   32   61 GiB   15.67k   92 GiB   2.80    2.1 TiB
$ cinder get-pools --detail
+-----------------------------+-----------------------------------------------------+
| Property                    | Value                                               |
+-----------------------------+-----------------------------------------------------+
| allocated_capacity_gb       | 523                                                 |
| backend_state               | up                                                  |
| driver_version              | 1.3.0                                               |
| filter_function             | None                                                |
| free_capacity_gb            | 1592.07  <-★ free capacity of volumes_meta          |
| goodness_function           | None                                                |
| location_info               | ceph:/etc/ceph/ceph.conf:<uuid>:cinder:volumes_meta |
| max_over_subscription_ratio | 20.0                                                |
| multiattach                 | True                                                |
| name                        | <hostname>@ceph#ceph                                |
| qos_support                 | True                                                |
| replication_enabled         | False                                               |
| reserved_percentage         | 0                                                   |
| storage_protocol            | ceph                                                |
| thin_provisioning_support   | True                                                |
| timestamp                   | 2025-05-29T07:44:17.903149                          |
| total_capacity_gb           | 1592.07                                             |
| vendor_name                 | Open Source                                         |
| volume_backend_name         | ceph                                                |
+-----------------------------+-----------------------------------------------------+
OpenStack and Ceph versions are:
$ ceph -v
ceph version 17.2.8 (f817ceb7f187defb1d021d6328fa833eb8e943b3) quincy (stable)
$ dnf list --installed | grep openstack-cinder
openstack-cinder.noarch    1:21.3.2-1.el9s    @centos-openstack-zed
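The data pool setting I was referring to would be something like this in the ceph.conf on the cinder host (an untested sketch; the section name assumes rbd_user = cinder), while rbd_pool in cinder.conf keeps pointing at the replicated pool:

[client.cinder]
rbd default data pool = volumes_data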