[cinder] How do I configure Cinder to recognize free space in the data pool when using a Ceph Erasure Coded Pool?
Hi everyone,

I'm using a Ceph Erasure Coded pool as the backend for OpenStack Cinder. I've created a replicated pool (volumes_meta) for metadata and an erasure-coded pool (volumes_data) for data, following the documentation here: https://docs.ceph.com/en/latest/rados/operations/erasure-code/

When I set rbd_pool to volumes_meta in cinder.conf, I receive an "Insufficient free virtual space" error, even though volumes_data has ample free space. This is because Cinder is calculating free space based on the metadata pool.

When rbd_pool is set to volumes_data, Cinder correctly calculates free space based on the data pool. However, I then encounter an "RBD operation not supported" error during volume creation.

I need Cinder to recognize the free space available in volumes_data while avoiding the "RBD operation not supported" error. I'm unsure how to configure this correctly and would appreciate any advice.

The following is supplemental information.

cinder.conf is:

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes_meta
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = <uuid>

For example, if the pool usage is as follows, Cinder gets 1.6 TiB as the pool free space:

$ ceph df
--- POOLS ---
POOL          ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
volumes_meta   4   32   40 KiB       24  189 KiB      0    1.6 TiB
volumes_data   7   32   61 GiB   15.67k   92 GiB   2.80    2.1 TiB

$ cinder get-pools --detail
+-----------------------------+-----------------------------------------------------+
| Property                    | Value                                               |
+-----------------------------+-----------------------------------------------------+
| allocated_capacity_gb       | 523                                                 |
| backend_state               | up                                                  |
| driver_version              | 1.3.0                                               |
| filter_function             | None                                                |
| free_capacity_gb            | 1592.07  <-★ free capacity of volumes_meta          |
| goodness_function           | None                                                |
| location_info               | ceph:/etc/ceph/ceph.conf:<uuid>:cinder:volumes_meta |
| max_over_subscription_ratio | 20.0                                                |
| multiattach                 | True                                                |
| name                        | <hostname>@ceph#ceph                                |
| qos_support                 | True                                                |
| replication_enabled         | False                                               |
| reserved_percentage         | 0                                                   |
| storage_protocol            | ceph                                                |
| thin_provisioning_support   | True                                                |
| timestamp                   | 2025-05-29T07:44:17.903149                          |
| total_capacity_gb           | 1592.07                                             |
| vendor_name                 | Open Source                                         |
| volume_backend_name         | ceph                                                |
+-----------------------------+-----------------------------------------------------+

OpenStack and Ceph versions are:

$ ceph -v
ceph version 17.2.8 (f817ceb7f187defb1d021d6328fa833eb8e943b3) quincy (stable)
$ dnf list --installed | grep openstack-cinder
openstack-cinder.noarch    1:21.3.2-1.el9s    @centos-openstack-zed
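For context, pools like the ones described above would typically be created along these lines (a minimal sketch following the linked erasure-code documentation; the default erasure profile is assumed and the exact commands are not taken from this post):

```
# erasure-coded pool for the volume data; RBD needs overwrites enabled on EC pools
$ ceph osd pool create volumes_data erasure
$ ceph osd pool set volumes_data allow_ec_overwrites true
$ ceph osd pool application enable volumes_data rbd

# replicated pool that holds the RBD image metadata (headers, omaps)
$ ceph osd pool create volumes_meta
$ rbd pool init volumes_meta
```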
Hi,

I am able to use tiny pools in my (virtual) test cluster without such an error. At which point do you get that error? Can you show the entire log entry? Maybe turn on debug mode to see more details.

Does the cinder keyring have access to both the metadata and data pools? Do you specify an rbd data pool in the ceph.conf of the cinder host?

If you had 1.5 TB of free space, small volumes shouldn't be a problem, I would assume. So can you create any volumes at all, or just not big ones?
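For reference, a cinder keyring that can reach both pools would typically carry caps along these lines (a sketch following the usual Ceph/OpenStack pattern; the client name and cap profiles are assumptions, adjust to your deployment):

```
# show the caps the cinder client currently has
$ ceph auth get client.cinder

# grant RBD access to both the metadata pool and the EC data pool
$ ceph auth caps client.cinder \
    mon 'profile rbd' \
    osd 'profile rbd pool=volumes_meta, profile rbd pool=volumes_data'
```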
Hello,

Cinder does not support EC pools. I've proposed [1] to address that, but I won't have time to push it through as of now.

/Tobias

[1] https://review.opendev.org/c/openstack/cinder/+/914930
On Fri, May 30, 2025 at 09:09:17AM +0000, Yuta Kambe (Fujitsu) wrote:
One way that should make EC pools work with cinder is to specify `rbd default data pool` in the client section for your cinder user in the specified ceph.conf for the backend, i.e.:

```
[client.cinder]
rbd default data pool = volumes_data
```

Then point your rbd_pool to the metadata pool (volumes_meta) in your cinder backend config. This way, whenever the cinder user is used it will default to volumes_data as its data pool.
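A quick way to check that the option is being picked up is to create a throwaway image as the cinder user and inspect it (a sketch; the image name is just an example):

```
# with the config above, the image's data objects should land in volumes_data
$ rbd create --id cinder --size 1G volumes_meta/ec-test

# "rbd info" should report a data_pool line pointing at volumes_data
$ rbd info --id cinder volumes_meta/ec-test

$ rbd rm --id cinder volumes_meta/ec-test
```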
--
Jan Horstmann
Senior Cloud Engineer
Mail: horstmann@osism.tech
Web: https://osism.tech

OSISM GmbH
Talweg 8 / 75417 Mühlacker / Germany
Managing Director: Christian Berendt
Registered office: Mühlacker
District Court (Amtsgericht) Mannheim, HRB 750852
Thank you for your reply.

The configuration you suggested is already set up, and I have confirmed that volume creation works with this setting.

However, when rbd_pool is set to the metadata pool (volumes_meta), my understanding is that Cinder calculates the virtual space based on the free space of volumes_meta. Is there currently no way for Cinder to calculate the virtual space based on the free space of the data pool (volumes_data)?

Since Cinder's EC pool support is still in progress and it doesn't currently support EC pools, should I use a replicated pool instead?

Best Regards,
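For reference, the free_capacity_gb reported earlier in the thread (1592.07) tracks the MAX AVAIL of volumes_meta, the pool named in rbd_pool. The same figure can be inspected directly with ceph df (a sketch assuming jq is installed; the JSON field names may differ slightly between Ceph releases):

```
# MAX AVAIL, in bytes, of the pool Cinder is pointed at
$ ceph df --format json | jq '.pools[] | select(.name == "volumes_meta") | .stats.max_avail'

# compare with the EC pool that actually holds the volume data
$ ceph df --format json | jq '.pools[] | select(.name == "volumes_data") | .stats.max_avail'
```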
Participants (4)

- Eugen Block
- Jan Horstmann
- Tobias Urdin - Binero
- Yuta Kambe (Fujitsu)