I just noticed something else in your 'pg ls' output: your average object size is bigger than expected. You have about 22 GB per PG and around 1960 objects per PG, which works out to roughly 11 MB per object. In an RBD pool used by Cinder you would usually see 4 MB objects, which is the default size of the objects an RBD image is split into (rbd_store_chunk_size).

How did you set up that Ceph cluster? Do you have some non-default configs? I would recommend reviewing your configuration, otherwise you could end up in the same situation after migrating your VMs to a new cluster. Is it a cephadm-managed cluster or package-based?

For starters, you could share your ceph.conf from the control/compute nodes (one is sufficient if they're identical). What is rbd_store_chunk_size set to in your cinder.conf?
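For example, something like the following should show the actual object size of one of the affected volumes and the settings involved (pool and volume names below are placeholders, adjust them to your environment):

  # object size of one RBD image; the default is order 22, i.e. 4 MiB objects
  rbd info <pool>/<volume-id> | grep -E 'order|objects'

  # per-pool object count and stored bytes, to cross-check the average object size
  ceph df detail

  # in cinder.conf, RBD backend section (value in MiB, 4 is the upstream default)
  rbd_store_chunk_size = 4

If 'rbd info' already reports 4 MiB objects for your volumes, then the large average probably comes from something else in that pool, which would be good to know before the migration.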
Quoting lfsilva@binario.cloud:

Thank you very much for your answer.

I ran the rbd sparsify command on some volumes in the pool and not much space was actually freed. Since we could not identify the problem, we will take an alternative route: we will create a new environment and migrate the OpenStack VMs and volumes to a new Ceph cluster on the latest version, to check whether the problem persists.