Hello,

I'm experiencing an issue with the reported volume usage of a Ceph cluster that backs OpenStack volumes. The cluster runs Ceph Octopus (15.2.17).

When I run the ceph df command, I get this output:

--- RAW STORAGE ---
CLASS  SIZE     AVAIL   USED    RAW USED  %RAW USED
hdd    120 TiB  33 TiB  87 TiB  87 TiB    72.72
TOTAL  120 TiB  33 TiB  87 TiB  87 TiB    72.72

--- POOLS ---
POOL                   ID  PGS   STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics  1   32    69 MiB   101      137 MiB  0      11 TiB
images                 2   256   1.3 TiB  166.90k  2.5 TiB  10.72  11 TiB
vms                    3   64    574 KiB  21       2.5 MiB  0      11 TiB
volumes                4   2048  41 TiB   3.94M    82 TiB   79.42  11 TiB
backups                5   1024  407 GiB  111.85k  818 GiB  3.63   11 TiB

But when I run rbd du -p volumes, the total is:

NAME     PROVISIONED  USED
<TOTAL>  14 TiB       8.3 TiB

The "volumes" pool is set to replica 2, and mirroring is not enabled. I have checked for locked or leftover snapshots but found none. I also ran ceph osd pool deep-scrub volumes, but that didn't resolve the issue.

Has anyone encountered this problem before? Could someone provide assistance?
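To make the mismatch concrete, here is the arithmetic I did on the numbers above (just a sketch; the 41 TiB / 82 TiB / 8.3 TiB figures come from the ceph df and rbd du outputs, and I'm assuming that for a replicated pool USED should equal STORED times the replica count):

```python
# Sanity-check of the figures reported by `ceph df` and `rbd du` (all in TiB).
stored = 41.0        # STORED for the volumes pool, from ceph df
used = 82.0          # USED for the volumes pool, from ceph df
replicas = 2         # the pool is replica 2
rbd_du_used = 8.3    # <TOTAL> USED reported by rbd du -p volumes

# ceph df is internally consistent: raw USED == STORED * replica count.
assert abs(used - stored * replicas) < 1e-9  # 41 * 2 == 82

# The puzzle: rbd du only accounts for ~8.3 TiB of the 41 TiB the pool says is stored.
unaccounted = stored - rbd_du_used
print(f"Unaccounted data in the volumes pool: {unaccounted:.1f} TiB")
```

So the replica-2 math checks out within ceph df itself; the question is where the remaining ~33 TiB that rbd du cannot see is going.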