That would have been my next suggestion: go through the pool, match all existing volumes to their rados objects, and find the orphans. You can identify the cinder volume that matches a given rbd_data prefix like this:

for i in $(rbd -p volumes ls); do
  if [ "$(rbd info --pretty-format --format json volumes/$i | jq -r '.block_name_prefix')" = "rbd_data.{YOUR_PREFIX}" ]; then
    echo "Volume: $i"
  fi
done

and then check in cinder whether that volume actually exists. By {YOUR_PREFIX} I mean the string between the two dots in your example: c7fb271f7be3b0
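If you would rather scan the whole pool in one pass instead of checking one prefix at a time, something along these lines should work as a rough, untested sketch. It assumes the data objects follow the plain rbd_data.<prefix>.<offset> naming and that the images do not use a separate data pool; the file names known_prefixes.txt and seen_prefixes.txt are just examples.

# Prefixes owned by existing images
rbd -p volumes ls | while read img; do
  rbd info --format json "volumes/$img" | jq -r '.block_name_prefix'
done | sort -u > known_prefixes.txt

# Prefixes actually present in the pool's data objects
rados -p volumes ls \
  | grep '^rbd_data\.' \
  | sed -E 's/^(rbd_data\.[0-9a-f]+)\..*/\1/' \
  | sort -u > seen_prefixes.txt

# Prefixes present in the pool but not owned by any image -> orphan candidates
comm -13 known_prefixes.txt seen_prefixes.txt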
Quote from lfsilva@binario.cloud:

I added up the objects belonging to the volumes in the "volumes" pool (3253434) and then ran "rados -p volumes ls | wc -l" (3946219), so apparently there are around 692785 orphaned objects. I am going to run a pg repair on all the PGs in this pool and check whether I get any positive results. While that runs, I also decided to list the objects to see the average size of each one and how much space this would free up. I ran:

rados -p volumes ls | while read obj; do rados -p volumes stat $obj; done > size-obj.txt

and started getting the following messages:

error stat-ing volumes/rbd_data.c7fb271f7be3b0.0000000000001c00: (2) No such file or directory
error stat-ing volumes/rbd_data.c7fb271f7be3b0.0000000000005362: (2) No such file or directory
error stat-ing volumes/rbd_data.c7fb271f7be3b0.0000000000002914: (2) No such file or directory
error stat-ing volumes/rbd_data.c7fb271f7be3b0.0000000000008869: (2) No such file or directory
error stat-ing volumes/rbd_data.c7fb271f7be3b0.0000000000008873: (2) No such file or directory
error stat-ing volumes/rbd_data.c7fb271f7be3b0.0000000000004b11: (2) No such file or directory

Are these the objects that are causing the growth? If I delete them manually, could that cause any inconsistency in the volumes?
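As for the size survey in the quoted message: the stat loop can be made to tolerate objects that disappear between the listing and the stat call. A rough sketch, assuming "rados stat" keeps printing the object size as the last field of its output:

rados -p volumes ls | while read obj; do
  rados -p volumes stat "$obj" 2>/dev/null   # skip objects that no longer exist
done | awk '{ total += $NF; n++ }
            END { printf "objects: %d  total bytes: %d  avg bytes: %d\n", n, total, (n ? total / n : 0) }'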