I am sending a file with the output of ceph osd df tree and ceph pg ls-by-pool volumes as an attachment.
ceph -s:

  cluster:
    id:     ccdcc86e-c0e7-49ba-b2cc-f8c162a42f91
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum hc-node02,hc-node04,hc-node03 (age 5w)
    mgr: hc-node02(active, since 4M)
    osd: 54 osds: 54 up (since 14h), 54 in (since 3M)

  task status:

  data:
    pools:   5 pools, 3424 pgs
    objects: 4.25M objects, 43 TiB
    usage:   88 TiB used, 32 TiB / 120 TiB avail
    pgs:     3424 active+clean

  io:
    client: 16 MiB/s rd, 5.7 MiB/s wr, 856 op/s rd, 483 op/s wr
Thank you very much; we will adjust to replica 3.
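For what it's worth, a minimal sketch of how that change is usually made, assuming the pool being adjusted is the volumes pool mentioned above (substitute the actual pool name if it differs):

  # set the replication factor of the pool to 3; Ceph will backfill the extra copies
  ceph osd pool set volumes size 3

  # optionally require at least 2 replicas available before serving I/O
  ceph osd pool set volumes min_size 2

  # verify the setting and watch the backfill progress
  ceph osd pool get volumes size
  ceph -s

The size change takes effect immediately and the cluster rebalances in the background; raising min_size alongside size is a common companion step so that writes are not accepted with only a single copy online.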