Hello Eugen,

This cluster is package-based and follows ceph.conf. Below are the ceph.conf configuration, the output of ceph config dump, and the pool settings:

CEPH.CONF:

[global]
fsid = ccdcc86e-c0e7-49ba-b2cc-f8c162a42f91
mon_initial_members = hc-node02, hc-node01, hc-node04
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
rbd default features = 125
rbd mirroring replay delay = 300
mon pg warn max object skew = 20

# Choose reasonable numbers for number of replicas and placement groups.
osd pool default size = 2        # Write an object 2 times
osd pool default min size = 1    # Allow writing 1 copy in a degraded state
osd pool default pg num = 64
osd pool default pgp num = 64

# Choose a reasonable crush leaf type
# 0 for a 1-node cluster.
# 1 for a multi node cluster in a single rack
# 2 for a multi node, multi chassis cluster with multiple hosts in a chassis
# 3 for a multi node cluster with hosts across racks, etc.
osd crush chooseleaf type = 1

debug ms = 0
debug mds = 0
debug osd = 0
debug optracker = 0
debug auth = 0
debug asok = 0
debug bluestore = 0
debug bluefs = 0
debug bdev = 0
debug kstore = 0
debug rocksdb = 0
debug eventtrace = 0
debug default = 0
debug rados = 0
debug client = 0
debug perfcounter = 0
debug finisher = 0

[osd]
debug osd = 0/0
debug bluestore = 0/0
debug ms = 0/0
osd scrub begin_hour = 19
osd scrub end_hour = 6
osd scrub sleep = 0.1
bluestore cache size ssd = 8589934592

CEPH CONFIG DUMP:

WHO  MASK  LEVEL     OPTION                                  VALUE    RO
mon        advanced  auth_allow_insecure_global_id_reclaim  false
mgr        advanced  mgr/balancer/active                     true
mgr        advanced  mgr/balancer/mode                       upmap
mgr        advanced  mgr/dashboard/server_addr               0.0.0.0  *
mgr        advanced  mgr/dashboard/server_port               7000     *
mgr        advanced  mgr/dashboard/ssl                       true     *
mgr        advanced  mgr/restful/server_addr                 0.0.0.0  *
mgr        advanced  mgr/restful/server_port                 8003     *
mgr        advanced  mgr/telemetry/enabled                   true     *
mgr        advanced  mgr/telemetry/last_opt_revision         3        *

CEPH OSD POOL GET VOLUMES ALL:

size: 2
min_size: 1
pg_num: 2048
pgp_num: 2048
crush_rule: replicated_rule
hashpspool: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
use_gmt_hitset: 1
fast_read: 0
pg_autoscale_mode: off

In fact, rbd_store_chunk_size is not configured in Ceph, but it is set in cinder.conf, so I am adding the Cinder output as well:

CINDER.CONF:

[rbd-backend-hdd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = backend-hdd
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1

I also noticed that there are rbd mirror settings in the configuration files, but mirroring is currently disabled.
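To double-check the mirroring state on the pool side (rather than relying on what is left in ceph.conf), I can run something like the following on one of the nodes, assuming the rbd CLI and a suitable keyring are available there:

  rbd mirror pool info volumes

This should report the mirroring mode for the pool (disabled, image or pool), which I expect to be "disabled" here.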
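Regarding rbd_store_chunk_size = 4: as far as I understand, this is the object size in MB that Cinder requests when it creates new RBD images, so new volumes should be created with 4 MiB objects (order 22). The value actually used by an existing volume can be checked with something like the following (the image name is just a placeholder for a real Cinder volume):

  rbd info volumes/volume-<uuid>

The "order" line in the output shows the actual object size of that image (order 22 corresponds to 4 MiB objects).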
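Since this is a package-based cluster where most options come from the local ceph.conf rather than the mon config database, the values actually in effect on a running daemon can also be verified with something like the following (assuming osd.0 is up; the second command has to be run on the host where osd.0 lives):

  ceph config show osd.0 | grep -E 'bluestore_cache_size_ssd|osd_scrub'
  ceph daemon osd.0 config get bluestore_cache_size_ssd

I can post that output too if it helps.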