I have 4 fast NVMe targets that I want to combine into a RAID 10 set and use with cinder-lvm. Unfortunately, LVM on top of mdadm is breaking all of my writes down to 4K -- without even increasing the queue depth to compensate -- and SSDs hate that workload. Bypassing mdadm and setting up LVM RAID10 by hand got me roughly a 10x increase in performance, so I definitely want to use it if possible (rough commands in the P.S. below).

However, it looks like the lvm_mirrors code hasn't been kept up to date. I ran into two main problems trying to use it:

1) The algorithm that calculates free space is broken and always returns 0, so the scheduler will never even try to use this pool.

2) Thin provisioning is getting in the way: with lvm_mirrors enabled, the code is apparently trying to create a new mirrored LV on top of the thin-provisioned LV, and it is using an lvcreate option that appears to be deprecated (--mirrorlog mirrored).

I expect I can fix #1, but what is the best way to handle #2? I was thinking that the easiest fix would be to apply the mirror options when the thin pool LV is created, though I haven't tried it by hand yet to see if it works (a sketch of what I mean is in the P.S.). The advantage here is that this is a one-time setting, and the code path that creates individual volumes wouldn't need to check lvm_mirrors at all.

Thanks,
Mark
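
P.S. For reference, this is roughly how I set up the LVM RAID10 by hand (device paths, VG name, and sizes below are placeholders, not my exact config):

    # one PV per NVMe device, all in one VG
    pvcreate /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
    vgcreate cinder-volumes /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

    # raid10 across 4 PVs = 2 stripes, each mirrored once
    lvcreate --type raid10 --stripes 2 --mirrors 1 -L 100G -n bench cinder-volumes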
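
On #1, a quick way to compare against what the driver reports is to ask the VG directly (again, VG name is a placeholder):

    vgs --noheadings --units g -o vg_name,vg_size,vg_free cinder-volumes

In my case the VG reports plenty of free space; it's only the driver-side lvm_mirrors calculation that comes back as 0.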
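
And this is the shape of what I had in mind for #2 -- making the thin pool itself raid10, so the per-volume creates don't need any mirror handling at all. I haven't actually run this yet; as far as I know lvcreate can't create a raid10 thin pool in one step, so it would be the two-step lvconvert approach from lvmthin(7) (names and sizes are placeholders):

    # create the pool's data and metadata LVs as raid10
    lvcreate --type raid10 --stripes 2 --mirrors 1 -L 1T -n pool0 cinder-volumes
    lvcreate --type raid10 --stripes 2 --mirrors 1 -L 16G -n pool0_meta cinder-volumes

    # combine them into a thin pool; both halves keep their raid10 layout
    lvconvert --type thin-pool --poolmetadata cinder-volumes/pool0_meta cinder-volumes/pool0

Thin volumes created the usual way afterwards (lvcreate -V <size> -T cinder-volumes/pool0 -n volume-<uuid>) would inherit the pool's redundancy, so the per-volume code path could ignore lvm_mirrors entirely.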