Final suggestion: on my system, LVM seems to work best with the "--type raid10" option. Perhaps a cinder.conf option for lvm_type=raid10 would be useful? raid1 too?

I am seeing this warning from lvcreate quite a lot, so something needs to change fairly soon:

  WARNING: Log type "mirrored" is DEPRECATED and will be removed in the future. Use RAID1 LV or disk log instead.

Thanks,
Mark

On Tue, Jul 30, 2019 at 10:08 AM Mark Lehrer <lehrer@gmail.com> wrote:
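To make the suggestion above concrete, a hypothetical cinder.conf stanza might look something like the following. Note that lvm_type=raid10 is not an existing option value today, and the commented-out stripe/mirror knobs are purely made up:

  [lvm-nvme]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes
  # hypothetical value -- does not exist in cinder today
  lvm_type = raid10
  # possibly with stripe/mirror counts; these option names are invented
  # lvm_raid_stripes = 2
  # lvm_raid_mirrors = 1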
OK things aren't quite as broken as I thought at first.
It looks like lvm_type needs to be set to "default" (instead of "thin"); otherwise the lvm_mirrors option causes havoc. Maybe a warning in the logs about incompatible options would save the next person some time.
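For reference, the combination that behaves for me looks roughly like this (the backend section name and mirror count are just examples):

  [lvm-mirrored]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes
  lvm_type = default
  lvm_mirrors = 1

Setting lvm_type = thin together with lvm_mirrors > 0 is the combination that caused the trouble.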
Thanks!
Mark
On Tue, Jul 30, 2019 at 9:01 AM Mark Lehrer <lehrer@gmail.com> wrote:
I have 4 fast NVMe targets that I want to combine using RAID 10 and use with cinder-lvm.
Unfortunately, LVM+mdadm is breaking all of my writes down to 4K -- without even increasing the queue depth to match. SSDs hate this workload.
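If anyone wants to see this for themselves, run a large sequential write against the array and watch the average request size that actually reaches the NVMe devices. The device name and fio parameters below are only an example (and note this writes directly to the device, destroying its contents):

  # hypothetical md device name -- destructive test
  fio --name=seqwrite --filename=/dev/md0 --rw=write --bs=1M --iodepth=32 \
      --ioengine=libaio --direct=1 --runtime=30 --time_based

  # in another terminal: per-device stats, including average request size
  iostat -x 1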
I tried bypassing mdadm by setting up LVM RAID10 by hand and got roughly a 10x increase in performance, so I definitely want to use this if possible.
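For reference, "by hand" here means roughly the following; the device names, sizes, and stripe count are examples rather than my exact commands:

  # assumes four NVMe devices; names are made up
  pvcreate /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
  vgcreate cinder-volumes /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

  # RAID10 LV: 2 stripes, each mirrored once
  lvcreate --type raid10 --stripes 2 --mirrors 1 -L 100G -n testvol cinder-volumes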
However, it looks like maybe the lvm_mirrors code hasn't been kept up to date. I ran into two main problems trying to use it:
1) The algorithm to calculate free space is broken and always returns 0, which means the scheduler will never even try to use this pool (see the vgs check after this list).
2) Thin provisioning is getting in the way: with lvm_mirrors enabled, the code is apparently trying to create a new LV on top of the thin-provisioned LV, and it is using an lvcreate option that appears to be deprecated (--mirrorlog mirrored); an example invocation that triggers the warning is shown after this list.
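On #1, the raw numbers the driver presumably starts from can be checked directly; assuming the VG is named cinder-volumes, something like:

  # report total and free space in the VG
  vgs --noheadings --units g -o vg_size,vg_free cinder-volumes

On #2, a plain mirrored lvcreate with the old-style log option is enough to trigger the deprecation warning; roughly this style of invocation (name and size made up):

  # old-style mirror with a mirrored log -- lvcreate warns that this log type is deprecated
  lvcreate --type mirror -m 1 --mirrorlog mirrored -L 10G -n testvol cinder-volumes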
I expect I can fix #1, but what is the best way to handle #2? I was thinking that the easiest fix would be to use the mirror options when the thin LV is created, though I haven't tried it by hand yet to see if it works. The advantage here is that this is a one-time setting and the code path to create individual volumes wouldn't need to check lvm_mirrors at all.
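The by-hand equivalent of that idea would be roughly the following: build the pool's data LV with the desired RAID layout first, then turn it into the thin pool. I haven't verified this end to end, and the names, sizes, and stripe count are made up:

  # create the data LV with the RAID10 layout (2 stripes, each mirrored once)
  lvcreate --type raid10 --stripes 2 --mirrors 1 -L 500G -n cinder-pool cinder-volumes

  # convert it into a thin pool; thin volumes carved from it land on the RAID10 data LV
  lvconvert --type thin-pool cinder-volumes/cinder-pool

  # individual thin volumes then need no mirror options at all
  lvcreate -V 10G -T cinder-volumes/cinder-pool -n testvol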
Thanks,
Mark