I am happy to share numbers from my iSCSI setup. However, these numbers probably won't mean much for your workloads. I tuned my OpenStack to perform as well as possible for a specific workload (OpenStack CI), so some of the things I have put effort into are specific to CI work and not really relevant to general-purpose use. Also, your cinder performance hinges greatly on your network's capabilities. I use a dedicated NIC for iSCSI traffic, and the MTU is set to 9000 on every device in the iSCSI path. *Only* that NIC is set to MTU 9000; raising the MTU on the rest of the OpenStack network can create more problems than it solves. My network spine is 40G, each compute node has four 10G NICs (only one of which is used for iSCSI traffic), and the block storage node has two 40G NICs.
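As a rough sketch of what I mean (the interface name and storage node IP below are just placeholders, not my actual config), you set the MTU on only the iSCSI NIC and then verify the whole path actually passes jumbo frames:

# set jumbo frames on the dedicated iSCSI interface only (placeholder name)
ip link set dev ens2f0 mtu 9000
# verify end to end with don't-fragment pings
# (8972 = 9000 minus 20-byte IP header and 8-byte ICMP header)
ping -M do -s 8972 <storage-node-iscsi-ip>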
With that said, I use the fio tool to benchmark performance on Linux systems.
Here is the command I use to run the benchmark:
fio --numjobs=16 --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=32 --size=10G --readwrite=randrw --rwmixread=50
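If you want to test a cinder volume from inside a VM, you can also point fio at the attached block device instead of a file. Something like this (the device path /dev/vdb is just an example, and writing to a raw device is destructive, so only run it against a scratch volume):

fio --numjobs=16 --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/dev/vdb --bs=64k --iodepth=32 --size=10G --readwrite=randrw --rwmixread=50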
From the block storage node locally:
Run status group 0 (all jobs):
READ: bw=2960MiB/s (3103MB/s), 185MiB/s-189MiB/s (194MB/s-198MB/s), io=79.9GiB (85.8GB), run=26948-27662msec
WRITE: bw=2963MiB/s (3107MB/s), 185MiB/s-191MiB/s (194MB/s-200MB/s), io=80.1GiB (85.0GB), run=26948-27662msec
From inside a VM:
Run status group 0 (all jobs):
READ: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), io=30.0GiB (32.2GB), run=69242-69605msec
WRITE: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), io=29.0GiB (32.2GB), run=69242-69605msec
The VM side of the test is able to push pretty close to the limits of the NIC. Keep in mind that my cloud currently has a full workload on it; as I have learned while optimizing this cloud for CI, it does matter whether there is a workload running or not.
Are you using RAID for your SSDs? If so, what type?
Do you mind sharing what workload will go on your OpenStack deployment?
Is it DB, web, general purpose, etc.?
~/Donny D