Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri)

Vinh Nguyen Duc vinhducnguyen1708 at gmail.com
Thu Jul 7 04:27:21 UTC 2022


I have a problem with I/O performance on OpenStack block devices backed by the HDD class.

Environment:
OpenStack version: Ussuri
- OS: CentOS8
- Kernel: 4.18.0-240.15.1.el8_3.x86_64
- KVM: qemu-kvm-5.1.0-20.el8
Ceph version: Octopus 15.2.8-0.el8.x86_64
- OS: CentOS8
- Kernel: 4.18.0-240.15.1.el8_3.x86_64
In the Ceph cluster we run BlueStore OSDs with two device classes (a quick
check of the pool/class mapping is sketched after the environment details):
- HDD (only for Cinder volumes)
- SSD (images, Cinder volumes)
Hardware:
- Ceph client network: 2x10 Gbps (bond), MTU 9000
- Ceph replication network: 2x10 Gbps (bond), MTU 9000
VM:
- swap disabled
- no LVM
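
For reference, this is roughly how I confirm which device class each pool
uses (the pool names below are placeholders for my actual volume pools):

    ceph osd crush class ls
    ceph osd crush rule ls
    ceph osd pool ls detail
    ceph osd pool get volumes-hdd crush_rule    # example pool name
    ceph osd pool get volumes-ssd crush_rule    # example pool name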

Issue:
When I create a VM on OpenStack using a Cinder volume from the HDD class,
write performance is really poor: 60-85 MB/s. Tests with ioping also show
high latency.
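
For reference, the numbers above come from tests roughly like this inside
the guest, against an empty, freshly attached test volume (device name and
sizes are just examples; the fio run writes to the raw device, so it is
destructive):

    fio --name=seqwrite --filename=/dev/vdb --rw=write --bs=4M \
        --iodepth=32 --ioengine=libaio --direct=1 --size=10G \
        --runtime=60 --group_reporting
    ioping -c 20 /dev/vdb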
Diagnostics:
1.  I checked the performance between a compute host (OpenStack) and Ceph by
creating an RBD image in the HDD class and mounting it directly on the
compute host. There the performance is 300-400 MB/s.
=> So I think the problem is in the hypervisor.
However, when I check performance inside a VM using a Cinder volume from the
SSD class, the result equals the performance of an RBD image (SSD class)
mapped on a compute host.
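The direct-to-Ceph test was done roughly like this on the compute host (pool
and image names are just examples):

    rbd create volumes-hdd/bench-test --size 20G
    rbd map volumes-hdd/bench-test
    fio --name=seqwrite --filename=/dev/rbd0 --rw=write --bs=4M \
        --iodepth=32 --ioengine=libaio --direct=1 --size=10G \
        --group_reporting
    rbd unmap /dev/rbd0
    rbd rm volumes-hdd/bench-test
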
2.  I have already tried disk_cachemodes="network=writeback" (with the RBD
client cache enabled) as well as disk_cachemodes="none", but there is no
difference.
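Concretely, this is the kind of configuration I mean, plus how I check which
cache mode actually reaches the guest (the instance name is a placeholder):

    # /etc/nova/nova.conf on the compute host
    [libvirt]
    images_type = rbd
    disk_cachemodes = "network=writeback"

    # /etc/ceph/ceph.conf on the compute host (client side)
    [client]
    rbd cache = true
    rbd cache writethrough until flush = true

    # confirm the cache mode in the generated libvirt domain XML
    virsh dumpxml <instance-uuid> | grep "cache="
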
3.  iperf3 from the compute host to a random Ceph host still shows about
20 Gbps of throughput.
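That test was roughly this (the Ceph host name is a placeholder):

    iperf3 -s                            # on the Ceph host
    iperf3 -c <ceph-host> -P 4 -t 30     # from the compute host
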
4.  The compute hosts and Ceph hosts are connected to the same switch (layer 2).
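Since we run MTU 9000 on the bonds, I also verify that jumbo frames pass end
to end without fragmentation (8972 = 9000 minus 28 bytes of IP/ICMP headers;
the host name is a placeholder):

    ping -M do -s 8972 -c 5 <ceph-host>
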
Where else can I look for issues?
Please help me with this case.
Thank you.
