Poor I/O performance on OpenStack block device (OpenStack CentOS 8: Ussuri)

Gorka Eguileor geguileo at redhat.com
Thu Jul 7 10:06:28 UTC 2022


On 07/07, Vinh Nguyen Duc wrote:
> I have a problem with I/O performance on OpenStack block devices backed by HDD.
>
> *Environment:*
> *OpenStack version: Ussuri*
> - OS: CentOS 8
> - Kernel: 4.18.0-240.15.1.el8_3.x86_64
> - KVM: qemu-kvm-5.1.0-20.el8
> *Ceph version: Octopus 15.2.8-0.el8.x86_64*
> - OS: CentOS 8
> - Kernel: 4.18.0-240.15.1.el8_3.x86_64
> In the Ceph cluster we have two device classes (all OSDs are BlueStore):
> - HDD (only for Cinder volumes)
> - SSD (images, Cinder volumes)
> *Hardware:*
> - Ceph client network: 2x10Gbps (bond), MTU 9000
> - Ceph replication network: 2x10Gbps (bond), MTU 9000
> *VM:*
> - swap disabled
> - no LVM
>
> *Issue*
> When creating a VM on OpenStack using a Cinder volume from the HDD
> class, write performance is really poor: 60-85 MB/s. Tests with
> ioping also show high latency.
> *Diagnostic*
> 1. I checked the performance between the compute host (OpenStack) and
> Ceph by creating an RBD image (HDD class) and mounting it on the
> compute host. There the performance is 300-400 MB/s.

Hi,

I probably won't be able to help you on the hypervisor side, but I have
a couple of questions that may help narrow down the issue (example
commands to check them are sketched after the list):

- Are Cinder volumes using encryption?

- How did you connect the volume to the Compute Host, using krbd or
  rbd-nbd?

- Do both RBD images (Cinder and yours) have the same Ceph flags?

- Did you try connecting the same RBD image created by Cinder to the
  Compute Host, instead of creating a new one?
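
For reference, something along these lines should help answer the
first three (the pool and image names below are just examples, adjust
them to your deployment):

  # Is there an encryption spec tied to the volume type of the HDD
  # volumes?  Encrypted volumes go through dm-crypt on the compute
  # host, which can cost a lot of throughput.
  cinder encryption-type-list

  # krbd and rbd-nbd attachments look different on the host:
  # krbd shows up as /dev/rbdX, rbd-nbd as /dev/nbdX.
  rbd showmapped
  rbd-nbd list-mapped

  # Compare the features and flags of the Cinder volume and your test
  # image; differences such as object-map or exclusive-lock can change
  # the performance profile.
  rbd info volumes-hdd/volume-<uuid>
  rbd info volumes-hdd/test-image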

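Also, regarding the disk_cachemodes setting you mention below: it is
worth confirming what QEMU actually ended up with, since the setting
only takes effect for instances whose domain XML was generated after
the change (e.g. after a stop/start or hard reboot).  On the compute
host, something like this (the instance UUID is a placeholder) shows
the effective cache mode:

  virsh dumpxml <instance-uuid> | grep cache
  # expect something like: <driver name='qemu' type='raw' cache='writeback'/>

And when comparing the VM with the host-mapped RBD, running the exact
same fio job on both sides keeps the comparison fair, e.g.:

  # WARNING: this writes to the device and destroys its data.
  # /dev/vdb is a placeholder for the attached volume inside the VM.
  fio --name=seqwrite --rw=write --bs=4M --size=2G --direct=1 \
      --ioengine=libaio --filename=/dev/vdb
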
Cheers,
Gorka.

> => So I think the problem is in the hypervisor.
> But when I check the performance on a VM using a Cinder volume from
> the SSD class, the result equals the performance of the RBD (SSD)
> test mounted on the compute host.
> 2. I have already tried configuring disk_cachemodes="network=writeback"
> (with the RBD client cache enabled) as well as disk_cachemodes="none",
> but there is no difference.
> 3. Running iperf3 from the compute host to a random Ceph host still
> shows 20Gb of traffic.
> 4. The compute host and the Ceph hosts are connected to the same
> switch (layer 2).
> Where else can I look for issues?
> Please help me in this case.
> Thank you.
