Hi,

I was running another round of tests. It is not a complete solution for the OpenStack platform itself, but it turned out to be a handy way to assess real NVMe performance inside a virtual machine (VM): I attached a full NVMe disk to a VM as "vdb" and benchmarked it there. Interestingly, I achieved approximately 80,000 IOPS, which is a significant improvement. This approach is not directly applicable to my setup, since the root disk configured in the flavor has to be "vda", but I wanted to share it as a reference point showing that higher IOPS are indeed achievable; with some collaborative effort, perhaps similar results can be reached for "vda" disks in fully OpenStack-managed VMs. The disk was attached with the following "virsh" command: "virsh attach-disk instance-000034ba /dev/nvme1n1p1 vdb". Detailed results, as well as the "dumpxml" output for this VM, are available in [1] and [2].
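For reference, the IOPS numbers above come from "fio" random-read runs against the attached disk. The exact parameters are in [2]; a typical invocation of this kind (the values here are illustrative, not necessarily the exact ones I used) looks like:

    fio --name=randread --filename=/dev/vdb --direct=1 --rw=randread \
        --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio \
        --runtime=60 --time_based --group_reporting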

While 80,000 IOPS would be satisfactory for me, I also ran separate tests with a VM managed entirely by libvirt, without OpenStack involved. The VM was set up with the following command: "virt-install --virt-type=kvm --name=local-ubuntu --vcpus=2 --memory=4096 --disk path=/var/lib/nova/instances/test/disk,format=qcow2 --import --network default --graphics none"

In this case, the OS image was identical to the one used in my full OpenStack test, and the "vdb" drive was attached exactly as for my OpenStack VM. The outcome was quite surprising: around 130,000 IOPS, even though the two configurations are nearly identical. This discrepancy is perplexing and suggests that there might be an issue within the Nova component itself. That may be a bold assertion, but it is my working hypothesis until I learn otherwise. The configuration details of this VM, along with the "fio" results, can be found in [3] and [4].
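A quick way to narrow this down is to diff the live <disk> definitions of both guests; the usual suspects are the cache=, io=, iothread= and queues= attributes on the <driver> element, and any <iotune> limits Nova may have injected (the grep below is just a convenience, the full dumps are in [1] and [3]):

    virsh dumpxml instance-000034ba | grep -B1 -A6 '<disk'
    virsh dumpxml local-ubuntu | grep -B1 -A6 '<disk'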

If anyone has insights into how to reach around 80,000 IOPS in a fully OpenStack-managed environment, I am eager to hear them. My objective is to close this gap, and I would greatly appreciate any guidance.
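One thing I still plan to rule out (an assumption at this point, not something I have verified in my setup) is front-end I/O throttling applied by Nova through flavor extra specs; properties such as quota:disk_total_iops_sec or quota:disk_read_iops_sec would cap the guest regardless of the underlying hardware. Any such limits should show up in:

    openstack flavor show <flavor-name> -c properties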

/Jan Wasilewski

References:
[1] dumpxml of OpenStack managed instance with "vdb" attached: https://paste.openstack.org/show/bQvGUIM3FSHIyA9JoThY/
[2] fio results of OpenStack managed instance with "vdb" attached: https://paste.openstack.org/show/bViUpJTf7UYpsRyGCAt9/
[3] dumpxml of Libvirt managed instance with "vdb" attached: https://paste.openstack.org/show/bGv8dT1l2QaTiAybYrJi/
[4] fio results of Libvirt managed instance with "vdb" attached: https://paste.openstack.org/show/bOzYXkbco0oDfgaD0co8/
[5] xml configuration of vdb drive: https://paste.openstack.org/show/bAJ9MyEWEGOteeJnH5D8/

Fri, 11 Aug 2023 at 11:27 Jan Wasilewski <finarffin@gmail.com> wrote:
Hi Sven,

maybe you missed it, but the kernel version is provided in a link here [1]. In short: 5.4.0-155-generic. If anything more is needed, just let me know.
/Jan Wasilewski

[1] https://paste.openstack.org/show/bcGw3Glm6U0r1kUsg8nU/

Fri, 11 Aug 2023 at 10:48 Sven Kieske <kieske@osism.tech> wrote:
as a last resort, what kernel is that Ubuntu 20.04 running?

I'd advise using the HWE kernel at least, maybe even testing the latest
kernel.org LTS release.
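On Ubuntu 20.04 that would be something like (assuming the stock HWE metapackage):

    sudo apt install --install-recommends linux-generic-hwe-20.04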

HTH

--
Sven Kieske
Senior Cloud Engineer

Mail: kieske@osism.tech
Web: https://osism.tech

OSISM GmbH
Teckstraße 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139