[cinder][nova] Local storage in compute node

Eric K. Miller emiller at genesishosting.com
Thu Aug 6 07:57:54 UTC 2020


> No - a thin-provisioned LV in LVM would be best.

From testing, it looks like thick provisioning is the only choice at this stage.  That's fine.

> I will let everyone know how testing goes.

So far, everything is working perfectly with Nova using LVM.  It was a quick configuration and it did exactly what I expected, which is always nice. :)
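
For reference, the Nova side of this was only a couple of options in nova.conf on the compute node - roughly the following (the volume group name matches the one that shows up in the Libvirt XML further down):

    [libvirt]
    images_type = lvm
    images_volume_group = nova_vg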

As far as performance goes, it is decent, but not stellar.  Of course, I'm comparing crazy fast native NVMe storage in RAID 0 across 4 x Micron 9300 SSDs (using md as the underlying physical volume in LVM) to virtualized storage.

Some numbers from fio, just to get an idea for how good/bad the IOPS will be:

Configuration (a representative fio invocation is sketched right after this list):
32 core EPYC 7502P with 512GiB of RAM - CentOS 7 latest updates - Kolla Ansible (Stein) deployment
32 vCPU VM with 64GiB of RAM
32 x 10GiB test files (I'm using file tests, not raw device tests, so not optimal, but easiest when the VM root disk is the test disk)
iodepth=10
numjobs=32
time=30 (seconds)
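
For the curious, each run was essentially of this form (the directory path, ioengine, and direct flag are illustrative rather than the exact command; the write and sequential runs just change --rw and --bs):

    fio --name=test --directory=/mnt/fio-test --ioengine=libaio --direct=1 \
        --rw=randread --bs=4k --size=10G --numjobs=32 --iodepth=10 \
        --runtime=30 --time_based --group_reporting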

The VM was deployed using a qcow2 image, then deployed again from a raw image, to see the difference in performance.  There was none, which makes sense, since I'm pretty sure the qcow2 image was converted to raw and written to the LVM logical volume - so both tests were measuring the same thing.
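
(As I understand it, the LVM backend effectively does the equivalent of a qemu-img convert to raw onto the logical volume when it fetches the image - roughly the line below, with the instance UUID as a placeholder - which would explain why the two uploads end up identical.)

    qemu-img convert -f qcow2 -O raw image.qcow2 /dev/nova_vg/<instance_uuid>_disk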

Bare metal (random 4KiB reads):
8066MiB/sec
154.34 microsecond avg latency
2.065 million IOPS

VM qcow2 (random 4KiB reads):
589MiB/sec
2122.10 microsecond avg latency
151k IOPS

Bare metal (random 4KiB writes):
4940MiB/sec
252.44 microsecond avg latency
1.265 million IOPS

VM qcow2 (random 4KiB writes):
589MiB/sec
2119.16 microsecond avg latency
151k IOPS

Since the read and write VM results are nearly identical, my assumption is that the emulation layer is the bottleneck.  CPUs in the VM were all at 55% utilization (all kernel usage).  The qemu process on the bare metal machine indicated 1600% (or so) CPU utilization.

Below are runs with sequential 1MiB block tests:

Bare metal (sequential 1MiB reads):
13.3GiB/sec
23446.43 microsecond avg latency
13.7k IOPS

VM qcow2 (sequential 1MiB reads):
8378MiB/sec
38164.52 microsecond avg latency
8377 IOPS

Bare metal (sequential 1MiB writes):
8098MiB/sec
39488.00 microsecond avg latency
8097 IOPS

VM qcow2 (sequential 1MiB writes):
8087MiB/sec
39534.96 microsecond avg latency
8087 IOPS

Amazing that a VM can move 8GiB/sec to/from storage. :)  However, the random 4KiB IOPS are a bit disappointing when compared to bare metal (but this is relative, since 151k IOPS is still quite a bit!).

Not sure if additional QEMU "iothreads" would help, but they are not set in the Libvirt XML file, and I don't see any way to use Nova to set them (a sketch of what that would look like follows the disk definition below).

The Libvirt XML for the disk appears as:

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source dev='/dev/nova_vg/4cc7dfa4-c57f-4e73-a6fa-0da283244a4b_disk'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
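
If iothreads are worth pursuing, the hand-edited equivalent would look roughly like the following (one iothread declared at the domain level and the disk pinned to it) - although, since Nova doesn't render these elements, I assume any manual edit would be lost the next time Nova regenerates the domain XML:

    <domain type='kvm'>
      <iothreads>1</iothreads>
      ...
      <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='none' io='native' discard='unmap' iothread='1'/>
        ...
      </disk>
    </domain>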

Any suggestions for improvement?

I "think" that the "images_type = flat" option in nova.conf indicates that images are stored in the /var/lib/nova/instances/* directories?  If so, that might be an option, but since we're using Kolla, that directory (or rather /var/lib/nova) is currently a docker volume.  So, it might be necessary to mount the NVMe storage at its respective /var/lib/docker/volumes/nova_compute/_data/instances directory.

Not sure if the "flat" option will be any faster, especially since Docker would be another layer to go through.  Any opinions?

Thanks!

Eric


