[nova] is there support for discard/trim in virtio-blk?

Mark Mielke mark.mielke at gmail.com
Thu Sep 2 22:47:42 UTC 2021

On Thu, Sep 2, 2021 at 1:47 PM Sean Mooney <smooney at redhat.com> wrote:

> On Thu, 2021-09-02 at 16:48 +0000, Sven Kieske wrote:
> > Virtio-blk in upstream Kernel[2] and in qemu[3] does clearly support
> > discard/trim, which we discovered thanks to StackExchange[4].
> >
> > So my question is, has someone successfully used trim/discard with
> > virtio-blk in nova provisioned vms?
> and there is an nova config option to enable it.
> https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.hw_disk_discard
> so perhaps you just need to set that to unmap
> e.g.
> /etc/nova/nova.conf:
> [libvirt]
> hw_disk_discard=unmap
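If that option takes effect, the disk in the libvirt domain XML (visible via
`virsh dumpxml <instance>`) should carry discard='unmap' on the driver
element. A sketch of what to look for, with hypothetical paths and device
names:

```xml
<disk type='file' device='disk'>
  <!-- discard='unmap' is what hw_disk_discard=unmap should produce -->
  <driver name='qemu' type='qcow2' cache='none' discard='unmap'/>
  <source file='/var/lib/nova/instances/example-uuid/disk'/>
  <target dev='vda' bus='virtio'/>
</disk>
```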

With a recent enough qemu on the hypervisor, and discard support on the
block store, it can technically work. However, the guest was the problem for
me. For example, I believe the RHEL/CentOS 8.x / 4.18 kernel does NOT
support this capability, but Linux 5.4 works fine.
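One quick way to tell whether a given guest kernel and device actually
support discard is to read the queue limits the kernel exposes in sysfs - a
device supports discard only if discard_max_bytes is nonzero. A minimal
check, run from inside the guest:

```shell
#!/bin/sh
# Report discard support for every block device the guest kernel sees.
# A value of 0 in discard_max_bytes means discard is not supported there.
for f in /sys/block/*/queue/discard_max_bytes; do
    [ -e "$f" ] || continue
    dev=$(basename "$(dirname "$(dirname "$f")")")
    max=$(cat "$f")
    if [ "$max" -gt 0 ]; then
        echo "$dev: discard supported (up to $max bytes per request)"
    else
        echo "$dev: discard NOT supported"
    fi
done
```

If a mounted filesystem's device shows support, `fstrim -v /` (as root)
will report how many bytes it was able to trim.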

Other than this, it works fine. I have used it for a while now, and it is
just bringing the guests up to date that prevented it from being more
useful. For older guests, I have a simple script that fills the guest's
file system to about 95% full with zero blocks; writing zeros has the same
effect on my underlying storage as discard.
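The zero-fill approach for older guests can be sketched roughly as follows.
This is a hypothetical reconstruction, not the author's actual script; the
mount point and the 95% target are parameters, and the headroom matters
because filling a filesystem to 100% can break running services:

```shell
#!/bin/sh
# zerofill MOUNTPOINT [PCT]: write zeros over PCT percent (default 95) of
# the free space on MOUNTPOINT, flush to the backing store, then delete
# the file. The backing storage can then dedupe/compress the zeroed blocks.
zerofill() {
    mnt=$1
    pct=${2:-95}
    # Free space in 1 KiB blocks, POSIX output format for stable columns.
    free_kb=$(df -kP "$mnt" | awk 'NR==2 {print $4}')
    write_kb=$((free_kb * pct / 100))
    dd if=/dev/zero of="$mnt/zerofill.tmp" bs=1024 count="$write_kb" 2>/dev/null
    sync
    rm -f "$mnt/zerofill.tmp"
}
```

Usage from inside the guest would be something like `zerofill / 95`, run
during a quiet window since it generates a burst of write I/O.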

Finally, I would caveat that discard should not be a storage management
solution. The storage should be sized correctly, and blocks should get
recycled naturally. But because guests don't regularly pass "free"
information through to the hypervisor, dead blocks accumulate in the
underlying storage over time - perhaps over a few months, or a year - and
never get garbage collected, which is an unfortunate waste. In our case,
SolidFire stores each block 3X, so it adds up. In five years, I've done the
cleanup with discard or zeroing only twice.

Mark Mielke <mark.mielke at gmail.com>
