On 28/03/2025 14:34, Thomas Goirand wrote:
Hi,
I had a quick discussion with Sean on IRC, and he told me the new virtio-fs driver for Manila is part of Epoxy. This is very exciting. Thanks to everyone who worked on it.
I'd like to deploy it, but I have no clue how. Note that I have already implemented the generic driver using puppet-manila. However, I haven't been able to find any documentation about the new driver. Is there anything I should know?
virtio-fs support is built into qemu/libvirt, so there is nothing extra for you to install in order to create the VM definition. Nova now has a shares module for interacting with Manila, but that uses the SDK, so provided the SDK is installed at the right version, all the dependencies for Nova to talk to Manila are in place: https://github.com/openstack/nova/blob/master/nova/share/manila.py

That just leaves the code to attach a Manila share to the host so that we can present it to the guest. Nova now has the concept of a "share driver manager" https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4... which just reuses our Cinder volume code for NFS https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume/nfs.p... with the addition of a new CephFS client https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume/cephf... Both of them just use the mount command under the hood to actually mount the share on the compute node, so for CephFS you will need a Ceph config and key, while for NFS you don't really need anything special. If your cloud is configured for Ceph-backed Cinder volumes or Nova on Ceph, then CephFS should work fine. So there isn't anything super special to call out. The docs are listed here: https://docs.openstack.org/nova/latest/admin/manage-shares.html

You do need to take some steps to make the VM capable of attaching a Manila share with virtio-fs; see the limitations section for more info: https://docs.openstack.org/nova/latest/admin/manage-shares.html#limitations The TL;DR is that QEMU cannot add or remove virtio-fs filesystems to a VM while it is running today, so that has implications for when a share can be attached and for what operations you can do on a VM with a share. virtio-fs also needs the VM memory to be shared, which today means using hugepages or file-backed memory.

I have wanted to change Nova's default memory mode to memfd and shared for a long time, as that would make this and other features like DPDK "just work". It is slightly less secure, but only in the sense that if you have hypervisor access the guest memory is readable with fewer steps than it would be otherwise, and it is no worse than using file-backed memory or hugepages today. This is a case where I think opt-out for new VMs is the right approach, and we could opt existing VMs into the legacy behaviour if there were an upgrade concern. Unfortunately I'm not sure we will have time to work on that in the near term, so we are stuck with hugepages/file-backed memory for now.

The minimum versions and the memory requirements are covered in the spec along with the basic usage: https://specs.openstack.org/openstack/nova-specs/specs/2025.1/implemented/li... They are also in the admin doc I linked above.
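To illustrate the "mount under the hood" point, here is a rough sketch of what the per-backend attach step boils down to on the compute node. This is not the actual Nova driver code, just the equivalent mount commands; the export paths, mount points, and Ceph client name below are made-up placeholders:

```python
# Rough illustration of what the NFS and CephFS share attach code ends up
# doing on the compute node: plain mount commands. NOT the real Nova code;
# all paths and credentials are placeholders.
import subprocess


def mount_nfs_share(export, mountpoint):
    # NFS needs nothing special beyond a reachable export,
    # e.g. export = "filer.example.org:/shares/share-1234"
    subprocess.run(["mount", "-t", "nfs", export, mountpoint], check=True)


def mount_cephfs_share(mon_hosts, path, mountpoint, client_name, secret_file):
    # CephFS needs a client name and keyring/secret, hence the Ceph config
    # and key requirement mentioned above.
    source = f"{mon_hosts}:{path}"  # e.g. "mon1:6789,mon2:6789:/volumes/share-1234"
    options = f"name={client_name},secretfile={secret_file}"
    subprocess.run(["mount", "-t", "ceph", source, mountpoint, "-o", options],
                   check=True)
```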
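For the hugepages route, the practical operator-side step is a flavor that requests hugepage-backed guest memory via hw:mem_page_size, which is what makes the guest memory shared and therefore virtio-fs capable. A minimal sketch using openstacksdk, assuming your SDK version exposes create_flavor_extra_specs and with the flavor name and sizes as arbitrary examples:

```python
# Minimal sketch: create a flavor whose guests use hugepage-backed (shared)
# memory so virtio-fs share attachments are possible. Flavor name/sizes are
# examples; assumes an openstacksdk version with the flavor extra-specs calls.
import openstack

conn = openstack.connect(cloud="mycloud")  # cloud name from your clouds.yaml

flavor = conn.compute.create_flavor(
    name="m1.virtiofs", ram=4096, vcpus=2, disk=20
)

# hw:mem_page_size=large requests hugepage-backed guest memory. File-backed
# memory is the other option mentioned above and is configured on the
# compute host side rather than per flavor.
conn.compute.create_flavor_extra_specs(flavor, {"hw:mem_page_size": "large"})
```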
Also, are there any pending patches that should go on top of Epoxy? Sean already told me about these two: https://review.opendev.org/c/openstack/python-openstackclient/+/881540 https://review.opendev.org/c/openstack/openstacksdk/+/880056
IMO it would be super nice if we could get them into Epoxy proper (even though I could manage to sneak them into my Debian packages as Debian-specific patches).
Well, they won't be in the initial release, but the client team might be open to backporting them in a future minor release.
Cheers,
Thomas Goirand (zigo)