On Fri, 2024-03-01 at 09:10 -0500, Satish Patel wrote:
It looks like everyone hates NFS and nobody uses it :) For Cinder, unless it is paired with a hardware SAN, there are many better options.
For better or worse, a non-zero number of people decide to put Nova's /var/lib/nova/instances directory on NFS shares instead of using something like Ceph.
On Tue, Feb 27, 2024 at 11:31 PM Satish Patel <satish.txt@gmail.com> wrote:
Folks,
I am configuring NFS as a Cinder backend but somehow it isn't going well. I am running kolla-ansible with the 2023.1 release.
cinder.conf
[DEFAULT]
enabled_backends = volumes-ssd,volumes-nfs

[volumes-nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = volumes-nfs
nfs_shares_config = /etc/cinder/nfs_shares
nfs_snapshot_support = True
nas_secure_file_permissions = False
nas_secure_file_operations = False
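For reference, the nfs_shares file pointed at by nfs_shares_config just lists the exports, one per line, optionally followed by per-share mount options. For the share that shows up in the mount output below it would contain something along these lines:

192.168.18.245:/volume1/NFS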
Inside the cinder_volume docker container I can see that it mounts the NFS share automatically, the directory permissions are cinder:cinder, and I am able to write to the NFS share, so it's not a permission issue.
$ docker exec -it cinder_volume mount | grep nfs
192.168.18.245:/volume1/NFS on /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.18.245,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.18.245)
NFS v3 should not be used with Nova instances, and when it is used for Cinder volumes there are some known bugs and feature parity gaps, such as live extend. QEMU recommends 4.2 as a minimum version to mitigate the massive locking issues with v3 and to get other features such as sparse file support. NFS is not a bad idea in general and is fine for services like Glance or Manila, but putting block storage on an NFS share is generally a bad idea, so it is not a great fit for Cinder or Nova.
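If the NFS server supports it, it is worth forcing a newer protocol version through the driver's mount options rather than letting the client fall back to v3. As a rough sketch (nfs_mount_options is the Cinder NFS driver option for this; check your release's docs and your server's supported versions before relying on it):

[volumes-nfs]
...
nfs_mount_options = vers=4.2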
But the service is still showing as down.
cinder-volume | os-ctrl2@volumes-nfs | nova | enabled | down | 2024-02-28T04:13:11.000000 |
In the logs I am seeing these three lines, but then there is no further activity even after restarting the container, which is very strange.
2024-02-28 04:13:12.995 153 ERROR os_brick.remotefs.remotefs [None req-6bcb8eab-6aa6-4c36-9a7a-ed673c39dcbc - - - - - -] Failed to mount 192.168.18.245:/volume1/NFS, reason: mount.nfs: Protocol not supported : oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
2024-02-28 04:13:13.501 153 WARNING cinder.volume.drivers.nfs [None req-6bcb8eab-6aa6-4c36-9a7a-ed673c39dcbc - - - - - -] The NAS file permissions mode will be 666 (allowing other/world read & write access). This is considered an insecure NAS environment. Please see https://docs.openstack.org/cinder/latest/admin/blockstorage-nfs-backend.html for information on a secure NFS configuration.
2024-02-28 04:13:13.501 153 WARNING cinder.volume.drivers.nfs [None req-6bcb8eab-6aa6-4c36-9a7a-ed673c39dcbc - - - - - -] The NAS file operations will be run as root: allowing root level access at the storage backend. This is considered an insecure NAS environment. Please see https://docs.openstack.org/cinder/latest/admin/blockstorage-nfs-backend.html for information on a secure NAS configuration.
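That "Protocol not supported" error usually means the client asked for an NFS version the server does not offer. A quick manual check, assuming rpcinfo and the NFS client tools are available wherever you run it (the mount point and version below are just examples):

# list the NFS versions the server registers with rpcbind (NFSv4-only servers may not show up here)
rpcinfo -p 192.168.18.245 | grep nfs

# try the mount by hand with an explicit version
mkdir -p /mnt/nfs-test
mount -t nfs -o vers=4.1 192.168.18.245:/volume1/NFS /mnt/nfs-test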
Has anyone configured NFS and run into this behavior?