[kolla][cinder][nfs] NFS backend error for volumes
Folks,

I am configuring NFS as a cinder backend but somehow it isn't going well. I am running kolla-ansible with the 2023.1 release.

cinder.conf:

[DEFAULT]
enabled_backends = volumes-ssd,volumes-nfs

[volumes-nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = volumes-nfs
nfs_shares_config = /etc/cinder/nfs_shares
nfs_snapshot_support = True
nas_secure_file_permissions = False
nas_secure_file_operations = False

Inside the cinder_volume docker container I can see it mounts the NFS share automatically, the directory ownership is cinder:cinder, and I am able to write to the share, so it is not a permission issue.

$ docker exec -it cinder_volume mount | grep nfs
192.168.18.245:/volume1/NFS on /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.18.245,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.18.245)

But the service is still showing down:

| cinder-volume | os-ctrl2@volumes-nfs | nova | enabled | down | 2024-02-28T04:13:11.000000 |

In the logs I am seeing these 3 lines, and then no further activity even after restarting the container, which is very strange:

2024-02-28 04:13:12.995 153 ERROR os_brick.remotefs.remotefs [None req-6bcb8eab-6aa6-4c36-9a7a-ed673c39dcbc - - - - - -] Failed to mount 192.168.18.245:/volume1/NFS, reason: mount.nfs: Protocol not supported : oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
2024-02-28 04:13:13.501 153 WARNING cinder.volume.drivers.nfs [None req-6bcb8eab-6aa6-4c36-9a7a-ed673c39dcbc - - - - - -] The NAS file permissions mode will be 666 (allowing other/world read & write access). This is considered an insecure NAS environment. Please see https://docs.openstack.org/cinder/latest/admin/blockstorage-nfs-backend.html for information on a secure NFS configuration.
2024-02-28 04:13:13.501 153 WARNING cinder.volume.drivers.nfs [None req-6bcb8eab-6aa6-4c36-9a7a-ed673c39dcbc - - - - - -] The NAS file operations will be run as root: allowing root level access at the storage backend. This is considered an insecure NAS environment. Please see https://docs.openstack.org/cinder/latest/admin/blockstorage-nfs-backend.html for information on a secure NAS configuration.

Has anyone configured NFS and seen this behavior?
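For reference, the share file referenced by nfs_shares_config is not shown in the thread; based on the mount output above it would presumably contain just the export, one per line (assumed contents):

# /etc/cinder/nfs_shares -- one export per line; optional mount flags may follow
192.168.18.245:/volume1/NFS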
It looks like everyone hates NFS and nobody uses it :)

On Tue, Feb 27, 2024 at 11:31 PM Satish Patel <satish.txt@gmail.com> wrote:
On Fri, 2024-03-01 at 09:10 -0500, Satish Patel wrote:
> It looks like everyone hates NFS and nobody uses it :)

For cinder, unless it's with a hardware SAN, there are many better options.
For better or worse, a non-zero number of people decide to put nova's /var/lib/nova/instances directory on NFS shares instead of using something like Ceph.
On Tue, Feb 27, 2024 at 11:31 PM Satish Patel <satish.txt@gmail.com> wrote:

> $ docker exec -it cinder_volume mount | grep nfs
> 192.168.18.245:/volume1/NFS on /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.18.245,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.18.245)
NFS v3 should not be used with nova instances, and when used for cinder volumes there are some known bugs and feature-parity gaps, like live extend. QEMU recommends 4.2 as a minimum version to mitigate the massive locking issues with v3 and for some other features such as sparse file support. NFS is not a bad idea in general; it's fine to use with manila, but putting block storage on an NFS share is generally a bad idea, so it's not a great fit for cinder/nova usage. For services like glance or manila it's fine.
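As a sketch of how to act on that advice (assuming the NFS server actually supports the newer protocol), the mount version can be pinned for the backend via cinder's nfs_mount_options option:

[volumes-nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
# vers=4.2 follows the QEMU recommendation above; use whatever version your server supports
nfs_mount_options = vers=4.2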
Thank you for the detailed information. I would like to use NFS and see how it works; my end goal is to use iSCSI for the cinder volume service.

Do you think switching to NFS v4 will fix the problem in my case? My setup is super simple, but somehow it just doesn't like NFS, and the cinder-volume service shows down even though all the permissions and configs are correct.

On Fri, Mar 1, 2024 at 9:19 AM <smooney@redhat.com> wrote:
Hello Sean,

I've configured NFS v4, but I am still seeing the cinder-volume@nfs service down and nothing interesting in the logs (I have turned on DEBUG as well). What could be the problem? I have also just found out that this is not the 2023.1 release but the Zed release of OpenStack.

2024-03-05 20:39:35.855 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs -o vers=4,minorversion=1 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332" returned: 32 in 0.697s execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:422
2024-03-05 20:39:35.857 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] 'sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs -o vers=4,minorversion=1 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332' failed. Not Retrying. execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:473
2024-03-05 20:39:35.858 150 ERROR os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Failed to mount 192.168.18.245:/volume1/ISO, reason: mount.nfs: Protocol not supported : oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
2024-03-05 20:39:35.859 150 DEBUG os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Failed to do pnfs mount. _mount_nfs /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/remotefs/remotefs.py:157
2024-03-05 20:39:35.860 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:384
2024-03-05 20:39:36.753 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332" returned: 0 in 0.892s execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:422
2024-03-05 20:39:36.754 150 DEBUG os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Mounted 192.168.18.245:/volume1/ISO using nfs. _mount_nfs /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/remotefs/remotefs.py:152
2024-03-05 20:39:36.755 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Available shares ['192.168.18.245:/volume1/ISO'] _ensure_shares_mounted /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:358
2024-03-05 20:39:36.756 150 DEBUG cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] NAS variable secure_file_permissions setting is: false set_nas_security_options /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/nfs.py:432
2024-03-05 20:39:36.757 150 WARNING cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] The NAS file permissions mode will be 666 (allowing other/world read & write access). This is considered an insecure NAS environment. Please see https://docs.openstack.org/cinder/latest/admin/blockstorage-nfs-backend.html for information on a secure NFS configuration.
2024-03-05 20:39:36.757 150 DEBUG cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] NAS secure file operations setting is: false set_nas_security_options /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/nfs.py:458
2024-03-05 20:39:36.758 150 WARNING cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] The NAS file operations will be run as root: allowing root level access at the storage backend. This is considered an insecure NAS environment. Please see https://docs.openstack.org/cinder/latest/admin/blockstorage-nfs-backend.html for information on a secure NAS configuration.
2024-03-05 20:39:36.759 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Loading shares from /etc/cinder/nfs_shares. _load_shares_config /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:597
2024-03-05 20:39:36.760 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] shares loaded: {'192.168.18.245:/volume1/ISO': None} _load_shares_config /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:629
2024-03-05 20:39:36.761 150 DEBUG os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Already mounted: /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 mount /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/remotefs/remotefs.py:105
2024-03-05 20:39:36.761 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Available shares ['192.168.18.245:/volume1/ISO'] _ensure_shares_mounted /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:358
2024-03-05 20:39:36.762 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf stat -f -c %S %b %a /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:384
2024-03-05 20:39:37.458 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf stat -f -c %S %b %a /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332" returned: 0 in 0.696s execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:422
2024-03-05 20:39:37.460 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf du -sb --apparent-size --exclude *snapshot* /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:384

On Tue, Mar 5, 2024 at 12:55 PM Satish Patel <satish.txt@gmail.com> wrote:
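Reading the trace above: the driver first attempts a pNFS mount with -o vers=4,minorversion=1, the server rejects it (mount.nfs: Protocol not supported), and the plain fallback mount then succeeds with whatever version the server negotiates. A quick way to see which versions the server accepts is to try the mounts by hand, as root, from the same container (a debugging sketch; /mnt is an arbitrary target):

docker exec -it cinder_volume bash
mount -t nfs -o vers=4.1 192.168.18.245:/volume1/ISO /mnt   # expected to fail here: Protocol not supported
mount -t nfs -o vers=4.0 192.168.18.245:/volume1/ISO /mnt && umount /mnt
mount -t nfs -o vers=3 192.168.18.245:/volume1/ISO /mnt
nfsstat -m   # shows the version that was actually negotiated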
Hi Sean,

I have a stupid question: how does a VM running on a compute node talk to an NFS block volume? Do I need to mount NFS on the compute nodes, and if yes, at what location?

On Tue, Mar 5, 2024 at 3:49 PM Satish Patel <satish.txt@gmail.com> wrote:
Hi Oliver,

Thank you! Could you please share your config and what protocol you are using for the NFS server? How does kolla-ansible mount the NFS share on the compute nodes, and in which container (libvirt or nova_compute)? Because in my case it's not able to mount :(

On Tue, Mar 5, 2024 at 5:40 PM Oliver Weinmann <oliver.weinmann@me.com> wrote:
Hi Satish,
I’m using NFS in kolla-ansible and I think your problem is this:
mount.nfs: Protocol not supported
By default it will try to use NFS 4.1. I’m using OmniOS as a storage system and this only supports NFS 4, not 4.1. What storage system are you using?
In my case I switched to NFS 3. When I’m back at work tomorrow, I can send you my config.
If I remember correctly you have to set this in the nfs_shares file.
For stability I can’t really say much, since I’m using Ceph as my main storage system.
Cheers, Oliver
Sent from my iPhone
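For the per-share variant Oliver mentions, the generic NFS driver accepts optional mount flags after each export in the shares file; a sketch, assuming the usual "EXPORT [flags]" format:

# /etc/cinder/nfs_shares
192.168.18.245:/volume1/NFS -o vers=3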
On Tue, 2024-03-05 at 16:48 -0500, Satish Patel wrote:
> Hi Sean,
> I have a stupid question: how does a VM running on a compute node talk to an NFS block volume? Do I need to mount NFS on the compute nodes, and if yes, at what location?

Yes, we mount the NFS share on the compute node, and then the volume is just a qcow or raw file on that share.

The location of the mount point is specified by https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.nfs... It defaults to $state_path/mnt, which is by default /var/lib/nova/mnt, so the volume will be something like /var/lib/nova/mnt/<cinder volume uuid>/<volume uuid>.qcow. The exact location is specified by cinder in the connection info as part of the volume attachment.

Because of how snapshots are implemented in this case, the <volume uuid>.qcow file may have multiple additional backing files for the various volume snapshots, so there may be multiple files within that host volume mount.

If you want to play with this in a test env you can use the NFS devstack plugin https://github.com/openstack/devstack-plugin-nfs to spin this up in a VM and then compare to your actual env. Or you can look at the logs from the devstack-plugin-nfs-tempest-full job: https://zuul.openstack.org/builds?job_name=devstack-plugin-nfs-tempest-full

If we look at the nova logs from that job (https://78efdeb5d53dd83161de-d5860347ad41e6939aca030b514b1ef7.ssl.cf5.rackcd...) we see the NFS mount ends up looking like this:

Mar 06 13:44:04.314787 np0036975774 nova-compute[85429]: DEBUG nova.virt.libvirt.volume.mount [None req-06796a19-47f0-482d-9af7-4f2758161bde tempest-TaggedAttachmentsTest-673386850 tempest-TaggedAttachmentsTest-673386850-project-member] [instance: bbc16e80-ebf2-40f2-afa5-f947e0af096b] _HostMountState.mount(fstype=nfs, export=localhost:/srv/nfs1, vol_name=volume-5e87db18-0cfe-46ff-b34d-85f4606c1fa1, /opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57, options=[]) generation 0 {{(pid=85429) mount /opt/stack/nova/nova/virt/libvirt/volume/mount.py:288}}

So /srv/nfs1 on the NFS server is mounted at /opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57. The libvirt XML for this volume is generated as so:

Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: DEBUG nova.virt.libvirt.guest [None req-ba5c8b84-0e2c-4b45-8235-60954aed3426 tempest-TaggedAttachmentsTest-673386850 tempest-TaggedAttachmentsTest-673386850-project-member] detach device xml: <disk type="file" device="disk">
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <driver name="qemu" type="raw" cache="none" io="native"/>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <alias name="ua-5e87db18-0cfe-46ff-b34d-85f4606c1fa1"/>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <source file="/opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57/volume-5e87db18-0cfe-46ff-b34d-85f4606c1fa1"/>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <target dev="vdb" bus="virtio"/>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <serial>5e87db18-0cfe-46ff-b34d-85f4606c1fa1</serial>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <address type="pci" domain="0x0000" bus="0x00" slot="0x08" function="0x0"/>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: </disk>

So within the /opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57 mount there is a raw file called volume-5e87db18-0cfe-46ff-b34d-85f4606c1fa1. This was from tempest-TaggedAttachmentsTest-673386850-project-member, which is why, if you look in the logs, you see the VM does not have this volume initially and then we attach it during the job. From QEMU's perspective this volume is just a local file; that local file just happens to be on an NFS filesystem rather than a local disk.
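To see the same thing on a kolla-ansible deployment, a rough sketch (paths assume the /var/lib/nova/mnt default described above; the container and domain names are illustrative, not from the thread):

# on the compute node
mount -t nfs,nfs4              # the share should appear under /var/lib/nova/mnt/<share hash>
ls /var/lib/nova/mnt/*/        # volume-<uuid> files live inside the share hash directory
docker exec nova_libvirt virsh dumpxml instance-00000001 | grep -A1 'source file'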
On Tue, Mar 5, 2024 at 3:49 PM Satish Patel <satish.txt@gmail.com> wrote:
Hello Sean,
I've configured NFS v4 but still I am seeing cinder-volume@nfs service is down and nothing interesting in logs (I have turned on DEBUG also). What could be the problem? I have just found out that this is not the 2023.1 release but the Zed release of openstack.
2024-03-05 20:39:35.855 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs -o vers=4,minorversion=1 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332" returned: 32 in 0.697s execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:422 2024-03-05 20:39:35.857 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] 'sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs -o vers=4,minorversion=1 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332' failed. Not Retrying. execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:473 2024-03-05 20:39:35.858 150 ERROR os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Failed to mount 192.168.18.245:/volume1/ISO, reason: mount.nfs: Protocol not supported : oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 2024-03-05 20:39:35.859 150 DEBUG os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Failed to do pnfs mount. _mount_nfs /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/remotefs/remotefs.py:157 2024-03-05 20:39:35.860 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:384 2024-03-05 20:39:36.753 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332" returned: 0 in 0.892s execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:422 2024-03-05 20:39:36.754 150 DEBUG os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Mounted 192.168.18.245:/volume1/ISO using nfs. _mount_nfs /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/remotefs/remotefs.py:152 2024-03-05 20:39:36.755 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Available shares ['192.168.18.245:/volume1/ISO'] _ensure_shares_mounted /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:358 2024-03-05 20:39:36.756 150 DEBUG cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] NAS variable secure_file_permissions setting is: false set_nas_security_options /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/nfs.py:432 2024-03-05 20:39:36.757 150 WARNING cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] The NAS file permissions mode will be 666 (allowing other/world read & write access). This is considered an insecure NAS environment. Please see https://docs.openstack.org/cinder/latest/admin/blockstorage-nfs-backend.html for information on a secure NFS configuration. 
2024-03-05 20:39:36.757 150 DEBUG cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] NAS secure file operations setting is: false set_nas_security_options /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/nfs.py:458 2024-03-05 20:39:36.758 150 WARNING cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] The NAS file operations will be run as root: allowing root level access at the storage backend. This is considered an insecure NAS environment. Please see https://docs.openstack.org/cinder/latest/admin/blockstorage-nfs-backend.html for information on a secure NAS configuration. 2024-03-05 20:39:36.759 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Loading shares from /etc/cinder/nfs_shares. _load_shares_config /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:597 2024-03-05 20:39:36.760 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] shares loaded: {'192.168.18.245:/volume1/ISO': None} _load_shares_config /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:629 2024-03-05 20:39:36.761 150 DEBUG os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Already mounted: /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 mount /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/remotefs/remotefs.py:105 2024-03-05 20:39:36.761 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Available shares ['192.168.18.245:/volume1/ISO'] _ensure_shares_mounted /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:358 2024-03-05 20:39:36.762 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf stat -f -c %S %b %a /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:384 2024-03-05 20:39:37.458 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf stat -f -c %S %b %a /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332" returned: 0 in 0.696s execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:422 2024-03-05 20:39:37.460 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf du -sb --apparent-size --exclude *snapshot* /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:384
On Tue, Mar 5, 2024 at 12:55 PM Satish Patel <satish.txt@gmail.com> wrote:
Thank you for detailed information, I would like to use NFS and see how it works. My end goal is to use iSCSI for cinder volume service.
Do you think switching to NFS v4 will fix the problem in my case. My setup is super simple but somehow it just doesn't like NFS and cinder-volume service is showing down even all the permission and configs are correct.
On Fri, Mar 1, 2024 at 9:19 AM <smooney@redhat.com> wrote:
On Fri, 2024-03-01 at 09:10 -0500, Satish Patel wrote:
It Looks like everyone hates NFS and nobody uses it :) for cidner unless its with a hardware san there are may better options
for better or worse a non 0 numbner of peole decide to put nova's /var/lib/nova/instances directory on NFS shares instead of using somehting like ceph
On Tue, Feb 27, 2024 at 11:31 PM Satish Patel <satish.txt@gmail.com>
wrote:
Folks,
I am configuring NFS for the cinder backend but somehow it doesn't go well. I am running kolla-ansible with the 2023.1 release.
cinder.conf
[DEFAULT] enabled_backends = volumes-ssd,volumes-nfs
[volumes-nfs] volume_driver = cinder.volume.drivers.nfs.NfsDriver volume_backend_name = volumes-nfs nfs_shares_config = /etc/cinder/nfs_shares nfs_snapshot_support = True nas_secure_file_permissions = False nas_secure_file_operations = False
Inside the cinder_volume docker container I can see it mounts NFS automatically and directory permissions is also cinder:cinder also I
am
able to write on NFS share also so it's not a permission issue also.
$ docker exec -it cinder_volume mount | grep nfs 192.168.18.245:/volume1/NFS on /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 type nfs
(rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=19 2.16
8.18.245,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.18.245) nfs v3 should not be used with nova instances and when used for cinder voume there are some know bugs or feature partiy gaps like live extend. qemu recommends 4.2 as a minium version to mitigate the massive locking issues with v3 and for some other feature such as spares file support. nfs is not a bad idea in general, its fine to use with manial but putting block storage on an NFS share is generally a bad idea so its not a great fit for cinder/novas usage. for services like glance or manila its fine.
Hi Sean,
Everything works without any error as soon as I configured openstack with NGX storage, which runs NFS v4.1: https://www.ngxstorage.com/ Damn, looks like 4.1 is the key here.
On Wed, Mar 6, 2024 at 10:36 AM <smooney@redhat.com> wrote:
Hi Sean,
I have a stupid question: how does a VM running on a compute node talk to an NFS block volume? Do I need to mount NFS on the compute nodes, and if yes, at what location? yes, we mount the nfs share on the compute node, and then the volume is just a qcow or raw file on that share
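you can verify that on a compute host the same way as the docker exec check earlier in the thread, e.g. (a sketch; on a kolla deployment you would run this on the compute host or inside the nova_compute container):

$ mount -t nfs,nfs4 | grep /var/lib/nova/mnt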
On Tue, 2024-03-05 at 16:48 -0500, Satish Patel wrote: the location of the mount point is specified by
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.nfs... it defaults to $state_path/mnt
which is by default /var/lib/nova/mnt, so the volume will be something like /var/lib/nova/mnt/<hash of the export>/volume-<volume uuid>
the exact location is specified by cinder in the connection info as part of the volume attachment.
because of how snapshots are implemented, in this case the volume's qcow file may have multiple additional backing files for the various volume snapshots, so there may be multiple files within that host volume mount.
if you want to play with this in a test env you can use the nfs devstack plugin https://github.com/openstack/devstack-plugin-nfs to spin this up in a vm and then compare to your actual env. or you can look at the logs from the devstack-plugin-nfs-tempest-full job
https://zuul.openstack.org/builds?job_name=devstack-plugin-nfs-tempest-full
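enabling that plugin in a test devstack is a one-liner in local.conf, following the usual devstack plugin convention (a sketch; adjust to your devstack layout):

[[local|localrc]]
enable_plugin devstack-plugin-nfs https://github.com/openstack/devstack-plugin-nfs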
if we look at the nova logs from that job
https://78efdeb5d53dd83161de-d5860347ad41e6939aca030b514b1ef7.ssl.cf5.rackcd...
we see the nfs mount ends up looking like this
Mar 06 13:44:04.314787 np0036975774 nova-compute[85429]: DEBUG nova.virt.libvirt.volume.mount [None req-06796a19-47f0-482d-9af7-4f2758161bde tempest-TaggedAttachmentsTest-673386850 tempest-TaggedAttachmentsTest-673386850-project-member] [instance: bbc16e80-ebf2-40f2-afa5-f947e0af096b] _HostMountState.mount(fstype=nfs, export=localhost:/srv/nfs1, vol_name=volume-5e87db18-0cfe-46ff-b34d-85f4606c1fa1, /opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57, options=[]) generation 0 {{(pid=85429) mount /opt/stack/nova/nova/virt/libvirt/volume/mount.py:288}}
so /srv/nfs1 on the nfs server is mounted at /opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57
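as far as i know that hex directory name is not random: nova and os-brick name each mount point after an md5 hash of the export string, so you can reproduce it from a shell (a sketch, assuming that hashing scheme):

$ echo -n "localhost:/srv/nfs1" | md5sum

which should print the 896fb15d... name used above; the same trick should explain the cinder mount dirs like 1ec32c05... seen earlier in the thread.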
the libvirt xml for this volume is generated like so
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: DEBUG nova.virt.libvirt.guest [None req-ba5c8b84-0e2c-4b45-8235-60954aed3426 tempest-TaggedAttachmentsTest-673386850 tempest-TaggedAttachmentsTest-673386850-project-member] detach device xml: <disk type="file" device="disk">
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]:   <driver name="qemu" type="raw" cache="none" io="native"/>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]:   <alias name="ua-5e87db18-0cfe-46ff-b34d-85f4606c1fa1"/>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]:   <source file="/opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57/volume-5e87db18-0cfe-46ff-b34d-85f4606c1fa1"/>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]:   <target dev="vdb" bus="virtio"/>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]:   <serial>5e87db18-0cfe-46ff-b34d-85f4606c1fa1</serial>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x08" function="0x0"/>
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: </disk>
so within the /opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57 mount there is a raw file called volume-5e87db18-0cfe-46ff-b34d-85f4606c1fa1
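on the compute node you can sanity-check what qemu sees with qemu-img (a sketch; -U / --force-share is needed because qemu holds an image lock while the volume is attached):

$ qemu-img info -U /opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57/volume-5e87db18-0cfe-46ff-b34d-85f4606c1fa1

and virsh domblklist <instance> should list the same path as the source for vdb.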
this was from tempest-TaggedAttachmentsTest-673386850-project-member
so that is why, if you look in the logs, you see the vm does not have this volume initially and then we attach it in the job.
from qemu's perspective this volume is just a local file; that local file just happens to be on an nfs filesystem rather than a local disk.
But in the nova-compute logs I am seeing the following WARNING, and I'm not sure what it is about:
2024-03-06 15:57:57.687 7 WARNING os_brick.initiator.connectors.nvmeof [None req-b1eae455-a2fc-44c7-9e80-aaffc0358a1b 6bdb8c56c8e1408dac18d820e3d48119 b7ef60710f9a470785a32afa4134342e - - default default] Process execution error in _get_host_uuid: Unexpected error while running command. Command: blkid overlay -s UUID -o value Exit code: 2 Stdout: '' Stderr: '': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
On Wed, Mar 6, 2024 at 10:50 AM Satish Patel <satish.txt@gmail.com> wrote:
Hi Sean,
Everything works without any error as soon as I configured openstack with NGX storage which is running NFS v4.1 https://www.ngxstorage.com/
Damn, looks like 4.1 is the key here.
Hi Satish,
I use NFSv3 and this is my config:
cat > /etc/kolla/config/cinder/nfs_shares << EOF
#HOST:SHARE
192.168.2.2:/odroidxu4/openstack_volumes -o nfsvers=3
EOF
#Enable Cinder NFS Backend
sed -i 's/^#enable_cinder_backend_nfs:.*/enable_cinder_backend_nfs: "yes"/g' /etc/kolla/globals.yml
#Enable Cinder Backup NFS Backend
sed -i 's/^#cinder_backup_driver:.*/cinder_backup_driver: "nfs"/g' /etc/kolla/globals.yml
sed -i 's?^#cinder_backup_share:.*?cinder_backup_share: "192.168.2.2:/odroidxu4/openstack_backup"?g' /etc/kolla/globals.yml
sed -i 's/^#cinder_backup_mount_options_nfs:.*/cinder_backup_mount_options_nfs: "vers=3"/g' /etc/kolla/globals.yml
Hope this helps.
Cheers,
Oliver
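The same per-share mount-option mechanism in nfs_shares is presumably how you would pin a specific protocol version for a server like the one discussed above, e.g. (a sketch, untested, using the address from earlier in the thread):

192.168.18.245:/volume1/NFS -o vers=4.1

Note that changes to /etc/kolla/globals.yml and the cinder config files only take effect after a reconfigure pass, e.g.:

kolla-ansible -i <inventory> reconfigure --tags cinder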
Hi Oliver,
Thank you so much for sharing the configuration. I have a quick question about how compute nodes mount NFS shares: does cinder instruct the compute nodes to mount the NFS share, or does kolla-ansible push the NFS share configuration to all the compute nodes? Just trying to understand the logic behind it. Thanks!
On Wed, Mar 6, 2024 at 1:00 PM Oliver Weinmann <oliver.weinmann@me.com> wrote:
Hi Satish,
I use NFSv3 and this is my config:
cat > /etc/kolla/config/cinder/nfs_shares << EOF
#HOST:SHARE
192.168.2.2:/odroidxu4/openstack_volumes -o nfsvers=3
EOF
#Enable Cinder NFS Backend
sed -i 's/^#enable_cinder_backend_nfs:.*/enable_cinder_backend_nfs: "yes"/g' /etc/kolla/globals.yml
#Enable Cinder Backup NFS Backend
sed -i 's/^#cinder_backup_driver:.*/cinder_backup_driver: "nfs"/g' /etc/kolla/globals.yml
sed -i 's?^#cinder_backup_share:.*?cinder_backup_share: "192.168.2.2:/odroidxu4/openstack_backup"?g' /etc/kolla/globals.yml
sed -i 's/^#cinder_backup_mount_options_nfs:.*/cinder_backup_mount_options_nfs: "vers=3"/g' /etc/kolla/globals.yml
Hope this helps.
Cheers, Oliver
On 6. Mar 2024, at 16:36, smooney@redhat.com wrote:
On Tue, 2024-03-05 at 16:48 -0500, Satish Patel wrote:
Hi Sean,
I have stupid question, How does vm running on compute nodes talk to NFS block volume? Do I need to mount NFS on compute nodes and if yes then at what location?
yes we mount the nfs share on the compute node and then the voluem is just a qcow or raw file on that share the location of the mount point is specifed by
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.nfs... it defaults too $state_path/mnt
which is by default /var/lib/nova/mnt so the volume will be something like /var/lib/nova/mnt/<cinder volume uuid>/<volume uuid>.qcow
the exact location is specifived by cinder in the connection info as part of the volume attachment.
because of how snapshots are implented in this case the <volume uuid>.qcow file may have multiple addtional backing files for the various volume snapshots so there may be multiple files within that host volume mount.
if you want to play with this in a test env you can use the nfs devstack plugin https://github.com/openstack/devstack-plugin-nfs to spine this up in vm and then compare to your actul env. or you can look at the logs form the devstack-plugin-nfs-tempest-full
https://zuul.openstack.org/builds?job_name=devstack-plugin-nfs-tempest-full
if we look at the nova logs form that job
https://78efdeb5d53dd83161de-d5860347ad41e6939aca030b514b1ef7.ssl.cf5.rackcd...
we see the nfs mount ends up looking like this
Mar 06 13:44:04.314787 np0036975774 nova-compute[85429]: DEBUG nova.virt.libvirt.volume.mount [None req-06796a19-47f0- 482d-9af7-4f2758161bde tempest-TaggedAttachmentsTest-673386850 tempest-TaggedAttachmentsTest-673386850-project-member] [instance: bbc16e80-ebf2-40f2-afa5-f947e0af096b] _HostMountState.mount(fstype=nfs, export=localhost:/srv/nfs1, vol_name=volume-5e87db18-0cfe-46ff-b34d-85f4606c1fa1, /opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57, options=[]) generation 0 {{(pid=85429) mount /opt/stack/nova/nova/virt/libvirt/volume/mount.py:288}}
so /srv/nfs1 on the nfs server is mounted at /opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57
the libvirt xml for this volume is generated as so
Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: DEBUG nova.virt.libvirt.guest [None req-ba5c8b84-0e2c-4b45- 8235-60954aed3426 tempest-TaggedAttachmentsTest-673386850 tempest-TaggedAttachmentsTest-673386850-project-member] detach device xml: <disk type="file" device="disk"> Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <driver name="qemu" type="raw" cache="none" io="native"/> Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <alias name="ua-5e87db18-0cfe-46ff-b34d-85f4606c1fa1"/> Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <source
file="/opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57/volume-5e87db18-0cfe-46ff-b34d-85f4606c1fa1"/> Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <target dev="vdb" bus="virtio"/> Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <serial>5e87db18-0cfe-46ff-b34d-85f4606c1fa1</serial> Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: <address type="pci" domain="0x0000" bus="0x00" slot="0x08" function="0x0"/> Mar 06 13:44:10.638487 np0036975774 nova-compute[85429]: </disk>
so within the /opt/stack/data/nova/mnt/896fb15da6036b68a917322e72ebfe57 mount there is a raw file called volume-5e87db18-0cfe-46ff-b34d-85f4606c1fa1
this was form tempest-TaggedAttachmentsTest-673386850-project-member
so that is why if you look in the logs you see the vm does not have this volume innnitall and then we attach it in the job.
from qemu's perspective this volume is just a local file, that local file just happens to be on an nfs filesystem rahter then a local disk.
On Tue, Mar 5, 2024 at 3:49 PM Satish Patel <satish.txt@gmail.com> wrote:
Hello Sean,
I've configured NFS v4 but still I am seeing cinder-volume@nfs service is down and nothing interesting in logs (I have turned on DEBUG also). What could be the problem? I have just found out that this is not the 2023.1 release but the Zed release of openstack.
2024-03-05 20:39:35.855 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs -o vers=4,minorversion=1 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332" returned: 32 in 0.697s execute
/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:422 2024-03-05 20:39:35.857 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] 'sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs -o vers=4,minorversion=1 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332' failed. Not Retrying. execute
/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:473 2024-03-05 20:39:35.858 150 ERROR os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Failed to mount 192.168.18.245:/volume1/ISO, reason: mount.nfs: Protocol not supported : oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 2024-03-05 20:39:35.859 150 DEBUG os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Failed to do pnfs mount. _mount_nfs
/var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/remotefs/remotefs.py:157 2024-03-05 20:39:35.860 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs 192.168.18.245:/volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 execute
/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:384 2024-03-05 20:39:36.753 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs 192.168.18.245: /volume1/ISO /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332" returned: 0 in 0.892s execute
/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:422
2024-03-05 20:39:36.754 150 DEBUG os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Mounted 192.168.18.245:/volume1/ISO using nfs. _mount_nfs /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/remotefs/remotefs.py:152
2024-03-05 20:39:36.755 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Available shares ['192.168.18.245:/volume1/ISO'] _ensure_shares_mounted /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:358
2024-03-05 20:39:36.756 150 DEBUG cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] NAS variable secure_file_permissions setting is: false set_nas_security_options /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/nfs.py:432
2024-03-05 20:39:36.757 150 WARNING cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] The NAS file permissions mode will be 666 (allowing other/world read & write access). This is considered an insecure NAS environment. Please see https://docs.openstack.org/cinder/latest/admin/blockstorage-nfs-backend.html for information on a secure NFS configuration.
2024-03-05 20:39:36.757 150 DEBUG cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] NAS secure file operations setting is: false set_nas_security_options /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/nfs.py:458
2024-03-05 20:39:36.758 150 WARNING cinder.volume.drivers.nfs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] The NAS file operations will be run as root: allowing root level access at the storage backend. This is considered an insecure NAS environment. Please see https://docs.openstack.org/cinder/latest/admin/blockstorage-nfs-backend.html for information on a secure NAS configuration.
2024-03-05 20:39:36.759 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Loading shares from /etc/cinder/nfs_shares. _load_shares_config /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:597
2024-03-05 20:39:36.760 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] shares loaded: {'192.168.18.245:/volume1/ISO': None} _load_shares_config /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:629
2024-03-05 20:39:36.761 150 DEBUG os_brick.remotefs.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Already mounted: /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 mount /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/remotefs/remotefs.py:105
2024-03-05 20:39:36.761 150 DEBUG cinder.volume.drivers.remotefs [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Available shares ['192.168.18.245:/volume1/ISO'] _ensure_shares_mounted /var/lib/kolla/venv/lib/python3.10/site-packages/cinder/volume/drivers/remotefs.py:358
2024-03-05 20:39:36.762 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf stat -f -c %S %b %a /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:384
2024-03-05 20:39:37.458 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf stat -f -c %S %b %a /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332" returned: 0 in 0.696s execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:422
2024-03-05 20:39:37.460 150 DEBUG oslo_concurrency.processutils [None req-51d22088-acf4-4dc6-a858-828cc3eb9394 - - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf du -sb --apparent-size --exclude *snapshot* /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 execute /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:384
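For what it's worth, the tail of that trace is the driver's free-space probe (stat and du run via rootwrap). Since the service drops to down with no further log activity, one thing worth checking is how long those same probes take when run by hand — on NFS a slow or hanging du can hold up the driver's periodic update. A sketch, reusing the mount path from the log above:

# run the same capacity probes cinder-volume issues, timing each one;
# anything that hangs here points at the share, not at cinder
$ docker exec -it cinder_volume bash -c "time stat -f -c '%S %b %a' /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 && time du -sb --apparent-size --exclude '*snapshot*' /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332"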
On Tue, Mar 5, 2024 at 12:55 PM Satish Patel <satish.txt@gmail.com> wrote:
Thank you for the detailed information. I would like to try NFS and see how it works; my end goal is to use iSCSI for the cinder volume service.
Do you think switching to NFS v4 will fix the problem in my case? My setup is super simple, but somehow it just doesn't like NFS, and the cinder-volume service is showing down even though all the permissions and configs are correct.
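If I go the v4 route, my understanding is that the client version can be pinned from cinder.conf via the backend's nfs_mount_options rather than relying on negotiation — a sketch, where vers=4.1 is an assumption and should match whatever the NAS actually exports:

# sketch only: pin the NFS protocol version for this backend
# (vers=4.1 is assumed; use the version your server exports)
[volumes-nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = volumes-nfs
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_options = vers=4.1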
On Fri, Mar 1, 2024 at 9:19 AM <smooney@redhat.com> wrote:
On Fri, 2024-03-01 at 09:10 -0500, Satish Patel wrote:
It Looks like everyone hates NFS and nobody uses it :)
for cinder, unless it's with a hardware SAN, there are many better options.
for better or worse, a non-zero number of people decide to put nova's /var/lib/nova/instances directory on NFS shares instead of using something like ceph.
On Tue, Feb 27, 2024 at 11:31 PM Satish Patel <satish.txt@gmail.com>
wrote:
[snip]
$ docker exec -it cinder_volume mount | grep nfs
192.168.18.245:/volume1/NFS on /var/lib/cinder/mnt/1ec32c051aa5520a1ff679ce879da332 type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.18.245,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.18.245)
nfs v3 should not be used with nova instances, and when used for cinder volumes there are known bugs and feature-parity gaps, like live extend. qemu recommends 4.2 as a minimum version to mitigate the massive locking issues with v3 and for other features such as sparse file support. nfs is not a bad idea in general; for services like glance or manila it's fine, but putting block storage on an NFS share is generally a bad idea, so it's not a great fit for cinder or nova.
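before switching versions, it's worth seeing which ones the server actually offers — a sketch, assuming rpcinfo is available on a host that can reach the NAS (note that a v4-only server may not register with rpcbind at all):

# list the NFS program versions the NAS registers with rpcbind
$ rpcinfo -p 192.168.18.245 | grep nfs
# expect lines like "100003 3 tcp 2049 nfs" and, if v4 is enabled,
# "100003 4 tcp 2049 nfs"; if only version 3 shows up, enable v4 on the NAS first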
participants (3)
- Oliver Weinmann
- Satish Patel
- smooney@redhat.com