[yoga][cinder] Cinder NFS backend: Compute service cannot access volume file (UID/GID problem)

Felix Hüttner felix.huettner at mail.schwarz
Mon Oct 24 07:33:06 UTC 2022


Sorry, no idea about that; for us the group also has write permissions.
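
One generic way to get there, as an untested sketch using the paths from your mail: loosen the mode bits on the mount directory and the volume file directly (needs root, and assumes the export does not root-squash):

  # as root on the compute node
  chmod g+w /var/lib/nova/mnt/99c4f7e8b15983b65e20cb7d37db899f
  chmod g+w /var/lib/nova/mnt/99c4f7e8b15983b65e20cb7d37db899f/volume-8f478992-dde3-4c20-9005-61cd34eacf30

That only helps for existing files, though; new volumes will be created with whatever mode Cinder uses, so it is a stop-gap at best.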

--
Felix Huettner

From: 박경원 <park0kyung0won at dgist.ac.kr>
Sent: Monday, October 24, 2022 9:24 AM
To: Felix Hüttner <felix.huettner at mail.schwarz>; openstack-discuss at lists.openstack.org
Subject: RE: RE: [yoga][cinder] Cinder NFS backend: Compute service cannot access volume file (UID/GID problem)




Hello Felix



Thank you very much for the kind reply.

Do I also need to change the permission settings on the volume file in /var/lib/nova/mnt/...?



By default it's:



drwxr-x---  2 64061 64061   11 Oct 24 04:19 99c4f7e8b15983b65e20cb7d37db899f



The group has only read and execute permissions, no write permission.
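
(A related knob, for anyone finding this thread in the archive: the Cinder NFS driver has NAS security options that decide which mode bits it puts on new volume files. Untested sketch, and "nfs-1" is a placeholder for your actual backend section name; with secure file permissions disabled the driver creates volume files ugo+rw instead of the tighter default, at the cost of much looser permissions:

  # cinder.conf on the storage node
  [nfs-1]
  volume_driver = cinder.volume.drivers.nfs.NfsDriver
  nfs_shares_config = /etc/cinder/nfs_shares
  nas_secure_file_permissions = false
  nas_secure_file_operations = false
)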




---------- Original Message ----------
From: "Felix Hüttner" <felix.huettner at mail.schwarz>
To: "park0kyung0won at dgist.ac.kr" <park0kyung0won at dgist.ac.kr>, "openstack-discuss at lists.openstack.org" <openstack-discuss at lists.openstack.org>
Date: 2022-10-24 (Mon) 16:12:48
Subject: RE: [yoga][cinder] Cinder NFS backend: Compute service cannot access volume file (UID/GID problem)


Hi,

We solve this issue by creating a “cinder” group on all hypervisors with the same gid (64061 in your case).
Then we add the nova user to that cinder group, and everything works afterwards.
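
Roughly the following (sketch; gid taken from your listing, service names as on Ubuntu):

  # on every hypervisor
  groupadd --gid 64061 cinder
  usermod --append --groups cinder nova
  # restart so the processes pick up the new supplementary group
  systemctl restart libvirtd nova-compute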

You might need to set “dynamic_ownership = 0” in your libvirt qemu.conf.
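
That is, something like this (sketch; with dynamic ownership enabled, libvirt chowns the volume file to its configured user/group when the VM starts, which can undo the group trick above):

  # /etc/libvirt/qemu.conf
  dynamic_ownership = 0

followed by a restart of libvirtd.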
--
Felix Huettner

From: 박경원 <park0kyung0won at dgist.ac.kr>
Sent: Monday, October 24, 2022 7:21 AM
To: openstack-discuss at lists.openstack.org
Subject: [yoga][cinder] Cinder NFS backend: Compute service cannot access volume file (UID/GID problem)




Hi

I'm trying to set up the cinder-volume service with an NFS backend.



When I create a new VM instance with a volume from the web UI, the cinder-volume service on the storage node creates the volume file just fine.

But I get the following error on the compute node, and the instance fails to spawn.



2022-10-24 02:14:25.347 402789 ERROR nova.compute.manager [req-47ec9fb1-9daa-4c24-8673-538797a217cc 8769cfaf608349bd9fbb36f92b188fe3 e1e8e8397cde49899b00d09dec76b29e - default default] [instance: 5acb1dc3-0685-4980-977b-b6dfff6dfb45] Instance failed to spawn: libvirt.libvirtError: internal error: process exited while connecting to monitor: 2022-10-24T02:14:24.819644Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/var/lib/nova/mnt/99c4f7e8b15983b65e20cb7d37db899f/volume-8f478992-dde3-4c20-9005-61cd34eacf30","aio":"native","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}: Could not open '/var/lib/nova/mnt/99c4f7e8b15983b65e20cb7d37db899f/volume-8f478992-dde3-4c20-9005-61cd34eacf30': Permission denied



I've added the appropriate rules to the AppArmor profile (using Ubuntu 22.04), so AppArmor isn't blocking this access.
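
(For anyone who wants to double-check that step: AppArmor denials are easy to spot on the compute node, e.g.

  aa-status | grep libvirt          # libvirt keeps one libvirt-<uuid> profile per VM
  dmesg | grep -i 'apparmor.*denied'

and nothing relevant shows up here, which matches the error being plain file permissions.)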

While the instance was spawning, I checked the ownership of the volume file on the compute node:



root at compute-node:/var/lib/nova/mnt$ ls -al



total 17

drwxr-xr-x  3 nova  nova  4096 Oct 24 04:19 .

drwxr-xr-x 12 nova  nova  4096 Oct 24 02:14 ..

drwxr-x---  2 64061 64061   11 Oct 24 04:19 99c4f7e8b15983b65e20cb7d37db899f



It seems like the cinder user on the storage node creates the volume file with UID/GID 64061 (the cinder user's UID/GID).

But the nova user on the compute node has UID/GID 64060 and therefore cannot open the volume file (/var/lib/nova/mnt/99c4f7e8b15983b65e20cb7d37db899f/volume-8f478992-dde3-4c20-9005-61cd34eacf30).
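
(The mismatch is easy to confirm with id on each node; the output below is illustrative, reconstructed from the numbers above:

  root at compute-node:~# id nova
  uid=64060(nova) gid=64060(nova) groups=64060(nova)
  root at storage-node:~# id cinder
  uid=64061(cinder) gid=64061(cinder) groups=64061(cinder)
)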



Should I manually set the UID/GID of the nova user on the compute node to 64061, so that the nova user on the compute node and the cinder user on the storage node share the same UID/GID?

This feels like duct tape rather than a proper solution. Did I miss something?



Thank you
