[Openstack] cinder nfs setup (havana/rdo)

Dimitri Maziuk dmaziuk at bmrb.wisc.edu
Thu Feb 13 20:31:52 UTC 2014


On 02/13/2014 12:33 PM, Dimitri Maziuk wrote:
> Hi all,
> 
> I've got a CentOS 6 RDO setup with
> - controller node w/ everything (compute running but no instances),
> - compute node (active),
> - nfs servers for cinder backend
> -- basically it started as an all-in-one, then I added the big RAID
> NFS server and a compute server with lots of cores/RAM.
> 
> The guests are all running off bootable cinder volumes.

Now after a few tweaks and restarts:

On the compute node:

df:
> Filesystem      Size  Used Avail Use% Mounted on
...
> hydra:/cinder    19T  320G   18T   2% /var/lib/nova/mnt/ae28c17218ca3a56249d470541875348

ls:
> /var/lib/nova/mnt/:
> total 8
> drwxr-xr-x 2 qemu qemu 4096 Feb 12 20:38 ae28c17218ca3a56249d470541875348

What should the ownership be: nova? qemu?
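(I'm guessing the qemu:qemu ownership comes from libvirt on the compute
node: with dynamic_ownership enabled it chowns a domain's disk files to
its configured user/group when the domain starts. A quick way to check --
the path assumes the stock RDO packaging:

  grep -E '^(user|group|dynamic_ownership)' /etc/libvirt/qemu.conf

If those lines are commented out, libvirt is using its built-in
defaults, which I believe are qemu/qemu with dynamic_ownership on for
the RHEL/CentOS builds.)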

>  ls -l /var/lib/nova/mnt/ae28c17218ca3a56249d470541875348
> total 334495748
> -rw-rw-rw- 1 qemu   qemu   107374182400 Feb  4 15:27 volume-073e04ce-1238-4092-913d-6bf6500cd0cf
> -rw-rw-rw- 1 qemu   qemu    42949672960 Feb 13 13:29 volume-16e0f0bb-98a8-43f7-9f5e-b58b9f3a71c7
> -rw-rw-rw- 1 qemu   qemu     5368709120 Jan  9 18:00 volume-1b5e308f-ffcf-4773-8e22-f0a5a44a0b67
> -rw-rw-rw- 1 qemu   qemu    10737418240 Jan 27 17:00 volume-40e3f301-045f-4fc4-9501-9d0199da98f1
> -rw-rw-rw- 1 nobody nobody  10737418240 Feb 13 13:19 volume-513a3590-06a2-46fb-baa9-73fe45902f98
> -rw-rw-rw- 1 qemu   qemu    10737418240 Jan  8 13:12 volume-5660e2c1-db70-42d1-abc3-81cd8799e1a3
> -rw-rw-rw- 1 qemu   qemu    42949672960 Dec 26 13:52 volume-657dbaad-8cb4-48b9-a6db-4b81ec54ca96
> -rw-rw-rw- 1 nobody nobody  10737418240 Feb 13 13:18 volume-75b6a899-2da3-4638-9f74-352645b8b030
> -rw-rw-rw- 1 qemu   qemu    10737418240 Feb  4 15:23 volume-79532454-1305-4d1e-adb0-9d421dda9192
> -rw-rw-rw- 1 qemu   qemu    10737418240 Feb  5 17:05 volume-87878899-a8a8-433e-ae9c-f94ff810796a
> -rw-rw-rw- 1 qemu   qemu     8589934592 Feb  5 17:05 volume-8d36c6c4-61e5-461d-9824-c7ba49a86e1e
> -rw-rw-rw- 1 qemu   qemu     8589934592 Feb 12 12:29 volume-97bf456a-5afe-4c77-bd42-146f54698967
> -rw-rw-rw- 1 qemu   qemu    32212254720 Jan  9 17:59 volume-bcc3a83c-f54a-48e2-b3b4-df1df912fdd7
> -rw-rw-rw- 1 qemu   qemu     8589934592 Feb 12 20:01 volume-c4c24384-8835-493b-b044-343e8dec7b13
> -rw-rw-rw- 1 nobody nobody  10737418240 Feb 13 13:11 volume-e84d5b9c-19ce-42c7-9012-2f42f4961660
> -rw-rw-rw- 1 qemu   qemu    10737418240 Feb  5 13:06 volume-f3d56bff-79ed-4b6b-a433-0e988b438194

I did run chown qemu:qemu * on the NFS server before the restarts, so
something in cinder or nova (?) is changing some of the volumes back to
nobody:nobody -- and those volumes then either fail to boot or come up
mounted read-only in the guest.
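FWIW, the usual ways files end up as nobody:nobody over NFS are root
squashing on the export and, with NFSv4, an idmapd domain mismatch
between server and client. Quick checks (assuming plain kernel nfsd),
on the NFS server:

  exportfs -v                      # prints root_squash/all_squash and anonuid/anongid per export
  grep -i domain /etc/idmapd.conf

and on the compute node:

  grep nova/mnt /proc/mounts       # shows the NFS version and mount options actually in use
  grep -i domain /etc/idmapd.conf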

Any ideas what the ownership should be and what's changing it?
Should I export with all_squash and anonuid/anongid set to qemu's
uid/gid? Or to nova's?
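If all_squash is the way to go, I'm assuming the export line would look
something like this (compute-node and the 107 uid/gid are placeholders --
use whatever "id qemu" or "id nova" reports on the compute node):

  # /etc/exports on the NFS server: map every client uid/gid to one fixed owner
  /cinder  compute-node(rw,sync,no_subtree_check,all_squash,anonuid=107,anongid=107)

followed by exportfs -ra to re-export.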

Right now half the guests become unbootable after server restarts.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
