[Openstack-operators] Configuring NFS for Grizzly - Cinder

Jaren Janke fractil at gmail.com
Thu Apr 18 19:48:30 UTC 2013


I use OpenSolaris (OpenIndiana) & ZFS pools for my OpenStack instance
storage via NFS. This setup provides high-performance NFS for several
reasons, one of which is RAM & SSD read/write caching. It also provides
sound data integrity/backup with ZFS snapshots.

My OpenStack instance pools are accessible only by the controller/compute
nodes.
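As a rough sketch of how that restriction can be done on OpenSolaris/OpenIndiana (the dataset name and subnet below are placeholders, not my actual values):

```
# Share the dataset over NFS, read/write only to the
# controller/compute subnet (hypothetical names/addresses)
zfs set sharenfs='rw=@10.0.0.0/24,root=@10.0.0.0/24' tank/instances
```

The root= option matters because nova/libvirt perform chown operations on the share.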

I change the instance storage location to the NFS mount point in
/etc/nova/nova.conf:
instances_path=</your/mount/point>

You can also mount your share to the default instance storage location
(/var/lib/nova/instances) as Joe mentioned.

I force NFS version 3 (vers=3) as one of the mount options in the NFS
mount declaration in /etc/fstab, because NFS version 4 has some known bugs
related to permissions & NFS client permission changes. When a VM is
created, the nova user initially writes the disk, kernel, and ramdisk
files, then changes their ownership to the libvirt user. This chown does
not complete properly over NFS version 4.
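For illustration, the /etc/fstab entry might look like this (server name, export path, and mount point are placeholders; the other options are common choices, not a requirement):

```
# /etc/fstab - vers=3 forces NFSv3 to avoid the NFSv4 chown problem
nfs-server:/tank/instances  /srv/nova/instances  nfs  vers=3,rw,hard,intr  0  0
```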

I have been experimenting with NFS version 4 by changing the Nobody-User
and Nobody-Group mapping from nobody & nogroup to root & root. It has
worked well so far, but it does raise security concerns. I won't use this
workaround in production until I can confirm or eliminate any unintended
security vulnerabilities.
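For reference, that mapping lives in the ID-mapping configuration on the NFSv4 client (typically /etc/idmapd.conf on Linux); a sketch of the change:

```
# /etc/idmapd.conf (NFSv4 client) - map unresolvable IDs to root
# instead of nobody/nogroup. Security implications untested; see above.
[Mapping]
Nobody-User = root
Nobody-Group = root
```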


On Thu, Apr 18, 2013 at 12:20 PM, Joe Topjian <joe.topjian at cybera.ca> wrote:

> Hi Mitch,
>
> To use NFS for instance storage, just mount an NFS export under
> /var/lib/nova/instances. This is the default directory where instances are
> stored. As long as nova has write access to the mounted directory, that
> should be it.
>
> Cinder takes no part in instance storage so there's no actual driver.
> There is an NFS driver for volumes, though. This is described here:
>
> http://docs.openstack.org/trunk/openstack-block-storage/admin/content/NFS-driver.html
>
> I hope that clarifies things for you.
>
> Joe
>
>
> On Thu, Apr 18, 2013 at 9:26 AM, Mitch Anderson <mitch at metauser.net> wrote:
>
>> I'm new to OpenStack and want to use NFS for my VM storage.  I see
>> mention of the driver but have yet to figure out where services need to be
>> configured for it all to work.
>>
>> Is there any documentation out there for configuring NFS storage in a
>> multi node environment?  Do just the compute nodes need to be able to mount
>> it?  Do I still configure the cinder-api and cinder-scheduler on the
>> controller?
>>
>> My reference at this point has been this doc:
>> https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
>>
>> But I'm at the point of installing cinder, and it's doing everything on
>> the controller with the loop driver.  And my controller doesn't have enough
>> disk space for more than a few VMs... and I'd like to be able to spin up
>> enough to max out my two compute nodes.
>>
>> Any guides or general information would be much appreciated.
>>
>> -Mitch
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
>
> --
> Joe Topjian
> Systems Administrator
> Cybera Inc.
>
> www.cybera.ca
>
> Cybera is a not-for-profit organization that works to spur and support
> innovation, for the economic benefit of Alberta, through the use
> of cyberinfrastructure.
>
>
>