Hello again,
Documentation for this is being added here:
https://review.opendev.org/c/openstack/manila/+/911632
Would welcome your feedback!
Thanks!
On Wed, Mar 6, 2024 at 3:15 PM Goutham Pacha Ravi
<gouthampravi@gmail.com> wrote:
>
> On Wed, Mar 6, 2024 at 2:32 PM wodel youchi <wodel.youchi@gmail.com> wrote:
> >
> > Hi,
> >
> > Thanks for your help.
> >
> > I've created an NFS service on my ceph cluster with haproxy and keepalived; the VIP is 20.1.0.201.
> > I've created this keyring for the Manila client:
> > [client.manila]
> > key = AQCra99lC3DT*************88Iv3SId1w==
> > caps mgr = "allow rw"
> > caps mon = "allow r"
> > caps osd = "allow rw pool=.nfs" <------------------------ Should I keep this line?
>
> No, it isn't necessary.
> The CephFS/NFS driver (using this user/keyring) doesn't directly
> manipulate anything in the ".nfs" pool, so you can remove this osd
> capability.
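>
> If you'd rather adjust the existing user than recreate the keyring, something
> along these lines should work (a sketch; note that "ceph auth caps" replaces
> all of the user's caps at once, so list every cap you want to keep):
>
> ceph auth caps client.manila mon 'allow r' mgr 'allow rw'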
>
> >
> > Should I keep the last line about the .nfs pool?
> >
> >
> > I added the NFS driver to the manila service following the documentation:
> > cat /etc/kolla/manila-api/manila.conf
> > ...
> > [cephfsnfs1]
> > driver_handles_share_servers = False
> > share_backend_name = CEPHFSNFS1
> > share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
> > cephfs_protocol_helper_type = NFS
> > cephfs_conf_path = /etc/ceph/ceph.conf
> > cephfs_auth_id = manila
> > cephfs_cluster_name = ceph
> > cephfs_filesystem_name = cephfs
> > cephfs_ganesha_server_is_remote = False
> > cephfs_ganesha_server_ip = 20.1.0.201
> > ganesha_rados_store_enable = True
> > ganesha_rados_store_pool_name = .nfs <----------------------------------- Is this line correct? Should I keep it?
>
> No, remove the "cephfs_ganesha_server_is_remote",
> "cephfs_ganesha_server_ip", "ganesha_rados_store_enable", and
> "ganesha_rados_store_pool_name" options and replace them with this
> option:
>
> "cephfs_nfs_cluster_id=<name/id of the NFS service created via cephadm>"
>
> For example, if you created a cluster like this: "ceph nfs cluster
> create mycephfsnfscluster ...", the ID would be "mycephfsnfscluster".
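>
> To illustrate, the backend section would then look roughly like this
> (assuming your NFS cluster id really is "mycephfsnfscluster"; keep the
> rest of your values as they are):
>
> [cephfsnfs1]
> driver_handles_share_servers = False
> share_backend_name = CEPHFSNFS1
> share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
> cephfs_protocol_helper_type = NFS
> cephfs_conf_path = /etc/ceph/ceph.conf
> cephfs_auth_id = manila
> cephfs_cluster_name = ceph
> cephfs_filesystem_name = cephfs
> cephfs_nfs_cluster_id = mycephfsnfscluster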
>
>
> >
> >
> > I've created an NFS share, and it seems to have been created:
> > Openstack
> > (2023.2) [deployer@rcdndeployer2 ~]$ openstack share export location list cephnfsshare4
> > +--------------------------------------+--------------------------------------------------------------------------------------------------------+-----------+
> > | ID | Path | Preferred |
> > +--------------------------------------+--------------------------------------------------------------------------------------------------------+-----------+
> > | 4c5f5c4b-8308-40e5-9ee9-afed9f99a257 | 20.1.0.201:/volumes/_nogroup/bf98708e-69a4-4544-902b-fe8bd18d0d99/909330f4-9287-4af7-95eb-44f66e6e14dc | False |
> > +--------------------------------------+--------------------------------------------------------------------------------------------------------+-----------+
> >
> >
> >
> > Ceph
> > [root@controllera ~]# ceph fs subvolume ls cephfs
> > [
> > {
> > "name": "2bc1651d-a52b-44cc-b50d-7eab9fab35ed"
> > },
> > {
> > "name": "bf98708e-69a4-4544-902b-fe8bd18d0d99"
> > }
> > ]
> > [root@controllera ~]# ceph fs subvolume getpath cephfs bf98708e-69a4-4544-902b-fe8bd18d0d99
> > /volumes/_nogroup/bf98708e-69a4-4544-902b-fe8bd18d0d99/909330f4-9287-4af7-95eb-44f66e6e14dc
> >
> > I granted it an access rule, but it went into an error state:
> > Openstack
> > (2023.2) [deployer@rcdndeployer2 ~]$ openstack share access list cephnfsshare4
> > +--------------------------------------+-------------+------------+--------------+-------+------------+----------------------------+----------------------------+
> > | ID | Access Type | Access To | Access Level | State | Access Key | Created At | Updated At |
> > +--------------------------------------+-------------+------------+--------------+-------+------------+----------------------------+----------------------------+
> > | 8472c761-1d8f-4730-91a6-864f2c9cf85d | ip | 20.1.11.29 | rw | error | None | 2024-03-05T17:23:57.186732 | 2024-03-05T17:23:59.261406 |
> > +--------------------------------------+-------------+------------+--------------+-------+------------+----------------------------+----------------------------+
> >
>
> After you've made the configuration changes above, restart your
> manila-share manager service, delete and re-apply this rule, and try
> your mount commands again.
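>
> Something along these lines, as a sketch (rule ID taken from your listing
> above; I'm assuming the openstack CLI with the manila plugin, as in your
> output):
>
> openstack share access delete cephnfsshare4 8472c761-1d8f-4730-91a6-864f2c9cf85d
> openstack share access create cephnfsshare4 ip 20.1.11.29 --access-level rw
> openstack share access list cephnfsshare4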
>
> >
> > When I tried to mount the share, I ran into several problems.
> >
> > From the client VM (IP 20.1.11.26 /16)
> >
> > - Mounting using the volume path didn't work; I got:
> > root@alicer9manila ~]# mount -v -t nfs4 20.1.0.201:/volumes/_nogroup/bf98708e-69a4-4544-902b-fe8bd18d0d99/909330f4-9287-4af7-95eb-44f66e6e14dc /mnt/nfscephfs/
> > mount.nfs4: timeout set for Wed Mar 6 22:21:12 2024
> > mount.nfs4: trying text-based options 'vers=4.2,addr=20.1.0.201,clientaddr=20.1.11.29'
> > mount.nfs4: mount(2): No such file or directory
> > mount.nfs4: mounting 20.1.0.201:/volumes/_nogroup/bf98708e-69a4-4544-902b-fe8bd18d0d99/909330f4-9287-4af7-95eb-44f66e6e14dc failed, reason given by server: No such file or directory
> >
> > So I used the root directory:
> > [root@alicer9manila ~]# mount -v -t nfs4 20.1.0.201:/ /mnt/nfscephfs/
> > mount.nfs4: timeout set for Wed Mar 6 22:22:13 2024
> > mount.nfs4: trying text-based options 'vers=4.2,addr=20.1.0.201,clientaddr=20.1.11.29'
> >
> >
> >
> > - df -h doesn't show the share:
> > [root@alicer9manila ~]# df -h
> > Filesystem Size Used Avail Use% Mounted on
> > devtmpfs 4.0M 0 4.0M 0% /dev
> > tmpfs 882M 0 882M 0% /dev/shm
> > tmpfs 353M 5.0M 348M 2% /run
> > /dev/vda5 19G 1.5G 18G 8% /
> > /dev/vda2 994M 271M 724M 28% /boot
> > /dev/vda1 100M 7.0M 93M 7% /boot/efi
> > tmpfs 177M 0 177M 0% /run/user/1000
> >
> > - mount shows that the share is mounted:
> > [root@alicer9manila ~]# mount | grep nfs
> > rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> > 20.1.0.201:/ on /mnt/nfscephfs type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=20.1.11.29,local_lock=none,addr=20.1.0.201)
> >
> >
> > - When trying to create a file on the mount point, I got a read-only filesystem error:
> > [root@alicer9manila ~]# cd /mnt/nfscephfs/
> > [root@alicer9manila nfscephfs]# touch file
> > touch: cannot touch 'file': Read-only file system
> >
> >
> >
> > How can I debug this?
> >
> >
> > Regards.
> >
> > On Wed, Mar 6, 2024 at 4:36 PM, Carlos Silva <ces.eduardo98@gmail.com> wrote:
> >>
> >>
> >>
> >> On Sun, Mar 3, 2024 at 8:49 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I am having trouble understanding the documentation.
> >>>
> >>> The kolla-ansible documentation about Manila and CephFS-NFS says this:
> >>> Prerequisites
> >>>
> >>> 3.0 or later versions of NFS-Ganesha.
> >>> NFS client installed in the guest.
> >>> Network connectivity between your Ceph cluster’s public network and NFS-Ganesha server. <-------------------------- NFS-Ganesha is an external service?
> >>
> >> Yes, it is an external service.
> >> We introduced the ability to use the cephadm-deployed Ganesha a couple of releases ago, and started recommending it, but some docs still have to be updated, which is the case for this one.
> >> Some of the steps are no longer necessary, and there is less to do during setup.
> >>>
> >>> Network connectivity between your NFS-Ganesha server and the manila guest.
> >>>
> >>> The Ceph documentation says that Ceph can create an NFS-Ganesha cluster, which simplifies the deployment, but there are some things that are not clear to me:
> >>> - When Ceph creates the NFS cluster, it creates a new pool; in my deployment it is called ".nfs"
> >>> - Manila using CephFS-NFS needs a pool to store the export configuration, referred to as <ganesha_rados_store_pool_name>
> >>> Is it the same pool?
> >>
> >> You don't need to set this option when using a Ganesha deployed by cephadm, or even to create the pool as suggested. That will already be done.
> >>>
> >>>
> >>> - The Ganesha configuration says: "For a fresh setup, make sure to create the Ganesha export index object as an empty object before starting the Ganesha server."
> >>> echo | sudo rados -p ${GANESHA_RADOS_STORE_POOL_NAME} put ganesha-export-index -
> >>> - If it's the .nfs pool, did Ceph put the ganesha-export-index in it already, or should I do it?
> >>
> >> No need for the ganesha export index part either; it will already be handled in this case.
> >>>
> >>>
> >>>
> >>> Thanks.
> >>>
> >>> On Fri, Mar 1, 2024 at 10:04 PM, Carlos Silva <ces.eduardo98@gmail.com> wrote:
> >>>>
> >>>> Hello,
> >>>>
> >>>> On Fri, Mar 1, 2024 at 2:09 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
> >>>>>
> >>>>> Hi,
> >>>>>
> >>>>> I am trying to configure Manila with CephFS NFS as a backend.
> >>>>> My first problem is with installing and configuring NFS-Ganesha:
> >>>>> - Do I have to deploy NFS-Ganesha from Ceph?
> >>>>
> >>>> Yeah, I think the most viable option at this point would be to deploy Ganesha using cephadm.
> >>>> The Manila Ceph NFS driver supports it, and you'll get good benefits using this approach.
> >>>> You can find some guidance on how to deploy NFS Ganesha with cephadm in this doc [0].
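> >>>>
> >>>> As a rough sketch (placeholders for your own cluster id, placement, and
> >>>> virtual IP; check the Ceph doc above for the exact options available in
> >>>> your release):
> >>>>
> >>>> ceph nfs cluster create <cluster-id> "<placement>" --ingress --virtual-ip <vip>/<prefix>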
> >>>>>
> >>>>> - What about the /etc/ganesha/ganesha.conf config file?
> >>>>
> >>>> This Manila documentation [1] can be useful for understanding the config files.
> >>>>>
> >>>>>
> >>>>> Could someone give me some steps to follow...
> >>>>>
> >>>>> Regards.
> >>>>
> >>>>
> >>>> [0] https://docs.ceph.com/en/latest/cephadm/services/nfs/
> >>>> [1] https://docs.openstack.org/manila/latest/contributor/ganesha.html#nfs-ganesha-configuration