On Thu, Mar 7, 2024 at 2:38 PM wodel youchi <wodel.youchi@gmail.com> wrote:

Indeed, I am not using the Reef version; I am using Quincy. I've followed kolla's documentation.
Should I use the default mode with NFS?

Yes; you can use Ceph NFS without the ingress service with Quincy. Without an ingress service, client traffic reaches the NFS-Ganesha server(s) directly and export rules (manila access rules) are properly enforced. You will need an ingress service only if you want a unified entry point to your NFS-Ganesha cluster (see the example at the end of this message).

Can I upgrade Ceph to Reef on OpenStack 2023.2 without issues?

We recently began testing Reef with 2023.2 (https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/909255) and we haven't seen any issues. Since Reef is relatively new, we are slowly adding upstream devstack-based test jobs for older versions of OpenStack. However, this is being tested and known to work elsewhere. For example, Red Hat intends to support the use of Ceph's Reef release with OpenStack 2023.1; you'll see test jobs running against Red Hat's installer (https://github.com/openstack-k8s-operators/manila-operator) that already do this.
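
For reference, creating the NFS cluster without the ingress service is simply (a sketch reusing the placement from your command below):

ceph nfs cluster create mynfs "3 controllera controllerb controllerc"

Clients then connect directly to one of the NFS-Ganesha hosts rather than to a virtual IP.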
On Thu, Mar 7, 2024, 20:53 Goutham Pacha Ravi <gouthampravi@gmail.com> wrote:

On Thu, Mar 7, 2024 at 5:37 AM wodel youchi <wodel.youchi@gmail.com> wrote:

Hi,

I redid the configuration as mentioned and used my NFS cluster ID. The share was created, the access rule no longer shows the error status, and df -h shows the NFS mount, but the filesystem is still read-only.
....

[root@controllera ~]# ceph nfs cluster create mynfs "3 controllera controllerb controllerc" --ingress --virtual_ip 20.1.0.201 --ingress-mode haproxy-protocol
[root@controllera ~]# ceph orch ls | grep nfs
ingress.nfs.mynfs 20.1.0.201:2049,9049 6/6 5m ago 4d controllera;controllerb;controllerc;count:3
nfs.mynfs ?:12049 3/3 5m ago 4d controllera;controllerb;controllerc;count:3

cat manila.conf
.....
[cephfsnfs1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNFS1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_filesystem_name = cephfs
cephfs_nfs_cluster_id = mynfs

[root@rcdndeployer2 ~]# cat /etc/kolla2023dot2/config/manila/ceph.client.manila.keyring
[client.manila]
key = AQCra99lC3DTAhAA5Wk+trH9dc/1TIIv3SId1w==
caps mgr = "allow rw"
caps mon = "allow r"

(2023.2) [deployer@rcdndeployer2 ~]$ openstack share list
+--------------------------------------+------------------+------+-------------+-----------+-----------+------------------+----------------------------------+-------------------+
| ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+------------------+------+-------------+-----------+-----------+------------------+----------------------------------+-------------------+
| 9b25edbb-4aa3-401e-89d1-6f1f130e62c3 | cephnativeshare1 | 3 | CEPHFS | available | False | cephfsnativetype | controllerb@cephfsnative1#cephfs | nova |
| 9aa19f2a-7b7e-4274-b22b-5a2a76e0f4a2 | cephnfsshare4 | 4 | NFS | available | False | cephfsnfstype | controllera@cephfsnfs1#cephfs | nova |
+--------------------------------------+------------------+------+-------------+-----------+-----------+------------------+----------------------------------+-------------------+

(2023.2) [deployer@rcdndeployer2 ~]$ openstack share export location list cephnfsshare4
+--------------------------------------+--------------------------------------------------------------------------------------------------------+-----------+
| ID | Path | Preferred |
+--------------------------------------+--------------------------------------------------------------------------------------------------------+-----------+
| 8c614f22-c221-45f9-960d-b85b8b3de94c | 20.1.0.201:/volumes/_nogroup/f7a9b751-f013-4a8b-8ee9-24da834c6da7/fba335fe-7122-4b2c-a955-c3884ce79e42 | False | <-------- my storage net 20.1.0.0/16
| 31ed7a75-0537-46d5-9b0b-1d2a1336b10b | 20.3.0.23:/volumes/_nogroup/f7a9b751-f013-4a8b-8ee9-24da834c6da7/fba335fe-7122-4b2c-a955-c3884ce79e42 | False | <-------- my api net (I don't know why the share was exported on this network ????)
+--------------------------------------+--------------------------------------------------------------------------------------------------------+-----------+

Hmm, this is strange; I don't know where this extra export path is coming from; your configuration looks fine. Could you please report a bug to https://bugs.launchpad.net/manila?
I'd like to see the manila-share manager log file; could you enable debug=True and attach it to the bug?
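
For example, something like this in the [DEFAULT] section of your manila.conf (restart the manila-share service afterwards):

[DEFAULT]
debug = True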
(2023.2) [deployer@rcdndeployer2 ~]$ openstack share access list cephnfsshare4
+--------------------------------------+-------------+------------+--------------+--------+------------+----------------------------+----------------------------+
| ID | Access Type | Access To | Access Level | State | Access Key | Created At | Updated At |
+--------------------------------------+-------------+------------+--------------+--------+------------+----------------------------+----------------------------+
| ccdbfba5-691a-4591-93ee-cef30facc5d5 | ip | 20.1.11.29 | rw | active | None | 2024-03-07T13:19:29.199046 | 2024-03-07T13:19:29.487724 |
+--------------------------------------+-------------+------------+--------------+--------+------------+----------------------------+----------------------------+

From the client machine:

[root@alicer9manila ~]# mount -t nfs4 20.1.0.201:/volumes/_nogroup/f7a9b751-f013-4a8b-8ee9-24da834c6da7/fba335fe-7122-4b2c-a955-c3884ce79e42 /mnt/nfscephfs/
[root@alicer9manila ~]# df -h | grep nfs
20.1.0.201:/volumes/_nogroup/f7a9b751-f013-4a8b-8ee9-24da834c6da7/fba335fe-7122-4b2c-a955-c3884ce79e42 4.0G 0 4.0G 0% /mnt/nfscephfs

[root@alicer9manila ~]# touch /mnt/nfscephfs/file
touch: cannot touch '/mnt/nfscephfs/file': Read-only file system
What version of Ceph are you using? I ask because the ingress service needs to be set up with "haproxy-protocol" as the ingress mode, and this is only available in Reef.
If you're using an older version of Ceph, you cannot use the ingress service; if you do, manila's access control rules will not work because the NFS service only sees the ingress's internal IP address instead of the eventual client IP address.
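
You can confirm what your cluster is running with the standard commands:

[root@controllera ~]# ceph version     <-------- overall cluster version
[root@controllera ~]# ceph versions    <-------- per-daemon breakdown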
Any ideas?

Regards.

On Thu, Mar 7, 2024 at 02:02, Goutham Pacha Ravi <gouthampravi@gmail.com> wrote:

Hello again,
Documentation for this is being added here:
https://review.opendev.org/c/openstack/manila/+/911632
Would welcome your feedback!
Thanks!
On Wed, Mar 6, 2024 at 3:15 PM Goutham Pacha Ravi
<gouthampravi@gmail.com> wrote:
>
> On Wed, Mar 6, 2024 at 2:32 PM wodel youchi <wodel.youchi@gmail.com> wrote:
> >
> > Hi,
> >
> > Thanks for your help.
> >
> > I've created an NFS service on my Ceph cluster with haproxy and keepalived; the VIP is 20.1.0.201.
> > I've created this keyring for the Manila client:
> > [client.manila]
> > key = AQCra99lC3DT*************88Iv3SId1w==
> > caps mgr = "allow rw"
> > caps mon = "allow r"
> > caps osd = "allow rw pool=.nfs" <------------------------ Should I keep this line?
>
> No, it isn't necessary.
> The CephFS/NFS driver (using this user/keyring) doesn't directly
> manipulate anything in the ".nfs" pool, so you can remove this osd
> capability.
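>
> For example, you could trim the capabilities in place with something
> like this (note that "ceph auth caps" replaces all caps for the
> entity, so list everything you want to keep):
>
> ceph auth caps client.manila mon 'allow r' mgr 'allow rw'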
>
> >
> > Should I keep the last line about the .nfs pool?
> >
> >
> > I added the NFS driver to the manila service using the documentation:
> > cat /etc/kolla/manila-api/manila.conf
> > ...
> > [cephfsnfs1]
> > driver_handles_share_servers = False
> > share_backend_name = CEPHFSNFS1
> > share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
> > cephfs_protocol_helper_type = NFS
> > cephfs_conf_path = /etc/ceph/ceph.conf
> > cephfs_auth_id = manila
> > cephfs_cluster_name = ceph
> > cephfs_filesystem_name = cephfs
> > cephfs_ganesha_server_is_remote = False
> > cephfs_ganesha_server_ip = 20.1.0.201
> > ganesha_rados_store_enable = True
> > ganesha_rados_store_pool_name = .nfs <----------------------------------- Is this line correct, should I keep it?
>
> No, remove "cephfs_ganesha_server_is_remote",
> "cephfs_ganesha_server_ip", "ganesha_rados_store_enable",
> "ganesha_rados_store_pool_name" options and replace them with this
> option
>
> "cephfs_nfs_cluster_id=<name/id of the NFS service created via cephadm>"
>
> for example, if you created a cluster like this: "ceph nfs cluster
> create mycephfsnfscluster ...", the ID would be "mycephfsnfscluster"
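>
> With those changes, your backend section would look something like
> this (a sketch using the example cluster name above):
>
> [cephfsnfs1]
> driver_handles_share_servers = False
> share_backend_name = CEPHFSNFS1
> share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
> cephfs_protocol_helper_type = NFS
> cephfs_conf_path = /etc/ceph/ceph.conf
> cephfs_auth_id = manila
> cephfs_cluster_name = ceph
> cephfs_filesystem_name = cephfs
> cephfs_nfs_cluster_id = mycephfsnfscluster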
>
>
> >
> >
> > I've created an NFS share, and it seems to have been created:
> > Openstack
> > (2023.2) [deployer@rcdndeployer2 ~]$ openstack share export location list cephnfsshare4
> > +--------------------------------------+--------------------------------------------------------------------------------------------------------+-----------+
> > | ID | Path | Preferred |
> > +--------------------------------------+--------------------------------------------------------------------------------------------------------+-----------+
> > | 4c5f5c4b-8308-40e5-9ee9-afed9f99a257 | 20.1.0.201:/volumes/_nogroup/bf98708e-69a4-4544-902b-fe8bd18d0d99/909330f4-9287-4af7-95eb-44f66e6e14dc | False |
> > +--------------------------------------+--------------------------------------------------------------------------------------------------------+-----------+
> >
> >
> >
> > Ceph
> > [root@controllera ~]# ceph fs subvolume ls cephfs
> > [
> > {
> > "name": "2bc1651d-a52b-44cc-b50d-7eab9fab35ed"
> > },
> > {
> > "name": "bf98708e-69a4-4544-902b-fe8bd18d0d99"
> > }
> > ]
> > [root@controllera ~]# ceph fs subvolume getpath cephfs bf98708e-69a4-4544-902b-fe8bd18d0d99
> > /volumes/_nogroup/bf98708e-69a4-4544-902b-fe8bd18d0d99/909330f4-9287-4af7-95eb-44f66e6e14dc
> >
> > I gave it an access rule, but I got an error:
> > Openstack
> > (2023.2) [deployer@rcdndeployer2 ~]$ openstack share access list cephnfsshare4
> > +--------------------------------------+-------------+------------+--------------+-------+------------+----------------------------+----------------------------+
> > | ID | Access Type | Access To | Access Level | State | Access Key | Created At | Updated At |
> > +--------------------------------------+-------------+------------+--------------+-------+------------+----------------------------+----------------------------+
> > | 8472c761-1d8f-4730-91a6-864f2c9cf85d | ip | 20.1.11.29 | rw | error | None | 2024-03-05T17:23:57.186732 | 2024-03-05T17:23:59.261406 |
> > +--------------------------------------+-------------+------------+--------------+-------+------------+----------------------------+----------------------------+
> >
>
> After you've made the configuration changes above, restart your
> manila-share manager service, then delete and re-apply this rule and
> retry your mount commands.
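>
> For example (a sketch; the container name assumes a kolla-ansible
> deployment, and the rule ID is the one from your access list above):
>
> docker restart manila_share
> openstack share access delete cephnfsshare4 8472c761-1d8f-4730-91a6-864f2c9cf85d
> openstack share access create cephnfsshare4 ip 20.1.11.29 --access-level rw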
>
> >
> > When I tried to mount the share, I ran into several problems.
> >
> > From the client VM (IP 20.1.11.26 /16)
> >
> > - Mounting using the volume path didn't work; I got:
> > root@alicer9manila ~]# mount -v -t nfs4 20.1.0.201:/volumes/_nogroup/bf98708e-69a4-4544-902b-fe8bd18d0d99/909330f4-9287-4af7-95eb-44f66e6e14dc /mnt/nfscephfs/
> > mount.nfs4: timeout set for Wed Mar 6 22:21:12 2024
> > mount.nfs4: trying text-based options 'vers=4.2,addr=20.1.0.201,clientaddr=20.1.11.29'
> > mount.nfs4: mount(2): No such file or directory
> > mount.nfs4: mounting 20.1.0.201:/volumes/_nogroup/bf98708e-69a4-4544-902b-fe8bd18d0d99/909330f4-9287-4af7-95eb-44f66e6e14dc failed, reason given by server: No such file or directory
> >
> > So I used the root directory
> > [root@alicer9manila ~]# mount -v -t nfs4 20.1.0.201:/ /mnt/nfscephfs/
> > mount.nfs4: timeout set for Wed Mar 6 22:22:13 2024
> > mount.nfs4: trying text-based options 'vers=4.2,addr=20.1.0.201,clientaddr=20.1.11.29'
> >
> >
> >
> > - df -h doesn't show the share
> > [root@alicer9manila ~]# df -h
> > Filesystem Size Used Avail Use% Mounted on
> > devtmpfs 4.0M 0 4.0M 0% /dev
> > tmpfs 882M 0 882M 0% /dev/shm
> > tmpfs 353M 5.0M 348M 2% /run
> > /dev/vda5 19G 1.5G 18G 8% /
> > /dev/vda2 994M 271M 724M 28% /boot
> > /dev/vda1 100M 7.0M 93M 7% /boot/efi
> > tmpfs 177M 0 177M 0% /run/user/1000
> >
> > - mount shows that there is a mounted share
> > [root@alicer9manila ~]# mount | grep nfs
> > rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> > 20.1.0.201:/ on /mnt/nfscephfs type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=20.1.11.29,local_lock=none,addr=20.1.0.201)
> >
> >
> > - When trying to create a file on the mount point, I got: read-only filesystem
> > [root@alicer9manila ~]# cd /mnt/nfscephfs/
> > [root@alicer9manila nfscephfs]# touch file
> > touch: cannot touch 'file': Read-only file system
> >
> >
> >
> > How can I debug this?
> >
> >
> > Regards.
> >
> > On Wed, Mar 6, 2024 at 16:36, Carlos Silva <ces.eduardo98@gmail.com> wrote:
> >>
> >>
> >>
> >> On Sun, Mar 3, 2024 at 08:49, wodel youchi <wodel.youchi@gmail.com> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I am having trouble understanding the documentation.
> >>>
> >>> The kolla-ansible documentation about Manila and CephFS-NFS says this:
> >>> Prerequisites
> >>>
> >>> 3.0 or later versions of NFS-Ganesha.
> >>> NFS client installed in the guest.
> >>> Network connectivity between your Ceph cluster’s public network and NFS-Ganesha server. <-------------------------- NFS Ganesha is an external service.
> >>
> >> Yes, it is an external service.
> >> We introduced the ability to use the cephadm-deployed ganesha a couple of releases ago and started recommending it, but some docs still have to be updated, which is the case for this doc.
> >> Some of the steps are no longer necessary, and the number of things you'd need to do in the setup is smaller.
> >>>
> >>> Network connectivity between your NFS-Ganesha server and the manila guest.
> >>>
> >>> Ceph documentation says that Ceph can create an NFS-Ganesha cluster, which simplifies the deployment, but there are some things that are not clear to me:
> >>> - When Ceph creates the NFS cluster, it creates a new pool; in my deployment it is called ".nfs"
> >>> - Manila using cephfs-nfs needs a pool to save the export configuration, referred to as <ganesha_rados_store_pool_name>
> >>> Is it the same pool?
> >>
> >> You don't need to set this option when using a ganesha deployed by cephadm, or even create the pool as suggested. That will already be done.
> >>>
> >>>
> >>> - The Ganesha configuration says: for a fresh setup, make sure to create the Ganesha export index object as an empty object before starting the Ganesha server.
> >>> echo | sudo rados -p ${GANESHA_RADOS_STORE_POOL_NAME} put ganesha-export-index -
> >>> - If it's the .nfs pool, did Ceph put the ganesha-export-index in it already, or should I do it?
> >>
> >> No need for this ganesha export index part either; it will already be taken care of in this case.
> >>>
> >>>
> >>>
> >>> Thanks.
> >>>
> >>> On Fri, Mar 1, 2024 at 22:04, Carlos Silva <ces.eduardo98@gmail.com> wrote:
> >>>>
> >>>> Hello,
> >>>>
> >>>> On Fri, Mar 1, 2024 at 14:09, wodel youchi <wodel.youchi@gmail.com> wrote:
> >>>>>
> >>>>> Hi,
> >>>>>
> >>>>> I am trying to configure Manila with CephFS NFS as the backend.
> >>>>> My first problem is with installing and configuring NFS-Ganesha:
> >>>>> - Do I have to deploy NFS-Ganesha from Ceph?
> >>>>
> >>>> Yeah, I think the most viable option at this point would be to deploy ganesha using cephadm.
> >>>> The Manila Ceph NFS driver supports it and you'll get good benefits using this approach.
> >>>> You can find some guidance on how to deploy NFS Ganesha with cephadm in this doc [0].
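> >>>>
> >>>> For example, something along these lines (a sketch based on that doc; the cluster name and placement here are placeholders, adjust them to your environment):
> >>>>
> >>>> ceph nfs cluster create mynfscluster "3 host1 host2 host3"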
> >>>>>
> >>>>> - What about the /etc/ganesha/ganesha.conf config file?
> >>>>
> >>>> This Manila documentation [1] can be useful for understanding the config files.
> >>>>>
> >>>>>
> >>>>> Could someone give me some steps to follow...
> >>>>>
> >>>>> Regards.
> >>>>
> >>>>
> >>>> [0] https://docs.ceph.com/en/latest/cephadm/services/nfs/
> >>>> [1] https://docs.openstack.org/manila/latest/contributor/ganesha.html#nfs-ganesha-configuration