Hello community,

I'm trying to set up the Caracal release of Manila with an NFS-Ganesha backend. Ganesha version 4.0 is deployed using ceph orch in a Ceph cluster running version 17 (Quincy).

Backend configuration in manila.conf:

[ganesha_nfs]
driver_handles_share_servers = False
share_backend_name = ganesha_nfs
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_filesystem_name = {{ cephfs_filesystem_name }}
cephfs_nfs_cluster_id = {{ cephfs_nfs_cluster_id }}

I see the subvolume being created in Ceph:

`ceph fs subvolume ls <volume_name>`

but no export:

`ceph nfs export ls <cluster_id>`

What am I doing wrong? What am I missing? In the manila-share logs I see a request to create the subvolume, but I don't see a request to create an export; for some reason Manila doesn't even try to create one.
Hey Mikhail!

Thanks for reaching out. Your configuration looks correct, and the subvolume was created properly. However, the NFS export is only created when you apply an access rule in Manila. Please use `openstack share access create` to create an access rule (RW or RO) for your clients, and the NFS export should then show up.

Regards,
carloss

On Fri, Mar 21, 2025 at 12:35, Mikhail Okhrimenko <m.okhrimenko@pin-up.tech> wrote:
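For reference, a minimal sketch of the flow described above; the share size and the placeholder client IP and cluster ID are illustrative, not taken from this thread:

`$ openstack share create NFS 1 --name share-01`
`$ ceph nfs export ls <cluster_id>` (only the subvolume exists at this point, no export yet)
`$ openstack share access create share-01 ip <client_ip> --access-level rw`
`$ ceph nfs export ls <cluster_id>` (the export appears once the access rule is applied)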
Hi Carlos!

Thank you for your reply, I appreciate it. I tested what you described and it works, thanks! But I've run into another problem.

I created two instances with addresses 10.0.185.60 and 10.0.185.61 and created an access rule for each of them:

`$ openstack share access create share-01 ip 10.0.185.60 --access-level rw`
`$ openstack share access create share-01 ip 10.0.185.61 --access-level rw`

With these rules, when I mount the share and then create a file, I get the error "Read-only file system", even though everything looks fine on the Ceph side and the clients have access:

`$ ceph nfs export info nfs-cluster /volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4`
{
  "export_id": 1,
  "path": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
  "cluster_id": "nfs-cluster",
  "pseudo": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
  "access_type": "RO",
  "squash": "none",
  "security_label": true,
  "protocols": [4],
  "transports": ["TCP"],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs-cluster.1",
    "fs_name": "ceph"
  },
  "clients": [
    {
      "addresses": ["10.0.185.61", "10.0.185.60"],
      "access_type": "rw",
      "squash": "none"
    }
  ]
}

But if I create a rule for 0.0.0.0 and delete the previously created rules, the share mounts in RW mode, and on the Ceph side only the list of clients differs:

`$ ceph nfs export info nfs-cluster /volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4`
{
  "export_id": 1,
  "path": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
  "cluster_id": "nfs-cluster",
  "pseudo": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
  "access_type": "RO",
  "squash": "none",
  "security_label": true,
  "protocols": [4],
  "transports": ["TCP"],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs-cluster.1",
    "fs_name": "ceph"
  },
  "clients": [
    {
      "addresses": ["0.0.0.0"],
      "access_type": "rw",
      "squash": "none"
    }
  ]
}

Also, if I use the 10.0.185.0/24 network in the access rule, it does not work in RW mode either:

{
  "export_id": 1,
  "path": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
  "cluster_id": "nfs-cluster",
  "pseudo": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
  "access_type": "RO",
  "squash": "none",
  "security_label": true,
  "protocols": [4],
  "transports": ["TCP"],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs-cluster.1",
    "fs_name": "ck-ceph"
  },
  "clients": [
    {
      "addresses": ["10.0.185.0/24"],
      "access_type": "rw",
      "squash": "none"
    }
  ]
}

Regards,
Mikhail

On Fri, Mar 21, 2025 at 8:42 PM Carlos Silva <ces.eduardo98@gmail.com> wrote:
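For context, a sketch of how the symptom above typically looks from a client, assuming an NFSv4 mount of the export's pseudo path via the cluster's virtual IP; the mount point and exact output are illustrative:

`$ sudo mount -t nfs -o vers=4.1 <virtual_ip>:<pseudo_path> /mnt/share-01`
`$ touch /mnt/share-01/test`
touch: cannot touch '/mnt/share-01/test': Read-only file system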
I deployed the cluster with this command:

`$ ceph nfs cluster create nfs-cluster --placement=3 --ingress --virtual-ip <my_virtual_ip>`

`$ ceph nfs cluster info nfs-cluster`
{
  "nfs-cluster": {
    "virtual_ip": "<my_virtual_ip>",
    "backend": [
      { "hostname": "ceph-node-01", "ip": "ceph-node-01-ip", "port": 12049 },
      { "hostname": "ceph-node-02", "ip": "ceph-node-02-ip", "port": 12049 },
      { "hostname": "ceph-node-03", "ip": "ceph-node-03-ip", "port": 12049 }
    ],
    "port": 2049,
    "monitor_port": 9049
  }
}

In the output of `ps axuf | grep haproxy` I see that HAProxy has been deployed. Is this the "haproxy-protocol" mode you're asking about?

On Sat, Apr 5, 2025 at 12:18 AM Goutham Pacha Ravi <gouthampravi@gmail.com> wrote:
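One way to double-check this from the Ceph side is to dump the orchestrator service specs; on Reef and later the nfs and ingress specs carry an enable_haproxy_protocol flag when PROXY protocol mode is on (this is an assumption about the cephadm version; the flag does not exist on Quincy):

`$ ceph orch ls ingress --export`
`$ ceph orch ls nfs --export`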
On Fri, Apr 4, 2025 at 11:40 PM Mikhail Okhrimenko <m.okhrimenko@pin-up.tech> wrote:
When using the ingress service, you'll also need to include `--ingress-mode haproxy-protocol`. This enables the HAProxy PROXY protocol headers, which convey the client addresses to the NFS-Ganesha servers. Without it, the client addresses are only visible to the HAProxy service (part of the Ceph ingress service) and are not forwarded to the NFS-Ganesha servers, which means Manila access rules (export client restrictions) will not work.

You can delete and recreate your nfs service, or just your ingress service, and set `--ingress-mode` appropriately. No changes are necessary on the Manila side.

Goutham
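A sketch of the recreate step described above, reusing the cluster name and placement from earlier in the thread; it assumes a Ceph release that supports the flag, and note that removing the cluster drops its exports, so Manila access rules may need to be reapplied afterwards:

`$ ceph nfs cluster rm nfs-cluster`
`$ ceph nfs cluster create nfs-cluster --placement=3 --ingress --virtual-ip <my_virtual_ip> --ingress-mode haproxy-protocol`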
I'm using the Quincy version of Ceph, and according to the documentation I only have the option to use `--ingress-mode default` or `keepalive-only`: https://docs.ceph.com/en/quincy/mgr/nfs/#ingress

`--ingress-mode haproxy-protocol` appeared in the Reef version, if I understand correctly: https://docs.ceph.com/en/reef/mgr/nfs/#ingress

What should I do now? Is the only solution to upgrade to Reef?

On Sat, Apr 5, 2025 at 6:37 PM Goutham Pacha Ravi <gouthampravi@gmail.com> wrote:
On Sat, Apr 5, 2025 at 11:49 PM Mikhail Okhrimenko <m.okhrimenko@pin-up.tech> wrote:
Yeah, if you’re planning to use the ingress service with client restrictions, you’ll need to use Ceph Reef or greater. Ceph Quincy went EOL at the beginning of the year, so it might be prudent to upgrade.
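For what it's worth, a cephadm-managed upgrade to Reef is typically driven by the orchestrator; a sketch, with the target version chosen as an assumption:

`$ ceph orch upgrade start --ceph-version 18.2.4`
`$ ceph orch upgrade status`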
participants (3)
- Carlos Silva
- Goutham Pacha Ravi
- Mikhail Okhrimenko