Hi Carlos!
Thank you for your reply. I appreciate it.

I tested what you described and it works, thanks! But I've run into another problem.
I created two instances with the addresses 10.0.185.60 and 10.0.185.61. If I create an access rule for each instance:
`$ openstack share access create share-01 ip 10.0.185.60 --access-level rw`
`$ openstack share access create share-01 ip 10.0.185.61 --access-level rw`

then the share mounts, but when I try to create a file on it I get the error "Read-only file system".
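
For reference, I mount and test roughly like this (the NFS server address and mount point below are placeholders, not my real values):
`$ sudo mount -t nfs <nfs-server-ip>:/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4 /mnt/share-01`
`$ touch /mnt/share-01/testfile`
The touch is what fails with "Read-only file system".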

At the same time, everything looks fine on the Ceph side; the clients have RW access:

`$ ceph nfs export info nfs-cluster /volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4`
{
    "export_id": 1,
    "path": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
    "cluster_id": "nfs-cluster",
    "pseudo": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
    "access_type": "RO",
    "squash": "none",
    "security_label": true,
    "protocols": [
        4
    ],
    "transports": [
        "TCP"
    ],
    "fsal": {
        "name": "CEPH",
        "user_id": "nfs-cluster.1",
        "fs_name": "ceph"
    },
    "clients": [
        {
            "addresses": [
                "10.0.185.61",
                "10.0.185.60"
            ],
            "access_type": "rw",
            "squash": "none"
        }
    ]
}

But if I delete the previously created rules and instead create a single rule for 0.0.0.0, the share mounts in RW mode, and on the Ceph side only the client list differs:
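
(Roughly the commands I used for this step; the rule ID passed to delete is a placeholder taken from `openstack share access list share-01`:)
`$ openstack share access delete share-01 <access-rule-id>`
`$ openstack share access create share-01 ip 0.0.0.0 --access-level rw`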

`$ ceph nfs export info nfs-cluster /volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4`
{
  "export_id": 1,
  "path": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
  "cluster_id": "nfs-cluster",
  "pseudo": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
  "access_type": "RO",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs-cluster.1",
    "fs_name": "ceph"
  },
  "clients": [
    {
      "addresses": [
        "0.0.0.0"
      ],
      "access_type": "rw",
      "squash": "none"
    }
  ]
}

Also, if I use the 10.0.185.0/24 network in the access rule, RW mode does not work either:
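
(The rule was created the same way as the per-IP rules above, roughly:)
`$ openstack share access create share-01 ip 10.0.185.0/24 --access-level rw`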

{
  "export_id": 1,
  "path": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
  "cluster_id": "nfs-cluster",
  "pseudo": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
  "access_type": "RO",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs-cluster.1",
    "fs_name": "ck-ceph"
  },
  "clients": [
    {
      "addresses": [
        "10.0.185.0/24"
      ],
      "access_type": "rw",
      "squash": "none"
    }
  ]
}


Regards,
Mikhail

On Fri, Mar 21, 2025 at 8:42 PM Carlos Silva <ces.eduardo98@gmail.com> wrote:
Hey Mikhail! Thanks for reaching out.

Apparently your configuration is correct and the subvolume was created properly. However, the NFS exports will only be created when you apply an access rule in Manila.
Please use the `openstack share access create` command to create an access rule (RW or RO) for your clients, and then the NFS export should show up.
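
For example (the share name and client IP here are just placeholders):
`$ openstack share access create <share-name> ip <client-ip> --access-level rw`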

Regards,
carloss


On Fri, Mar 21, 2025 at 12:35 PM Mikhail Okhrimenko <m.okhrimenko@pin-up.tech> wrote:
Hello community,

I'm trying to set up Manila (Caracal release) with the NFS-Ganesha backend. Ganesha 4.0 is deployed using ceph orch in a Ceph cluster running version 17.
Backend configuration in manila.conf:
[ganesha_nfs]
driver_handles_share_servers = False
share_backend_name = ganesha_nfs
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_filesystem_name = {{ cephfs_filesystem_name }}
cephfs_nfs_cluster_id = {{ cephfs_nfs_cluster_id }}
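
I then create a test share roughly like this (the share name and size are just examples):
`$ openstack share create NFS 1 --name share-01`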

I see the subvolume being created in Ceph:
`$ ceph fs subvolume ls <volume_name>`

but no export:
`$ ceph nfs export ls <cluster_id>`

What am I doing wrong? What am I missing?

In the manila-share logs I see the request to create the subvolume, but I don't see any request to create an export; for some reason Manila doesn't even try to create one.