I deployed the cluster with this command and then checked the cluster info:
`$ ceph nfs cluster create nfs-cluster --placement=3 --ingress --virtual-ip <my_virtual_ip>`
`$ ceph nfs cluster info nfs-cluster`
{
    "nfs-cluster": {
        "virtual_ip": "<my_virtual_ip>",
        "backend": [
            {
                "hostname": "ceph-node-01",
                "ip": "ceph-node-01-ip",
                "port": 12049
            },
            {
                "hostname": "ceph-node-02",
                "ip": "ceph-node-02-ip",
                "port": 12049
            },
            {
                "hostname": "ceph-node-03",
                "ip": "ceph-node-03-ip",
                "port": 12049
            }
        ],
        "port": 2049,
        "monitor_port": 9049
    }
}

In the output of the `ps axuf | grep haproxy` command I see that haproxy has been deployed. Is this the "haproxy-protocol" mode you're asking about?
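
(For reference, the configured ingress mode should be visible in the service spec via `$ ceph orch ls ingress --export`, and, assuming the usual cephadm layout (paths may vary by release), a PROXY-protocol setup would show `send-proxy-v2` on the backend server lines of the generated haproxy.cfg:)

`$ grep "send-proxy" /var/lib/ceph/*/haproxy.*/haproxy/haproxy.cfg`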
On Sat, Apr 5, 2025 at 12:18 AM Goutham Pacha Ravi <gouthampravi@gmail.com> wrote:

Do you have the Ceph Ingress service deployed, front-ending the Ceph
NFS cluster? If yes, is it using the "haproxy-protocol" mode?
On Fri, Apr 4, 2025 at 8:00 AM Mikhail Okhrimenko
<m.okhrimenko@pin-up.tech> wrote:
>
> Hi Carlos!
> Thank you for your reply. I appreciate it.
>
> I tested what you described and it works, thanks! But I ran into another problem.
> I created two instances with addresses 10.0.185.60 and 10.0.185.61. If I create an access rule for each instance:
> `$ openstack share access create share-01 ip 10.0.185.60 --access-level rw`
> `$ openstack share access create share-01 ip 10.0.185.61 --access-level rw`
>
> Then, when I mount the share and try to create a file, I get the error: "Read-only file system".
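>
> (The mount command was roughly of this form; the mount point `/mnt/share-01` is a placeholder, and the export path is the one shown below:)
> `$ sudo mount -t nfs4 <my_virtual_ip>:/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4 /mnt/share-01`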
>
> At the same time, everything looks fine on the Ceph side; the clients have access:
>
> `$ ceph nfs export info nfs-cluster /volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4`
> {
>     "export_id": 1,
>     "path": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
>     "cluster_id": "nfs-cluster",
>     "pseudo": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
>     "access_type": "RO",
>     "squash": "none",
>     "security_label": true,
>     "protocols": [
>         4
>     ],
>     "transports": [
>         "TCP"
>     ],
>     "fsal": {
>         "name": "CEPH",
>         "user_id": "nfs-cluster.1",
>         "fs_name": "ceph"
>     },
>     "clients": [
>         {
>             "addresses": [
>                 "10.0.185.61",
>                 "10.0.185.60"
>             ],
>             "access_type": "rw",
>             "squash": "none"
>         }
>     ]
> }
>
> But if I delete the previously created rules and create a single rule for 0.0.0.0, the share mounts in rw mode, and on the Ceph side only the list of clients differs:
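>
> (Roughly, with a placeholder rule ID:)
> `$ openstack share access delete share-01 <access_rule_id>`
> `$ openstack share access create share-01 ip 0.0.0.0 --access-level rw`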
>
> `$ ceph nfs export info nfs-cluster /volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4`
> {
>     "export_id": 1,
>     "path": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
>     "cluster_id": "nfs-cluster",
>     "pseudo": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
>     "access_type": "RO",
>     "squash": "none",
>     "security_label": true,
>     "protocols": [
>         4
>     ],
>     "transports": [
>         "TCP"
>     ],
>     "fsal": {
>         "name": "CEPH",
>         "user_id": "nfs-cluster.1",
>         "fs_name": "ceph"
>     },
>     "clients": [
>         {
>             "addresses": [
>                 "0.0.0.0"
>             ],
>             "access_type": "rw",
>             "squash": "none"
>         }
>     ]
> }
>
> Also, if I use the 10.0.185.0/24 network in the access rule, it does not work in rw mode either:
>
> {
>     "export_id": 1,
>     "path": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
>     "cluster_id": "nfs-cluster",
>     "pseudo": "/volumes/_nogroup/9183a3a8-e00c-409e-9c25-1d678642c14b/26c8f40e-9d3d-41e0-8237-f274cff18de4",
>     "access_type": "RO",
>     "squash": "none",
>     "security_label": true,
>     "protocols": [
>         4
>     ],
>     "transports": [
>         "TCP"
>     ],
>     "fsal": {
>         "name": "CEPH",
>         "user_id": "nfs-cluster.1",
>         "fs_name": "ck-ceph"
>     },
>     "clients": [
>         {
>             "addresses": [
>                 "10.0.185.0/24"
>             ],
>             "access_type": "rw",
>             "squash": "none"
>         }
>     ]
> }
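>
> (One way to check which source address actually reaches the Ganesha backends, which listen on port 12049 per the cluster info above, would be something like this on a backend node:)
> `$ ss -tn state established '( sport = :12049 )'`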
>
>
> Regards,
> Mikhail
>
> On Fri, Mar 21, 2025 at 8:42 PM Carlos Silva <ces.eduardo98@gmail.com> wrote:
>>
>> Hey Mikhail! Thanks for reaching out.
>>
>> It looks like your configuration is correct and the subvolume was created properly. However, the NFS exports are only created once you apply an access rule in Manila.
>> Please use the `$ openstack share access create` command to create an access rule (RW or RO) for your clients, and then the NFS export should show up.
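>>
>> For example, with placeholder values:
>> `$ openstack share access create <share_name> ip <client_ip> --access-level rw`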
>>
>> Regards,
>> carloss
>>
>>
>> On Fri, Mar 21, 2025 at 12:35 PM, Mikhail Okhrimenko <m.okhrimenko@pin-up.tech> wrote:
>>>
>>> Hello community,
>>>
>>> I'm trying to set up the Caracal release of Manila with the NFS-Ganesha backend. Ganesha version 4.0 is deployed using `ceph orch` in a Ceph version 17 cluster.
>>> Backend configuration in manila.conf:
>>> [ganesha_nfs]
>>> driver_handles_share_servers = False
>>> share_backend_name = ganesha_nfs
>>> share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
>>> cephfs_protocol_helper_type = NFS
>>> cephfs_conf_path = /etc/ceph/ceph.conf
>>> cephfs_auth_id = manila
>>> cephfs_cluster_name = ceph
>>> cephfs_filesystem_name = {{ cephfs_filesystem_name }}
>>> cephfs_nfs_cluster_id = {{ cephfs_nfs_cluster_id }}
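>>>
>>> (One thing that may be worth double-checking, since the driver manages subvolumes and exports through the Ceph mgr, is that the `manila` Ceph user has mgr caps. A hypothetical check:)
>>> `$ ceph auth get client.manila`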
>>>
>>> I see the subvolume being created in Ceph:
>>> `$ ceph fs subvolume ls <volume_name>`
>>>
>>> but no export:
>>> `$ ceph nfs export ls <cluster_id>`
>>>
>>> What am I doing wrong? What am I missing?
>>>
>>> In the manila-share logs I see a request to create the subvolume, but I don't see a request to create an export; for some reason, Manila doesn't even try to create one.
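>>>
>>> (For reference, I was looking for export-related calls with something like this; the log path depends on the deployment:)
>>> `$ grep -iE "export|update_access" /var/log/manila/manila-share.log`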