Hi Paul,

My responses are inline.

On Fri, Sep 15, 2023 at 6:44 AM Paul Browne <pfb29.cam@gmail.com> wrote:
Hello to the list,
In OpenStack releases prior to Wallaby, the CephFS native driver backend config in Manila had a configuration directive, cephfs_volume_path_prefix:
# DEPRECATED: The prefix of the cephfs volume path. (string value)
# This option is deprecated for removal since Wallaby.
# Its value may be silently ignored in the future.
# Reason: This option is not used starting with the Nautilus release
# of Ceph.
#cephfs_volume_path_prefix = /volumes
This volume path prefix could be used (and was very useful) to set different prefixes in different backends, pointing at different pools in the same backend Ceph cluster, e.g. pools backed by storage devices of different characteristics/technologies, different Ceph CRUSH rules, etc.
The pool selection would be done in CephFS via file layout extended attributes, with file layout inheritance ensuring that data created in sub-directories ends up in the correct Ceph pool.
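For example (the mount point and pool name here are only illustrative, and the pool has to already be attached to the filesystem as a data pool), the layout can be set on a prefix directory from a client mount of the CephFS root:

# point the prefix directory at a specific data pool; everything
# created beneath it inherits this layout
setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs/volumes-ec-staging

# verify the layout
getfattr -n ceph.dir.layout /mnt/cephfs/volumes-ec-staging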
e.g. two backends using the same Ceph cluster, but with two different path prefixes hitting different Ceph pools:
[cephfsnative1]
driver_handles_share_servers = False
share_backend_name = CEPHFS1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_volume_path_prefix = /volumes-staging
cephfs_volume_mode = 777
cephfs_enable_snapshots = False
[cephfsnative1_ec]
driver_handles_share_servers = False
share_backend_name = CEPHFS1_EC
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_volume_path_prefix = /volumes-ec-staging
cephfs_volume_mode = 777
cephfs_enable_snapshots = False
However, since Wallaby this config directive looks to have been deprecated, and only the default path prefix of /volumes is possible. Any other prefix set in backend configs is ignored.
Would anyone on the list know why this option was deprecated in Manila code, or was this forced on Manila by upstream Ceph as of Nautilus?
Is there a way to get back to an equivalent functionality?
Currently using only a default path of /volumes means we have lost all flexibility in defining Manila CephFS share data placement using the native CephFS driver.
Possibly using share group types+share groups and some pre-created paths in the root CephFS could get to something like equivalency?
But these paths would need to correspond to the share group UUID, which will only be known after the share group has been created.
So not all that flexible an approach, since it requires interaction between users (to communicate the share group ID) and Ceph admins (to set the correct file layout policy). Putting the path prefix in the backend config removed all of that in a nicely transparent way.
Having just prototyped this, it does work for setting a desired file layout on a pre-defined share group UUID path in the root CephFS, though it's not really ideal or sustainable to do this for share groups created dynamically by users or automation...
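Roughly, the prototype workflow looks like the following (this assumes the share group maps to a directory named after its UUID under /volumes; the UUID placeholder and pool name are illustrative only):

# 1) user creates the share group and passes its UUID to the Ceph admin
# 2) Ceph admin sets the layout on the corresponding directory in the root CephFS
setfattr -n ceph.dir.layout.pool -v cephfs_ec_data \
    /mnt/cephfs/volumes/<share_group_uuid>
# 3) shares subsequently created in that share group inherit the layout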
The CephFS driver started using Ceph's "mgr/volumes" API to create and delete CephFS subvolumes (manila shares) in the Wallaby release [1]. The manila driver configuration option that you point out was removed as part of this change. Prior to this change, the driver used a "ceph_volume_client" python interface; this interface is gone since the Ceph Pacific release. Functionally, we expected nothing significant to change during this transition, but we lost some customizability, like the option that you point to. Now "/volumes" is hard coded in the subvolume paths [2].

Ceph Pacific was the first Ceph release where having multiple CephFS filesystems in a cluster was fully supported. I'm wondering if using multiple filesystems would allow you to retain the customizability you're seeking. The difference in segregation would not be apparent in the export path; but each CephFS filesystem would have its own dedicated data/metadata pools, and separate MDS daemons on the cluster. So you'll be able to achieve the provisioning separation that you're seeking, and more customizability of OSD/pool characteristics.

The CephFS driver in manila can only work with one CephFS filesystem at a time, though (configuration option "cephfs_filesystem_name") [3]. So, just like you're doing currently, you can define multiple CephFS backends, each with its own "cephfs_filesystem_name" (a rough sketch follows below the references). For added security, you can manipulate "cephfs_auth_id" and have a dedicated driver client user for each backend.

I've copied Patrick and Venky from the CephFS community here. If there are other options besides this, or if you have other questions, they might be able to help.

Thanks,
Goutham

[1] https://review.opendev.org/c/openstack/manila/+/779619
[2] https://docs.ceph.com/en/latest/cephfs/fs-volumes/#fs-subvolumes
[3] https://opendev.org/openstack/manila/src/branch/stable/wallaby/manila/share/...
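A minimal sketch of that layout, assuming two filesystems have been created on the cluster (e.g. with "ceph fs volume create"); the filesystem, backend and auth names below are placeholders, not anything the driver requires:

[cephfsnative1]
driver_handles_share_servers = False
share_backend_name = CEPHFS1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila_replicated
cephfs_filesystem_name = cephfs_replicated

[cephfsnative1_ec]
driver_handles_share_servers = False
share_backend_name = CEPHFS1_EC
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila_ec
cephfs_filesystem_name = cephfs_ec

Each filesystem can then be given its own data pool characteristics (replicated vs. erasure-coded, different CRUSH rules, etc.) on the Ceph side.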
Thanks in advance for any advice,
Paul Browne