[openstack-dev] [Manila] CephFS native driver

Shinobu Kinjo skinjo at redhat.com
Wed Sep 30 23:58:51 UTC 2015


Is there any plan to merge those branches to master?
Or is there anything more that needs to be done?

Shinobu

----- Original Message -----
From: "Ben Swartzlander" <ben at swartzlander.org>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Sent: Saturday, September 26, 2015 9:27:58 AM
Subject: Re: [openstack-dev] [Manila] CephFS native driver

On 09/24/2015 09:49 AM, John Spray wrote:
> Hi all,
>
> I've recently started work on a CephFS driver for Manila.  The (early)
> code is here:
> https://github.com/openstack/manila/compare/master...jcsp:ceph

Awesome! This is something that's been talked about for quite some time, 
and I'm pleased to see progress on making it a reality.

> It requires a special branch of ceph which is here:
> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>
> This isn't done yet (hence this email rather than a gerrit review),
> but I wanted to give everyone a heads up that this work is going on,
> and a brief status update.
>
> This is the 'native' driver in the sense that clients use the CephFS
> client to access the share, rather than re-exporting it over NFS.  The
> idea is that this driver will be useful for anyone who has such
> clients, as well as acting as the basis for a later NFS-enabled
> driver.

This makes sense, but have you given thought to the optimal way to 
provide NFS semantics for those who prefer that? Obviously you can pair 
the existing Manila Generic driver with Cinder running on ceph, but I 
wonder how that would compare to some kind of ganesha bridge that 
translates between NFS and cephfs. Is that something you've looked into?
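
To illustrate what I have in mind by a ganesha bridge: conceptually an 
NFS-enabled driver would just emit one Ganesha export block per share, 
pointing Ganesha's Ceph FSAL at the share's directory. A purely 
hypothetical sketch (the template and helper below are my own 
illustration, not anything in your branch):

# Purely illustrative: render a Ganesha export stanza that re-exports one
# share's CephFS directory over NFS via Ganesha's Ceph FSAL. None of this
# is driver code; the template, naming and pseudo-path layout are mine.

GANESHA_EXPORT_TEMPLATE = """
EXPORT {{
    Export_ID = {export_id};
    Path = "{share_path}";          # the share's directory inside CephFS
    Pseudo = "/manila/{share_id}";  # path the NFS clients actually mount
    Access_Type = RW;
    FSAL {{
        Name = CEPH;                # Ganesha talks to CephFS directly
    }}
}}
"""


def render_ganesha_export(export_id, share_id, share_path):
    """Render a ganesha.conf EXPORT block for one share."""
    return GANESHA_EXPORT_TEMPLATE.format(
        export_id=export_id, share_id=share_id, share_path=share_path)


# e.g. print(render_ganesha_export(101, "share-0001", "/shares/share-0001"))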

> The export location returned by the driver gives the client the Ceph
> mon IP addresses, the share path, and an authentication token.  This
> authentication token is what permits the clients access (Ceph does not
> do access control based on IP addresses).
>
> It's just capable of the minimal functionality of creating and
> deleting shares so far, but I will shortly be looking into hooking up
> snapshots/consistency groups, albeit for read-only snapshots only
> (cephfs does not have writeable snapshots).  Currently deletion is
> just a move into a 'trash' directory, the idea is to add something
> later that cleans this up in the background: the downside to the
> "shares are just directories" approach is that clearing them up has a
> "rm -rf" cost!

All snapshots are read-only... The question is whether you can take a 
snapshot and clone it into something that's writable. We're looking at 
allowing for different kinds of snapshot semantics in Manila for Mitaka. 
Even if there's no create-share-from-snapshot functionality, a readable 
snapshot is still useful and something we'd like to enable.

The deletion issue sounds like a common one, although if you don't have 
the background cleanup piece yet, I hope someone is working on it.
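
Even a dumb periodic worker would probably do for that cleanup; something 
along these lines, purely as a sketch (the trash path and interval are 
made up):

# Entirely hypothetical background reaper for the 'trash' directory, just to
# show the shape of the cleanup: walk the trash and eat the rm -rf cost out
# of band. The trash path and sweep interval are made up.
import os
import shutil
import time

TRASH_DIR = "/mnt/cephfs/trash"   # assumed location of trashed share dirs
SWEEP_INTERVAL = 300              # seconds between sweeps


def reap_trash():
    """Delete trashed share directories; this is where the rm -rf cost lands."""
    for entry in os.listdir(TRASH_DIR):
        shutil.rmtree(os.path.join(TRASH_DIR, entry), ignore_errors=True)


if __name__ == "__main__":
    while True:
        reap_trash()
        time.sleep(SWEEP_INTERVAL)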

> A note on the implementation: cephfs recently got the ability (not yet
> in master) to restrict client metadata access based on path, so this
> driver is simply creating shares by creating directories within a
> cluster-wide filesystem, and issuing credentials to clients that
> restrict them to their own directory.  They then mount that subpath,
> so that from the client's point of view it's like having their own
> filesystem.  We also have a quota mechanism that I'll hook in later to
> enforce the share size.

So quotas aren't enforced yet? That seems like a serious issue for any 
operator except those that want to support "infinite" size shares. I 
hope that gets fixed soon as well.
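
For what it's worth, my understanding is that the hook John mentions 
would mostly amount to stamping the CephFS quota attribute on the share's 
directory at create/extend time -- roughly like this, though how the 
driver actually wires it in is an assumption on my part:

# Sketch of enforcing the share size by stamping the CephFS quota xattr on
# the share's directory. Note this is the client-enforced quota mechanism
# John refers to below; the helper and paths are mine, not driver code.
import subprocess


def set_share_quota(share_dir, size_gb):
    """Limit a share directory to size_gb via ceph.quota.max_bytes."""
    max_bytes = str(size_gb * 1024 ** 3)
    subprocess.check_call([
        "setfattr", "-n", "ceph.quota.max_bytes", "-v", max_bytes, share_dir,
    ])


# e.g. set_share_quota("/mnt/cephfs/shares/share-0001", 10)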

> Currently the security here requires clients (i.e. the ceph-fuse code
> on client hosts, not the userspace applications) to be trusted, as
> quotas are enforced on the client side.  The OSD access control
> operates on a per-pool basis, and creating a separate pool for each
> share is inefficient.  In the future it is expected that CephFS will
> be extended to support file layouts that use RADOS namespaces, which
> are cheap, such that we can issue a new namespace to each share and
> enforce the separation between shares on the OSD side.

I think it will be important to document all of these limitations. I 
wouldn't let them stop you from getting the driver done, but if I was a 
deployer I'd want to know about these details.
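
For whoever ends up writing that documentation, my mental model of the 
per-share credential is something like the following; the exact caps 
syntax for the pending path-restriction feature, the entity naming, and 
the pool name are guesses on my part:

# Sketch of issuing a per-share cephx identity that is restricted to the
# share's directory. The mds 'path=' cap is the pending path-restriction
# feature John describes; the exact syntax, entity naming and data pool are
# assumptions on my part.
import subprocess


def create_share_credential(share_id, share_path, data_pool="cephfs_data"):
    """Create a cephx key whose MDS access is limited to one share directory."""
    entity = "client.manila-%s" % share_id
    keyring = subprocess.check_output([
        "ceph", "auth", "get-or-create", entity,
        "mon", "allow r",
        "mds", "allow rw path=%s" % share_path,
        "osd", "allow rw pool=%s" % data_pool,  # per-pool today; RADOS
                                                # namespaces would tighten this
    ])
    return keyring  # the auth token handed back in the export location


# e.g. create_share_credential("share-0001", "/shares/share-0001")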

> However, for many people the ultimate access control solution will be
> to use a NFS gateway in front of their CephFS filesystem: it is
> expected that an NFS-enabled cephfs driver will follow this native
> driver in the not-too-distant future.

Okay, this answers part of my question above, but how do you expect the 
NFS gateway to work? Ganesha has been used successfully in the past.

> This will be my first openstack contribution, so please bear with me
> while I come up to speed with the submission process.  I'll also be in
> Tokyo for the summit next month, so I hope to meet other interested
> parties there.

Welcome, and I look forward to meeting you in Tokyo!

-Ben


> All the best,
> John
>

