[openstack-dev] [Manila] Ceph native driver for manila
greg at gregs42.com
Wed Mar 4 17:56:50 UTC 2015
On Wed, Mar 4, 2015 at 7:03 AM, Csaba Henk <chenk at redhat.com> wrote:
> ----- Original Message -----
>> From: "Danny Al-Gaaf" <danny.al-gaaf at bisect.de>
>> To: "Csaba Henk" <chenk at redhat.com>, "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev at lists.openstack.org>
>> Cc: ceph-devel at vger.kernel.org
>> Sent: Wednesday, March 4, 2015 3:26:52 PM
>> Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
>> Am 04.03.2015 um 15:12 schrieb Csaba Henk:
>> > ----- Original Message -----
>> >> From: "Danny Al-Gaaf" <danny.al-gaaf at bisect.de>
>> >> To: "OpenStack Development Mailing List (not for usage questions)"
>> >> <openstack-dev at lists.openstack.org>, ceph-devel at vger.kernel.org
>> >> Sent: Sunday, March 1, 2015 3:07:36 PM
>> >> Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
>> > ...
>> >> For us security is very critical, as is performance. The
>> >> first solution, via Ganesha, is not what we prefer (using CephFS
>> >> via p9 and NFS would not perform that well, I guess). The second
>> >> solution, to use
>> > Can you please explain why the Ganesha-based stack would
>> > involve 9p? (Maybe I'm missing something basic, but I don't know.)
>> Sorry, it seems I mixed that up with the p9 case. But performance
>> may still be an issue if you use NFS on top of CephFS (including all
>> the VM layers involved in this setup).
>> For me the question with all these NFS setups is: why should I use NFS
>> on top of CephFS? What is CephFS's reason to exist in that case? I
>> would like to use CephFS directly or via filesystem passthrough.
> That's a good question. Or indeed, two questions:
> 1. Why use NFS?
> 2. Why does the NFS export of Ceph need to involve CephFS?
> As for "why NFS" -- it's probably a good selling point that it's a
> standard filesystem export technology, so tenants can remain
> backend-unaware as long as the backend provides an NFS export.
> We are working on the Ganesha library with the aim of making it easy to
> create Ganesha-based drivers. So if you already have an FSAL, you can get
> an NFS-exporting driver almost for free (with a modest amount of glue
> code). So you could consider making such a driver for Ceph, to satisfy
> customers who demand NFS access, even if a native driver gets the
> limelight.
> (See the commits implementing this under "Work Items" of the BP -- one is
> the actual Ganesha library and the other two show how it can be hooked in,
> using the Gluster driver as an example. At the moment only flat-network
> (share-server-less) drivers are supported.)
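[To illustrate the "modest amount of glue code" mentioned above: a
Ganesha-based driver mostly has to emit an EXPORT block for each share and
hand it to Ganesha. The sketch below shows the idea only; the function name
`render_export` and the template layout are illustrative, not the actual
Manila Ganesha-library API.]

```python
# Hypothetical sketch of FSAL glue code for a Ganesha-based Manila driver:
# render a Ganesha EXPORT block for one share. The real library manages
# export IDs, access rules, and reloading Ganesha; this only shows the
# config-generation step.

EXPORT_TEMPLATE = """EXPORT {{
    Export_Id = {export_id};
    Path = "{path}";
    Pseudo = "{pseudo}";
    Access_Type = RW;
    FSAL {{
        Name = {fsal};
    }}
}}
"""

def render_export(export_id, path, pseudo, fsal="CEPH"):
    """Build a Ganesha export block for one share (illustrative only)."""
    return EXPORT_TEMPLATE.format(
        export_id=export_id, path=path, pseudo=pseudo, fsal=fsal)

if __name__ == "__main__":
    print(render_export(101, "/volumes/share1", "/share1"))
```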
> As for why CephFS was the technology chosen for implementing the Ceph FSAL
> for Ganesha, that's something I'd also like to know. I have the following
> naive question in mind: "Would it not have been better to implement the
> Ceph FSAL with something »closer to« Ceph?", and I have three actual
> questions about it:
> - Does this question make sense in this form, and if not, how should it be
>   amended?
> - I'm asking the question itself, or the amended version of it.
> - If the answer is yes, is there a chance someone would create an
>   alternative Ceph FSAL on that assumed closer-to-Ceph technology?
I don't understand. What "closer-to-Ceph" technology do you want than
native use of the libcephfs library? Are you saying to use raw RADOS
to provide storage instead of CephFS?
In that case, it doesn't make a lot of sense: CephFS is how you
provide a real filesystem in the Ceph ecosystem. I suppose if you
wanted to create a lighter-weight pseudo-filesystem you could do so
(somebody is building a "RadosFS", I think from CERN?) but then it's
not interoperable with other stuff.
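[For context on the FSAL discussed above: the Ceph FSAL consumes libcephfs
directly, and wiring it into Ganesha comes down to an EXPORT block in
ganesha.conf along the following lines. The field values here are
illustrative, not taken from any particular deployment.]

```
EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    FSAL {
        Name = CEPH;
    }
}
```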