<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Mar 3, 2015 at 12:51 AM, Luis Pabon <span dir="ltr"><<a href="mailto:lpabon@redhat.com" target="_blank">lpabon@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">What is the status on virtfs? I am not sure if it is being maintained. Does anyone know?<br></blockquote><div><br></div><div>The last i knew its not maintained.<br></div><div>Also for what its worth, p9 won't work for windows guest (unless there is a p9 driver for windows ?) if that is part of your usecase/scenario ?<br></div><div><br>Last but not the least, p9/virtfs would expose a p9 mount , not a ceph mount to VMs, which means if there are cephfs specific mount options they may not work<br><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>
> - Luis
>
> ----- Original Message -----
> From: "Danny Al-Gaaf" <danny.al-gaaf@bisect.de>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>, ceph-devel@vger.kernel.org
> Sent: Sunday, March 1, 2015 9:07:36 AM
> Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
>
> On 27.02.2015 01:04, Sage Weil wrote:
> > [sorry for ceph-devel double-post, forgot to include
> > openstack-dev]
> >
> > Hi everyone,
> >
> > The online Ceph Developer Summit is next week[1] and among other
> > things we'll be talking about how to support CephFS in Manila. At
> > a high level, there are basically two paths:
>
> We also discussed the CephFS Manila topic at the last Manila
> Midcycle Meetup (Kilo) [1][2].
>
> > 2) Native CephFS driver
> >
> > As I currently understand it,
> >
> > - The driver will set up CephFS auth credentials so that the guest
> >   VM can mount CephFS directly.
> > - The guest VM will need access to the Ceph network. That makes
> >   this mainly interesting for private clouds and trusted
> >   environments.
> > - The guest is responsible for running 'mount -t ceph ...'.
> > - I'm not sure how we provide the auth credential to the
> >   user/guest...
>
> The auth credentials currently need to be handled by an application
> orchestration solution, I guess. I see no solution at the Manila
> layer level at the moment.

There were some discussions in the past in the Manila community on
guest auto-mount, but I guess nothing was conclusive there.

Application orchestration can be achieved by having tenant-specific VM
images with the credentials pre-loaded, or injecting the credentials
via cloud-init should work too?
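A rough sketch of the cloud-init route, assuming a cloud-config
user-data document; the monitor address, share path, client name and
key below are made-up placeholders:

# Sketch: inject a CephX secret into the guest and mount the share at
# boot via cloud-init. All names, paths and the key are placeholders.
user_data = """#cloud-config
write_files:
  - path: /etc/ceph/manila-share.secret
    permissions: '0600'
    content: AQD...placeholder-key...==
runcmd:
  - mkdir -p /mnt/share
  - mount -t ceph 192.0.2.10:6789:/tenants/foo /mnt/share -o name=tenant-foo,secretfile=/etc/ceph/manila-share.secret
"""

with open('user-data.txt', 'w') as f:
    f.write(user_data)
# The guest would then be booted with e.g.: nova boot ... --user-data user-data.txt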
>
> If Ceph provided OpenStack Keystone authentication for rados/cephfs
> instead of CephX, it could be handled easily via app orchestration.
>
> > This would perform better than an NFS gateway, but there are
> > several gaps on the security side that make this unusable
> > currently in an untrusted environment:
> >
> > - The CephFS MDS auth credentials currently are _very_ basic. As
> >   in, binary: can this host mount or it cannot. We have the auth
> >   cap string parsing in place to restrict to a subdirectory (e.g.,
> >   this tenant can only mount /tenants/foo), but the MDS does not
> >   enforce this yet. [medium project to add that]
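For reference, the cap string Sage mentions would be set up roughly as
in the sketch below; the client name, path and pool are placeholders,
and (as noted above) the MDS does not yet enforce the path
restriction:

# Sketch: create a tenant-scoped CephX identity whose MDS cap is
# restricted to one subtree and whose OSD cap is restricted to one pool.
import subprocess

subprocess.check_call([
    'ceph', 'auth', 'get-or-create', 'client.tenant-foo',
    'mon', 'allow r',
    'mds', 'allow rw path=/tenants/foo',
    'osd', 'allow rw pool=tenant-foo-data',
])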
> >
> > - The same credential could be used directly via librados to
> >   access the data pool directly, regardless of what the MDS has to
> >   say about the namespace. There are two ways around this:
> >
> > 1- Give each tenant a separate rados pool. This works today.
> >    You'd set a directory policy that puts all files created in
> >    that subdirectory in that tenant's pool, then only let the
> >    client access those rados pools.
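The directory policy can be expressed today as a CephFS directory
layout xattr. A minimal sketch, assuming CephFS is mounted on the
admin/host side and the tenant's pool is already registered as a data
pool (path and pool name are placeholders):

# Sketch: pin all new files under a tenant's share directory to that
# tenant's rados pool via the directory layout xattr.
import os

share_dir = '/mnt/cephfs/tenants/foo'   # CephFS mounted on the admin host
os.setxattr(share_dir, 'ceph.dir.layout.pool', b'tenant-foo-data')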
> >
> > 1a- We currently lack an MDS auth capability that restricts which
> >     clients get to change that policy. [small project]
> >
> > 2- Extend the MDS file layouts to use the rados namespaces so that
> >    users can be separated within the same rados pool. [Medium
> >    project]
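For context, librados itself already has namespaces within a pool; the
missing piece is having the MDS layouts and caps use them. A rough
python-rados sketch, assuming the bindings expose set_namespace()
(pool, namespace and object names are placeholders):

# Sketch: per-tenant separation inside a single pool via rados namespaces.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('shared-data-pool')
ioctx.set_namespace('tenant-foo')
ioctx.write_full('example-object', b'hello')
ioctx.close()
cluster.shutdown()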
> >
> > 3- Something fancy with MDS-generated capabilities specifying
> >    which rados objects clients get to read. This probably falls in
> >    the category of research, although there are some papers we've
> >    seen that look promising. [big project]
> >
> > Anyway, this leads to a few questions:
> >
> > - Who is interested in using Manila to attach CephFS to guest VMs?

I didn't get this question... The goal of Manila is to provision
shared filesystems to VMs, so everyone interested in using CephFS
would be interested in attaching (I guess you mean mounting?) CephFS
to VMs, no?
> > - What use cases are you interested in?
> > - How important is security in your environment?

The NFS-Ganesha-based service VM approach (for network isolation) in
Manila is still in the works, afaik.
>
> As you know, we (Deutsche Telekom) are interested in providing
> shared filesystems via CephFS to VMs instead of e.g. via NFS. We can
> provide/discuss use cases at CDS.
>
> For us security is very critical, and so is performance. The first
> solution via Ganesha is not what we prefer (using CephFS via p9 and
> NFS would not perform that well, I guess). The second solution,
> using CephFS directly from the VM, would be a bad solution from the
> security point of view, since we can't expose the Ceph public
> network directly to the VMs, to prevent all the security issues we
> discussed already.

Is there any place where the security issues are captured for the case
where VMs access CephFS directly? I was curious to understand. IIUC,
Neutron provides private and public networks, and for VMs to access
the external CephFS network, the tenant private network needs to be
bridged/routed to the external provider network; there are ways
Neutron achieves this.

Are you saying that this approach of Neutron is insecure?

thanx,
deepak
>
> During the Midcycle we discussed a third option:
>
> Mount CephFS directly on the host system and provide the filesystem
> to the VMs via p9/virtfs. This needs Nova integration (I will work
> on a POC patch for this) to set up the libvirt config correctly for
> virtfs. This solves the security issue and the auth key distribution
> for the VMs, but it may introduce performance issues due to virtfs
> usage. We have to check what the specific performance impact will
> be. Currently this is the preferred solution for our use cases.
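For the libvirt side, a rough sketch of the <filesystem> device such a
Nova POC would have to add to the guest's domain XML (host path and
mount tag are placeholders):

# Sketch: the libvirt 9p/virtfs passthrough device for a CephFS share
# mounted on the host; paths and the mount tag are placeholders.
import xml.etree.ElementTree as ET

fs = ET.Element('filesystem', type='mount', accessmode='passthrough')
ET.SubElement(fs, 'source', dir='/mnt/cephfs/tenants/foo')  # host-side CephFS mount
ET.SubElement(fs, 'target', dir='manila_share_0')           # 9p mount tag for the guest
print(ET.tostring(fs).decode())
# The guest would then mount it with something like:
#   mount -t 9p -o trans=virtio manila_share_0 /mnt/share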
>
> What's still missing in this solution is user/tenant/subtree
> separation as in the 2nd option. But this is needed anyway for
> CephFS in general.
>
> Danny
>
> [1] https://etherpad.openstack.org/p/manila-kilo-midcycle-meetup
> [2] https://etherpad.openstack.org/p/manila-meetup-winter-2015
>
<div class="HOEnZb"><div class="h5"><br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div></div>