[openstack-dev] Management of NAS (NFS/CIFS shares) in OpenStack
John Griffith
john.griffith at solidfire.com
Mon Nov 26 15:36:39 UTC 2012
On Sun, Nov 25, 2012 at 11:52 PM, Michael Chapman <michael.chapman at anu.edu.au> wrote:
>> I'm interested to know whether folks who have expressed a desire for NFS
>> support have looked at what was introduced in Folsom.
> I wasn't even aware it existed. I've looked at it for about 5 minutes, so
> this may be off the mark, but it appears to make a file on an NFS share
> and then expose that as a block device. If that's correct, then it's
> orthogonal to the use cases presented in this thread.
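>
> For anyone else digging it up, the sketch below is roughly how it seems
> to get wired into cinder.conf. The option names are what I believe the
> driver uses, so treat them as assumptions rather than gospel:
>
>     # cinder.conf -- point Cinder at the in-tree Folsom NFS driver
>     volume_driver = cinder.volume.nfs.NfsDriver
>     # file listing one NFS export per line, e.g. filer01:/export/cinder
>     nfs_shares_config = /etc/cinder/shares.conf
>     # where the exports get mounted on the cinder-volume host
>     nfs_mount_point_base = /var/lib/cinder/nfs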
>
>> Also, I'd like to know if there's any way it could be improved or
>> enhanced to better fit their needs.
> There are several features being discussed here, as far as I can see:
>
> 1. Integrating with an existing distributed filesystem such as GPFS, CXFS
> or Lustre. In this case I think the driver would be pointed at a
> directory, and subfolders of that directory then exported to VMs via NFS
> (see the sketch after this list). This isn't possible with any of the
> existing drivers, but wouldn't be difficult to add to the new NetApp NFS
> one.
> 2. Allowing VMs to share a filesystem. This can be done by mounting a
> block device through Cinder and then running NFS on top of it, but that
> carries both a performance overhead and a management overhead for users.
> This appears to be what the new NFS drivers accomplish by exporting
> slices of LVM or slices of a NetApp appliance.
> 3. Using an existing NFS appliance to serve block device volumes. This is
> addressed by the Folsom NFS driver, from what I can tell.
>
>> I'd be more interested in going with option 1, at least for Grizzly, and
>> then depending on the participation and feedback we could decide that
>> it's not enough and we need to do option 2 for the F release
> Do you mean the H release?
>
> - Michael
>
>
> On Sun, Nov 25, 2012 at 6:26 AM, John Griffith <john.griffith at solidfire.com> wrote:
>
>>
>> On Thu, Nov 22, 2012 at 12:29 AM, Blair Bethwaite <blair.bethwaite at gmail.com> wrote:
>>
>>> (Apologies if this doesn't format well - I copied and pasted from the
>>> HTML archive as I haven't received the associated digest yet.)
>>>
>>> Hi Trey,
>>>
>>> On Thu Nov 22 03:33:20 UTC 2012, "Trey Duskin" <trey at maldivica.com> wrote:
>>> > Forgive the ignorant question, but why is Cinder the only option for
>>> > the backing for "file system as a service" when there is also Swift?
>>> > The blueprint that NetApp wrote up for this mentioned Swift would not
>>> > be suitable, but did not explain why.
>>>
>>> I don't think they made their thinking very clear in the blueprint, but
>>> I understand where they're coming from, and Michael Chapman has already
>>> summed it up nicely: "block storage and network shares are both
>>> instances of granting individual VMs access to slices of storage
>>> resources, and as such belong under a single project". Object storage is
>>> fundamentally different because it is a model that has evolved
>>> specifically to cater for content download/upload over the Internet, as
>>> opposed to storage access within a cloud deployment. Additionally, when
>>> you start thinking about the API operations you'd want for volumes and
>>> shares, you see a lot of similarities; not so much for object storage.
>>>
>>> > I don't know much about the Cinder features and limitations, but in
>>> > the use case of sharing and persisting large datasets among compute
>>> > instances, it seems to me Swift would provide the needed scalability
>>> > and durability.
>>>
>>> It doesn't...
>>>
>>> It certainly does provide scalability and durability for persisting
>>> large datasets - you can tar things up and use it like a tape backup -
>>> but only for applications that understand, or can easily be made to
>>> understand, object storage.
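>>>
>>> For what it's worth, that pattern is simple enough. A rough sketch using
>>> python-swiftclient (the auth endpoint, credentials and container name
>>> are made up for illustration):
>>>
>>>     import io
>>>     import tarfile
>>>     from swiftclient import client   # python-swiftclient
>>>
>>>     conn = client.Connection(authurl='http://keystone:5000/v2.0',
>>>                              user='tenant:user', key='secret',
>>>                              auth_version='2')
>>>     # tar up the dataset in memory and push it as a single object
>>>     buf = io.BytesIO()
>>>     with tarfile.open(fileobj=buf, mode='w:gz') as tar:
>>>         tar.add('/data/dataset')
>>>     buf.seek(0)
>>>     conn.put_object('backups', 'dataset.tar.gz', contents=buf)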
>>>
>>> It does sharing in part, but not efficiently, and certainly nothing like
>>> a filesystem-based share which multiple clients can coordinate on.
>>>
>>> To illustrate, here are a couple of use cases we want to be able to
>>> fulfil which we can't (without serious back-injury as a result of
>>> jumping through hoops...) at the moment:
>>> * Run an _existing_ Apache site from a guest that serves mostly static
>>> content with occasional changes (seems great for Swift, right? except...),
>>> where the content is largely file-based hierarchical datasets currently
>>> sitting on an HSM system and too large to fit into the available
>>> ephemeral disks or volumes.
>>> * Host visualisation services/desktops for large datasets produced on a
>>> local HPC facility and sitting on its GPFS. This requires much better
>>> bandwidth and latency than we can get using sshfs or NFS into the guest;
>>> attempting to do it over HTTP with Swift isn't going to get us that.
>>>
>>> --
>>> Cheers,
>>> ~Blairo
>>>
>>>
>> Hey Everyone,
>>
>> So first off, the question is not whether NAS functionality is useful or
>> something that some folks would like to have. The question is whether it
>> belongs in Cinder. My stance has been no; however, if there is enough
>> interest, and folks willing to implement and support it, then we can
>> move forward and see how things go.
>>
>> One question I have is whether anybody has looked at the NFS "adapter"
>> that was already added in Folsom. It seems that it might actually address
>> at least some of the needs that have been raised here. To be honest, I
>> had hoped to use it as a gauge of real interest/demand for NAS support,
>> and I haven't seen any mention of it anywhere (patches, bugs, tests,
>> questions etc.).
>>
>> Going forward, I have some concerns about the patch and the approach
>> that's currently being taken to implement this. Maybe they're
>> unwarranted, but I am concerned about mixing a NAS API, manager and RPC
>> into the existing volume code. It seems we could organize this better and
>> save some confusion, and quite frankly some maintenance headache, if we
>> went about it a bit differently. I'm also curious how this is intended
>> to be gated and tested.
>>
>> The way I see it, there are a couple of possible approaches that can be
>> taken here:
>>
>> 1. Continue with the NFS driver approach that we started on:
>> This isn't the greatest option in terms of feature set; however, I think
>> with a bit more attention and feedback from folks it could be viable. The
>> concept is that we abstract the NAS specifics away in the driver, and
>> from the perspective of the API and the rest of the Cinder project we
>> just treat it as a volume, as we do with everything else today. To
>> improve upon what's there now, one of the first steps would be creating
>> a new connection type, and there would be some work needed on the Nova
>> side to consume a network share (a rough sketch of what I mean follows
>> below). I think this can be done fairly cleanly. It doesn't solve the
>> testing problems, but on the other hand it doesn't touch as much of the
>> core Cinder code, so some of my concerns there are addressed.
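>>
>> To be clear about what I mean by a new connection type: the dict below
>> is a rough sketch of the connection info a share-aware driver could hand
>> back for Nova to consume. The 'nfs' type and the key names are my
>> assumptions, modelled on the shape of the existing iSCSI connection
>> dicts, not an API that exists today:
>>
>>     # hypothetical return value of a driver's initialize_connection()
>>     connection_info = {
>>         'driver_volume_type': 'nfs',   # the proposed new connection type
>>         'data': {
>>             # export path the compute host (or guest) would mount
>>             'export': 'filer01:/exports/share-7f3a',
>>             'options': 'rw,noatime',
>>         },
>>     }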
>>
>> 2. Go all out with NAS support in Cinder as a separate service:
>> This is probably the "right" answer: rather than wedge NAS support into
>> all of the existing Cinder code, the idea would be to make it a separate
>> Cinder service. The nice thing about this is that you can run both Block
>> Storage and NAS on the same Cinder node at the same time, which helps
>> with gating etc. It would look similar to what Nova-Volume looked like
>> inside of Nova: clear separation in the APIs, managers, drivers and so
>> on (see the layout sketch below). This is a bit more work than option 1,
>> but if there really is demand for a robust NAS service, this is the way
>> to go about it in my opinion. Not only does it provide a bit of freedom,
>> it also provides a good architecture for separation if anybody is ever
>> interested in doing the work to start an independent NAS project.
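>>
>> For the sake of discussion, the separation might look something like the
>> layout below. The 'share' names are placeholders I'm making up to mirror
>> the existing volume service, not code that exists today:
>>
>>     cinder/volume/  api.py, manager.py, driver.py   (existing)
>>     cinder/share/   api.py, manager.py, driver.py   (new, parallel)
>>     bin/cinder-volume   existing service binary
>>     bin/cinder-share    new service binary; can run on the same node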
>>
>> So, from my perspective I'd be interested in feedback regarding option 1.
>> I'm interested to know whether folks who have expressed a desire for NFS
>> support have looked at what was introduced in Folsom. Also, I'd like to
>> know if there's any way it could be improved or enhanced to better fit
>> their needs. I'd be more interested in going with option 1, at least for
>> Grizzly, and then depending on the participation and feedback we could
>> decide that it's not enough and we need to do option 2 for the F release,
>> or perhaps another option altogether may become evident.
>>
>> Thanks,
>> John
>>
>>
>
>
> --
> Michael Chapman
> Cloud Computing Services
> ANU Supercomputer Facility
> Room 318, Leonard Huxley Building (#56), Mills Road
> The Australian National University
> Canberra ACT 0200 Australia
> Tel: +61 2 6125 7106
> Web: http://nci.org.au
>
>
>
> Do you mean the H release?
Yes... sorry, that's the second time in a week I've done that. I do know
the alphabet, really. :)