[Openstack] Why GlusterFS should not be integrated with OpenStack

John Mark Walker johnmark at johnmark.org
Tue Sep 10 13:49:55 UTC 2013


Jeff Darcy, GlusterFS developer, responded here:
https://news.ycombinator.com/item?id=6359851

I'm not sure what the OP was trying to achieve, given that he didn't fully
understand what he was talking about.

Pasting Jeff's comments:

GlusterFS developer here. The OP is extremely misleading, so I'll try to
set the record straight.

(1) Granted, snapshots (volume or file level) aren't implemented yet. OTOH,
there are two projects for file-level snapshots that are far enough along
to have patches in either the main review queue or the community forge.
Volume-level snapshots are a little further behind. Unsurprisingly,
snapshots in a distributed filesystem are hard, and we're determined to get
them right before we foist some half-baked result on users and risk losing
their data.

(2) The author seems very confused about the relationship between bricks
(storage units) and servers used for mounting. The mount server is used
*once* to fetch a configuration; after that the client connects directly
to the bricks. There is no need to specify all of the bricks on the mount
command; one need only specify enough servers - two or three - to handle
one being down *at mount time*. Round-robin DNS (RRDNS) can also help here.
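
To make that concrete, a minimal mount sketch (the volume name "myvol" and
the hosts "server1"/"server2" are placeholders, and the exact name of the
backup-server mount option varies between GlusterFS releases):

  # server1 is contacted only to fetch the volume layout; after that the
  # client talks to every brick directly.
  mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol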

(3) Lack of support for login/password authentication. This has not been
true in the I/O path for a very long time; it only affects the CLI, which
should only be run from the servers themselves (or similarly secure hosts)
anyway, not from arbitrary hosts. Full SSL-based authentication is already
an accepted feature for GlusterFS 3.5, and some of the patches are in
progress. Other management interfaces already have stronger auth.
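
For the I/O path, per-volume access control is available today; a small
sketch (the volume name and address range are placeholders):

  # Only clients from this network may mount and perform I/O on the volume
  gluster volume set myvol auth.allow "192.168.10.*"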

(4) Volumes can be mounted R/W from many locations. This is actually a
strength, since volumes are filesystems rather than raw block devices.
Unlike some alternatives, GlusterFS provides true multi-protocol access -
not just different silos for different interfaces within the same
infrastructure, but the *same data* accessible via (deep breath) native
protocol, NFS, SMB, Swift, Cinder, Hadoop FileSystem API, or raw C API.
It's up to the cloud infrastructure (e.g. Nova) not to mount the same
block-storage device from multiple locations, *just as with every
alternative*.
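
A rough illustration of the multi-protocol point (host and volume names
are placeholders; the NFS export below uses the NFSv3 server built into
GlusterFS):

  # Native FUSE client
  mount -t glusterfs server1:/myvol /mnt/native
  # The same volume, and the same files, over NFSv3
  mount -t nfs -o vers=3 server1:/myvol /mnt/nfs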

(5) What's even more damning than what the author says is what the author
doesn't say. There are benefits to having full POSIX semantics so that
hundreds of thousands of programs and scripts that don't speak other
storage APIs can use the data. There are benefits to having the same data
available through many protocols. There are benefits to having data that's
shared at a granularity finer than whole-object GET and PUT, with familiar
permissions and ACLs. There are benefits to having a system where any new
feature - e.g. georeplication, erasure coding, deduplication - immediately
becomes available across all access protocols. Every performance comparison
I've seen vs. obvious alternatives has either favored GlusterFS or revealed
cheating (e.g. buffering locally or throwing away O_SYNC) by the
competitor. Or both. Of course, the OP has already made up his mind so he
doesn't mention any of this.
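
To illustrate the first of those benefits: because the mount point is an
ordinary POSIX filesystem, existing tools work unchanged and no
storage-specific SDK is involved (the paths below are placeholders):

  rsync -a /var/log/app/ /mnt/native/logs/
  grep -r "ERROR" /mnt/native/logs/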

It's perfectly fine that the author prefers something else. He mentions
Ceph. I love Ceph. I also love XtreemFS, which hardly anybody seems to know
about and that's a shame. We're all on the same side, promoting open-source
horizontally scalable filesystems vs. worse alternatives - proprietary
storage, non-scalable storage, storage that can't be mounted and used in
familiar ways by normal users. When we've won that battle we can fight over
the spoils. ;) The point is that *even for a Cinder use case* the author's
preferences might not apply to anyone else, and they certainly don't apply
to many of the more general use cases that all of these systems are
designed to support.



On Tue, Sep 10, 2013 at 9:15 AM, Diego Parrilla Santamaría <
diego.parrilla.santamaria at gmail.com> wrote:

> You are describing the problems of using a shared filesystem backend for
> Cinder, instead of using a driver with a direct connection at the
> block-device level.
>
> It has improved a lot in the last 18 months or so, especially if you want
> to use it as shared storage for your VMs.
>
> It seems the snapshotting feature is on the way:
>
> https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/glusterfs.py
>
> But the killer feature is direct access from QEMU to Gluster using
> libgfapi. It seems it has been added in Havana and has been in the master
> branch since mid-August:
> https://review.openstack.org/#/c/39498/
>
> If I had to choose a scalable storage solution for an OpenStack
> deployment for the next 10 years, I would consider Gluster.
>
> Cheers
> Diego
>
> --
> Diego Parrilla
> CEO | www.stackops.com <http://www.stackops.com/>
> diego.parrilla at stackops.com | +34 649 94 43 29 | skype:diegoparrilla
>
>
>
> On Tue, Sep 10, 2013 at 2:36 PM, Maciej Gałkiewicz
> <macias at shellycloud.com> wrote:
>
>> Hello
>>
>> For everyone looking for some information regarding GlusterFS and
>> OpenStack integration, I suggest my blog post:
>>
>> https://shellycloud.com/blog/2013/09/why-glusterfs-should-not-be-implemented-with-openstack
>>
>> regards
>> --
>> Maciej Gałkiewicz
>> Shelly Cloud Sp. z o. o., Sysadmin
>> http://shellycloud.com/, macias at shellycloud.com
>> KRS: 0000440358 REGON: 101504426
>>

