[Openstack-operators] Advice about distributed FS for glance/nova

Abel Lopez alopgeek at gmail.com
Thu Nov 28 12:19:04 UTC 2013


+1 for Ceph. You get additional benefits by storing raw images in the rbd
backend for glance: nova can then boot instances from copy-on-write clones
of those images instead of copying them in full.
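
For reference, a minimal sketch of what the rbd store settings looked like
in glance-api.conf around that release. The pool and cephx user names here
are assumptions; adjust them to match your Ceph deployment:

```ini
# glance-api.conf -- store images in Ceph RBD
[DEFAULT]
default_store = rbd
# cephx user glance authenticates as (assumed name)
rbd_store_user = glance
# Ceph pool holding the image objects (assumed name)
rbd_store_pool = images
# Path to the cluster config so glance can find the monitors
rbd_store_ceph_conf = /etc/ceph/ceph.conf
```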

On Thursday, November 28, 2013, Daniel Ankers wrote:

> Alvise,
>
>
> On 28 November 2013 08:44, Alvise Dorigo <alvise.dorigo at pd.infn.it>
> wrote:
>
>> My requirements are:
>>  - extendability (new disk array transparently added without any outage)
>>  - high performance on big files (glance and nova usually manage big
>> files)
>>  - POSIX / mount as a normal filesystem with unique naming space
>>  - easy to install/manage (and supported on RHEL 6.x)
>>
>> Any advice or report on your experience would be very appreciated.
>>
>
> You may be including this under the "easy to install/manage" part, but
> I'd strongly recommend adding "hitless upgrades" to that list.
> This isn't much of an issue with mature systems like NFS, but newer
> systems like Gluster and Ceph are moving forward at a tremendous rate,
> and when the time comes to upgrade you probably won't want to take the
> whole shared storage offline in order to do it.
>
> I'm still working on my OpenStack cluster, but I have some experience
> with Gluster and NFS in other environments.  It is non-trivial to run
> NFS without a single point of failure, but apart from that it is very
> well documented, well understood, and mature.  Gluster is fully fault
> tolerant, but I have seen issues with its self-healing where virtual
> machines are blocked from writing to their virtual disks for over two
> minutes at a time.  Upgrades to Gluster can be scary, and if you use
> replication it is non-trivial to ensure that the appropriate replica
> pairs are formed, especially after a failure (for example, with two
> racks of machines you might want every brick in rack 1 replicated to a
> server in rack 2).  You can get full commercial support for Gluster
> from Red Hat, which may be an advantage over other filesystems.
>
> Regards,
> Dan
>
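
Dan's point about replica pairs follows from how glusterfs builds replica
sets: with `replica 2`, each consecutive pair of bricks in the volume
definition becomes one replica pair. A sketch of a rack-aware layout
(hostnames and brick paths below are illustrative only):

```shell
# Alternate racks in the brick list so every consecutive pair -- and
# therefore every replica pair -- is split across rack 1 and rack 2.
gluster volume create vmstore replica 2 \
    rack1-node1:/export/brick1 rack2-node1:/export/brick1 \
    rack1-node2:/export/brick1 rack2-node2:/export/brick1
```

After a failed server is replaced, the brick ordering has to be preserved
(or explicitly repaired) to keep the pairs split across racks, which is
why recovery is the tricky part.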

