[Openstack-operators] Shared storage HA question

Jacob Godin jacobgodin at gmail.com
Thu Jul 25 09:48:55 UTC 2013


Hi all,

Will try and get some time to do that Windows testing today. I haven't
attempted that one; I've just used benchmarking tools.

Razique, I think the MooseFS client does some local buffering? I'm not 100%
sure on that, but I know it isn't truly synchronous. MooseFS doesn't do
direct writes and then wait for a response from each replica. Gluster does,
which makes it slower but technically more reliable.

Again, this is just my understanding from reading about MFS; you would probably
know better than I do :)
On Jul 25, 2013 6:39 AM, "Razique Mahroua" <razique.mahroua at gmail.com>
wrote:

> I'll try the boot-from-volume feature and see if it makes any significant
> difference (fingers crossed). I'm wondering about two things, though:
>
> - How come I never had such issues (to be validated) with MooseFS, while its
> mfsmount is also FUSE-based?
> - Does that mean the issue is not so much about Gluster, but rather about all
> FUSE-based filesystems?
>
> regards,
>
> Razique Mahroua - Nuage & Co
> razique.mahroua at gmail.com
> Tel : +33 9 72 37 94 15
>
>
> On Jul 25, 2013, at 09:22, Sylvain Bauza <sylvain.bauza at bull.net> wrote:
>
>  Hi Denis,
>
> Based on my short testing, I would assume anything but a FUSE-mounted
> filesystem would match your needs.
>
> On the performance side, here is what I would suggest:
>  - if using GlusterFS, wait for this BP [1] to be implemented. I do agree
> with Razique on the issues you could face with GlusterFS; they are mainly
> due to the Windows caching system combined with QCOW2 copy-on-write images
> sitting on a FUSE mountpoint.
>  - if using Ceph, boot from Cinder volumes backed by RBD (RADOS block
> devices). Again, don't use a FUSE mountpoint (a minimal config sketch is
> below).
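>
> For reference, a rough sketch of the Cinder side of the Ceph option,
> assuming an RBD pool named "volumes" and a cephx user named "cinder"
> (both names are only examples, not something already in place):
>
>     # cinder.conf on the cinder-volume node
>     volume_driver = cinder.volume.drivers.rbd.RBDDriver
>     rbd_pool = volumes                       # example pool name
>     rbd_user = cinder                        # example cephx user
>     rbd_secret_uuid = <libvirt secret uuid>  # libvirt secret holding the cephx key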
>
> -Sylvain
>
> [1] https://blueprints.launchpad.net/nova/+spec/glusterfs-native-support
>
>
>
> On 25/07/2013 08:16, Denis Loshakov wrote:
>
> So, first I'm going to try Ceph.
> Thanks for the advice, and let the RTFM begin :)
>
> On 24.07.2013 23:18, Razique Mahroua wrote:
>
> +1 :)
>
>
> On Jul 24, 2013, at 21:08, Joe Topjian <joe.topjian at cybera.ca> wrote:
>
> Hi Jacob,
>
> Are you using SAS or SSD drives for Gluster? Also, do you have one
> large Gluster volume across your entire cloud, or is it broken up into a
> few different ones? I've wondered if there's a benefit to the latter, so
> that distribution activity is isolated to only a few nodes. The downside,
> of course, is that you're limited in which compute nodes instances can
> migrate to.
>
> I use Gluster for instance storage in all of my "controlled"
> environments, like internal and sandbox clouds, but I'm hesitant to
> introduce it into production environments, as I've seen the same issues
> that Razique describes -- especially with Windows instances. My guess
> is that it's due to how NTFS writes to disk.
>
> I'm curious if you could report the results of the following test: in
> a Windows instance running on Gluster, copy a 3-4 GB file to it from
> the local network so it comes in at a very high speed. When I do this,
> the first few gigs come in very fast, but then the transfer slows to a
> crawl and the Gluster processes on all nodes spike.
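>
> If it helps, this is roughly how I run that test, assuming the Windows
> guest exposes an SMB share named "share" (the names and paths here are
> just examples):
>
>     # on a Linux box on the same local network, create a ~4 GB file of random data
>     dd if=/dev/urandom of=/tmp/testfile.bin bs=1M count=4096
>     # push it to the Windows instance over SMB
>     smbclient //WINDOWS-GUEST/share -U someuser -c 'put /tmp/testfile.bin testfile.bin'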
>
> Thanks,
> Joe
>
>
>
> On Wed, Jul 24, 2013 at 12:37 PM, Jacob Godin <jacobgodin at gmail.com> wrote:
>
>     Oh really, you've done away with Gluster altogether? The fast
>     backbone is definitely needed, but I would think that's the case
>     with any distributed filesystem.
>
>     MooseFS looks promising, but apparently it has a few reliability
>     problems.
>
>
>     On Wed, Jul 24, 2013 at 3:31 PM, Razique Mahroua
>     <razique.mahroua at gmail.com> wrote:
>
>         :-)
>         Actually, I had to remove all my instances running on it
>         (especially the Windows ones). Yeah, unfortunately my network
>         backbone wasn't fast enough to support the load induced by
>         GlusterFS - especially the numerous operations performed by the
>         self-healing agents :(
>
>         I'm currently considering MooseFS; it has the advantage of a
>         pretty long list of companies using it in production.
>
>         take care
>
>
>         On Jul 24, 2013, at 16:40, Jacob Godin <jacobgodin at gmail.com> wrote:
>
>         A few things I found were key for I/O performance:
>
>          1. Make sure your network can sustain the traffic. We are
>             using a 10G backbone with 2 bonded interfaces per node.
>          2. Use high speed drives. SATA will not cut it.
>          3. Look into tuning settings. Razique, thanks for sending
>             these along to me a little while back. A couple that I
>             found useful (example commands below):
>               * KVM cache=writeback (a little risky, but WAY faster)
>               * Gluster write-behind-window-size (set to 4MB in our
>                 setup)
>               * Gluster cache-size (ideal values in our setup were
>                 96MB-128MB)
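>
>         In case it's useful, here's roughly how we apply those. The
>         volume name "nova-instances" and the exact values are just
>         examples from our setup; double-check the option names against
>         your Gluster and Nova releases:
>
>             # Gluster tuning, run from any node hosting a brick
>             gluster volume set nova-instances performance.write-behind-window-size 4MB
>             gluster volume set nova-instances performance.cache-size 128MB
>
>             # KVM writeback caching, set in nova.conf on each compute node
>             # (faster, but data in the host page cache can be lost on a crash)
>             disk_cachemodes="file=writeback"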
>
>         Hope that helps!
>
>
>
>         On Wed, Jul 24, 2013 at 11:32 AM, Razique Mahroua
>         <razique.mahroua at gmail.com> wrote:
>
>             I had a lot of performance issues myself with Windows
>             instances and I/O-demanding instances. Make sure it fits
>             your environment first before deploying it in production.
>
>             Regards,
>             Razique
>
>             Razique Mahroua - Nuage & Co
>             razique.mahroua at gmail.com
>             Tel : +33 9 72 37 94 15
>
>
>             On Jul 24, 2013, at 16:25, Jacob Godin
>             <jacobgodin at gmail.com> wrote:
>
>             Hi Denis,
>
>             I would take a look at GlusterFS with a distributed,
>             replicated volume. We have been using it for several
>             months now, and it has been stable. Nova will need to
>             have the volume mounted at its instances directory
>             (default /var/lib/nova/instances; see the mount sketch
>             below), and Cinder has had direct support for Gluster
>             since Grizzly, I believe.
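>
>             A minimal sketch of that mount on each compute node,
>             assuming a Gluster server "gluster1" and a volume named
>             "nova-instances" (both names are hypothetical):
>
>                 # mount the Gluster volume at Nova's instances directory
>                 mount -t glusterfs gluster1:/nova-instances /var/lib/nova/instances
>
>                 # or persistently, via /etc/fstab:
>                 gluster1:/nova-instances  /var/lib/nova/instances  glusterfs  defaults,_netdev  0 0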
>
>
>
>             On Wed, Jul 24, 2013 at 11:11 AM, Denis Loshakov
>             <dloshakov at gmail.com> wrote:
>
>                 Hi all,
>
>                 I have an issue with creating shared storage for
>                 OpenStack. The main idea is to create 100% redundant
>                 shared storage from two servers (a kind of network
>                 RAID across two servers).
>                 I have two identical servers with many disks inside.
>                 What solution can anyone suggest for such a setup? I
>                 need shared storage for running VMs (so live
>                 migration can work) and also for cinder-volumes.
>
>                 One solution is to install Linux on both servers and
>                 use DRBD + OCFS2 (rough sketch below); any comments
>                 on this?
>                 I have also heard about the Quadstor software, which
>                 can create a network RAID and present it via iSCSI.
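>
>                 The DRBD part would look roughly like this (only a
>                 sketch; the resource name, devices, hostnames, and
>                 addresses are placeholders, and dual-primary would
>                 also need proper fencing before putting OCFS2 on top):
>
>                     # /etc/drbd.d/shared.res, identical on both servers
>                     resource shared {
>                       protocol C;              # synchronous replication
>                       net {
>                         allow-two-primaries;   # needed for a cluster FS such as OCFS2
>                       }
>                       on node1 {
>                         device    /dev/drbd0;
>                         disk      /dev/sdb1;
>                         address   192.168.1.1:7789;
>                         meta-disk internal;
>                       }
>                       on node2 {
>                         device    /dev/drbd0;
>                         disk      /dev/sdb1;
>                         address   192.168.1.2:7789;
>                         meta-disk internal;
>                       }
>                     }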
>
>                 Thanks.
>
>                 P.S. Glance uses Swift and is set up on other servers.
>
>
>
>
>
>
>
>
>
>
>
> --
> Joe Topjian
> Systems Architect
> Cybera Inc.
>
> www.cybera.ca
>
> Cybera is a not-for-profit organization that works to spur and support
> innovation, for the economic benefit of Alberta, through the use
> of cyberinfrastructure.
>
>
>
>
>
>
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>