<p dir="ltr">Hi all,</p>
<p dir="ltr">Will try and get some time to do that Windows testing today. I haven't attempted that one, have just used benchmarking tools.</p>
<p dir="ltr">Razique, I think the moosefs client does some local buffering? I'm not 100% on that, but I know that it isn't truly synchronous. Moosefs doesn't do direct writes and then wait for a response from each replica. Gluster does this, making it slower, but technically more reliable.</p>

<p dir="ltr">Again, this is my understanding from reading about MFS, you would probably know better than me :)</p>
<div class="gmail_quote">On Jul 25, 2013 6:39 AM, "Razique Mahroua" <<a href="mailto:razique.mahroua@gmail.com">razique.mahroua@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word">I'll try the boot from volume feature and see if it makes any significant difference. (fingers crossed) - I'm wondering though two things :<div><br><div>- How come I never had such issues (to validate) with MooseFS while it's the mfsmount is FUSE-based?</div>
<div>- Does that mean the issue is not si much about Gluster, rather all FUSE-based FS? </div><div><div><div><br></div><div>regards,</div><div><br></div><div><div>
<span style="border-spacing:0px;text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:-webkit-auto;font-style:normal;font-weight:normal;line-height:normal;border-collapse:separate;text-transform:none;font-size:medium;white-space:normal;font-family:'Lucida Grande';word-spacing:0px"><span style="font-weight:normal;font-family:Helvetica"><b style="color:rgb(19,112,138)">Razique Mahroua</b></span><span style="font-weight:normal;font-family:Helvetica;color:rgb(19,112,138)"><b> - </b></span><span style="font-family:Helvetica"><span style="font-weight:normal;font-family:Helvetica"><b style="color:rgb(19,112,138)">Nuage & Co</b></span><span style="border-collapse:separate;font-family:Helvetica;font-style:normal;font-variant:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;border-spacing:0px;font-size:medium"><span style="border-spacing:0px;text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:-webkit-auto;font-style:normal;font-weight:normal;line-height:normal;border-collapse:separate;text-transform:none;font-size:medium;white-space:normal;font-family:Helvetica;word-spacing:0px"><span style="border-collapse:separate;font-variant:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;border-spacing:0px"><div style="font-style:normal;font-size:medium;font-family:Helvetica;font-weight:normal">
<font color="#13708a"><a href="mailto:razique.mahroua@gmail.com" target="_blank">razique.mahroua@gmail.com</a></font></div><div style="font-style:normal;font-size:medium;font-family:Helvetica"><font color="#13708a">Tel : <a href="tel:%2B33%209%2072%2037%2094%2015" value="+33972379415" target="_blank">+33 9 72 37 94 15</a></font></div>
</span></span></span></span></span><br style="text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:-webkit-auto;font-style:normal;font-weight:normal;line-height:normal;text-transform:none;font-size:medium;white-space:normal;font-family:Arial;word-spacing:0px">
<span><img height="125" width="125" src="cid:0A2450C8-6A0D-42D0-8035-743CAD564432@fabrique.lan"></span>
</div>
<br><div><div>On 25 Jul 2013, at 09:22, Sylvain Bauza <<a href="mailto:sylvain.bauza@bull.net" target="_blank">sylvain.bauza@bull.net</a>> wrote:</div><br><blockquote type="cite">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    <div>Hi Denis,<br>
      <br>
      Based on my brief testing, I would assume anything but a FUSE-mounted
      filesystem would match your needs.<br>
      <br>
      Performance-wise, here is what I would suggest:<br>
       - if using GlusterFS, wait for this blueprint [1] to be implemented. I
      do agree with Razique on the issues you could face with GlusterFS;
      they are mainly due to the Windows caching system combined with QCOW2
      copy-on-write images sitting on a FUSE mountpoint.<br>
       - if using Ceph, use RADOS to boot from Cinder volumes (a rough sketch
      follows below). Again, don't use a FUSE mountpoint.<br>
      <br>
      -Sylvain<br>
      <br>
      [1]
      
      <a href="https://blueprints.launchpad.net/nova/+spec/glusterfs-native-support" target="_blank">https://blueprints.launchpad.net/nova/+spec/glusterfs-native-support</a><br>
      <br>
      <br>
      <br>
      On 25/07/2013 08:16, Denis Loshakov wrote:<br>
    </div>
    <blockquote type="cite">So,
      first i'm going to try Ceph.
      <br>
      Thanks for advices and lets RTFM begin :)
      <br>
      <br>
      On <a href="tel:24.07.2013%2023" value="+12407201323" target="_blank">24.07.2013 23</a>:18, Razique Mahroua wrote:
      <br>
      <blockquote type="cite">+1 :)
        <br>
        <br>
        <br>
        On 24 Jul 2013, at 21:08, Joe Topjian <<a href="mailto:joe.topjian@cybera.ca" target="_blank">joe.topjian@cybera.ca</a>
        <br>
        <a href="mailto:joe.topjian@cybera.ca" target="_blank"><mailto:joe.topjian@cybera.ca></a>> wrote:
        <br>
        <br>
        <blockquote type="cite">Hi Jacob,
          <br>
          <br>
          Are you using SAS or SSD drives for Gluster? Also, do you have
          one
          <br>
          large Gluster volume across your entire cloud, or is it broken
          up into a
          <br>
          few different ones? I've wondered if there's a benefit to
          doing the
          <br>
          latter so distribution activity is isolated to only a few
          nodes. The
          <br>
          downside to that, of course, is you're limited to what compute
          nodes
          <br>
          instances can migrate to.
          <br>
          <br>
          I use Gluster for instance storage in all of my "controlled"
          <br>
          environments like internal and sandbox clouds, but I'm
          hesitant to
          <br>
          introduce it into production environments as I've seen the
          same issues
          <br>
          that Razique describes -- especially with Windows instances.
          My guess
          <br>
          is that it's due to how NTFS writes to disk.
          <br>
          <br>
          I'm curious if you could report the results of the following
          test: in
          <br>
          a Windows instance running on Gluster, copy a 3-4 GB file to it
          from
          <br>
          the local network so it comes in at a very high speed. When I
          do this,
          <br>
          the first few gigs come in very fast, but then it slows to a
          crawl and
          <br>
          the Gluster processes on all nodes spike.
          <br>
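          For what it's worth, here is roughly how I watch for that spike on
          the Gluster nodes while the copy runs inside the guest (just a
          sketch; process names can vary by packaging):
          <br>
          <pre>
# Run on each Gluster/compute node while the large copy is in progress;
# both the brick daemons and the FUSE client mounts show the CPU spike.
watch -n 2 'ps -o pid,pcpu,pmem,args -C glusterfsd,glusterfs'
</pre>
          <br>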
          <br>
          Thanks,
          <br>
          Joe
          <br>
          <br>
          <br>
          <br>
          On Wed, Jul 24, 2013 at 12:37 PM, Jacob Godin
          <<a href="mailto:jacobgodin@gmail.com" target="_blank">jacobgodin@gmail.com</a>
          <br>
          <a href="mailto:jacobgodin@gmail.com" target="_blank"><mailto:jacobgodin@gmail.com></a>> wrote:
          <br>
          <br>
              Oh really, you've done away with Gluster altogether? The
          fast
          <br>
              backbone is definitely needed, but I would think that was
          the case
          <br>
              with any distributed filesystem.
          <br>
          <br>
              MooseFS looks promising, but apparently it has a few
          reliability
          <br>
              problems.
          <br>
          <br>
          <br>
              On Wed, Jul 24, 2013 at 3:31 PM, Razique Mahroua
          <br>
              <<a href="mailto:razique.mahroua@gmail.com" target="_blank">razique.mahroua@gmail.com</a>
          <a href="mailto:razique.mahroua@gmail.com" target="_blank"><mailto:razique.mahroua@gmail.com></a>> wrote:
          <br>
          <br>
                  :-)
          <br>
                  Actually I had to remove all my instances running on
          it
          <br>
                  (especially the Windows ones); yeah, unfortunately my
          network
          <br>
                  backbone wasn't fast enough to support the load
          induced by GFS
          <br>
                  - especially the numerous operations performed by the
          <br>
                  self-healing agents :(
          <br>
          <br>
                  I'm currently considering MooseFS; it has the
          advantage of
          <br>
                  having a pretty long list of companies using it in
          production.
          <br>
          <br>
                  take care
          <br>
          <br>
          <br>
                  On 24 Jul 2013, at 16:40, Jacob Godin
          <<a href="mailto:jacobgodin@gmail.com" target="_blank">jacobgodin@gmail.com</a>
          <br>
                  <a href="mailto:jacobgodin@gmail.com" target="_blank"><mailto:jacobgodin@gmail.com></a>> wrote:
          <br>
          <br>
          <blockquote type="cite">        A few things I found were key
            for I/O performance:
            <br>
            <br>
                     1. Make sure your network can sustain the traffic.
            We are
            <br>
                        using a 10G backbone with 2 bonded interfaces
            per node.
            <br>
                     2. Use high speed drives. SATA will not cut it.
            <br>
                     3. Look into tuning settings. Razique, thanks for
            sending
            <br>
                        these along to me a little while back. A couple
            that I
            <br>
                        found were useful (rough commands are sketched below):
            <br>
                          * KVM cache=writeback (a little risky, but WAY
            faster)
            <br>
                          * Gluster write-behind-window-size (set to 4MB
            in our
            <br>
                            setup)
            <br>
                          * Gluster cache-size (ideal values in our
            setup were
            <br>
                            96MB-128MB)
            <br>
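            Here is roughly what those settings look like in practice. This is
            just a sketch; "vmstore" is a placeholder volume name and the
            sizes are the ones from our setup:
            <br>
            <pre>
# Gluster tuning on the volume that backs /var/lib/nova/instances
gluster volume set vmstore performance.write-behind-window-size 4MB
gluster volume set vmstore performance.cache-size 128MB

# KVM writeback caching: in the libvirt guest XML the disk driver ends
# up as &lt;driver name='qemu' type='qcow2' cache='writeback'/&gt;
# (how you set that from Nova depends on your release).
</pre>
            <br>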
            <br>
                    Hope that helps!
            <br>
            <br>
            <br>
            <br>
                    On Wed, Jul 24, 2013 at 11:32 AM, Razique Mahroua
            <br>
                    <<a href="mailto:razique.mahroua@gmail.com" target="_blank">razique.mahroua@gmail.com</a>
            <br>
                    <a href="mailto:razique.mahroua@gmail.com" target="_blank"><mailto:razique.mahroua@gmail.com></a>> wrote:
            <br>
            <br>
                        I had a lot of performance issues myself with
            Windows
            <br>
                        instances and with I/O-demanding instances. Make
            sure it fits
            <br>
                        your environment first before deploying it in
            production.
            <br>
            <br>
                        Regards,
            <br>
                        Razique
            <br>
            <br>
                        *Razique Mahroua** - **Nuage & Co*
            <br>
                        <a href="mailto:razique.mahroua@gmail.com" target="_blank">razique.mahroua@gmail.com</a>
            <a href="mailto:razique.mahroua@gmail.com" target="_blank"><mailto:razique.mahroua@gmail.com></a>
            <br>
                        Tel : <a href="tel:%2B33%209%2072%2037%2094%2015" value="+33972379415" target="_blank">+33 9 72 37 94 15</a>
            <br>
            <br>
            <br>
            <br>
                        On 24 Jul 2013, at 16:25, Jacob Godin
            <br>
                        <<a href="mailto:jacobgodin@gmail.com" target="_blank">jacobgodin@gmail.com</a>
            <a href="mailto:jacobgodin@gmail.com" target="_blank"><mailto:jacobgodin@gmail.com></a>> a
            <br>
                        écrit :
            <br>
            <br>
            <blockquote type="cite">            Hi Denis,
              <br>
              <br>
                          I would take a look into GlusterFS with a
              distributed,
              <br>
                          replicated volume. We have been using it for
              several
              <br>
                          months now, and it has been stable. Nova will
              need to
              <br>
                          have the volume mounted to its instances
              directory
              <br>
                          (default /var/lib/nova/instances), and Cinder
              has direct
              <br>
                          support for Gluster as of Grizzly I believe.
              <br>
              <br>
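              In practice that looks something like the sketch below. The
              volume and host names are placeholders, and the Cinder options
              are the ones from the Grizzly GlusterFS driver:
              <br>
              <pre>
# On every compute node: mount the replicated volume at the shared
# instances path so live migration can work.
mount -t glusterfs gluster1:/vmstore /var/lib/nova/instances
chown nova:nova /var/lib/nova/instances

# Cinder GlusterFS backend (cinder.conf):
#   volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
#   glusterfs_shares_config = /etc/cinder/shares.conf
echo "gluster1:/vmstore" &gt;&gt; /etc/cinder/shares.conf
</pre>
              <br>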
              <br>
              <br>
                          On Wed, Jul 24, 2013 at 11:11 AM, Denis
              Loshakov
              <br>
                          <<a href="mailto:dloshakov@gmail.com" target="_blank">dloshakov@gmail.com</a>
              <a href="mailto:dloshakov@gmail.com" target="_blank"><mailto:dloshakov@gmail.com></a>> wrote:
              <br>
              <br>
                              Hi all,
              <br>
              <br>
                              I have an issue with creating shared storage
              for
              <br>
                              OpenStack. The main idea is to create 100%
              redundant
              <br>
                              shared storage from two servers (kind of
              network
              <br>
                              RAID from two servers).
              <br>
                              I have two identical servers with many
              disks inside.
              <br>
                              What solution can anyone provide for such
              schema? I
              <br>
                              need shared storage for running VMs (so
              live
              <br>
                              migration can work) and also for
              cinder-volumes.
              <br>
              <br>
                              One solution is to install Linux on both
              servers and
              <br>
                              use DRBD + OCFS2; any comments on this?
              <br>
                              Also, I have heard about the Quadstor software;
              it can
              <br>
                              create a network RAID and present it via
              iSCSI.
              <br>
              <br>
                              Thanks.
              <br>
              <br>
                              P.S. Glance uses Swift and is set up on
              other servers.
              <br>
              <br>
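              To be concrete, the DRBD side of that idea would look roughly
              like this (a minimal, untested sketch in DRBD 8.3-style syntax;
              node names, devices and addresses are placeholders):
              <br>
              <pre>
# /etc/drbd.d/r0.res: dual-primary resource so that both servers can
# mount the shared OCFS2 filesystem at the same time.
resource r0 {
  protocol  C;             # fully synchronous replication
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  net {
    allow-two-primaries;   # required for OCFS2 mounted on both nodes
  }
  on node1 { address 10.0.0.1:7789; }
  on node2 { address 10.0.0.2:7789; }
}
</pre>
              <br>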
                             
              <br>
              <br>
                         
              <br>
            </blockquote>
            <br>
            <br>
          </blockquote>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          --
          <br>
          Joe Topjian
          <br>
          Systems Architect
          <br>
          Cybera Inc.
          <br>
          <br>
          <a href="http://www.cybera.ca/" target="_blank">www.cybera.ca</a> <a href="http://www.cybera.ca/" target="_blank"><http://www.cybera.ca/></a>
          <br>
          <br>
          Cybera is a not-for-profit organization that works to spur and
          support
          <br>
          innovation, for the economic benefit of Alberta, through the
          use
          <br>
          of cyberinfrastructure.
          <br>
        </blockquote>
        <br>
        <br>
        <br>
        <br>
        <br>
      </blockquote>
      <br>
      <br>
    </blockquote>
    <br>
  </div>

</blockquote></div><br></div></div></div></div></div><br>_______________________________________________<br>
OpenStack-operators mailing list<br>
<a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
<br></blockquote></div>