<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Hi Denis,<br>
      <br>
      Based on my brief testing, I would expect anything other than a
      FUSE-mounted backend to meet your needs.<br>
      <br>
      From a performance standpoint, here is what I would suggest:<br>
       - if using GlusterFS, wait for this blueprint [1] to be
      implemented. I agree with Razique on the issues you could face with
      GlusterFS; they are mainly due to the Windows caching behaviour
      combined with QCOW2 copy-on-write images sitting on a FUSE
      mountpoint.<br>
       - if using Ceph, use RBD (RADOS block devices) and boot from
      Cinder volumes - again, don't use a FUSE mountpoint. A rough
      cinder.conf sketch is below.<br>
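      <br>
      For reference, the RBD-backed Cinder setup looks roughly like the
      following sketch (pool name, user and secret UUID are placeholders
      to adapt to your own cluster, and the option names are worth
      checking against your release):<br>
      <br>
          # cinder.conf (illustrative values only)<br>
          volume_driver = cinder.volume.drivers.rbd.RBDDriver<br>
          rbd_pool = volumes<br>
          rbd_user = cinder<br>
          rbd_secret_uuid = &lt;UUID of the libvirt secret holding the cinder key&gt;<br>
      <br>
      With that in place, a guest booted from a Cinder volume goes through
      QEMU/librbd straight to RADOS, so no FUSE mountpoint is involved.<br>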
      <br>
      -Sylvain<br>
      <br>
      [1]
      <a
href="https://blueprints.launchpad.net/nova/+spec/glusterfs-native-support">https://blueprints.launchpad.net/nova/+spec/glusterfs-native-support</a><br>
      <br>
      <br>
      <br>
      On 25/07/2013 08:16, Denis Loshakov wrote:<br>
    </div>
    <blockquote cite="mid:51F0C2CA.9050002@gmail.com" type="cite">So,
      first I'm going to try Ceph.
      <br>
      Thanks for the advice, and let the RTFM begin :)
      <br>
      <br>
      On 24.07.2013 23:18, Razique Mahroua wrote:
      <br>
      <blockquote type="cite">+1 :)
        <br>
        <br>
        <br>
        On 24 Jul 2013, at 21:08, Joe Topjian <<a class="moz-txt-link-abbreviated" href="mailto:joe.topjian@cybera.ca">joe.topjian@cybera.ca</a>> wrote:
        <br>
        <br>
        <blockquote type="cite">Hi Jacob,
          <br>
          <br>
          Are you using SAS or SSD drives for Gluster? Also, do you have
          one large Gluster volume across your entire cloud, or is it
          broken up into a few different ones? I've wondered if there's a
          benefit to doing the latter, so that distribution activity is
          isolated to only a few nodes. The downside, of course, is that
          you're then limited in which compute nodes instances can
          migrate to.
          <br>
          <br>
          I use Gluster for instance storage in all of my "controlled"
          environments like internal and sandbox clouds, but I'm hesitant
          to introduce it into production environments, as I've seen the
          same issues that Razique describes -- especially with Windows
          instances. My guess is that it's due to how NTFS writes to
          disk.
          <br>
          <br>
          I'm curious if you could report the results of the following
          test (roughly as sketched below): in a Windows instance running
          on Gluster, copy a 3-4 GB file to it from the local network so
          it comes in at a very high speed. When I do this, the first few
          gigs come in very fast, but then the transfer slows to a crawl
          and the Gluster processes on all nodes spike.
          <br>
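          <br>
          Something like this is what I have in mind -- generate a ~4 GB
          file and push it to the instance over SMB (the address, share
          and user below are placeholders):
          <br>
          <br>
              # on a machine on the same local network as the instance
          <br>
              dd if=/dev/urandom of=testfile.bin bs=1M count=4096
          <br>
              smbclient //WINDOWS_INSTANCE_IP/C$ -U Administrator -c "put testfile.bin"
          <br>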
          <br>
          Thanks,
          <br>
          Joe
          <br>
          <br>
          <br>
          <br>
          On Wed, Jul 24, 2013 at 12:37 PM, Jacob Godin
          <<a class="moz-txt-link-abbreviated" href="mailto:jacobgodin@gmail.com">jacobgodin@gmail.com</a>> wrote:
          <br>
          <br>
              Oh really, you've done away with Gluster altogether? The
          fast backbone is definitely needed, but I would think that
          would be the case with any distributed filesystem.
          <br>
          <br>
              MooseFS looks promising, but apparently it has a few
          reliability problems.
          <br>
          <br>
          <br>
              On Wed, Jul 24, 2013 at 3:31 PM, Razique Mahroua
              <<a class="moz-txt-link-abbreviated" href="mailto:razique.mahroua@gmail.com">razique.mahroua@gmail.com</a>> wrote:
          <br>
          <br>
                  :-)
          <br>
                  Actually, I had to remove all my instances running on
          it (especially the Windows ones); yeah, unfortunately my
          network backbone wasn't fast enough to support the load induced
          by GlusterFS - especially the numerous operations performed by
          the self-healing agents :(
          <br>
          <br>
                  I'm currently considering MooseFS; it has the advantage
          of a pretty long list of companies using it in production.
          <br>
          <br>
                  take care
          <br>
          <br>
          <br>
                  On 24 Jul 2013, at 16:40, Jacob Godin
                  <<a class="moz-txt-link-abbreviated" href="mailto:jacobgodin@gmail.com">jacobgodin@gmail.com</a>> wrote:
          <br>
          <br>
          <blockquote type="cite">        A few things I found were key
            for I/O performance:
            <br>
            <br>
                     1. Make sure your network can sustain the traffic.
                        We are using a 10G backbone with 2 bonded
                        interfaces per node.
            <br>
                     2. Use high-speed drives. SATA will not cut it.
            <br>
                     3. Look into tuning settings. Razique, thanks for
                        sending these along to me a little while back. A
                        couple that I found were useful (a rough sketch
                        of them is below):
            <br>
                          * KVM cache=writeback (a little risky, but WAY
                            faster)
            <br>
                          * Gluster write-behind-window-size (set to 4MB
                            in our setup)
            <br>
                          * Gluster cache-size (ideal values in our setup
                            were 96MB-128MB)
            <br>
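            <br>
                    For the record, those settings look roughly like this
            on our side (the volume name is a placeholder, and the option
            names are worth double-checking against your Gluster and Nova
            versions):
            <br>
            <br>
                        # GlusterFS volume tuning ("nova-instances" is a placeholder volume name)
            <br>
                        gluster volume set nova-instances performance.write-behind-window-size 4MB
            <br>
                        gluster volume set nova-instances performance.cache-size 128MB
            <br>
            <br>
                        # nova.conf, if your Nova release supports per-disk cache modes
            <br>
                        disk_cachemodes = "file=writeback"
            <br>
            <br>
                    Keep in mind that cache=writeback means acknowledged
            writes may still sit in the host page cache, so a host crash
            can lose recent guest writes - that's the risk mentioned
            above.
            <br>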
            <br>
                    Hope that helps!
            <br>
            <br>
            <br>
            <br>
                    On Wed, Jul 24, 2013 at 11:32 AM, Razique Mahroua
                    <<a class="moz-txt-link-abbreviated" href="mailto:razique.mahroua@gmail.com">razique.mahroua@gmail.com</a>> wrote:
            <br>
            <br>
                        I had a lot of performance issues myself with
            Windows instances and other I/O-demanding instances. Make
            sure it fits your environment first before deploying it in
            production.
            <br>
            <br>
                        Regards,
            <br>
                        Razique
            <br>
            <br>
                        *Razique Mahroua* - *Nuage &amp; Co*
            <br>
                        <a class="moz-txt-link-abbreviated" href="mailto:razique.mahroua@gmail.com">razique.mahroua@gmail.com</a>
            <br>
                        Tel : +33 9 72 37 94 15
            <br>
            <br>
                        On 24 Jul 2013, at 16:25, Jacob Godin
                        <<a class="moz-txt-link-abbreviated" href="mailto:jacobgodin@gmail.com">jacobgodin@gmail.com</a>> wrote:
            <br>
            <br>
            <blockquote type="cite">            Hi Denis,
              <br>
              <br>
                          I would take a look at GlusterFS with a
              distributed, replicated volume. We have been using it for
              several months now, and it has been stable. Nova will need
              to have the volume mounted on its instances directory
              (default /var/lib/nova/instances) - roughly as in the mount
              sketch below - and Cinder has direct support for Gluster as
              of Grizzly, I believe.
              <br>
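              <br>
                          For example, the mount on each compute node ends
              up looking something like this ("gluster1" and
              "nova-instances" are placeholder server and volume names):
              <br>
              <br>
                              # /etc/fstab entry on each compute node
              <br>
                              gluster1:/nova-instances  /var/lib/nova/instances  glusterfs  defaults,_netdev  0 0
              <br>
              <br>
                          or mounted by hand:
              <br>
              <br>
                              mount -t glusterfs gluster1:/nova-instances /var/lib/nova/instances
              <br>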
              <br>
              <br>
              <br>
                          On Wed, Jul 24, 2013 at 11:11 AM, Denis Loshakov
                          <<a class="moz-txt-link-abbreviated" href="mailto:dloshakov@gmail.com">dloshakov@gmail.com</a>> wrote:
              <br>
              <br>
                              Hi all,
              <br>
              <br>
                              I have an issue with creating shared storage
              for OpenStack. The main idea is to build 100% redundant
              shared storage out of two servers (a kind of network RAID
              across two servers).
              <br>
                              I have two identical servers with many disks
              inside. What solution can anyone suggest for such a scheme?
              I need shared storage for running VMs (so live migration
              can work) and also for cinder-volumes.
              <br>
              <br>
                              One solution is to install Linux on both
              servers and use DRBD + OCFS2 (a rough sketch of what I have
              in mind is below) - any comments on this? Also, I have
              heard that the Quadstor software can create a network RAID
              and present it via iSCSI.
              <br>
              <br>
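                              For clarity, the DRBD resource I have in
              mind would look roughly like this (hostnames, device names
              and addresses are just placeholders):
              <br>
              <br>
                                  resource r0 {
              <br>
                                      protocol  C;                  # synchronous replication
              <br>
                                      device    /dev/drbd0;
              <br>
                                      disk      /dev/sdb1;          # placeholder backing disk
              <br>
                                      meta-disk internal;
              <br>
                                      net { allow-two-primaries; }  # both nodes primary, needed for OCFS2
              <br>
                                      on node1 { address 10.0.0.1:7789; }
              <br>
                                      on node2 { address 10.0.0.2:7789; }
              <br>
                                  }
              <br>
              <br>
                              OCFS2 would then sit on top of /dev/drbd0
              and be mounted on both servers.
              <br>
              <br>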
                              Thanks.
              <br>
              <br>
                              P.S. Glance uses Swift and is set up on
              separate servers.
              <br>
              <br>
                             
            </blockquote>
            <br>
            <br>
          </blockquote>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          --
          <br>
          Joe Topjian
          <br>
          Systems Architect
          <br>
          Cybera Inc.
          <br>
          <br>
          <a class="moz-txt-link-abbreviated" href="http://www.cybera.ca">www.cybera.ca</a>
          <br>
          <br>
          Cybera is a not-for-profit organization that works to spur and
          support
          <br>
          innovation, for the economic benefit of Alberta, through the
          use
          <br>
          of cyberinfrastructure.
          <br>
        </blockquote>
        <br>
        <br>
        <br>
        <br>
        <br>
      </blockquote>
      <br>
      _______________________________________________
      <br>
      OpenStack-operators mailing list
      <br>
      <a class="moz-txt-link-abbreviated" href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a>
      <br>
<a class="moz-txt-link-freetext" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a>
      <br>
    </blockquote>
    <br>
  </body>
</html>