<div dir="ltr">I also have limited experience with Ceph and rados bench - but it looks like you're setting the number of "worker threads" to only 1?  (-t 1)<div><br></div><div>I think the default is 16, and most storage distributed storage systems designed for concurrency are going to do a bit better if you exercise more concurrent workers... so you might try turning that up until you see some diminishing returns.  Be sure to watch for resource contention on the load generating server.</div><div><br></div><div>-Clay</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 29, 2014 at 4:49 AM, Pasquale Porreca <span dir="ltr"><<a href="mailto:pasquale.porreca@dektech.com.au" target="_blank">pasquale.porreca@dektech.com.au</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    Hello<br>
    <br>
    I have no experience with Ceph or this specific benchmark tool,
    but I do have experience with several other performance benchmark
    tools and file systems, and I can say that results are always
    very poor when the file size is too small (i.e.
    < 1 MB).<br>
    <br>
    My suspicion is that benchmark tools are not reliable for such
    small file sizes, since the time to write is so short that the
    overhead introduced by the test itself is far from negligible.<br>
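    <br>
    As a rough back-of-the-envelope check, using the average latency from
    your output: with a single worker, each 512-byte write takes about
    0.012 s, so the best you can expect is roughly 512 B / 0.012 s
    ≈ 42 KB/s ≈ 0.04 MB/s, which is about what you measured.<br>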
    <br>
    I saw that the default
    object size for rados is 4 MB; did you try your test without the
    option "-b 512"? I think the results would differ by several
    orders of magnitude.<br>
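    <br>
    For example (just a sketch, keeping the rest of your original
    command unchanged), this lets rados bench fall back to its default
    4 MB object size:<br>
    <br>
    rados bench -p test 60 write --no-cleanup -t 1<br>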
    <br>
    BR<div><div class="h5"><br>
    <br>
    <div>On 09/27/14 17:14, Timur Nurlygayanov
      wrote:<br>
    </div>
    </div></div><blockquote type="cite"><div><div class="h5">
      <div dir="ltr">
        <div>
          <div>
            <div>
              <div>Hello all,<br>
                <br>
              </div>
              I installed OpenStack with Glance + Ceph OSD with
              replication factor 2, and now I can see that write
              operations are extremely slow.</div>
            <div>For example, I see only 0.04 MB/s write speed when
              I run rados bench with 512-byte blocks:<br>
            </div>
            <div><br>
            </div>
            rados bench -p test 60 write --no-cleanup -t 1 -b 512<br>
            <br>
            <pre>
 Maintaining 1 concurrent writes of 512 bytes for up to 60 seconds or 0 objects
 Object prefix: benchmark_data_node-17.domain.tld_15862
   sec Cur ops   started  finished   avg MB/s   cur MB/s   last lat    avg lat
     0       0         0         0          0          0          -          0
     1       1        83        82  0.0400341  0.0400391   0.008465  0.0120985
     2       1       169       168  0.0410111  0.0419922   0.080433  0.0118995
     3       1       240       239  0.0388959   0.034668   0.008052  0.0125385
     4       1       356       355  0.0433309  0.0566406    0.00837  0.0112662
     5       1       472       471  0.0459919  0.0566406   0.008343  0.0106034
     6       1       550       549  0.0446735  0.0380859   0.036639  0.0108791
     7       1       581       580  0.0404538  0.0151367   0.008614  0.0120654
            </pre>
            <br>
            <br>
            <div>My test environment configuration:<br>
            </div>
            <div>Hardware servers with 1Gb network interfaces, 64Gb RAM
              and 16 CPU cores per node, HDDs WDC WD5003ABYX-01WERA0.<br>
            </div>
            OpenStack with 1 controller, 1 compute and 2 ceph nodes
            (ceph on separate nodes).<br>
            CentOS 6.5, kernel 2.6.32-431.el6.x86_64.<br>
            <br>
          </div>
          I tested several config options for optimizations, like in
          /etc/ceph/ceph.conf:<br>
          <br>
        </div>
        [default]<br>
        ...<br>
        osd_pool_default_pg_num = 1024<br>
        osd_pool_default_pgp_num = 1024<br>
        osd_pool_default_flag_hashpspool = true<br>
        ...<br>
        [osd]<br>
        osd recovery max active = 1<br>
        osd max backfills = 1<br>
        filestore max sync interval = 30<br>
        filestore min sync interval = 29<br>
        filestore flusher = false<br>
        filestore queue max ops = 10000<br>
        filestore op threads = 16<br>
        osd op threads = 16<br>
        ...<br>
        [client]<br>
        rbd_cache = true<br>
        rbd_cache_writethrough_until_flush = true<br>
        <div><br>
        </div>
        <div>and in /etc/cinder/cinder.conf:<br>
          <br>
        </div>
        <div>[DEFAULT]<br>
        </div>
        <div>volume_tmp_dir=/tmp<br>
        </div>
        <div><br>
          but as a result performance increased by only about 30%, which
          does not look like a huge success.<br>
          <br>
        </div>
        <div>Non-default mount options and TCP optimization <span lang="en"><span>increase
              the speed</span> <span>in about</span> <span>1%</span></span>:<br>
          <br>
          [root@node-17 ~]# mount | grep ceph<br>
          /dev/sda4 on /var/lib/ceph/osd/ceph-0 type xfs
          (rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0)<br>
          <br>
          [root@node-17 ~]# cat /etc/sysctl.conf<br>
          net.core.rmem_max = 16777216<br>
          net.core.wmem_max = 16777216<br>
          net.ipv4.tcp_rmem = 4096 87380 16777216<br>
          net.ipv4.tcp_wmem = 4096 65536 16777216<br>
          net.ipv4.tcp_window_scaling = 1<br>
          net.ipv4.tcp_timestamps = 1<br>
          net.ipv4.tcp_sack = 1<br>
        </div>
        <div>
          <div><br>
          </div>
          <div><br>
            Are there other ways to significantly improve Ceph storage
            performance?<br>
            <div>
              <div>
                <div>Any feedback and comments are welcome!<br>
                  <br>
                </div>
                <div>Thank you!<br>
                  <br>
                  <br>
                </div>
                <div>-- <br>
                  <div dir="ltr"><font color="#888888"><font color="#888888"><br>
                      </font></font>
                    <div style="font-family:arial;font-size:small">Timur,</div>
                    <div style="font-family:arial;font-size:small">QA
                      Engineer</div>
                    <div style="font-family:arial;font-size:small">OpenStack
                      Projects</div>
                    <div style="font-family:arial;font-size:small">Mirantis
                      Inc</div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
      <br>
      </div></div><span class="HOEnZb"><font color="#888888">
    </font></span></blockquote><span class="HOEnZb"><font color="#888888">
    <br>
    <pre cols="72">-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile <a href="tel:%2B39%203394823805" value="+393394823805" target="_blank">+39 3394823805</a>
Skype paskporr</pre>
  </font></span></div>

<br>_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br></blockquote></div><br></div>