<div dir="ltr"><div><div><div><div>Hello all,<br><br></div>I installed OpenStack with Glance + Ceph OSD with replication factor 2 and now I can see the write operations are extremly slow.</div><div></div><div>For example, I can see only 0.04 MB/s write speed when I run rados bench with 512b blocks:<br></div><div><br></div>rados bench -p test 60 write --no-cleanup -t 1 -b 512<br><br> Maintaining 1 concurrent writes of 512 bytes for up to 60 seconds or 0 objects<br> Object prefix: benchmark_data_node-17.domain.tld_15862<br> sec Cur ops started finished avg MB/s cur MB/s last lat avg lat<br> 0 0 0 0 0 0 - 0<br> 1 1 83 82 0.0400341 0.0400391 0.008465 0.0120985<br> 2 1 169 168 0.0410111 0.0419922 0.080433 0.0118995<br> 3 1 240 239 0.0388959 0.034668 0.008052 0.0125385<br> 4 1 356 355 0.0433309 0.0566406 0.00837 0.0112662<br> 5 1 472 471 0.0459919 0.0566406 0.008343 0.0106034<br> 6 1 550 549 0.0446735 0.0380859 0.036639 0.0108791<br> 7 1 581 580 0.0404538 0.0151367 0.008614 0.0120654<br><br><br><div>My test environment configuration:<br></div><div>Hardware servers with 1Gb network interfaces, 64Gb RAM and 16 CPU cores per node, HDDs WDC WD5003ABYX-01WERA0.<br></div>OpenStack with 1 controller, 1 compute and 2 ceph nodes (ceph on separate nodes).<br>CentOS 6.5, kernel 2.6.32-431.el6.x86_64.<br><br></div>I tested several config options for optimizations, like in /etc/ceph/ceph.conf:<br><br></div>[default]<br>...<br>osd_pool_default_pg_num = 1024<br>osd_pool_default_pgp_num = 1024<br>osd_pool_default_flag_hashpspool = true<br>...<br>[osd]<br>osd recovery max active = 1<br>osd max backfills = 1<br>filestore max sync interval = 30<br>filestore min sync interval = 29<br>filestore flusher = false<br>filestore queue max ops = 10000<br>filestore op threads = 16<br>osd op threads = 16<br>...<br>[client]<br>rbd_cache = true<br>rbd_cache_writethrough_until_flush = true<br><div><br></div><div>and in /etc/cinder/cinder.conf:<br><br></div><div>[DEFAULT]<br></div><div>volume_tmp_dir=/tmp<br></div><div><br>but in the result performance was increased only on ~30 % and it not looks like huge success.<br><br></div><div>Non-default mount options and TCP optimization <span id="result_box" class="" lang="en"><span class="">increase the speed</span> <span class="">in about</span> <span class="">1%</span></span>:<br><br>[root@node-17 ~]# mount | grep ceph<br>/dev/sda4 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0)<br><br>[root@node-17 ~]# cat /etc/sysctl.conf<br>net.core.rmem_max = 16777216<br>net.core.wmem_max = 16777216<br>net.ipv4.tcp_rmem = 4096 87380 16777216<br>net.ipv4.tcp_wmem = 4096 65536 16777216<br>net.ipv4.tcp_window_scaling = 1<br>net.ipv4.tcp_timestamps = 1<br>net.ipv4.tcp_sack = 1<br></div><div><div><br></div><div><br>Do we have other ways to significantly improve CEPH storage performance?<br><div><div><div>Any feedback and comments are welcome!<br><br></div><div>Thank you!<br><br><br></div><div>-- <br><div dir="ltr"><font color="#888888"><font color="#888888"><br></font></font><div style="font-family:arial;font-size:small">Timur,</div><div style="font-family:arial;font-size:small">QA Engineer</div><div style="font-family:arial;font-size:small">OpenStack Projects</div><div style="font-family:arial;font-size:small">Mirantis Inc</div></div>
--
Timur,
QA Engineer
OpenStack Projects
Mirantis Inc