[Openstack-operators] Openstack Ceph Backend and Performance Information Sharing
Vahric Muhtaryan
vahric at doruk.net.tr
Thu Feb 16 16:26:01 UTC 2017
Hello All,
For a long time we have been testing Ceph, from Firefly through Kraken, and have tried to
optimise many of the usual things: tcmalloc 2.1 and 2.4, jemalloc, setting debug levels
to 0/0, disabling the op tracker, and so on. I believe that with our hardware we have
almost reached the end of the road.
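For reference, the kind of tuning we mean looks roughly like the snippet below. This is
only a sketch of the usual suggestions, not our exact production file, and the tcmalloc
thread cache value is just the commonly quoted one:

[osd]
# silence the most chatty debug subsystems
debug ms = 0/0
debug osd = 0/0
debug filestore = 0/0
debug journal = 0/0
# drop op tracking overhead on the hot path
osd enable op tracker = false

# in /etc/sysconfig/ceph, when staying on tcmalloc
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728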
Some vendor tests confused us quite a bit, for example from Samsung
http://www.samsung.com/semiconductor/support/tools-utilities/All-Flash-Array-Reference-Design/downloads/Samsung_NVMe_SSDs_and_Red_Hat_Ceph_Storage_CS_20160712.pdf ,
the Dell PowerEdge R730xd Performance and Sizing Guide for Red Hat Ceph Storage
<https://www.google.com.tr/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwiA4Z28_pTSAhXCJZoKHSYVD0AQFggeMAA&url=http%3A%2F%2Fen.community.dell.com%2Ftechcenter%2Fcloud%2Fm%2Fdell_cloud_resources%2F20442913%2Fdownload&usg=AFQjCNGGADYZkbABD_GZ8YMct4E19KSAXA&sig2=YZCEHMq7tnXSpVydMDacIg> ,
and from Intel
http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2015/20150813_S303E_Zhang.pdf
In the end we used 3 replicas, with the config below, testing 4K, random, fully
write-only workloads. (Actually most vendors test with 2 replicas, but I believe that is
very much the wrong way to do it: when a failure happens you have to wait 300 seconds,
which is configurable, before the OSD is marked out, and from blogs we understood that
OSDs can sometimes go down and come back up again, so I believe it is very important to
set that number carefully, while at the same time we do not want instances to freeze.)
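To make the replica and timeout part concrete, this is roughly what we mean; the pool
name "volumes" is only an example, and 300 is, as far as I know, the default for the
down-out timer anyway:

# replica count on the RBD pool used by the instances
ceph osd pool set volumes size 3
ceph osd pool set volumes min_size 2

# the 300 sec mentioned above, in ceph.conf on the monitors
[mon]
mon osd down out interval = 300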
I have read a lot about the OSD process eating huge amounts of CPU; yes, it does, and we
know very well that we could not get the total IOPS capacity of the raw SSD drives.
My question is: can you please share results from an almost identical or similar
configuration, from any test or production environment? The key is write performance,
not 70% read / 30% write mixes or pure read workloads.
Hardware:
6 x Node
Each node has:
2 socket CPUs at 1.8 GHz, 16 cores in total
3 SSD + 12 HDD (the SSDs are used as journals, 4 HDDs per SSD)
RAID cards configured as RAID 0
We did not see any performance difference with the RAID card's JBOD mode, so we
continued with RAID 0.
The RAID card write-back cache is also used, because it adds extra IOPS too!
Achieved IOPS: 35K (single client)
We tested with up to 10 clients, and Ceph shares this fairly, roughly 4K IOPS for each
client.
Test command: fio --randrepeat=1 --ioengine=libaio --direct=1
--gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=1G
--numjobs=8 --readwrite=randwrite --group_reporting
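If it helps the comparison, the same workload can also be pointed directly at an RBD
image with fio's rbd engine, roughly like the line below; the pool and image names are
only placeholders, and our numbers above were taken with the libaio command:

fio --name=rbdtest --ioengine=rbd --clientname=admin --pool=volumes --rbdname=fio_test --rw=randwrite --bs=4k --iodepth=256 --size=1G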
Regards
Vahric Muhtaryan