[Openstack-operators] Openstack and Ceph

Joseph Bajin josephbajin at gmail.com
Mon Feb 20 01:07:32 UTC 2017


Another question is what type of SSDs you are using.  There is a big
difference not just between SSD vendors but also between drive sizes, as
their internals make a big difference in how the OS interacts with them.

This link is still very useful today:
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
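
The test in that post boils down to a single fio run doing synchronous,
direct 4k writes, since that is the pattern a Ceph journal device sees. A
sketch (assuming /dev/sdX is a spare device with no data on it -- this
writes to the raw device, so do not point it at a disk in use):

```shell
# WARNING: destructive -- writes directly to the device. Use a spare disk.
# Measures sustained 4k synchronous write IOPS, the Ceph journal pattern.
fio --filename=/dev/sdX \
    --direct=1 --sync=1 \
    --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based \
    --group_reporting --name=journal-test
```

The key flags are --direct=1 and --sync=1: consumer SSDs that post great
numbers on cached, queued benchmarks often collapse to a few hundred IOPS
under this workload, which is exactly what the linked post demonstrates.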



On Fri, Feb 17, 2017 at 12:54 PM, Alex Hübner <alex at hubner.net.br> wrote:

> Are these nodes connected to dedicated or shared (in the sense that there
> are other workloads running) network switches? How fast (1G, 10G or faster)
> are the interfaces? Also, how much RAM are you using? There's a rule of
> thumb that says you should dedicate at least 1 GB of RAM for each 1 TB of
> raw disk space. How are the clients consuming the storage? Are they virtual
> machines? Are you using iSCSI to connect them? Are these clients the same
> ones you're testing against your regular SAN storage, and are they
> positioned in a similar fashion (i.e. over a steady network channel)? What
> Ceph version are you using?
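
That 1 GB per 1 TB rule of thumb is easy to sanity-check against a node's
spec. A minimal sketch (the 4 TB drive size below is a made-up example, not
a figure from this thread):

```python
def recommended_ram_gb(raw_disk_tb: float) -> float:
    """Rule of thumb from the thread: ~1 GB of RAM per 1 TB of raw disk."""
    return raw_disk_tb * 1.0

# Hypothetical node: 12 HDDs of 4 TB each = 48 TB raw capacity
print(recommended_ram_gb(12 * 4))  # -> 48.0
```

So a dense 12-spindle node can easily call for tens of gigabytes of RAM
before you even account for the OS and other daemons.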
>
> Finally, replicas are normally faster than erasure coding, so you're good
> on this. It's *never* a good idea to enable RAID cache, even when it
> apparently improves IOPS (the magic of Ceph relies on the cluster, its
> network and the number of nodes; don't approach the nodes as if they were
> isolated storage servers). Also, RAID 0 should only be used as a last
> resort for cases where the disk controller doesn't offer JBOD mode.
>
> []'s
> Hubner
>
> On Fri, Feb 17, 2017 at 7:19 AM, Vahric Muhtaryan <vahric at doruk.net.tr>
> wrote:
>
>> Hello All ,
>>
>> First, thanks for your answers. It looks like everybody is a Ceph lover :)
>>
>> I believe you have already run some tests and have some results. Until
>> now we have used traditional storage arrays like IBM V7000, XIV or
>> NetApp, and we have been very happy with the IOPS and with providing the
>> same performance to all instances.
>>
>> We saw that each OSD eats a lot of CPU, and when multiple clients try to
>> get the same performance from Ceph it looks like this is not possible;
>> Ceph shares everything among the clients, and we cannot reach the
>> hardware's raw IOPS capacity with Ceph. For example, each SSD can do 90K
>> IOPS, we have three per node and six nodes, so we should be getting
>> better results than we have now!
>>
>> Could you please share your hardware configs and IOPS tests, and advise
>> whether our expectations are correct?
>>
>> We are using Kraken; almost all debug options are set to 0/0, and we
>> modified op_Tracker and some other ops-based configs too!
>>
>> Our Hardware
>>
>> 6 x Node
>> Each Node Have :
>> 2 x Intel(R) Xeon(R) CPU E5-2630L v3 @ 1.80GHz (16 cores total, HT
>> enabled)
>> 3 SSDs + 12 HDDs (the SSDs are used as journals, 4 HDDs per SSD)
>> Each disk is configured as RAID 0 (we did not see any performance
>> difference with the RAID card's JBOD mode, so we continued with RAID 0)
>> The RAID card's write-back cache is also enabled because it adds extra
>> IOPS!
>>
>> Our Test
>>
>> It is 100% random write.
>> The Ceph pool is configured with 3 replicas. (We did not use 2 because
>> during failover the whole system stalled and we could not work out good
>> tuning for it; some of our reading also said that under high load OSDs
>> can go down and come back up, and we should take care of this too!)
>>
>> Test Command : fio --randrepeat=1 --ioengine=libaio --direct=1
>> --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=1G
>> --numjobs=8 --readwrite=randwrite --group_reporting
>>
>> Achieved IOPS : 35K (single client)
>> We tested with up to 10 clients, and Ceph shares the capacity fairly, at
>> almost 4K IOPS for each
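
[Those two numbers are consistent with a cluster-wide ceiling rather than a
per-client limit: ten clients at ~4K each aggregate to roughly the same
total as the single client alone. A trivial check on the figures quoted
above:]

```python
single_client_iops = 35_000   # one fio client, from the test above
clients = 10
per_client_iops = 4_000       # approximate fair share observed per client

aggregate_iops = clients * per_client_iops
print(aggregate_iops)  # -> 40000, the same ballpark as the 35K single-client run
```

[That pattern points at a shared bottleneck (CPU per OSD op, journal or
network) rather than at Ceph throttling individual clients.]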
>>
>> Thanks
>> Regards
>> Vahric Muhtaryan
>>
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>

