[ceph-users] Suggestion to build ceph storage

Satish Patel satish.txt at gmail.com
Sun Jun 19 14:21:51 UTC 2022



Sent from my iPhone

> On Jun 18, 2022, at 10:47 AM, Anthony D'Atri <anthony.datri at gmail.com> wrote:
> 
> Please do not CC the list.

It was my mistake, sorry about that. 
> 
>> 15 total servers, and each server has 12x18TB HDDs (spinning disks). We
>> understand SSD/NVMe would be the best fit, but it's way out of budget.
> 
> NVMe SSDs can be surprisingly competitive when you consider IOPS/$, density, and the cost of the HBA you don’t need.

Yes, totally, and I have only a single slot on the motherboard to mount one NVMe. Let’s say I put in a single M.2 NVMe; what size should I go for?
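
My rough back-of-the-envelope so far, assuming the 4%-of-data-device guideline from the BlueStore docs and the 30-60 GB per HDD OSD figure I have seen cited on this list (please correct me if this is off):

    4% of 18 TB   -> ~720 GB per OSD -> 12 x 720 GB = ~8.6 TB  (not feasible on one M.2)
    60 GB per OSD -> 12 x 60 GB      =   720 GB     (a 1 TB M.2 would leave headroom)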
> 
>> Ceph recommends using a faster disk for the WAL/DB if the data disk is slow,
>> and in my case I do have slower disks for data.
>> 
>> Question:
>> 1. Let's say I want to put in an NVMe disk for the WAL/DB; what size
>> should I buy?
> 
> Since you specify 12xHDD, you’re thinking an add-in PCI card?  Or do you have rear bays?

I have a RAID controller connected to all the drives inside the box.
> 
> 
>> 2. Do I need a WAL/DB partition for each OSD, or can a single
>> partition be shared by all OSDs?
> 
> Each OSD.

What size of partition should I create for each 18TB OSD?
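To make the question concrete, this is how I was thinking of laying it out with ceph-volume; just a sketch, and the device names (/dev/sd[a-l] for the 12 HDDs, /dev/nvme0n1 for the M.2) are placeholders for whatever the box actually enumerates:

    # Dry run: report how ceph-volume would carve one block.db LV
    # per OSD out of the shared NVMe, before touching anything.
    ceph-volume lvm batch --bluestore \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
        /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl \
        --db-devices /dev/nvme0n1 --report

    # If the report looks sane, re-run without --report to create the OSDs.

As I understand it, lvm batch splits the DB device evenly across the OSDs unless you cap it with --block-db-size, so the per-OSD slice would follow from whatever NVMe capacity I buy.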

> 
>> 3. Can I put the OS on the same disk where the wal/db is going to sit?
> 
> Bad idea.

I understand, but I have redundancy with 3 copies in case of failure. And the OS doesn’t hammer its disk, correct?
> 
>> (This way I don't need to spend extra money on an extra disk.)
> 
> Boot drives are cheap.

I have no extra slot left; that is why I was planning to share, but I can explore more.
> 
>> 
>> Any suggestions you have for this kind of storage would be much
>> appreciated.
>> _______________________________________________
>> ceph-users mailing list -- ceph-users at ceph.io
>> To unsubscribe send an email to ceph-users-leave at ceph.io
> 
