Using Ceph for OpenStack storage

Chris Morgan mihalis68 at gmail.com
Mon Nov 1 12:45:57 UTC 2021


Running VMs and OSDs on the same node ("hyperconverged") is not a good idea in our experience. We used to run that way but moved to splitting nodes into either the compute or the storage role. Only last week, one of our older hyperconverged clusters OOM-killed a VM because Ceph was using more memory than it had been when the VM was scheduled. The procedures for making a node safe to take down also differ between the compute and storage roles, and it's tedious to have to worry about both when a node needs maintenance.
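
If you do end up co-locating them anyway, the usual mitigation is to budget the memory explicitly: cap each OSD with Ceph's osd_memory_target and keep Nova from scheduling guests into that headroom via reserved_host_memory_mb in nova.conf. A rough, purely illustrative Python sketch of the arithmetic (the two option names are the standard Ceph/Nova knobs; every number below is made up):

    # Back-of-the-envelope memory budget for a hyperconverged node.
    # osd_memory_target: per-OSD memory cap Ceph (BlueStore) aims for.
    # reserved_host_memory_mb: nova.conf option that keeps the scheduler
    # from handing that memory out to VMs. All values are illustrative.

    GiB_MB = 1024                      # MiB per GiB

    node_ram_mb       = 256 * GiB_MB   # total RAM in the node
    num_osds          = 10             # OSDs hosted on this node
    osd_memory_target = 4 * GiB_MB     # Ceph's default is roughly 4 GiB per OSD
    osd_headroom_mb   = 1 * GiB_MB     # per-OSD slack for recovery/backfill spikes
    os_and_agents_mb  = 8 * GiB_MB     # OS, networking agents, nova-compute, etc.

    reserved_host_memory_mb = (
        num_osds * (osd_memory_target + osd_headroom_mb) + os_and_agents_mb
    )

    print("Reserve for non-VM use:", reserved_host_memory_mb, "MiB")
    print("Left for guest RAM:    ", node_ram_mb - reserved_host_memory_mb, "MiB")

Even with that in place you still have the operational overlap described above, which is why we ended up splitting the roles.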

Chris Morgan

Sent from my iPhone

> On Nov 1, 2021, at 5:36 AM, A Monster <amonster369 at gmail.com> wrote:
> 
> 
> Thank you for your response.
> 
> Sadly, I'm talking about actual production, but I'm very limited in terms of hardware.
> 
> I was thinking about using RAID on the controller node for data redundancy, because I wanted to maximize the number of Nova compute nodes.
> So basically I thought of using a single controller running the following services (Nova, Neutron, Keystone, Horizon, Glance, Cinder and Swift).
> Following the configuration you suggested, I would have:
> - 3 controllers that are also Ceph monitors
> - 9 Nova compute nodes that are also Ceph OSDs
> 
> My questions are:
> - Is having multiple Ceph monitors only for the sake of redundancy, or does it also serve a performance goal?
> - Wouldn't combining Ceph OSDs and Nova compute have performance drawbacks, or endanger the integrity of the data stored on each node?
> 
> Wouldn't it be better in this case to use two separate servers for Swift and Glance, with RAID for data redundancy, instead of using Ceph SDS?
> 
> Thank you very much for your time.
> 


