Re: Using ceph for openstack storage
Thank you for your response.

Sadly, I'm talking about actual production, but I'm very limited in terms of hardware. I was thinking about using RAID on the controller node for data redundancy, because I wanted to maximize the number of Nova compute nodes. So basically I thought of using a single controller running the following services (Nova, Neutron, Keystone, Horizon, Glance, Cinder and Swift). Following the configuration you suggested, I would instead have:

- 3 controllers that are also Ceph monitors
- 9 Nova compute nodes that are also Ceph OSD hosts

My questions are:

- Is having multiple Ceph monitors only for the sake of redundancy, or does it also serve a performance goal?
- Wouldn't combining Ceph OSD and Nova compute on the same node have performance drawbacks, or put the integrity of the data stored on each node at risk? Wouldn't it be better in this case to use two separate servers for Swift and Glance, and use RAID for data redundancy instead of Ceph SDS?

Thank you very much for your time.
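As a point of reference, the layout proposed above (three controllers doubling as Ceph monitors, nine Nova compute nodes doubling as Ceph OSD hosts) could be sketched as a ceph-ansible style inventory along these lines; the host names are hypothetical and only illustrate the grouping:

    [mons]
    controller[01:03]

    [osds]
    compute[01:09]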
On 11/1/21 10:29 AM, A Monster wrote:
[...]
Hi,

RAID will only protect you from a single type of failure on your controllers. RAID is *not* a good idea at all for Ceph or Swift (it will slow things down, and won't help much with redundancy).

If you need, for example, to upgrade the operating system (say, because of a kernel security fix), you will have to restart your controllers, meaning there's going to be API downtime. If you put the Ceph mons on the controllers, you will also have the problem of the Ceph mons not being reachable during the upgrade, meaning you may end up with stuck I/O on all of your VMs.

Of course, combining Ceph OSD and Nova compute is less nice than having a dedicated cluster (especially since busy VMs may slow down your Ceph and increase latency). But considering your constraints, it's still the better option: for any serious Ceph setup, you need to be able to "lose" at least 10% of your Ceph cluster so it can recover without impacting the overall cluster too much.

In the same way, I would suggest running at least the swift-object service on your compute nodes: it's common to put the Swift account + container servers on SSD to speed them up, and it's ok-ish to run account + container on your 3 controllers, IMO.

Again, this advice is only valid because of your constraints; otherwise I would suggest a larger cluster.

I hope this helps,
Cheers,

Thomas Goirand (zigo)
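On the monitor question above: the monitors maintain the cluster maps and form a quorum, so running three of them is about availability rather than raw performance; clients just need all three listed so the cluster stays reachable while one controller is down. A minimal ceph.conf sketch for the setup being discussed could look like the following (addresses are hypothetical, and the pool settings only illustrate the usual 3-copy / min 2 arrangement):

    [global]
    fsid = <your cluster fsid>
    mon_initial_members = controller01, controller02, controller03
    mon_host = 10.0.0.11,10.0.0.12,10.0.0.13
    # Keep 3 replicas of each object, and keep serving I/O with 2
    # while a failed node recovers.
    osd_pool_default_size = 3
    osd_pool_default_min_size = 2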
Running VMs and OSDs on the same node ("hyperconverged") is not a good idea in our experience. We used to run that way, but we moved to splitting nodes into either compute or storage. One of our older hyperconverged clusters OOM-killed a VM only last week because Ceph used up more memory than it had been using when the VM was scheduled. You also have different procedures for making a node safe for the compute role than for the storage role, and it's tedious to have to worry about both when taking a node down for maintenance.

Chris Morgan

Sent from my iPhone
On Nov 1, 2021, at 5:36 AM, A Monster <amonster369@gmail.com> wrote:
[...]
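If hardware constraints do force the hyperconverged layout, one common way to reduce the OOM risk Chris describes is to cap the memory each OSD aims to use and to tell the Nova scheduler not to hand that memory out to VMs. A sketch with purely illustrative numbers (roughly 4 GiB per OSD plus host overhead, on a compute node carrying three OSDs):

In ceph.conf (or via "ceph config set osd osd_memory_target ..."):

    [osd]
    # Target (not a hard limit) for each OSD daemon's memory use, in bytes.
    osd_memory_target = 4294967296

And in nova.conf on the hyperconverged compute nodes:

    [DEFAULT]
    # Memory the scheduler must treat as unavailable to VMs
    # (the OSDs plus general host overhead).
    reserved_host_memory_mb = 16384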