Network demand for controller nodes
Hello,

Does anyone know if controller nodes place heavy demands on the network? I have a situation where I don't have enough 10 Gb/s ports in the switch. Would a cluster be network constrained if the three controllers used 1 Gb/s connections while the compute and storage nodes stayed on 10 Gb/s connections? Does VM traffic ever traverse the controllers' network cards? In other words, is there a valid requirement for 10 Gb/s on OpenStack controllers?

Regards,
William
I think this very much depends on your architecture, workload, deployment and ultimately your requirements.

Where are you deploying the network nodes (or are you using DVR)? How much north/south traffic will your environment see? What do you mean by storage nodes (e.g. cinder-volume workers, or actual LVM / Ceph storage nodes)? How big are your Glance images going to be, and how many VMs will be built concurrently? Where are you storing your Glance images?

I'm sure OpenStack can run on 1 Gb/s networks; it's just a question of whether your deployment will meet your performance requirements.
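As a back-of-envelope illustration of why the Glance placement and concurrency questions matter, here is a minimal sketch; the 5 GB image size and 20 concurrent builds are assumed figures for illustration, not numbers from this thread:

```python
# Rough estimate of how long concurrent VM builds take to pull a Glance
# image over a single shared link. All figures are illustrative.

def transfer_time_seconds(image_gb, concurrent_builds, link_gbps):
    """Time to serve `concurrent_builds` copies of an image over one link."""
    total_bits = image_gb * 8e9 * concurrent_builds  # GB -> bits
    return total_bits / (link_gbps * 1e9)            # bits / (bits per second)

image_gb = 5   # assumed image size
builds = 20    # assumed concurrent VM builds

for gbps in (1, 10):
    t = transfer_time_seconds(image_gb, builds, gbps)
    print(f"{gbps:>2} Gb/s link: {t:.0f} s to serve {builds} x {image_gb} GB images")
```

If the Glance images live behind a controller's NIC, a burst of concurrent builds is exactly the kind of workload that makes the 1 Gb/s vs 10 Gb/s choice visible.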
Hey. Here is a graph from one of our clusters: 3 control nodes, 34 hypervisors, some 900 VMs, averaged over 7 days. The grey tone is after hours for us. We have a 25 Gb/s network with 40 Gb/s uplinks and iSCSI for storage. The iSCSI graphs look very different during backup windows and when Delphix is doing a data push, but the control nodes are pretty happy and mostly under 1 Gb/s.

[image: image.png]

I wouldn't want to run this cluster on 1 Gb/s switches, but the size of the cluster matters. While our control nodes sit at about 1 Gb/s at this specific cluster size, it's going to depend on your setup and, importantly, the overall size. When Delphix does a refresh we can see 40 Gb/s line rate on the uplinks for about 6 hours, but that is not on the control nodes; those are storage networks.

Cheers,
Michael

On Tue, Jul 9, 2024 at 8:38 AM Danny Webb <Danny.Webb@thg.com> wrote:
> I think this very much depends on your architecture, workload, deployment and ultimately your requirements.
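To put the numbers above in perspective, a minimal sketch, treating the roughly 1 Gb/s control-node figure from the thread as a steady average (which the graph only approximates):

```python
# Utilization of a control-node NIC at different link speeds, given an
# observed control-plane load of roughly 1 Gb/s (illustrative figure).

def utilization_pct(observed_gbps, link_gbps):
    """Percentage of a NIC's capacity consumed by the observed load."""
    return 100.0 * observed_gbps / link_gbps

observed = 1.0  # approximate control-node traffic reported in the thread
for link in (1, 10, 25):
    print(f"{link:>2} Gb/s NIC: {utilization_pct(observed, link):.0f}% utilized")
```

A 1 Gb/s controller NIC running at or near line rate leaves no headroom for bursts, which is the practical argument against it even when the average looks survivable.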
Hi,

On Sunday, July 7, 2024 at 21:35:32 CEST, William Muriithi wrote:
> Does anyone know if controller nodes are demanding on network needs?
From the Neutron perspective it can depend a bit on the backend you are using. For example, with ML2/OVN, even though floating IP traffic is distributed by default, SNAT is always centralized, so if you have VMs whose only connectivity to the outside world is via SNAT, that traffic (typically) goes through your controller nodes. You can work around this by configuring some of the compute nodes to act as gateway chassis, in which case those will carry the centralized SNAT traffic. The downside of that solution is that you can't really control which VMs will use which gateway chassis, so you may have traffic from a completely different tenant going through a compute node hosting VMs that belong to someone else and that may need more bandwidth.

Other things that are, like SNAT, always centralized include Octavia with the ovn-octavia-provider driver (not with Amphora) and Neutron's floating IP port forwarding.

With ML2/OVS it is similar: even with DVR enabled, SNAT is always centralized.

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
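A quick illustration of why centralized SNAT matters for controller NIC sizing; this is a hedged sketch in which the VM count and per-VM rate are assumed numbers, not measurements from this thread:

```python
# With centralized SNAT, one gateway node carries the aggregate
# north-south traffic of every VM without a floating IP. Illustrative only.

def snat_load_gbps(vm_count, avg_mbps_per_vm):
    """Aggregate SNAT bandwidth in Gb/s seen by the gateway node."""
    return vm_count * avg_mbps_per_vm / 1000.0

vms = 300         # assumed number of VMs relying on SNAT
per_vm_mbps = 5   # assumed average north-south rate per VM

load = snat_load_gbps(vms, per_vm_mbps)
print(f"{vms} VMs x {per_vm_mbps} Mb/s = {load:.1f} Gb/s through the gateway node")
```

Even modest per-VM rates aggregate quickly at the single SNAT gateway, so if that role lands on a controller, its NIC, not the average control-plane chatter, becomes the sizing constraint.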
participants (4)
- Danny Webb
- Michael Knox
- Sławek Kapłoński
- William Muriithi