Compute node NIC bonding for increased instance throughput
Good day,

I have a question for the OpenStack community; hopefully someone can help me out here.

Goal
------------------
Provision an NFS instance capable of providing 20Gbps of network throughput to multiple other instances within the same project/network.

Background
------------------
We run an OpenStack Stein cluster on Ubuntu 18.04. Our Neutron architecture uses openvswitch and GRE. Our compute nodes have two 10G NICs configured in a layer3+4 LACP bond to the top-of-rack switch.

Observations
------------------
We successfully see 20Gbps of traffic balanced across both slaves in the bond when performing iperf3 tests at the *baremetal/OS/Ubuntu* layer, with two other compute nodes as iperf3 clients.

Problem
------------------
We are unable to achieve 20Gbps at the instance level. We have tried multiple iperf3 connections from multiple other instances on different compute nodes, but we can only reach 10Gbps, and traffic is not utilizing both slaves in the bond: one slave gets all of the traffic while the other sits essentially idle.

I have some configuration output here: http://paste.openstack.org/show/QdQq76q6VI1XN5tLW0xH/

Any help would be appreciated!

Jared Baker
Cloud Architect, Ontario Institute for Cancer Research
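[For readers following the thread: the behaviour described above is what a layer3+4 transmit hash policy produces by design. Below is a minimal sketch of the hash described in the Linux bonding documentation, simplified for illustration (real kernels apply extra bit mixing); the IP and port values are made up.]

```python
# Simplified model of the bonding driver's layer3+4 xmit_hash_policy,
# per the Linux bonding documentation:
#   hash = (src_port XOR dst_port) XOR ((src_ip XOR dst_ip) AND 0xffff)
#   slave = hash MOD number_of_slaves
# This is illustrative only; the exact kernel implementation differs.
def xmit_hash_l34(src_ip: int, dst_ip: int, src_port: int, dst_port: int,
                  n_slaves: int = 2) -> int:
    """Return the index of the bond slave a given flow is pinned to."""
    h = (src_port ^ dst_port) ^ ((src_ip ^ dst_ip) & 0xFFFF)
    return h % n_slaves

# A single TCP stream has one fixed 5-tuple, so every packet of that
# stream hashes to the same slave: one 10G link is its ceiling.
first = xmit_hash_l34(0x0A000001, 0x0A000002, 49152, 5201)
assert all(xmit_hash_l34(0x0A000001, 0x0A000002, 49152, 5201) == first
           for _ in range(100))

# Multiple streams with different source ports can land on different
# slaves, which is what spreads traffic across the bond.
slaves = {xmit_hash_l34(0x0A000001, 0x0A000002, port, 5201)
          for port in range(49152, 49160)}
print(slaves)  # with varying source ports, both slave indices can appear
```

The key consequence: per-flow balancing can fill both links in aggregate, but no single flow can ever exceed one link's capacity.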
Hi Jared,

A single stream will utilize just one link. Have you tried multiple streams from different sources? What do you mean by layer3+4? Do you mean the xmit hash policy?

Sinan
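[A sketch of the multi-stream test Sinan suggests, using iperf3's parallel-stream option; the server address and ports here are hypothetical placeholders, not taken from Jared's setup.]

```shell
# On the NFS instance (hypothetical address 10.0.0.10), start one
# iperf3 server per port so different clients hit distinct L4 tuples:
iperf3 -s -p 5201 -D
iperf3 -s -p 5202 -D

# On each client instance, run several parallel streams (-P); each
# stream gets its own source port, giving the layer3+4 hash a chance
# to spread the flows across both bond slaves:
iperf3 -c 10.0.0.10 -p 5201 -P 4 -t 30   # client on compute node A
iperf3 -c 10.0.0.10 -p 5202 -P 4 -t 30   # client on compute node B
```

Note that whether the streams actually spread depends on what headers the bond sees on the wire, which for tenant traffic includes the tunnel encapsulation.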
On 8 Jan 2020, at 17:25, shubjero <shubjero@gmail.com> wrote:
participants (2)
- shubjero
- Sinan Polat