[openstack-dev] [Fuel] 10Gbe performance issue.

Piotr Korthals piotr.korthals at intel.com
Thu Jan 22 11:19:29 UTC 2015

Thanks, Rick. It looks like GRO was something we were missing in our setup.

Here are some results from my tests:

iperf with GRO disabled on server side: 2.5-3 Gbps
iperf with GRO enabled on server side:  3.5-4 Gbps (GRO was enabled on
eth0, br-eth0, br-storage)
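For reference, GRO can be checked and toggled with ethtool along these lines (a sketch using the interface names from our setup, not the exact commands we ran):

```shell
# Check whether GRO is currently enabled on the physical NIC
ethtool -k eth0 | grep generic-receive-offload

# Enable GRO on the physical interface and the bridges on top of it
ethtool -K eth0 gro on
ethtool -K br-eth0 gro on
ethtool -K br-storage gro on
```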

Additionally, I enabled the "Enable OVS VLAN splinters hard trunks
workaround" option of the Fuel deployment.

iperf with GRO disabled using hw VLAN splinters and MTU 1.5k: ~5 Gbps
iperf with GRO disabled using hw VLAN splinters and MTU 9k:   9-10 Gbps
iperf with GRO enabled using hw VLAN splinters and MTU 1.5k:  9-10 Gbps
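Jumbo frames for the 9k test were set along these lines (a hypothetical sketch; the exact interface names depend on the deployment):

```shell
# Raise the MTU to 9000 on the physical NIC first; bridge/VLAN
# interfaces on top of it must not exceed the parent's MTU
ip link set dev eth0 mtu 9000
ip link set dev br-storage mtu 9000

# Verify the new MTU took effect
ip link show dev eth0 | grep mtu
```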

Then I tested iperf between machines with two different configurations
(with OVS VLAN splinters and without it):

default -> OVS_VLAN_splinters (GRO disabled): 2.5 Gbps
default -> OVS_VLAN_splinters (GRO enabled):  5 Gbps

OVS_VLAN_splinters -> default (GRO disabled): 2.5-3 Gbps
OVS_VLAN_splinters -> default (GRO enabled):  5-10 Gbps
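The numbers above come from single-stream iperf runs along these lines (a sketch; the server address is a placeholder, and the run length is an assumption):

```shell
# On the receiving node: start the iperf server (TCP, default port 5001)
iperf -s

# On the sending node: single TCP stream for 30 seconds
iperf -c 192.168.1.2 -t 30
```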

It looks like OVS is not performing well enough in this setup for
tagged VLANs (our br-storage runs on a tagged VLAN).
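As Rick suggested below, whether GRO is actually kicking in can be confirmed by capturing on the receiving interface and looking for coalesced TCP segments larger than the MTU (a diagnostic sketch; port 5001 assumes default iperf):

```shell
# Capture a few packets on the receive side; with GRO active, the
# "length" values in the trace will exceed the 1500-byte MTU
tcpdump -i br-storage -nn -c 20 'tcp port 5001'
```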

Any comments?

On Wed, 2015-01-21 at 08:47 -0800, Rick Jones wrote:

> On 01/21/2015 03:20 AM, Skamruk, Piotr wrote:
> > On Wed, 2015-01-21 at 10:53 +0000, Skamruk, Piotr wrote:
> >> On Tue, 2015-01-20 at 17:41 +0100, Tomasz Napierala wrote:
> >>> [...]
> >>> How this was measured? VM to VM? Compute to compute?
> >> [...]
> >> Probably in ~30 minutes we will also have results on plain CentOS with
> >> the Mirantis kernel, and on Fuel-deployed CentOS with the plain CentOS
> >> kernel (2.6.32 in both cases, but with a different patchset subnumber).
> >
> > OK, our tests were done a little badly. On plain CentOS, iperf was run
> > directly on the physical interfaces, but on the Fuel-deployed nodes we
> > were using the br-storage interfaces, which are actually Open vSwitch
> > based.
> >
> > So this is not a kernel problem, but a single-stream-over-OVS issue.
> >
> > So we will investigate this further...
> >
> Not sure if iperf will emit it, but you might look at the bytes per
> receive on the receiving end. Or you can hang a tcpdump off the
> receiving interface (the br-storage I presume here) and see if you are
> getting the likes of GRO - if you are getting GRO you will see "large"
> TCP segments in the packet trace on the receiving side. You can do the
> same with the physical interfaces for comparison.
>
> 2.5 to 3 Gbit/s "feels" rather like what one would get with 10 GbE in
> the days before GRO/LRO.
>
> happy benchmarking,
>
> rick jones
> http://www.netperf.org/
