[openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)
Rick Jones
rick.jones2 at hp.com
Wed Feb 25 17:07:49 UTC 2015
On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
> I’m writing a plan/script to benchmark OVS+OF(CT) vs
> OVS+LB+iptables+ipsets,
> so we can make sure there’s a real difference before jumping into any
> OpenFlow security group filters when we have connection tracking in OVS.
>
> The plan is to keep all of it in a single multicore host, and make
> all the measures within it, to make sure we just measure the
> difference due to the software layers.
>
> Suggestions or ideas on what to measure are welcome, there’s an initial
> draft here:
>
> https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
Conditions to be benchmarked

     Initial connection establishment time
     Max throughput on the same CPU
Large MTUs and stateless offloads can mask a multitude of path-length
sins, and there is a great deal more to performance than Mbit/s. While
some of that may be covered by the first item via the likes of, say,
netperf TCP_CRR or TCP_CC testing, I would suggest that in addition to a
focus on Mbit/s (which I assume is the focus of the second item) there
be something for packet-per-second performance. Something like netperf
TCP_RR and perhaps aggregate TCP_RR or UDP_RR testing.
Doesn't have to be netperf; that is simply the hammer I wield :)
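To make that concrete, here is a rough sketch (in Python, just wrapping
the netperf command line) of what a pass over those test types might
look like. The target address, duration and test list are illustrative
assumptions, and it presumes a netserver is already listening at the
other end of the path under test:

#!/usr/bin/env python3
"""Rough sketch of driving netperf over the test types mentioned above.

Assumes netperf is installed and a netserver is already listening on
TARGET (in the single-host plan that could be an address inside a
network namespace); the address, duration and test list are
illustrative only.
"""
import subprocess

TARGET = "10.0.0.2"    # hypothetical netserver endpoint
DURATION = "30"        # seconds per test, arbitrary

# TCP_CRR and TCP_CC exercise connection establishment (and teardown);
# TCP_RR and UDP_RR give a transactions-per-second figure that tracks
# packet-per-second capacity; TCP_STREAM is the usual Mbit/s bulk test.
TESTS = ["TCP_CRR", "TCP_CC", "TCP_RR", "UDP_RR", "TCP_STREAM"]


def run_test(test):
    cmd = ["netperf", "-H", TARGET, "-t", test, "-l", DURATION]
    print("+", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout


if __name__ == "__main__":
    for test in TESTS:
        print(run_test(test))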
What follows may be a bit of perfect being the enemy of the good, or
mission creep...
Staying on the same CPU would certainly simplify things, but it will
almost certainly exhibit different processor data cache behaviour from
actually going through a physical network with a multi-core system.
Physical NICs will possibly (probably?) have RSS going, which may cause
cache lines to be pulled around. The way packets will be buffered will
differ as well. Etc etc. How well the different solutions scale with
cores is definitely a difference of interest between the two software
layers.
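If the multi-core side of this does get explored, something along these
lines could drive an aggregate TCP_RR run with one netperf instance
pinned per core. Again the address, duration, CPU list and the output
parsing (it assumes the classic TCP_RR result line with -P 0) are
assumptions on my part, not anything from the draft:

#!/usr/bin/env python3
"""Rough sketch of an aggregate TCP_RR run, one netperf per core.

Each client is pinned with sched_setaffinity before exec, and the
per-instance transaction rates are summed.  TARGET, DURATION, CPUS and
the output parsing (default TCP_RR format with -P 0) are assumptions.
"""
import os
import subprocess

TARGET = "10.0.0.2"      # hypothetical netserver endpoint
DURATION = "30"          # seconds
CPUS = [0, 1, 2, 3]      # cores to load in parallel


def launch(cpu):
    # -P 0 suppresses the banner so only the result line(s) remain.
    cmd = ["netperf", "-H", TARGET, "-t", "TCP_RR", "-l", DURATION, "-P", "0"]
    return subprocess.Popen(
        cmd, stdout=subprocess.PIPE, text=True,
        preexec_fn=lambda: os.sched_setaffinity(0, {cpu}))


if __name__ == "__main__":
    procs = [launch(cpu) for cpu in CPUS]
    total = 0.0
    for proc in procs:
        out, _ = proc.communicate()
        # First non-blank line ends with the transactions/sec figure in
        # the classic TCP_RR output format.
        line = next(l for l in out.splitlines() if l.strip())
        total += float(line.split()[-1])
    print("aggregate TCP_RR over %d cores: %.0f trans/s" % (len(CPUS), total))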
rick