[openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)
Tapio Tallgren
tapio.tallgren at gmail.com
Wed Feb 25 19:34:09 UTC 2015
Hi,
An RFC 2544 throughput test with near-zero packet loss is a pretty standard
performance benchmark. It is also used in the OPNFV project (
https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
).
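For context, RFC 2544 throughput is typically found by binary-searching the
offered load for the highest rate that still sees zero (or near-zero) packet
loss. Here is a minimal sketch of that search loop in Python; offer_load()
is a hypothetical hook into whatever traffic generator is used, not part of
any existing tool:

def rfc2544_throughput(offer_load, line_rate_pps, duration_s=60,
                       loss_tolerance=0.0, resolution_pps=1000):
    """Binary-search the highest offered rate whose loss stays within tolerance."""
    lo, hi = 0.0, float(line_rate_pps)
    best = 0.0
    while hi - lo > resolution_pps:
        rate = (lo + hi) / 2.0
        lost = offer_load(rate, duration_s)   # packets lost during the trial
        sent = rate * duration_s
        if sent and (lost / sent) <= loss_tolerance:
            best, lo = rate, rate             # passed: try a higher rate
        else:
            hi = rate                         # failed: back off
    return best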
Does this mean that OpenStack will have stateful firewalls (or security
groups) implemented with OpenFlow? Are any other ideas planned, like
ebtables-style filtering?
-Tapio
On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones <rick.jones2 at hp.com> wrote:
> On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
>
>> I’m writing a plan/script to benchmark OVS+OF(CT) vs
>> OVS+LB+iptables+ipsets,
>> so we can make sure there’s a real difference before jumping into any
>> OpenFlow security group filters when we have connection tracking in OVS.
>>
>> The plan is to keep all of it on a single multicore host and take all
>> the measurements within it, to make sure we measure only the
>> difference due to the software layers.
>>
>> Suggestions or ideas on what to measure are welcome; there’s an initial
>> draft here:
>>
>> https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
>>
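To make the comparison Miguel describes more concrete, here is a rough,
hypothetical sketch of the two rule styles being measured. The bridge and
ipset names are made up, and the ct()/ct_state OpenFlow syntax is the one
proposed by the OVS connection-tracking patches, so treat this purely as an
illustration of the shape of each approach, not as what the draft installs:

import subprocess

def sh(cmd):
    subprocess.check_call(cmd, shell=True)

def setup_iptables_ipset(remote_cidrs):
    # Linux bridge + iptables + ipset style filtering (stateful via conntrack).
    sh("ipset -exist create sg-remote hash:net")
    for cidr in remote_cidrs:
        sh("ipset -exist add sg-remote %s" % cidr)
    sh("iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT")
    sh("iptables -A FORWARD -m set --match-set sg-remote src -j ACCEPT")

def setup_openflow_ct():
    # OVS OpenFlow rules using the proposed connection-tracking match/action.
    sh('ovs-ofctl add-flow br-test "table=0,priority=100,ip,ct_state=-trk,actions=ct(table=1)"')
    sh('ovs-ofctl add-flow br-test "table=1,priority=100,ip,ct_state=+trk+est,actions=NORMAL"')
    sh('ovs-ofctl add-flow br-test "table=1,priority=90,tcp,ct_state=+trk+new,tp_dst=5001,actions=ct(commit),NORMAL"')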
>
> Conditions to be benchmarked
>
> Initial connection establishment time
> Max throughput on the same CPU
>
> Large MTUs and stateless offloads can mask a multitude of path-length
> sins, and there is a great deal more to performance than Mbit/s. While
> some of that may be covered by the first item via the likes of, say, netperf
> TCP_CRR or TCP_CC testing, I would suggest that in addition to a focus on
> Mbit/s (which I assume is the focus of the second item) there be something
> measuring packets-per-second performance, such as netperf TCP_RR and
> perhaps aggregate TCP_RR or UDP_RR testing.
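A rough sketch of how the netperf runs suggested here could be driven. The
host address, duration, and stream count are placeholders, and the parsing
assumes the classic two-line -P 0 output where the transaction rate ends the
first result line:

import subprocess

NETSERVER_HOST = "10.0.0.2"   # placeholder: wherever netserver is running
DURATION = 30                 # seconds per data point

def _netperf_cmd(test, host, length):
    return ["netperf", "-P", "0", "-H", host, "-t", test, "-l", str(length)]

def _trans_per_sec(output):
    # With -P 0, the first result line ends with the transaction rate.
    lines = [l for l in output.decode().splitlines() if l.strip()]
    return float(lines[0].split()[-1])

def run_netperf(test, host=NETSERVER_HOST, length=DURATION):
    """Transactions/s from a single netperf instance (TCP_RR, TCP_CRR, UDP_RR)."""
    return _trans_per_sec(subprocess.check_output(_netperf_cmd(test, host, length)))

def run_aggregate(test, streams=8, host=NETSERVER_HOST, length=DURATION):
    """Run several netperf instances concurrently and sum their rates."""
    procs = [subprocess.Popen(_netperf_cmd(test, host, length),
                              stdout=subprocess.PIPE)
             for _ in range(streams)]
    return sum(_trans_per_sec(p.communicate()[0]) for p in procs)

# For example:
#   run_netperf("TCP_CRR")              # connection establishment + request/response
#   run_netperf("TCP_RR")               # request/response over one connection
#   run_aggregate("UDP_RR", streams=16)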
>
> Doesn't have to be netperf; that is simply the hammer I wield :)
>
> What follows may be a bit of perfect being the enemy of the good, or
> mission creep...
>
> Staying on the same CPU would certainly simplify things, but it will almost
> certainly exhibit different processor data cache behaviour than actually
> going through a physical network with a multi-core system. Physical NICs
> will possibly (probably?) have RSS going, which may cause cache lines to be
> pulled around. The way packets are buffered will differ as well. Etc.,
> etc. How well the different solutions scale with cores is definitely a
> difference of interest between the two software layers.
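If the single-host approach is kept, one way to at least probe the
core-scaling question Rick raises is to pin each concurrent netperf instance
to its own CPU with taskset and sweep the number of cores. Again a rough
sketch with placeholder host and duration values:

import subprocess

def run_pinned_aggregate(test, cpus, host="10.0.0.2", length=30):
    """One netperf per CPU in `cpus`, pinned with taskset; returns summed trans/s."""
    cmds = [["taskset", "-c", str(cpu),
             "netperf", "-P", "0", "-H", host, "-t", test, "-l", str(length)]
            for cpu in cpus]
    procs = [subprocess.Popen(c, stdout=subprocess.PIPE) for c in cmds]
    rates = []
    for p in procs:
        out = [l for l in p.communicate()[0].decode().splitlines() if l.strip()]
        rates.append(float(out[0].split()[-1]))   # trans/s ends the first result line
    return sum(rates)

# e.g. sweep core counts and compare the two firewall back-ends:
#   for n in (1, 2, 4, 8):
#       print(n, run_pinned_aggregate("TCP_RR", cpus=range(n)))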
>
> rick
>
>