<div dir="ltr">Hi,<div><br></div><div>The RFC2544 with near zero packet loss is a pretty standard performance benchmark. It is also used in the OPNFV project (<a href="https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases">https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases</a>).</div><div><br></div><div>Does this mean that OpenStack will have stateful firewalls (or security groups)? Any other ideas planned, like ebtables type filtering?</div><div><br></div><div>-Tapio</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones <span dir="ltr"><<a href="mailto:rick.jones2@hp.com" target="_blank">rick.jones2@hp.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> I’m writing a plan/script to benchmark OVS+OF(CT) vs OVS+LB+iptables+ipsets,
>> so we can make sure there’s a real difference before jumping into any
>> OpenFlow security group filters when we have connection tracking in OVS.
>>
>> The plan is to keep all of it in a single multicore host and take all
>> the measurements within it, to make sure we measure only the difference
>> due to the software layers.
>>
>> Suggestions or ideas on what to measure are welcome; there’s an initial
>> draft here:
>>
>> https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
>
> Conditions to be benchmarked:
>
>   Initial connection establishment time
>   Max throughput on the same CPU
>
> Large MTUs and stateless offloads can mask a multitude of path-length sins, and there is a great deal more to performance than Mbit/s. While some of that may be covered by the first item via the likes of, say, netperf TCP_CRR or TCP_CC testing, I would suggest that in addition to the focus on Mbit/s (which I assume is the focus of the second item) there be something for packets-per-second performance, such as netperf TCP_RR and perhaps aggregate TCP_RR or UDP_RR testing.
>
> It doesn't have to be netperf; that is simply the hammer I wield :)
>
> What follows may be a bit of the perfect being the enemy of the good, or mission creep...
>
> Staying on the same CPU would certainly simplify things, but it will almost certainly exhibit different processor data-cache behaviour than actually going through a physical network on a multi-core system. Physical NICs will possibly (probably?) have RSS enabled, which may cause cache lines to be pulled around, and the way packets are buffered will differ as well. How well the different solutions scale with cores is definitely a difference of interest between the two software layers.
>
> rick
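
To make the packets-per-second suggestion concrete, here is a rough sketch of how those netperf tests could be driven from the benchmark host. It assumes netperf is installed and a netserver is already answering on the target address; the address, the durations and the simple "launch N copies" take on the aggregate case are placeholders rather than a recommendation, and exact options may vary between netperf versions.

# Rough sketch of driving the netperf tests mentioned above from the
# benchmark host. Assumes netperf is installed and a netserver is already
# answering on TARGET; the address, durations and the "launch N copies"
# approach to the aggregate case are placeholders, not a recommendation.

import subprocess

TARGET = "192.168.0.2"   # placeholder address of the netserver side

def run_netperf(test, duration=30):
    """Run a single netperf test (e.g. TCP_RR, TCP_CRR, TCP_STREAM) and return its output."""
    cmd = ["netperf", "-H", TARGET, "-t", test, "-l", str(duration)]
    return subprocess.check_output(cmd).decode()

def run_aggregate(test, instances=4, duration=30):
    """Crude aggregate run: several concurrent netperf instances of the same test."""
    cmd = ["netperf", "-H", TARGET, "-t", test, "-l", str(duration)]
    procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE) for _ in range(instances)]
    return [p.communicate()[0].decode() for p in procs]

if __name__ == "__main__":
    for test in ("TCP_STREAM", "TCP_RR", "TCP_CRR", "UDP_RR"):
        print(test)
        print(run_netperf(test))

The receiving side only needs a plain netserver running; if finer control is wanted, test-specific options (request/response sizes and so on) could be appended after netperf's "--" separator.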