<div><span style="font-size: 14px;">Inline comments follow below, but first I wanted to respond to Brian’s question,</span></div><div><span style="font-size: 14px;">which has been cut out:</span></div><div><span style="font-size: 14px;"><br></span></div><div><span style="font-size: 14px;">We’re talking here about doing a preliminary analysis of the networking performance,</span></div><div><span style="font-size: 14px;">before writing any real code at the neutron level.</span></div><div><br></div><div><span style="font-size: 14px;">If that looks right, then we should move on to a preliminary (and orthogonal to iptables/LB)</span></div><div><span style="font-size: 14px;">implementation. At that point we will be able to examine the scalability of the solution</span></div><div><span style="font-size: 14px;">with regard to switching OpenFlow rules, which is going to be severely affected</span></div><div><span style="font-size: 14px;">by the mechanism we use to handle OF rules on the bridge:</span></div><div><span style="font-size: 14px;"><br></span></div><div><span style="font-size: 14px;"> * via OpenFlow, making the agent a “real” OF controller, with the current effort to use</span></div><div><span style="font-size: 14px;"> the ryu framework plugin to do that.</span></div><div><span style="font-size: 14px;"> * via the command line (this would be alleviated by the current rootwrap work, but the</span></div><div><span style="font-size: 14px;"> former option would be preferred).</span></div><div><span style="font-size: 14px;"><br></span></div><div><span style="font-size: 14px;">Also, ipset groups can be mapped to conjunctive groups in OF (thanks Ben Pfaff for the</span></div><div><span style="font-size: 14px;">explanation, if you’re reading this ;-))</span></div><div><span style="font-size: 14px;"><br></span></div><div><span style="font-size: 14px;">Best,</span></div><div><span style="font-size: 14px;">Miguel Ángel</span></div><div><span style="font-size: 14px;"><br></span></div><div><span 
style="font-size: 14px;"><br></span></div><div><span style="color: rgb(160, 160, 168);"><br></span></div><div><span style="color: rgb(160, 160, 168);">On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:</span></div>
<blockquote type="cite" style="border-left-style:solid;border-width:1px;margin-left:0px;padding-left:10px;">
<span><div><div><div dir="ltr">Hi,<div><br></div><div>RFC 2544 with near-zero packet loss is a pretty standard performance benchmark. It is also used in the OPNFV project (<a href="https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases">https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases</a>).</div><div><br></div><div>Does this mean that OpenStack will have stateful firewalls (or security groups)? Are any other ideas planned, like ebtables-type filtering?</div><div><br></div></div></div></div></span></blockquote><div><span style="font-size: 14px;">What I am proposing is to keep the statefulness we have now</span></div><div><span style="font-size: 14px;">with security groups (RELATED/ESTABLISHED connections are allowed back </span></div><div><span style="font-size: 14px;">on open ports) while adding a new firewall driver that works only with OVS+OF (no iptables </span></div><div><span style="font-size: 14px;">or linux bridge).</span></div><div><span style="font-size: 14px;"><br></span></div><div>That will be possible (without auto-populating OF rules in opposite directions) thanks to</div><div>the new connection-tracker functionality that will eventually be merged into OVS.</div><div> </div><blockquote type="cite" style="border-left-style:solid;border-width:1px;margin-left:0px;padding-left:10px;"><span><div><div><div dir="ltr"><div></div><div>-Tapio</div></div><div><br><div>On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones <span dir="ltr"><<a href="mailto:rick.jones2@hp.com" target="_blank">rick.jones2@hp.com</a>></span> wrote:<br><blockquote type="cite"><div><div><div>On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:<br><blockquote type="cite"><div>
I’m writing a plan/script to benchmark OVS+OF(CT) vs<br>
OVS+LB+iptables+ipsets,<br>
so we can make sure there’s a real difference before jumping into any<br>
OpenFlow security group filters when we have connection tracking in OVS.<br>
<br>
The plan is to keep all of it on a single multicore host, and take<br>
all the measurements within it, to make sure we measure only the<br>
difference due to the software layers.<br>
<br>
Suggestions or ideas on what to measure are welcome; there’s an initial<br>
draft here:<br>
<br>
<a href="https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct" target="_blank">https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct</a><br>
</div></blockquote><br></div></div>
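For concreteness, the single-host setup described in the draft could look something like the following (bridge, namespace and address names are purely illustrative; the commands assume Open vSwitch is installed and you have root):<br>
<br>
```shell
# Two namespaces on one OVS bridge; all traffic stays on-host,
# so only the software-switching path is measured.
ovs-vsctl add-br br-test
ip netns add ns-a
ip netns add ns-b
# veth pairs: one end inside the namespace, the other on the bridge
ip link add veth-a type veth peer name veth-a-br
ip link add veth-b type veth peer name veth-b-br
ip link set veth-a netns ns-a
ip link set veth-b netns ns-b
ovs-vsctl add-port br-test veth-a-br
ovs-vsctl add-port br-test veth-b-br
ip netns exec ns-a ip addr add 10.0.0.1/24 dev veth-a
ip netns exec ns-b ip addr add 10.0.0.2/24 dev veth-b
ip netns exec ns-a ip link set veth-a up
ip netns exec ns-b ip link set veth-b up
ip link set veth-a-br up
ip link set veth-b-br up
# Pin the endpoints to known cores so runs are comparable, e.g.:
#   ip netns exec ns-a taskset -c 0 netserver
#   ip netns exec ns-b taskset -c 1 netperf -H 10.0.0.1 -t TCP_RR
```
<br>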
Conditions to be benchmarked<br>
<br>
Initial connection establishment time<br>
Max throughput on the same CPU<br>
<br>
Large MTUs and stateless offloads can mask a multitude of path-length sins, and there is a great deal more to performance than Mbit/s. While some of that may be covered by the first item via the likes of, say, netperf TCP_CRR or TCP_CC testing, I would suggest that in addition to the focus on Mbit/s (which I assume is the focus of the second item) there be something for packets-per-second performance. Something like netperf TCP_RR, and perhaps aggregate TCP_RR or UDP_RR testing.<br>
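For example (the target address is illustrative, and netserver is assumed to already be running there):<br>
<br>
```shell
# Connection setup cost: TCP_CRR opens a new connection per transaction.
netperf -H 10.0.0.2 -t TCP_CRR -l 30

# Packets-per-second behaviour: TCP_RR with 1-byte request/response keeps
# bulk throughput out of the picture and exercises per-packet path length.
netperf -H 10.0.0.2 -t TCP_RR -l 30 -- -r 1,1

# Aggregate RR: several concurrent instances to see multi-flow scaling
# (-P 0 suppresses the banners so the results are easy to collect).
for i in 1 2 3 4; do
    netperf -H 10.0.0.2 -t TCP_RR -l 30 -P 0 -- -r 1,1 &
done
wait
```
<br>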
<br>
Doesn't have to be netperf, that is simply the hammer I wield :)<br>
<br>
What follows may be a bit of perfect being the enemy of the good, or mission creep...<br>
<br>
Running on the same CPU would certainly simplify things, but it will almost certainly exhibit different processor data-cache behaviour than actually going through a physical network with a multi-core system. Physical NICs will possibly (probably?) have RSS going, which may cause cache lines to be pulled around. The way packets are buffered will differ as well. Etc. etc. How well the different solutions scale with cores is definitely a difference of interest between the two software layers.<span><font color="#888888"><br>
<br>
rick</font></span><div><div><br>
<br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div>-Tapio</div>
</div>
</div></div></span>
</blockquote>
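<div><span style="font-size: 14px;">To make the conntrack + conjunction point above concrete, this is roughly the shape such rules could take once the connection-tracking work lands in OVS (table numbers, ports and addresses are only illustrative; syntax per the ovs-ofctl flow format):</span></div><div><span style="font-size: 14px;"><br></span></div>

```
# Send untracked IP packets through the connection tracker first
table=0, priority=100, ip, ct_state=-trk, actions=ct(table=1)
# Established connections are allowed back statefully, with no need
# to auto-populate OF rules in the opposite direction
table=1, priority=90, ip, ct_state=+trk+est, actions=normal
# New connections matching a security-group rule get committed
table=1, priority=80, tcp, tp_dst=22, ct_state=+trk+new, actions=ct(commit),normal
# An ipset-like group of remote prefixes expressed as a conjunction:
# dimension 1/2 = source prefix, dimension 2/2 = destination port
table=1, priority=70, ip, nw_src=10.1.0.0/24, actions=conjunction(1,1/2)
table=1, priority=70, ip, nw_src=10.2.0.0/24, actions=conjunction(1,1/2)
table=1, priority=70, tcp, tp_dst=80, actions=conjunction(1,2/2)
table=1, priority=70, conj_id=1, ip, actions=ct(commit),normal
```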
<div>
<br>
</div>