[openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

Miguel Ángel Ajo majopela at redhat.com
Thu Feb 26 06:57:27 UTC 2015


On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:
> Inline comments follow below, but first I wanted to respond to Brian's question,
> which has been cut out:
>  
> We’re talking here about doing a preliminary analysis of the networking performance,
> before writing any real code at the neutron level.
>  
> If that looks right, then we should go for a preliminary implementation (orthogonal to the
> iptables/LB one). At that point we will be able to examine the scalability of the solution
> with regard to switching OpenFlow rules, which is going to be heavily affected
> by the way we handle OF rules in the bridge:
>  
>    * via OpenFlow, making the agent a “real” OF controller, building on the current effort
>      to use the ryu framework plugin to do that.
>    * via cmdline (its cost would be alleviated by the current rootwrap work, but the former
>      option would be preferred; a rough sketch of this path is included below).
>  
> Also, ipset groups can be mapped onto conjunctive matches in OF (thanks to Ben Pfaff for the
> explanation, if you’re reading this ;-))
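>  
> As a purely illustrative sketch (bridge name, addresses and ports are made up, and the
> conjunction flow syntax is just what I understood from Ben's explanation, so treat it as an
> assumption until it actually lands), this is roughly how an ipset-style "remote group
> addresses x allowed ports" rule could collapse into a handful of conjunctive flows, driven
> here through the cmdline path from Python:
>  
>     # Sketch only, not neutron code: instead of addresses*ports individual
>     # flows, we get one flow per address, one per port, plus the conj_id
>     # flow that fires only when one flow from each dimension has matched.
>     import subprocess
>  
>     def add_flow(flow):
>         subprocess.check_call(['ovs-ofctl', 'add-flow', 'br-int', flow])
>  
>     remote_group = ['10.0.0.5', '10.0.0.6', '10.0.0.7']   # dimension 1: addresses
>     allowed_ports = [22, 80]                               # dimension 2: ports
>  
>     for ip in remote_group:
>         add_flow('priority=50,ip,nw_src=%s,actions=conjunction(10,1/2)' % ip)
>     for port in allowed_ports:
>         add_flow('priority=50,tcp,tp_dst=%d,actions=conjunction(10,2/2)' % port)
>  
>     # fires only when one flow from each dimension has matched
>     add_flow('priority=50,conj_id=10,actions=normal')
>  
> The ryu/controller option would build the equivalent flow mods over the OF connection
> instead of forking ovs-ofctl once per rule.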
>  
> Best,
> Miguel Ángel
>  
>  
>  
> > On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:
> > Hi,
> >  
> > > RFC 2544 with near-zero packet loss is a pretty standard performance benchmark. It is also used in the OPNFV project (https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases).
> >  
> > > Does this mean that OpenStack will have stateful firewalls (or security groups)? Any other ideas planned, like ebtables-type filtering?
> >  
> What I am proposing is to maintain the statefulness we have now with security groups
> (return traffic of RELATED/ESTABLISHED connections on open ports is allowed back in)
> while adding a new firewall driver that works only with OVS+OF (no iptables
> or linux bridge).
>  
> That will be possible (without auto-populating OF rules in the opposite direction) thanks to
> the new connection tracker functionality expected to eventually be merged into OVS.
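>  
> To give an idea of what that could look like once ct() is available (the flow syntax below is
> an assumption based on the conntrack series, not anything merged or final, and the port is
> made up), return traffic would be matched by connection state instead of by a mirrored rule:
>  
>     # Sketch of a stateful "allow ssh in, allow replies back" using the
>     # proposed ct() action, instead of a mirrored rule in the opposite
>     # direction. Flow syntax is an assumption, not merged OVS behaviour.
>     import subprocess
>  
>     def add_flow(flow):
>         subprocess.check_call(['ovs-ofctl', 'add-flow', 'br-int', flow])
>  
>     # send IP traffic through the connection tracker, then on to table 1
>     add_flow('table=0,priority=10,ip,actions=ct(table=1)')
>  
>     # new connections: only what the security group opens (ssh here),
>     # committed so that replies are recognized later
>     add_flow('table=1,priority=10,ct_state=+new+trk,tcp,tp_dst=22,'
>              'actions=ct(commit),normal')
>  
>     # established/related traffic flows back without per-port rules
>     add_flow('table=1,priority=10,ct_state=+est+trk,ip,actions=normal')
>     add_flow('table=1,priority=10,ct_state=+rel+trk,ip,actions=normal')
>  
>     # everything else in table 1 is dropped
>     add_flow('table=1,priority=1,actions=drop')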
>   
> > -Tapio
> >  
> >  
> > > On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones <rick.jones2 at hp.com> wrote:
> > > On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
> > > > I’m writing a plan/script to benchmark OVS+OF(CT) vs
> > > > OVS+LB+iptables+ipsets,
> > > > so we can make sure there’s a real difference before jumping into any
> > > > OpenFlow security group filters when we have connection tracking in OVS.
> > > >  
> > > > The plan is to keep all of it on a single multicore host, and take
> > > > all the measurements within it, to make sure we just measure the
> > > > difference due to the software layers.
> > > >  
> > > > Suggestions or ideas on what to measure are welcome; there’s an initial
> > > > draft here:
> > > >  
> > > > https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
> > >  
> > > Conditions to be benchmarked
> > >  
> > >     Initial connection establishment time
> > >     Max throughput on the same CPU
> > >  
> > > Large MTUs and stateless offloads can mask a multitude of path-length sins.  And there is a great deal more to performance than Mbit/s. While some of that may be covered by the first item via the likes of, say, netperf TCP_CRR or TCP_CC testing, I would suggest that in addition to a focus on Mbit/s (which I assume is the focus of the second item) there be something for packets-per-second performance.  Something like netperf TCP_RR and perhaps aggregate TCP_RR or UDP_RR testing.
> > >  
> > > Doesn't have to be netperf, that is simply the hammer I wield :)
> > >  
> > > What follows may be a bit of perfect being the enemy of the good, or mission creep...
> > >  
> > > On the same CPU would certainly simplify things, but it will almost certainly exhibit different processor data cache behaviour than actually going through a physical network with a multi-core system.  Physical NICs will possibly (probably?) have RSS going, which may cause cache lines to be pulled around.  The way packets will be buffered will differ as well.  Etc., etc.  How well the different solutions scale with cores is definitely a difference of interest between the two software layers.
> > >  


Hi Rick, thanks for your feedback here; I’ll take it into consideration,
especially the small-packet pps measurements and
really using physical hosts.

Although I may start with an all-in-one setup for simplicity, we should
get more conclusive results from at least two hosts with decent NICs.
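
For the pps side, I’m thinking of runs along these lines on top of the plain
throughput test (just a sketch; the target hostname is a placeholder and it
assumes netperf/netserver are installed on both ends):

    # TCP_STREAM for Mbit/s, TCP_RR/UDP_RR for packets per second,
    # TCP_CRR for connection establishment cost; aggregate pps would
    # mean launching several of these in parallel.
    import subprocess

    TARGET = 'host-under-test'   # placeholder

    for test in ('TCP_STREAM', 'TCP_RR', 'TCP_CRR', 'UDP_RR'):
        subprocess.check_call(['netperf', '-H', TARGET, '-t', test, '-l', '30'])
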

I will put all this together in the document, and loop you in for review.  
> > > rick
> > >  
> > >  
> >  
> >  
> >  
> > --  
> > -Tapio  

