[openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

Ben Pfaff blp at nicira.com
Fri Feb 27 04:21:50 UTC 2015


This sounds quite similar to the planned support in OVN to "gateway" a
logical network to a particular VLAN on a physical port, so perhaps it
will be sufficient.

On Thu, Feb 26, 2015 at 05:58:40PM -0800, Kevin Benton wrote:
> If a port is bound with a VLAN segmentation type, it will get a VLAN id and
> a name of a physical network that it corresponds to. In the current plugin,
> each agent is configured with a mapping between physical networks and OVS
> bridges. The agent takes the bound port information and sets up rules to
> forward traffic from the VM port to the OVS bridge corresponding to the
> physical network. The bridge usually then has a physical interface added to
> it for the tagged traffic to use to reach the rest of the network.
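The mapping Kevin describes can be sketched roughly as follows (hypothetical names throughout; assuming the standard ML2/OVS agent configuration format):

```shell
# On the agent side, physical network names map to OVS bridges in
# the agent's config (e.g. ml2_conf.ini), something like:
#   [ovs]
#   bridge_mappings = physnet1:br-eth1
#
# The per-physnet bridge is created and the NIC carrying the tagged
# traffic is added to it:
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
# For a bound port, the agent then tags the VM's port with the bound
# segmentation id (tap name and VLAN id are made up here):
ovs-vsctl set Port tap1234abcd tag=101
```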
> 
> On Thu, Feb 26, 2015 at 4:19 PM, Ben Pfaff <blp at nicira.com> wrote:
> 
> > What kind of VLAN support would you need?
> >
> > On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote:
> > > If OVN chooses not to support VLANs, we will still need the current OVS
> > > reference anyway so it definitely won't be wasted work.
> > >
> > > On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo <
> > > majopela at redhat.com> wrote:
> > >
> > > >
> > > > Sharing thoughts that I was having:
> > > >
> > > > Maybe during the next summit it's worth discussing the future of the
> > > > reference agent(s); I feel we'll be replicating a lot of work across
> > > > OVN/OVS/RYU(ofagent) and maybe other plugins.
> > > >
> > > > I guess until OVN and its integration are ready we can't stop, so it
> > > > makes sense to keep development on our side; also, having an independent
> > > > plugin can help us iterate faster on new features. Yet I expect that OVN
> > > > will be more fluent at working with OVS and OpenFlow, as their designers
> > > > have a very deep knowledge of OVS under the hood, and it's C. ;)
> > > >
> > > > Best regards,
> > > >
> > > > On 26/2/2015, at 7:57, Miguel Ángel Ajo <majopela at redhat.com> wrote:
> > > >
> > > > On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:
> > > >
> > > > Inline comments follow after this, but I wanted to respond to Brian's
> > > > question, which has been cut out:
> > > > We're talking here of doing a preliminary analysis of the networking
> > > > performance, before writing any real code at the neutron level.
> > > >
> > > > If that looks right, then we should go into a preliminary (and
> > > > orthogonal to iptables/LB) implementation. At that point we will be able
> > > > to examine the scalability of the solution with regard to switching
> > > > OpenFlow rules, which is going to be severely affected by the way we
> > > > handle OF rules in the bridge:
> > > >    * via OpenFlow, making the agent a "real" OF controller, with the
> > > >      current effort to use the ryu framework plugin to do that.
> > > >    * via cmdline (would be alleviated by the current rootwrap work, but
> > > >      the former would be preferred).
> > > > Also, ipset groups can be moved into conjunctive groups in OF (thanks
> > > > Ben Pfaff for the explanation, if you're reading this ;-))
> > > > Best,
> > > > Miguel Ángel
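For reference, the conjunctive-match idea mentioned above can be sketched with ovs-ofctl (hypothetical addresses, ports, and table numbers; `conjunction()` syntax as introduced in OVS 2.4):

```shell
# A rule like "allow tcp ports {22, 80} from {10.0.0.0/24, 10.1.0.0/24}"
# becomes two flows per dimension plus one conj_id flow, instead of the
# full cross-product of (sources x ports):
ovs-ofctl add-flow br-int 'table=0,priority=100,ip,nw_src=10.0.0.0/24,actions=conjunction(1,1/2)'
ovs-ofctl add-flow br-int 'table=0,priority=100,ip,nw_src=10.1.0.0/24,actions=conjunction(1,1/2)'
ovs-ofctl add-flow br-int 'table=0,priority=100,tcp,tp_dst=22,actions=conjunction(1,2/2)'
ovs-ofctl add-flow br-int 'table=0,priority=100,tcp,tp_dst=80,actions=conjunction(1,2/2)'
# A packet matching one flow from each dimension then hits the conj_id flow:
ovs-ofctl add-flow br-int 'table=0,priority=100,conj_id=1,ip,actions=normal'
```

This keeps the flow count additive in the size of each set, which is what makes it a plausible replacement for ipset groups.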
> > > >
> > > >
> > > > On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:
> > > >
> > > > Hi,
> > > >
> > > > RFC 2544 testing with near-zero packet loss is a pretty standard
> > > > performance benchmark. It is also used in the OPNFV project (
> > https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
> > > > ).
> > > >
> > > > Does this mean that OpenStack will have stateful firewalls (or security
> > > > groups)? Any other ideas planned, like ebtables type filtering?
> > > >
> > > > What I am proposing is, in terms of maintaining the statefulness we
> > > > have now regarding security groups (RELATED/ESTABLISHED connections are
> > > > allowed back on open ports), adding a new firewall driver working only
> > > > with OVS+OF (no iptables or linux bridge).
> > > >
> > > > That will be possible (without auto-populating OF rules in opposite
> > > > directions) thanks to the new connection-tracker functionality to be
> > > > eventually merged into OVS.
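A minimal sketch of what such stateful rules could look like once the conntrack work lands (hypothetical table layout and port; `ct()`/`ct_state` syntax as proposed for OVS, not yet merged at the time of this thread):

```shell
# Send untracked IP traffic through the connection tracker, resubmitting
# to table 1 with connection state populated:
ovs-ofctl add-flow br-int 'table=0,priority=10,ip,ct_state=-trk,actions=ct(table=1)'
# Allow replies on established or related connections back in, with no
# need to auto-populate a mirrored rule in the opposite direction:
ovs-ofctl add-flow br-int 'table=1,priority=10,ip,ct_state=+trk+est,actions=normal'
ovs-ofctl add-flow br-int 'table=1,priority=10,ip,ct_state=+trk+rel,actions=normal'
# Commit new connections that match an allowed port (e.g. tcp/22):
ovs-ofctl add-flow br-int 'table=1,priority=10,tcp,tp_dst=22,ct_state=+trk+new,actions=ct(commit),normal'
```

This mirrors what the iptables driver gets from RELATED/ESTABLISHED matching, but entirely in OpenFlow.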
> > > >
> > > >
> > > > -Tapio
> > > >
> > > > On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones <rick.jones2 at hp.com>
> > wrote:
> > > >
> > > > On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
> > > >
> > > > I'm writing a plan/script to benchmark OVS+OF(CT) vs
> > > > OVS+LB+iptables+ipsets, so we can make sure there's a real difference
> > > > before jumping into any OpenFlow security group filters when we have
> > > > connection tracking in OVS.
> > > >
> > > > The plan is to keep all of it in a single multicore host, and take
> > > > all the measurements within it, to make sure we just measure the
> > > > difference due to the software layers.
> > > >
> > > > Suggestions or ideas on what to measure are welcome; there's an
> > > > initial draft here:
> > > >
> > > > https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
> > > >
> > > >
> > > > Conditions to be benchmarked
> > > >
> > > >     Initial connection establishment time
> > > >     Max throughput on the same CPU
> > > >
> > > > Large MTUs and stateless offloads can mask a multitude of path-length
> > > > sins.  And there is a great deal more to performance than Mbit/s. While
> > > > some of that may be covered by the first item via the likes of say
> > netperf
> > > > TCP_CRR or TCP_CC testing, I would suggest that in addition to a focus
> > on
> > > > Mbit/s (which I assume is the focus of the second item) there is
> > something
> > > > for packet per second performance.  Something like netperf TCP_RR and
> > > > perhaps aggregate TCP_RR or UDP_RR testing.
> > > >
> > > > Doesn't have to be netperf, that is simply the hammer I wield :)
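Concretely, the tests Rick names can be invoked along these lines (a sketch; assumes netperf is installed and a netserver is running on the target, here called $DEST):

```shell
# Bulk throughput in Mbit/s -- large MTUs and offloads can mask path-length costs
netperf -H $DEST -t TCP_STREAM
# Single-connection request/response rate: a proxy for packet-per-second behavior
netperf -H $DEST -t TCP_RR
# Connect/request/response/close: exercises connection establishment cost,
# which is where per-flow OpenFlow rule handling would show up
netperf -H $DEST -t TCP_CRR
# UDP round-trips, avoiding TCP stack effects
netperf -H $DEST -t UDP_RR
```

Aggregate variants (several concurrent netperf instances) would cover the multi-core scaling question raised below.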
> > > >
> > > > What follows may be a bit of perfect being the enemy of the good, or
> > > > mission creep...
> > > >
> > > > On the same CPU would certainly simplify things, but it will almost
> > > > certainly exhibit different processor data cache behaviour than
> > actually
> > > > going through a physical network with a multi-core system.  Physical
> > NICs
> > > > will possibly (probably?) have RSS going, which may cause cache lines
> > to be
> > > > pulled around.  The way packets will be buffered will differ as well.
> > Etc
> > > > etc.  How well the different solutions scale with cores is definitely a
> > > > difference of interest between the two software layers.
> > > >
> > > >
> > > >
> > > > Hi Rick, thanks for your feedback here. I'll take it into
> > > > consideration, especially about the small-packet pps measurements and
> > > > really using physical hosts.
> > > >
> > > > Although I may start with an AIO setup for simplicity, we should
> > > > get more conclusive results from at least two hosts and decent NICs.
> > > >
> > > > I will put all this together in the document, and loop you in for
> > review.
> > > >
> > > > rick
> > > >
> > > >
> > > >
> > __________________________________________________________________________
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -Tapio
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > Miguel Angel Ajo
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > > --
> > > Kevin Benton
> >
> > >
> >
> >
> >
> 
> 
> 
> -- 
> Kevin Benton




