[openstack-dev] [Neutron] OVS flow modification performance

Miguel Angel Ajo Pelayo majopela at redhat.com
Fri Apr 15 07:22:41 UTC 2016


On Fri, Apr 15, 2016 at 7:32 AM, IWAMOTO Toshihiro
<iwamoto at valinux.co.jp> wrote:
> At Mon, 11 Apr 2016 14:42:59 +0200,
> Miguel Angel Ajo Pelayo wrote:
>>
>> On Mon, Apr 11, 2016 at 11:40 AM, IWAMOTO Toshihiro
>> <iwamoto at valinux.co.jp> wrote:
>> > At Fri, 8 Apr 2016 12:21:21 +0200,
>> > Miguel Angel Ajo Pelayo wrote:
>> >>
>> >> Hi, good that you're looking at this,
>> >>
>> >>
>> >> You could create a lot of ports with this method [1] and a bit of extra
>> >> bash, without the extra expense of instance RAM.
>> >>
>> >>
>> >> [1]
>> >> http://www.ajo.es/post/89207996034/creating-a-network-interface-to-tenant-network-in
>> >>
>> >>
>> >> This effort is going to be even more relevant in the context of the
>> >> openvswitch firewall. We still need to make sure it's tested with the
>> >> native interface, and eventually we will need flow bundling (as in
>> >> ovs-ofctl --bundle add-flows), where the whole set of
>> >> additions/removals/modifications is sent to be executed atomically by
>> >> the switch.
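
(To make the bundling idea concrete: a minimal sketch of an atomic flow
swap, assuming OVS is recent enough that ovs-ofctl accepts --bundle; the
bridge name and the flow specs are placeholders, not what the agent does
today.)

import subprocess
import tempfile


def apply_flows_atomically(bridge, flow_specs):
    """Push a set of flow mods in a single OpenFlow bundle.

    With --bundle, ovs-ofctl wraps all the flow mods in one bundle that
    the switch commits (or rejects) as a whole, much like swapping a
    whole ruleset with iptables-save / iptables-restore.
    """
    with tempfile.NamedTemporaryFile(mode='w', suffix='.flows') as f:
        f.write('\n'.join(flow_specs) + '\n')
        f.flush()
        subprocess.check_call(
            ['ovs-ofctl', '--bundle', 'add-flows', bridge, f.name])

# Hypothetical usage:
# apply_flows_atomically('br-int',
#     ['table=71,priority=10,ip,actions=resubmit(,72)',
#      'table=72,priority=10,ip,actions=NORMAL'])
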
>> >
>> > The bad news is that ovs-firewall isn't currently using the native
>> > of_interface much.  I can add install_xxx methods to the
>> > OpenFlowSwitchMixin classes so that ovs-firewall can use the native
>> > interface.
>> > Do you have a plan for implementing flow bundling or using conjunctions?
>> >
>>
>> Adding Jakub to the thread,
>>
>> IMO, if the native interface is going to provide us with greater speed
>> for rule manipulation, we should look into it.
>>
>> We don't use bundling or conjunctions yet, but it's part of the plan.
>> Bundling will allow atomicity of operations with rules (switching
>> firewall rules, etc., as we have with iptables-save /
>> iptables-restore), and conjunctions will reduce the number of entries
>> (no expansion of IP addresses for remote groups, and no expansion of
>> security group rules per port when several ports are in the same
>> security group on the same compute host).
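
(And to illustrate the conjunction point, a rough sketch of the idea only;
the table number, priority and conj_id below are made up, not the real
ovs firewall pipeline.)

# Without conjunction, a remote security group expands into one flow per
# (member IP, rule) pair: len(remote_ips) * len(rule_ports) entries.
# With conjunction, each dimension is matched separately and one conj_id
# flow carries the final action, so the product becomes a sum.

remote_ips = ['10.0.0.%d' % i for i in range(1, 121)]   # 120 group members
rule_ports = [22, 80, 443]                               # allowed TCP ports

flows = []
for ip in remote_ips:       # dimension 1/2: the source is a group member
    flows.append('table=82,priority=70,ip,nw_src=%s,'
                 'actions=conjunction(10,1/2)' % ip)
for port in rule_ports:     # dimension 2/2: the destination port is allowed
    flows.append('table=82,priority=70,tcp,tp_dst=%d,'
                 'actions=conjunction(10,2/2)' % port)
# This one fires only when both dimensions have matched.
flows.append('table=82,priority=70,conj_id=10,ip,actions=NORMAL')

print(len(flows))                          # 124 flows
print(len(remote_ips) * len(rule_ports))   # vs. 360 fully expanded
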
>>
>> Do we have any metric of bare rule manipulation time (ms/rule, for example)?
>
> No bare numbers, but from a graph in the other mail I sent last week,
> bind_devices for 160 ports (IIRC, that amounts to 800 flows) takes
> 4.5 sec with of_interface=native and 8 sec with of_interface=ovs-ofctl,
> which means a native add-flow is about 4 ms faster than an ovs-ofctl one.
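
(Spelling out that arithmetic, assuming the whole difference comes from
the add-flow calls:)

native_s, ofctl_s, flows = 4.5, 8.0, 800
print((ofctl_s - native_s) / flows * 1000)   # ~4.4 ms saved per flow
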
>
> As the ovs firewall uses DeferredOVSBridge and has less exec
> overhead, I have no idea how much gain the native of_interface
> brings there.
>
>> As a note, we're around 80 rules per port with IPv6 + IPv4 on the
>> default security group plus a couple of extra rules.
>
> I booted 120 VMs on one network and the default security group
> generated 62k flows.  It seems using conjunction is the #1 item for
> performance.
>

Ouch, hello again, Cartesian product! Luckily we already know how to
optimize that; now we need to get our hands on it.

@iwamoto, thanks for trying it.
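
Just as a back-of-the-envelope check of the scale (the per-member flow
count below is an assumed figure for illustration, not something measured
from the agent):

ports = 120        # VM ports on the compute hosts
members = 120      # remote group members (everyone is in the default group)
per_member = 4     # assumed flows per member (IPv4/IPv6, a rule or two)

print(ports * members * per_member)   # 57600 -- same order as the observed 62k
# With conjunction, the per-port member expansion collapses to a sum, so
# the total should scale roughly with ports + members instead of
# ports * members.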



>
>
>>
>> >> On Thu, Apr 7, 2016 at 10:00 AM, IWAMOTO Toshihiro <iwamoto at valinux.co.jp>
>> >> wrote:
>> >>
>> >> > At Thu, 07 Apr 2016 16:33:02 +0900,
>> >> > IWAMOTO Toshihiro wrote:
>> >> > >
>> >> > > At Mon, 18 Jan 2016 12:12:28 +0900,
>> >> > > IWAMOTO Toshihiro wrote:
>> >> > > >
>> >> > > > I'm sending out this mail to share the findings and discuss how to
>> >> > > > improve things with those interested in Neutron OVS performance.
>> >> > > >
>> >> > > > TL;DR: The native of_interface code, which was merged recently
>> >> > > > and isn't the default, seems to consume less CPU time but gives
>> >> > > > mixed results.  I'm looking into this for improvement.
>> >> > >
>> >> > > I went on to look at implementation details of eventlet etc., but it
>> >> > > turned out to be fairly simple.  The OVS agent in
>> >> > > of_interface=native mode waits for an OpenFlow connection from
>> >> > > ovs-vswitchd, which can take up to 5 seconds.
>> >> > >
>> >> > > Please look at the attached graph.
>> >> > > The x-axis is time since the agent restart, and the y-axis is the
>> >> > > number of ports processed (in treat_devices and bind_devices).  Each
>> >> > > port is counted twice; the first slope is treat_devices and the
>> >> > > second is bind_devices.  The native of_interface needs some more
>> >> > > time on start-up, but bind_devices is about 2x faster.
>> >> > >
>> >> > > The data was collected with 160 VMs with the devstack default settings.
>> >> >
>> >> > And if you wonder how other services are doing meanwhile, here is a
>> >> > bonus chart.
>> >> >
>> >> > The ovs agent was restarted 3 times with of_interface=native, then 3
>> >> > times with of_interface=ovs-ofctl.
>> >> >
>> >> > As the test machine has 16 CPUs, 6.25% CPU usage (1/16) can mean a
>> >> > single-threaded process is CPU bound.
>> >> >
>> >> > Frankly, the OVS agent has less room for improvement than the other
>> >> > services.  Also, it might be fun to draw similar charts for other
>> >> > types of workloads.
>> >> >
>> >> >
>> >
>>
>


