[openstack-dev] [Neutron][qa] Parallel testing update

Isaku Yamahata isaku.yamahata at gmail.com
Mon Jan 6 17:02:52 UTC 2014


On Mon, Jan 06, 2014 at 05:04:47PM +0100,
Salvatore Orlando <sorlando at nicira.com> wrote:

> I have already discussed the matter with Jay on IRC, albeit for a
> different issue.
> In this specific case 'batching' will have the benefit of reducing the
> rootwrap overhead.
> 
> However, it seems the benefit from batching is not decisive. I admit I
> have not run tests in the gate with batching; I have only tested in an
> environment without significant load, where I obtained a performance
> increase of less than 10%.
> 
> From what I gathered, even if commands are 'batched' through ovs-vsctl,
> the operations are still performed individually on the kernel module. I
> did not investigate whether the CLI sends a single command or multiple
> commands over the ovsdb interface.
> Nevertheless, another thing to note is that it is not just ovs-vsctl
> that becomes very slow, but also, and more often, ovs-ofctl, for which
> there is no batching.

Then would 'ovs-ofctl add-flows/mod-flows SWITCH FILE' help in
defer_apply_off()? If so, I'm willing to create such a patch.
add-flows/mod-flows batches what add-flow/mod-flow does one rule at a
time: ovs-ofctl sends an OpenFlow barrier message and waits for its
reply to confirm the result, so the batched form needs a single barrier
synchronization instead of one barrier synchronization per
add-flow/mod-flow call.
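For illustration, the flow file uses the same rule syntax as add-flow,
one rule per line (the bridge name, ports and vlan numbers below are
made up):

  # flows.txt -- one flow per line, same syntax as add-flow
  priority=2,in_port=5,actions=drop
  priority=4,in_port=6,dl_vlan=101,actions=mod_vlan_vid:7,NORMAL

  # single ovs-ofctl invocation, single barrier for the whole batch
  ovs-ofctl add-flows br-int flows.txt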

Also, the current implementation doesn't call defer_apply_on/off around
the work done in process_network_ports(). Is there any reason for that?
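Concretely, I mean something along these lines (just a sketch: it
assumes the defer_apply_on/defer_apply_off pair from ovs_lib, which
queues flow mods and replays them in one batch, and the helper names
are approximated from the agent code):

  def process_network_ports(self, port_info):
      # Queue flow mods so they are flushed as one batch in
      # defer_apply_off(), instead of one ovs-ofctl round trip
      # (and one barrier) per rule.
      self.int_br.defer_apply_on()
      try:
          resync = self.treat_devices_added(port_info['added'])
          resync |= self.treat_devices_removed(port_info['removed'])
      finally:
          # Replay all queued flow mods with a single apply.
          self.int_br.defer_apply_off()
      return resync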

Thanks,
Isaku Yamahata


> Summarising, I'm not opposed to batching for ovs-vsctl, and I would
> definitely welcome it; I just don't think it will be the ultimate solution.
> 
> Salvatore
> 
> 
> On 6 January 2014 11:40, Isaku Yamahata <isaku.yamahata at gmail.com> wrote:
> 
> > On Fri, Dec 27, 2013 at 11:09:02AM +0100,
> > Salvatore Orlando <sorlando at nicira.com> wrote:
> >
> > > Hi,
> > >
> > > We now have several patches under review which considerably improve
> > > how neutron handles parallel testing.
> > > In a nutshell, these patches try to ensure the ovs agent processes
> > > new, removed, and updated interfaces as soon as possible.
> > >
> > > These patches are:
> > > https://review.openstack.org/#/c/61105/
> > > https://review.openstack.org/#/c/61964/
> > > https://review.openstack.org/#/c/63100/
> > > https://review.openstack.org/#/c/63558/
> > >
> > > There is still room for improvement. For instance, the calls from
> > > the agent into the plugins might be considerably reduced.
> > > However, even if the above patches greatly shrink the time required
> > > for processing a device, we are still hitting a hard limit with the
> > > execution of ovs commands for setting local vlan tags and clearing
> > > flows (or adding the flow rule for dropping all the traffic).
> > > In some instances these commands slow down a lot, requiring almost
> > > 10 seconds to complete. This adds a delay to interface processing
> > > which in some cases leads to the hideous SSH timeout error (the same
> > > we see with bug 1253896 in normal testing).
> > > It is also worth noting that when this happens sysstat reveals CPU
> > > usage is very close to 100%.
> > >
> > > From the neutron side there is little we can do. Introducing
> > > parallel processing for interfaces, as we do for the l3 agent, is
> > > not actually a solution, since ovs-vswitchd v1.4.x, the version
> > > executed in gate tests, is not multithreaded. If you think the
> > > situation might be improved by changing the logic for handling local
> > > vlan tags and putting ports on the dead vlan, I would be happy to
> > > talk about that.
> >
> > How about batching those ovsdb operations?
> > Instead of issuing many ovs-vsctl commands, chain them into one
> > invocation:
> >
> >   ovs-vsctl -- command0 [args] -- command1 [args] -- ...
> >
> > Then the number of ovs-vsctl invocations is reduced, and ovs-vsctl
> > issues only a single ovsdb transaction.
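> >
> > For example, the usual per-port sequence could become a single
> > invocation (the port name, tag and uuid below are made up):
> >
> >   ovs-vsctl -- add-port br-int tap0 \
> >             -- set Port tap0 tag=101 \
> >             -- set Interface tap0 external-ids:iface-id=<port-uuid>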
> > --
> > Isaku Yamahata <isaku.yamahata at gmail.com>

-- 
Isaku Yamahata <isaku.yamahata at gmail.com>


