[openstack-dev] [Neutron][qa] Parallel testing update
Isaku Yamahata
isaku.yamahata at gmail.com
Wed Jan 8 01:52:16 UTC 2014
Mathieu, thank you for the clarification.
I'll take a look at the patches.
On Tue, Jan 07, 2014 at 02:34:24PM +0100,
Salvatore Orlando <sorlando at nicira.com> wrote:
> Thanks Mathieu!
>
> I think we should first merge Edouard's patch, which appears to be a
> prerequisite.
> I think we could benefit a lot by applying this mechanism to
> process_network_ports.
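For reference, a minimal self-contained sketch of the deferred-apply idea
(this is not the actual agent code; DeferredFlowBridge and its methods are
illustrative stand-ins for the defer_apply_on/defer_apply_off helpers that
Edouard's patch touches):

    # Sketch only: flow changes made while a batch of ports is processed
    # are buffered and flushed in one pass, instead of shelling out to
    # ovs-ofctl (through rootwrap) once per change.
    class DeferredFlowBridge(object):
        def __init__(self):
            self._deferred = False
            self._pending = []

        def defer_apply_on(self):
            self._deferred = True

        def add_flow(self, **kwargs):
            flow = ','.join('%s=%s' % kv for kv in sorted(kwargs.items()))
            if self._deferred:
                self._pending.append(flow)
            else:
                self._run_ofctl([flow])

        def defer_apply_off(self):
            self._run_ofctl(self._pending)  # one flush for the whole batch
            self._pending = []
            self._deferred = False

        def _run_ofctl(self, flows):
            # Stand-in for the real utils.execute(['ovs-ofctl', ...]) call.
            print('applying %d flow(s): %s' % (len(flows), flows))

    bridge = DeferredFlowBridge()
    bridge.defer_apply_on()
    for vlan in (1, 2, 3):   # e.g. the ports handled by process_network_ports
        bridge.add_flow(priority=2, dl_vlan=vlan, actions='normal')
    bridge.defer_apply_off()  # one flush instead of three separate calls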
>
> However, I am not sure if there could be drawbacks arising from the fact
> that the agent would assign the local VLAN tag to the port (either the
> lvm id or the DEAD_VLAN tag) while the flow modifications, such as the
> drop-all rule, would only be applied at the end of the iteration.
> This would probably create a short window in which we might see
> unexpected behaviour (such as VMs on the DEAD VLAN being able to
> communicate with each other, for instance).
Agreed that a more carefully ordered update is necessary with deferred
application.
Thanks,
Isaku Yamahata
> I think we can generalize this discussion and use deferred application for
> ovs-vsctl as well.
> Would you agree with that?
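Note that ovs-vsctl itself can already take several commands in one
invocation, separated by '--', so the batching could in principle look like
the sketch below (set_port_tags is a hypothetical helper, not an existing
Neutron function):

    # Hypothetical helper: tag several ports with a single ovs-vsctl process
    # (and therefore a single rootwrap invocation) instead of one per port.
    import subprocess

    def set_port_tags(tags_by_port):
        cmd = ['sudo', 'ovs-vsctl']
        for port, tag in sorted(tags_by_port.items()):
            cmd += ['--', 'set', 'Port', port, 'tag=%d' % tag]
        subprocess.check_call(cmd)

    # set_port_tags({'tap-a1b2': 5, 'tap-c3d4': 7}) runs:
    #   sudo ovs-vsctl -- set Port tap-a1b2 tag=5 -- set Port tap-c3d4 tag=7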
>
> Thanks,
> Salvatore
>
>
> On 7 January 2014 14:08, Mathieu Rohon <mathieu.rohon at gmail.com> wrote:
>
> > I think that Isaku is talking about a more intensive use of
> > defer_apply_on/off, as it is done in gongysh's patch [1].
> >
> > Isaku, I don't see any reason why this could not be done in
> > process_network_ports, if needed. Moreover, the patch from Edouard [2]
> > resolves multithreading issues while processing defer_apply_off.
> >
> >
> > [1] https://review.openstack.org/#/c/61341/
> > [2] https://review.openstack.org/#/c/63917/
> >
> > On Mon, Jan 6, 2014 at 9:24 PM, Salvatore Orlando <sorlando at nicira.com>
> > wrote:
> > > This thread is starting to get a bit confusing, at least for people
> > > with a single-pipeline brain like me!
> > >
> > > I am not entirely sure I correctly understand Isaku's proposal
> > > concerning deferring the application of flow changes.
> > > I think it's worth discussing in a separate thread, and a supporting
> > > patch would help as well; I think that in order to avoid unexpected
> > > behaviours, VLAN tagging on the port and flow setup should always be
> > > performed at the same time; if we get much better performance using a
> > > mechanism similar to iptables' defer_apply, then we should use it.
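For context, the iptables defer_apply mechanism mentioned above boils down
to a context manager that batches changes and flushes them once on exit; a
rough sketch of the pattern (not the Neutron implementation itself):

    import contextlib

    class BatchingManager(object):
        def __init__(self):
            self.pending = []

        @contextlib.contextmanager
        def defer_apply(self):
            try:
                yield self
            finally:
                self.apply()          # single flush, even on error

        def apply(self):
            print('applying %d change(s)' % len(self.pending))
            self.pending = []

    mgr = BatchingManager()
    with mgr.defer_apply():
        mgr.pending.append('rule-1')
        mgr.pending.append('rule-2')  # both applied together on exit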
> > >
> > > Regarding rootwrap: this 6x slowdown, while showing that rootwrap
> > > imposes a non-negligible overhead, should not be taken as proof that
> > > rootwrap makes things 6 times worse! What I've been seeing on the gate
> > > and in my tests are ALRM_CLOCK errors raised by ovs commands, so
> > > rootwrap has little to do with it.
> > >
> > > Still, I think we can say that rootwrap adds about 50ms to each command,
> > > which is particularly penalising for 'fast' commands.
> > > I think the best thing to do, as Joe advises, is a test with rootwrap
> > > disabled on the gate - and I will take care of that.
> > >
> > > On the other hand, I would invite community members to pick up some of
> > > the bugs we've registered for 'less frequent' failures observed during
> > > parallel testing, especially if you're coming to Montreal next week.
> > >
> > > Salvatore
> > >
> > >
> > >
> > > On 6 January 2014 20:31, Jay Pipes <jaypipes at gmail.com> wrote:
> > >>
> > >> On Mon, 2014-01-06 at 11:17 -0800, Joe Gordon wrote:
> > >> >
> > >> >
> > >> >
> > >> > On Mon, Jan 6, 2014 at 10:35 AM, Jay Pipes <jaypipes at gmail.com>
> > wrote:
> > >> > On Mon, 2014-01-06 at 09:56 -0800, Joe Gordon wrote:
> > >> >
> > >> > > What about it? Also, those numbers are pretty old at this point.
> > >> > > I was thinking of disabling rootwrap and running full parallel
> > >> > > tempest against it.
> > >> >
> > >> >
> > >> > I think that is a little overkill for what we're trying to do here.
> > >> > We are specifically talking about combining many utils.execute()
> > >> > calls into a single one. I think it's pretty obvious that the latter
> > >> > will perform better than the former, unless you think that rootwrap
> > >> > has no performance overhead at all?
> > >> >
> > >> >
> > >> > Mocking out rootwrap with straight sudo is a very quick way to
> > >> > approximate the performance benefit of combining many utils.execute()
> > >> > calls together (at least rootwrap-wise). Also, it would tell us how
> > >> > much of the problem is rootwrap-induced and how much is something else.
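On a devstack node, one way to approximate "straight sudo" would be to point
the agents' root_helper option at plain sudo instead of the rootwrap wrapper;
the section and paths below are illustrative and vary by setup:

    [AGENT]
    # default, goes through the rootwrap filters:
    # root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
    # quick hack for measuring overhead only, never for production:
    root_helper = sudo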
> > >>
> > >> Yes, I understand that; that is what the article I linked earlier
> > >> showed:
> > >>
> > >> % time sudo ip link >/dev/null
> > >> sudo ip link > /dev/null 0.00s user 0.00s system 43% cpu 0.009 total
> > >> % sudo time quantum-rootwrap /etc/quantum/rootwrap.conf ip link > /dev/null
> > >> quantum-rootwrap /etc/quantum/rootwrap.conf ip link > /dev/null 0.04s user 0.02s system 87% cpu 0.059 total
> > >>
> > >> A very tiny, non-scientific indication that rootwrap is around 6
> > >> times slower than a simple sudo call.
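If someone wants a slightly less hand-wavy number, a small harness along
these lines would average the per-call cost over many runs (paths and the
rootwrap binary name are taken from the example above; adjust for your
setup):

    import subprocess
    import time

    def avg_seconds(cmd, runs=20):
        with open('/dev/null', 'w') as devnull:
            start = time.time()
            for _ in range(runs):
                subprocess.check_call(cmd, stdout=devnull)
            return (time.time() - start) / runs

    plain = avg_seconds(['sudo', 'ip', 'link'])
    wrapped = avg_seconds(['sudo', 'quantum-rootwrap',
                           '/etc/quantum/rootwrap.conf', 'ip', 'link'])
    print('sudo: %.3fs  rootwrap: %.3fs  (%.1fx slower)'
          % (plain, wrapped, wrapped / plain))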
> > >>
> > >> Best,
> > >> -jay
--
Isaku Yamahata <isaku.yamahata at gmail.com>