<div dir="ltr">Hi again,<div><br></div><div>I've now run the experimental job a good deal of times, and I've filed bugs for all the issues which came out.</div><div>Most of them occurred no more than once among all test execution (I think about 30).</div>
<div><br></div><div>They're all tagged with neutron-parallel [1]. for ease of tracking, I've associated all the bug reports with neutron, but some are probably more tempest or nova issues.</div><div><br></div><div>
Salvatore</div><div><br></div><div>[1] <a href="https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel">https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel</a></div></div><div class="gmail_extra">
<br><br><div class="gmail_quote">On 27 December 2013 11:09, Salvatore Orlando <span dir="ltr"><<a href="mailto:sorlando@nicira.com" target="_blank">sorlando@nicira.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hi,<div><br></div><div>We now have several patches under review which improve a lot how neutron handles parallel testing.</div><div>In a nutshell, these patches try to ensure the ovs agent processes new, removed, and updated interfaces as soon as possible,</div>
<div><br></div><div>These patches are:</div><div><a href="https://review.openstack.org/#/c/61105/" target="_blank">https://review.openstack.org/#/c/61105/</a><br></div><div><a href="https://review.openstack.org/#/c/61964/" target="_blank">https://review.openstack.org/#/c/61964/</a><br>
</div><div><a href="https://review.openstack.org/#/c/63100/" target="_blank">https://review.openstack.org/#/c/63100/</a><br></div><div><a href="https://review.openstack.org/#/c/63558/" target="_blank">https://review.openstack.org/#/c/63558/</a><br>
</div>

There is still room for improvement. For instance, the calls from the agent into the plugins could be considerably reduced.
However, even though the above patches considerably shrink the time required to process a device, we are still hitting a hard limit with the execution of the ovs commands for setting local VLAN tags and clearing flows (or adding the flow rule for dropping all traffic).
In some instances these commands slow down a lot, taking almost 10 seconds to complete. This adds a delay to interface processing which in some cases leads to the hideous SSH timeout error (the same one we see with bug 1253896 in normal testing).
It is also worth noting that when this happens sysstat reveals CPU usage very close to 100%.

From the neutron side there is little we can do. Introducing parallel processing for interfaces, as we do in the l3 agent, is not actually a solution, since ovs-vswitchd 1.4.x, the version executed on gate tests, is not multithreaded. If you think the situation might be improved by changing the logic for handling local VLAN tags and putting ports on the dead VLAN, I would be happy to talk about that.
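
To make the bottleneck more concrete, this is roughly the kind of command sequence executed per interface, with timing around it. This is only an illustrative sketch (the port name, VLAN tag and ofport number are made up), not the agent's actual code:

    import subprocess
    import time

    def run_timed(cmd):
        """Run an ovs command and report how long it took."""
        start = time.time()
        subprocess.check_call(cmd)
        print("%6.2fs  %s" % (time.time() - start, " ".join(cmd)))

    # Placeholders: 'tap-test', VLAN 5 and ofport 42 stand in for whatever
    # the agent computes for a given port. Under heavy parallel load each
    # of these calls has been observed to take several seconds.
    run_timed(["ovs-vsctl", "set", "Port", "tap-test", "tag=5"])
    run_timed(["ovs-ofctl", "del-flows", "br-int", "in_port=42"])
    run_timed(["ovs-ofctl", "add-flow", "br-int",
               "priority=2,in_port=42,actions=drop"])
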
On my local machines I've seen a dramatic improvement in processing times after installing OVS 2.0.0, which has a multi-threaded vswitchd. Is this something we might consider for gate tests? Also, in order to reduce CPU usage on the gate (and make tests a bit faster), there is a tempest patch which stops creating and wiring neutron routers when they're not needed: https://review.openstack.org/#/c/62962/

Even in my local setup, which succeeds about 85% of the time, I'm still seeing some occurrences of the issue described in [1], which at the end of the day seems to be a dnsmasq issue.

Beyond the 'big' structural problem discussed above, there are some minor problems with a few tests:

1) test_network_quotas.test_create_ports_until_quota_hit fails about 90% of the time. I think this is because the test itself should be made aware of parallel execution and asynchronous events, and there is a patch for this already: https://review.openstack.org/#/c/64217
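
The general idea of such a fix would be something like the following sketch (I haven't checked the exact approach of that patch; the client and exception names here are placeholders, not the real tempest API): rather than assuming an exact number of successful creations, the test only asserts that the quota error eventually shows up within a bounded number of attempts, since other workers may be consuming quota concurrently.

    class OverQuotaError(Exception):
        """Placeholder for whatever exception the client raises on quota errors."""

    def create_ports_until_quota_hit(client, network_id, quota):
        """Return True once the server refuses a port creation for quota reasons.

        With parallel workers also creating ports we cannot predict on which
        attempt the quota error appears, only that it must appear within a
        bounded number of attempts.
        """
        for _ in range(quota * 2):
            try:
                client.create_port(network_id=network_id)  # placeholder client call
            except OverQuotaError:
                return True
        return False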

2) test_attach_interfaces.test_create_list_show_delete_interfaces fails about 66% of the time. The failure is always on an assertion made after the deletion of interfaces, which probably means the interface is not deleted within 5 seconds. I think this might be a consequence of the higher load on the neutron service; to this end we might try enabling multiple API workers on the gate, or just increase the tempest timeout. On a slightly different note, allow me to say that the way assertions are made in this test might be improved a bit: at the moment one has to go through the code to see why the test failed.
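
For instance, something along these lines would both tolerate a slower server and make the failure message self-explanatory. This is only a sketch, assuming a hypothetical show_interface call that raises NotFound once the interface is gone:

    import time

    class NotFound(Exception):
        """Placeholder for the client's 404 exception."""

    def wait_for_interface_deletion(client, server_id, port_id,
                                    timeout=30, interval=2):
        """Poll until the interface disappears, failing with a clear message."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                client.show_interface(server_id, port_id)  # placeholder client call
            except NotFound:
                return
            time.sleep(interval)
        raise AssertionError("Interface %s on server %s still present after %s "
                             "seconds" % (port_id, server_id, timeout))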

Thanks for reading this rather long message.
Regards,
Salvatore

[1] https://lists.launchpad.net/openstack/msg23817.html

On 2 December 2013 22:01, Kyle Mestery (kmestery) <kmestery@cisco.com> wrote:

Yes, this is all great Salvatore and Armando! Thank you for all of this work
and the explanation behind it all.

Kyle

On Dec 2, 2013, at 2:24 PM, Eugene Nikanorov <enikanorov@mirantis.com> wrote:

> Salvatore and Armando, thanks for your great work and detailed explanation!
>
> Eugene.
>
>
> On Mon, Dec 2, 2013 at 11:48 PM, Joe Gordon <joe.gordon0@gmail.com> wrote:
>
> On Dec 2, 2013 9:04 PM, "Salvatore Orlando" <sorlando@nicira.com> wrote:
> >
> > Hi,
> >
> > As you might have noticed, there has been some progress on parallel tests for neutron.
> > In a nutshell:
> > * Armando fixed the issue with IP address exhaustion on the public network [1]
> > * Salvatore now has a patch which has a 50% success rate (the last failures are because of me playing with it) [2]
> > * Salvatore is looking at putting full isolation back on track [3]
> > * All the bugs affecting parallel tests can be queried here [10]
> > * This blueprint tracks progress made towards enabling parallel testing [11]
> >
> > ---------
> > The long story is as follows:
> > Parallel testing is basically not working because parallelism means higher contention for public IP addresses. This was made worse by the fact that some tests created a router with a gateway set but never deleted it. As a result, there were even fewer addresses in the public range.
> > [1] was already merged, and with [4] we shall make the public network for neutron a /24 (the full tempest suite is still showing a lot of IP exhaustion errors).
> >
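
To put a rough number on the /24 change (a quick illustration only; the exact addresses and previous prefix used on the gate are not the point, the difference in pool size is):

    import ipaddress

    # Illustration: a small public range vs. a /24.
    small = ipaddress.ip_network(u"172.24.4.224/28")
    full = ipaddress.ip_network(u"172.24.4.0/24")
    print(small.num_addresses - 2)  # 14 usable addresses
    print(full.num_addresses - 2)   # 254 usable addresses
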
> > However, this was just one part of the issue. The biggest part actually lay with the OVS agent and its interactions with the ML2 plugin. A few patches ([5], [6], [7]) were already pushed to reduce the number of notifications sent from the plugin to the agent. However, the agent is organised in such a way that a notification is immediately acted upon, thus preempting the main agent loop, which is the one responsible for wiring ports into networks. Considering the high volume of notifications currently sent from the server, this becomes particularly wasteful if one considers that security group membership updates for ports trigger global iptables-save/restore commands, which are often executed in rapid succession, resulting in long delays in wiring VIFs to the appropriate network.
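
The refactoring in [2] goes roughly in this direction; the sketch below only illustrates the idea (class and method names are illustrative, not the actual agent code): notification handlers merely record the affected ports, and the main loop drains that set once per iteration instead of reacting to every single RPC call.

    import time

    class AgentLoopSketch(object):
        """Illustrative only: queue notifications, process them in one pass."""

        def __init__(self):
            self.updated_ports = set()

        # RPC notification handlers just record work...
        def port_update(self, context, port=None, **kwargs):
            self.updated_ports.add(port['id'])

        def security_groups_member_updated(self, context, **kwargs):
            # Flag a single refresh instead of running iptables-save/restore
            # once per notification.
            self.updated_ports.add('refresh-filters')

        # ...and the main loop drains the accumulated set once per iteration.
        def daemon_loop(self, poll_interval=2):
            while True:
                ports, self.updated_ports = self.updated_ports, set()
                if ports:
                    self.process_ports(ports)  # wire VIFs, apply filters once
                time.sleep(poll_interval)

        def process_ports(self, ports):
            pass  # placeholder for the actual wiring logic
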
> > With patch [2] we are refactoring the agent to make it more efficient. This is not production code, but once we get close to a 100% pass rate for parallel testing this patch will be split into several patches, properly structured and hopefully easy to review.
> > It is worth noting there is still work to do: in some cases the loop still takes too long, and ovs commands have been observed taking as long as 10 seconds to complete. To this end, it is worth considering the use of async processes introduced in [8], as well as leveraging ovsdb monitoring [9] to limit queries to the ovs database.
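
For reference, the idea behind [9] is roughly the following (a simplified sketch; the column list, parsing and the way events are consumed are all illustrative): keep a single ovsdb-client monitor process running and react to its output, rather than polling the database with repeated ovs-vsctl calls.

    import subprocess

    def handle_ovsdb_event(line):
        # Placeholder: in the agent this would mark ports for (re)processing.
        print(line.rstrip())

    # Stream changes to the Interface table from ovsdb instead of polling it.
    cmd = ["ovsdb-client", "monitor", "Interface", "name,ofport,external_ids",
           "--format=json"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    for line in iter(proc.stdout.readline, b""):
        # Each line describes rows that were added, deleted or modified; only
        # then does the agent need to rescan and rewire anything.
        handle_ovsdb_event(line)
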
> > We're still unable to explain some failures where the network appears to be correctly wired (floating IP, router port, DHCP port, and VIF port) but the SSH connection fails. We're hoping to reproduce this failure pattern locally.
> >
> > Finally, the tempest patch for full tempest isolation should be made usable soon. Having another experimental job for it is worth considering, as for some reason it is not always easy to reproduce the failure modes exhibited on the gate.
> >
> > Regards,
> > Salvatore
> >
>
> Awesome work, thanks for the update.
>
>
> > [1] https://review.openstack.org/#/c/58054/
> > [2] https://review.openstack.org/#/c/57420/
> > [3] https://review.openstack.org/#/c/53459/
> > [4] https://review.openstack.org/#/c/58284/
> > [5] https://review.openstack.org/#/c/58860/
> > [6] https://review.openstack.org/#/c/58597/
> > [7] https://review.openstack.org/#/c/58415/
> > [8] https://review.openstack.org/#/c/45676/
> > [9] https://bugs.launchpad.net/neutron/+bug/1177973
> > [10] https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel&field.tags_combinator=ANY
> > [11] https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel