<div dir="ltr">It's worth noticing that elastic recheck is signalling bug 1253896 and bug 1224001 but they have actually the same signature.<div>I found also interesting that neutron is triggering a lot bug 1254890, which appears to be a hang on /dev/nbdX during key injection; so far I have no explanation for that.</div>
<div><br></div><div>As suggested on IRC, the neutron isolated job had a failure rate of about 5-7% last week (until thursday I think). It might be therefore also looking at tempest/devstack patches which might be triggering failure or uncovering issues in neutron.</div>
<div><br></div><div>I shared a few findings on the mailing list yesterday ([1]). I hope people actively looking at failures will find them helpful.</div><div><br></div><div>Salvatore</div><div><br></div><div>[1] <a href="http://lists.openstack.org/pipermail/openstack-dev/2014-January/025013.html">http://lists.openstack.org/pipermail/openstack-dev/2014-January/025013.html</a></div>
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On 22 January 2014 14:57, Sean Dague <span dir="ltr"><<a href="mailto:sean@dague.net" target="_blank">sean@dague.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5">On 01/22/2014 09:38 AM, Sean Dague wrote:<br>
> Things aren't great, but they are actually better than yesterday.
>
> Vital Stats:
> Gate queue length: 107
> Check queue length: 107
> Head of gate entered: 45hrs ago
> Changes merged in last 24hrs: 58
>
> The 58 changes merged is actually a good number - not a great number, but
> the best we've seen in a number of days. I saw at least a 6-change merge
> streak yesterday, so zuul is starting to behave like we expect it should.
>
> = Previous Top Bugs =
>
> Our previous top 2 issues - 1270680 and 1270608 (not confusing at all) -
> are under control.
>
> Bug 1270680 - v3 extensions api inherently racey wrt instances
>
> Russell managed the second part of the fix for this; we've not seen it
> come back since that was ninja-merged.
>
> Bug 1270608 - n-cpu 'iSCSI device not found' log causes
> gate-tempest-dsvm-*-full to fail
>
> Turning off the test that was triggering this made it completely go
> away. We'll have to revisit whether that's because there is a cinder bug
> or a tempest bug, but we'll do that once the dust has settled.
>
> = New Top Bugs =
>
> Note: all fail numbers are across all queues
>
> Bug 1253896 - Attempts to verify guests are running via SSH fails. SSH
> connection to guest does not work.
>
> 83 fails in 24hrs
>
>
> Bug 1224001 - test_network_basic_ops fails waiting for network to become
> available
>
> 51 fails in 24hrs
>
>
> Bug 1254890 - "Timed out waiting for thing" causes tempest-dsvm-* failures
>
> 30 fails in 24hrs
>
>
> We are now sorting http://status.openstack.org/elastic-recheck/ by
> failures in the last 24hrs, so we can use it more as a hit list. The top
> 3 issues are fingerprinted against infra, but are mostly related to
> normal restart operations at this point.
>
> = Starvation Update =
>
> With 214 jobs across queues, and averaging 7 devstack nodes per job, our
> working set is 1498 nodes (i.e. if we had that number we'd be able to
> run all the jobs right now in parallel).
>
> Our current quota of nodes gives us ~480, which is < 1/3 of our working
> set, and part of the reason for the delays. Rackspace has generously
> increased our quota in 2 of their availability zones, and Monty is going
> to prioritize getting those online.
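
(For anyone double-checking the arithmetic above, a quick back-of-the-envelope
sketch; the 214 jobs, 7 nodes/job and ~480 quota figures are the ones Sean
quotes, only the calculation is mine:)

    # rough node starvation estimate using the numbers quoted above
    jobs_in_flight = 214          # jobs across gate + check queues
    nodes_per_job = 7             # average devstack nodes per job
    working_set = jobs_in_flight * nodes_per_job
    print(working_set)            # 1498 nodes

    current_quota = 480           # approximate node quota today
    print(current_quota / float(working_set))   # ~0.32, i.e. < 1/3 of the working set
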
>
> Because of Jenkins scaling issues (it starts generating failures when
> talking to too many build slaves), that means spinning up more Jenkins
> masters. We've found a 1/100 ratio makes Jenkins basically stable;
> pushing beyond that means new fails. Jenkins is not inherently elastic,
> so this is a somewhat manual process. Monty is diving on that.
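
(Again only a sketch, assuming the ~1500-node working set estimated above and
the 1/100 slaves-per-master ratio Sean mentions; the implied master count is:)

    import math

    working_set = 1498            # node working set from the estimate above
    slaves_per_master = 100       # ratio that keeps Jenkins basically stable

    # Jenkins masters needed to drive the whole working set at that ratio.
    print(int(math.ceil(working_set / float(slaves_per_master))))   # 15
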
>
> There is also a TCP slow start algorithm for zuul that Clark was working
> on yesterday, which we'll put into production as soon as it is good.
> This will prevent us from speculating all the way down the gate queue,
> just to throw it all away on a reset. It acts just like TCP: on every
> success we grow our speculation length, on every fail we reduce it, with
> a sane minimum so we don't over-throttle ourselves.
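
(To illustrate the idea - this is only a sketch of the TCP-like behaviour
described above, with made-up names and numbers, not the actual zuul change:)

    class SpeculationWindow(object):
        """Grow the gate speculation depth on success, shrink it on
        failure, never dropping below a sane minimum (illustrative only)."""

        def __init__(self, minimum=3, start=3):
            self.minimum = minimum
            self.size = start

        def on_success(self):
            # every merged change lets us speculate a bit deeper
            self.size += 1

        def on_failure(self):
            # a gate reset shrinks the window, but keep a floor so we
            # don't throttle ourselves down to nothing
            self.size = max(self.minimum, self.size // 2)
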
>
>
> Thanks to everyone who's been pitching in digging on reset bugs. More
> help is needed. Many core reviewers are at this point completely
> ignoring normal reviews until the gate is back, so if you are waiting
> for a review on some code, the best way to get it is to help us fix the
> bugs resetting the gate.

One last thing: Anita has also gotten on top of pruning out all the
neutron changes from the gate. Something is very wrong in the neutron
isolated jobs right now, so their chance of passing is close enough to
0 that we need to keep them out of the gate. This is a new regression
in the last couple of days.

This is a contributing factor in the gates moving again.

She and Mark are rallying the Neutron folks to sort this one out.
<div class="HOEnZb"><div class="h5"><br>
-Sean<br>
<br>
--<br>
Sean Dague<br>
Samsung Research America<br>
<a href="mailto:sean@dague.net">sean@dague.net</a> / <a href="mailto:sean.dague@samsung.com">sean.dague@samsung.com</a><br>
<a href="http://dague.net" target="_blank">http://dague.net</a><br>
<br>