The locked nodes were due to something Clark found earlier this week, and hopefully fixed with:

https://review.openstack.org/526234

The short story is that the request handlers (which hold the locks on the nodes) were never
being allowed to continue processing, because an exception was being thrown that short-circuited
the process.

There was *something* causing our node requests to disappear (zuul restarts?). The node request
locks are removed after 8 hours if the request no longer exists, and this removal was causing the
exception handled in the review above. Why the request handlers still have requests 8 hours after
their removal is a bit of a mystery. Maybe some weirdness with citycloud.

-Dave

On Fri, Dec 8, 2017 at 8:58 AM, Paul Belanger <pabelanger@redhat.com> wrote:
On Fri, Dec 08, 2017 at 08:38:24PM +1100, Ian Wienand wrote:
> Hello,
>
> Just to save people reverse-engineering IRC logs...
>
> At ~04:00 UTC frickler called out that things had been sitting in the
> gate for ~17 hours.
>
> Upon investigation, one of the stuck jobs was a
> legacy-tempest-dsvm-neutron-full job
> (bba5d98bb7b14b99afb539a75ee86a80) as part of
> https://review.openstack.org/475955
>
> Checking the zuul logs, the build had been sent to ze04:
>
> 2017-12-07 15:06:20,962 DEBUG zuul.Pipeline.openstack.gate: Build <Build bba5d98bb7b14b99afb539a75ee86a80 of legacy-tempest-dsvm-neutron-full on <Worker ze04.openstack.org>> started
>
> However, zuul-executor was not running on ze04. I believe there were
> issues with this host yesterday. "/etc/init.d/zuul-executor start" and
> "service zuul-executor start" reported OK, but didn't actually
> start the daemon. Rather than debug, I just used
> _SYSTEMCTL_SKIP_REDIRECT=1 and that got it going. We should look into
> that; I've noticed similar things with zuul-scheduler too.
>
> At this point, the evidence suggested zuul was waiting for jobs that
> would never return. Thus I saved the queues, restarted zuul-scheduler,
> and re-queued.
>
> Soon after, frickler again noticed that releasenotes jobs were now
> failing with "could not import extension openstackdocstheme" [1]. We
> suspect [2].
>
> However, the gate did not become healthy. Upon further investigation,
> the executors are very frequently failing jobs with:
>
> 2017-12-08 06:41:10,412 ERROR zuul.AnsibleJob: [build: 11062f1cca144052afb733813cdb16d8] Exception while executing job
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", line 588, in execute
>     str(self.job.unique))
>   File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", line 702, in _execute
>   File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", line 1157, in prepareAnsibleFiles
>   File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", line 500, in make_inventory_dict
>     for name in node['name']:
> TypeError: unhashable type: 'list'
>
> This is leading to the very high "retry_limit" failures.
>
> We suspect change [3], as it made some changes in the node area. I
> did not want to revert it via a force-merge, and I unfortunately don't
> have time to do something like apply the revert manually on the host and
> babysit it (I did not have time for a short email, so I sent a long one instead :)
>
> At this point, I sent the alert to warn people that the gate is unstable,
> which is about the latest state of things.
>
> Good luck,
>
> -i
>
> [1] http://logs.openstack.org/95/526595/1/check/build-openstack-releasenotes/f38ccb4/job-output.txt.gz
> [2] https://review.openstack.org/525688
> [3] https://review.openstack.org/521324
>
Digging into some of the issues this morning, I believe that citycloud-sto2 has
been wedged for some time. I see ready / locked nodes sitting for 2+ days. We
also have a few ready / locked nodes in rax-iad, which I think are related to
the unhashable-list error from this morning.
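
For anyone who hasn't dug into that traceback, here is a minimal sketch of how the error
arises. This is not Zuul's actual inventory code; only node['name'] comes from the traceback
quoted above, and the 'host_vars' field is made up for illustration:

# Minimal sketch, not Zuul's actual code.  Suppose a recent change made a
# node's 'name' a list of names instead of a single string, while the
# inventory-building code still uses the value directly as a dict key.
node = {'name': ['primary'], 'host_vars': {'ansible_host': '10.0.0.5'}}

hosts = {}
try:
    hosts[node['name']] = node['host_vars']   # old assumption: 'name' is a str
except TypeError as e:
    print(e)   # "unhashable type: 'list'" -- a list cannot be a dict key

# Code updated for the new shape iterates the names instead:
for name in node['name']:
    hosts[name] = node['host_vars']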

As I understand it, the only way to release these nodes is to stop the
scheduler; is that correct? If so, I'd like to request that we add some sort of CLI
--force option to delete, or some other command, if that makes sense.
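
For what it's worth, a rough sketch of how the locks could at least be inspected directly in
ZooKeeper with kazoo, without touching the scheduler. The znode paths and host below are
assumptions, not a documented interface, so verify them against the running deployment before
deleting anything:

from kazoo.client import KazooClient

zk = KazooClient(hosts='zk01.openstack.org:2181')   # hypothetical ZK host
zk.start()

node_id = '0001234567'                               # hypothetical node id
lock_path = '/nodepool/nodes/%s/lock' % node_id      # assumed layout

if zk.exists(lock_path):
    # See who (if anyone) is still holding the lock.
    for contender in zk.get_children(lock_path):
        data, _stat = zk.get('%s/%s' % (lock_path, contender))
        print(contender, data)
    # Only once the holder is confirmed dead/stale:
    # zk.delete(lock_path, recursive=True)

zk.stop()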

I'll hold off on a restart until jeblair or shrews has a moment to look at logs.

Paul

_______________________________________________
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

--
David Shrewsbury (Shrews)
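
To illustrate the failure pattern described at the top of this mail, here is a rough sketch.
The handler class, method names, and exception here are all invented for illustration; this is
not Nodepool's actual code:

# A request handler holds locks on its nodes while it polls the node
# request it is servicing.  If the request has been cleaned up (after
# 8 hours, or across a zuul restart), polling raises, and if that
# exception escapes it short-circuits the loop -- the handler never
# completes and its node locks are never released.

class RequestDisappeared(Exception):
    pass

class FakeHandler:
    """Stand-in for a request handler; all names here are invented."""
    def __init__(self, request_exists):
        self.request_exists = request_exists
        self.nodes_locked = True

    def poll(self):
        if not self.request_exists:
            raise RequestDisappeared()
        return True                        # pretend the request completed

    def unlock_nodes(self):
        self.nodes_locked = False

def poll_handlers(handlers):
    for handler in handlers:
        try:
            if handler.poll():             # request finished normally
                handler.unlock_nodes()
        except RequestDisappeared:
            # The gist of the fix: treat a vanished request as terminal for
            # this handler and release its locks, instead of letting the
            # exception escape and stall every handler behind it.
            handler.unlock_nodes()

handlers = [FakeHandler(True), FakeHandler(False)]
poll_handlers(handlers)
print([h.nodes_locked for h in handlers])  # -> [False, False]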