<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Nov 11, 2017 at 10:47 PM, Alex Schultz <span dir="ltr"><<a href="mailto:aschultz@redhat.com" target="_blank">aschultz@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Ok so here's the current status of things.  I've gone through some of<br>
the pending patches and sent them to the gate over the weekend since<br>
the gate was empty (yay!).  We've managed to land a bunch of patches.<br>
That said, please do not recheck or approve any patch for master that<br>
has scenario jobs. Currently the non-containerized scenario001/004<br>
jobs are broken due to Bug 1731688[0] (these run on<br>
tripleo-quickstart-extras/tripleo-ci).  There is a patch[1] out to<br>
revert the breaking change. The scenario001-container job is super<br>
flaky due to Bug 1731063[2] and we could use some help figuring out<br>
what's going on.  We're also seeing some issues around heat<br>
interactions[3][4], but those seem to be less of a problem than the<br>
previously mentioned bugs.<br>
<br>
So at the moment any changes that don't have scenario jobs associated<br>
with them may be approved/rechecked freely.  We can discuss on Monday<br>
what to do about the scenario jobs if we are still running into issues<br>
without a solution in sight.  Also please keep an eye on the gate<br>
queue[5] and don't approve things if it starts getting excessively<br>
long.<br>
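For those who want to watch the queue without keeping a browser tab open, the sketch below parses a simplified, hypothetical snapshot of the kind of JSON a Zuul status page exposes and counts how many changes are queued in a pipeline. The payload shape and change IDs here are illustrative assumptions, not the actual schema served at zuulv3.openstack.org.<br>

```python
import json

# Hypothetical, simplified snapshot of the kind of JSON a Zuul status
# endpoint might return; the real schema is richer and may differ.
sample_status = json.dumps({
    "pipelines": [
        {"name": "gate",
         "change_queues": [
             {"name": "tripleo",
              "heads": [[{"id": "519041,1"}, {"id": "519042,1"}]]},
         ]},
        {"name": "check",
         "change_queues": [
             {"name": "tripleo",
              "heads": [[{"id": "519050,2"}]]},
         ]},
    ]
})

def gate_queue_length(status_json: str, pipeline: str = "gate") -> int:
    """Count changes sitting in the named pipeline."""
    status = json.loads(status_json)
    total = 0
    for p in status["pipelines"]:
        if p["name"] != pipeline:
            continue
        for queue in p["change_queues"]:
            # Each "head" is a window of changes being tested together.
            for head in queue["heads"]:
                total += len(head)
    return total

print(gate_queue_length(sample_status))
```

If a number like this starts climbing well past normal, that's the signal to hold off on approvals.<br>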
<br>
Thanks,<br>
-Alex<br>
<br>
<br>
[0] <a href="https://bugs.launchpad.net/tripleo/+bug/1731688" rel="noreferrer" target="_blank">https://bugs.launchpad.net/<wbr>tripleo/+bug/1731688</a><br>
[1] <a href="https://review.openstack.org/#/c/519041/" rel="noreferrer" target="_blank">https://review.openstack.org/#<wbr>/c/519041/</a><br>
[2] <a href="https://bugs.launchpad.net/tripleo/+bug/1731063" rel="noreferrer" target="_blank">https://bugs.launchpad.net/<wbr>tripleo/+bug/1731063</a><br>
[3] <a href="https://bugs.launchpad.net/tripleo/+bug/1731032" rel="noreferrer" target="_blank">https://bugs.launchpad.net/<wbr>tripleo/+bug/1731032</a><br>
[4] <a href="https://bugs.launchpad.net/tripleo/+bug/1731540" rel="noreferrer" target="_blank">https://bugs.launchpad.net/<wbr>tripleo/+bug/1731540</a><br>
[5] <a href="http://zuulv3.openstack.org/" rel="noreferrer" target="_blank">http://zuulv3.openstack.org/</a><br>
<div class="HOEnZb"><div class="h5"><br>
On Wed, Nov 8, 2017 at 3:39 PM, Alex Schultz <<a href="mailto:aschultz@redhat.com">aschultz@redhat.com</a>> wrote:<br>
> So we have some good news and some bad news.  The good news is that<br>
> we've managed to get the gate queue[0] under control since we've held<br>
> off on pushing new things to the gate.  The bad news is that we've<br>
> still got some random failures occurring during the deployment of<br>
> master.  Since we're not seeing infra-related issues, we should be OK<br>
> to merge things to stable/* branches.  Unfortunately until we resolve<br>
> the issues in master[1] we could potentially back up the queue.  Please<br>
> do not merge anything that is not a critical bug fix.  I would ask that<br>
> folks please take a look at the open bugs and help figure out what is<br>
> going wrong. I've created two issues today that I've seen in the gate<br>
> that we don't appear to have open patches for. One appears to be an<br>
> issue in the heat deployment process[3], and the other is related to<br>
> the tempest verification that launches a VM and connects to it via ssh[4].<br>
><br>
> Thanks,<br>
> -Alex<br>
><br>
> [3] <a href="https://bugs.launchpad.net/tripleo/+bug/1731032" rel="noreferrer" target="_blank">https://bugs.launchpad.net/<wbr>tripleo/+bug/1731032</a><br>
> [4] <a href="https://bugs.launchpad.net/tripleo/+bug/1731063" rel="noreferrer" target="_blank">https://bugs.launchpad.net/<wbr>tripleo/+bug/1731063</a><br>
><br>
> On Tue, Nov 7, 2017 at 8:33 AM, Alex Schultz <<a href="mailto:aschultz@redhat.com">aschultz@redhat.com</a>> wrote:<br>
>> Hey Folks<br>
>><br>
>> So we're at 24+ hours again in the gate[0] and the queue only<br>
>> continues to grow. We currently have 6 ci/alert bugs[1]. Please do not<br>
>> approve or recheck anything that isn't related to these bugs.  I will<br>
>> most likely need to go through the queue and abandon everything to<br>
>> clear it up, as we are consistently hitting timeouts on various jobs,<br>
>> which is preventing anything from merging.<br>
>><br>
>> Thanks,<br>
>> -Alex<br>
>><br>
> [0] <a href="http://zuulv3.openstack.org/" rel="noreferrer" target="_blank">http://zuulv3.openstack.org/</a><br>
> [1] <a href="https://bugs.launchpad.net/tripleo/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=ci+alert&field.tags_combinator=ALL&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search" rel="noreferrer" target="_blank">https://bugs.launchpad.net/<wbr>tripleo/+bugs?field.<wbr>searchtext=&orderby=-<wbr>importance&field.status%<wbr>3Alist=NEW&field.status%<wbr>3Alist=CONFIRMED&field.status%<wbr>3Alist=TRIAGED&field.status%<wbr>3Alist=INPROGRESS&field.<wbr>importance%3Alist=CRITICAL&<wbr>assignee_option=any&field.<wbr>assignee=&field.bug_reporter=&<wbr>field.bug_commenter=&field.<wbr>subscriber=&field.structural_<wbr>subscriber=&field.tag=ci+<wbr>alert&field.tags_combinator=<wbr>ALL&field.has_cve.used=&field.<wbr>omit_dupes.used=&field.omit_<wbr>dupes=on&field.affects_me.<wbr>used=&field.has_patch.used=&<wbr>field.has_branches.used=&<wbr>field.has_branches=on&field.<wbr>has_no_branches.used=&field.<wbr>has_no_branches=on&field.has_<wbr>blueprints.used=&field.has_<wbr>blueprints=on&field.has_no_<wbr>blueprints.used=&field.has_no_<wbr>blueprints=on&search=Search</a><br>
<br>
______________________________<wbr>______________________________<wbr>______________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.<wbr>openstack.org?subject:<wbr>unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/<wbr>cgi-bin/mailman/listinfo/<wbr>openstack-dev</a><br>
</div></div></blockquote></div><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">Thanks for continuing to push on this, Alex!</div></div>