<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Aug 16, 2017 at 4:33 AM, Emilien Macchi <span dir="ltr"><<a href="mailto:emilien@redhat.com" target="_blank">emilien@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">So far, we're having 3 critical issues, that we all need to address as<br>
> soon as we can.
>
> Problem #1: Upgrade jobs time out from Newton to Ocata
> https://bugs.launchpad.net/tripleo/+bug/1702955
> Today I spent an hour looking at it, and here's what I've found so far:
> depending on which public cloud the TripleO CI jobs run on, they either
> time out or they don't.
> Here's an example of the Heat resources that run in our CI:
> https://www.diffchecker.com/VTXkNFuk
> On the left are the resources from a job that failed (running on internap),
> and on the right a job that worked (running on citycloud).
> I've been through all the upgrade steps and I haven't seen any specific
> task that takes much longer on its own, just lots of small differences
> that add up to a big one at the end (which makes this hard to debug).
> Note: both jobs use AFS mirrors.
> Help on that front would be very welcome.
>
>
> Problem #2: from Ocata to Pike (containerized), missing container upload step
> https://bugs.launchpad.net/tripleo/+bug/1710938
> Wes has a patch (thanks!) that is currently in the gate:
> https://review.openstack.org/#/c/493972
> Thanks to that work, we managed to find problem #3.
>
>
> Problem #3: from Ocata to Pike, all container images are
> uploaded/specified, even for services that are not deployed
> https://bugs.launchpad.net/tripleo/+bug/1710992
> The CI jobs are timing out during the upgrade process because
> downloading + uploading _all_ containers into the local cache takes
> more than 20 minutes.
> So this is where we are now; the upgrade jobs time out on that. Steve Baker
> is currently looking at it, but we'll probably offer some help.
>
>
> Solutions:
> - for stable/ocata: make upgrade jobs non-voting
> - for pike: keep upgrade jobs non-voting and release without upgrade testing
>

+1, but for Ocata to Pike it sounds like the container/image-related problems #2 and #3 above are both in progress or being looked at (weshay/sbaker ++), in which case we might be able to fix the O...P jobs at least?

For Newton to Ocata, is it consistent which clouds we are timing out on? I've looked at https://bugs.launchpad.net/tripleo/+bug/1702955 before, and I know other folks from upgrades have too, but we couldn't find a root cause or any upgrade operation that takes too long, times out, or errors. If it is consistent which clouds time out, we can use that information to guide us if we do make the jobs non-voting for N...O (e.g. a known list of 'timing-out clouds' that tells us whether to inspect the CI logs more closely before merging a patch), obviously only until/unless we actually root-cause that one (I will also find some time to check again). A rough sketch of the per-resource comparison I have in mind is below.
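
Something like this is what I mean, purely as an illustration: it assumes we have already pulled per-resource elapsed times out of each job's logs into a small two-column text file (resource name, seconds); the file format and names here are made up, not anything our CI produces today.

    #!/usr/bin/env python
    # Illustrative sketch: compare per-resource durations from two CI runs
    # to see where the slow cloud loses its time. Each input file is assumed
    # to contain lines like "AllNodesDeploySteps 312" (resource, seconds).
    import sys


    def load(path):
        """Read a 'resource seconds' file into a dict."""
        times = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2:
                    times[parts[0]] = float(parts[1])
        return times


    def main(slow_path, fast_path):
        slow, fast = load(slow_path), load(fast_path)
        deltas = [(slow[r] - fast.get(r, 0.0), r) for r in slow]
        # Biggest offenders first; many small positive deltas would confirm
        # the "lots of little changes add up" theory rather than one slow step.
        for delta, resource in sorted(deltas, reverse=True):
            print("%8.1f  %s" % (delta, resource))
        print("total drift: %.1f seconds" % sum(d for d, _ in deltas))


    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])

Even a crude report like that per cloud provider would tell us whether internap is uniformly slower or whether a handful of steps dominate.
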
> Risks:
> - for stable/ocata: it's highly possible to inject regressions if the jobs
> aren't voting anymore.
> - for pike: the quality of the release won't be good enough in terms of
> CI coverage compared to Ocata.
>
> Mitigations:
> - for stable/ocata: make the jobs non-voting and ask our core reviewers
> to pay double attention to what is landed. This should be temporary,
> until we manage to fix the CI jobs.
> - for master: release RC1 without upgrade jobs and make progress

For master, +1: I think this is essentially what I am saying above for O...P. It sounds like problem #2 is well in progress from weshay, and the other container/image-related issue, problem #3, is the main outstanding item (a rough sketch of the image-filtering idea is below). Since RC1 is this week, I think what you are proposing as mitigation is fair, so we can re-evaluate making these jobs voting before the final RCs at the end of August.
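
To be clear about what I understand the fix for problem #3 to be, here is a sketch of the idea only, not the actual TripleO implementation that sbaker is working on: keep only the images whose owning service is enabled in the deployment. The image-to-service mapping and the enabled-service set below are hypothetical stand-ins for what the templates and environment files would provide.

    #!/usr/bin/env python
    # Sketch of filtering the container image list down to deployed services,
    # so the upgrade job does not spend 20+ minutes pulling/pushing images it
    # will never run. The mapping below is made up for illustration.

    FULL_IMAGE_LIST = {
        # image name -> service it belongs to (hypothetical mapping)
        "centos-binary-nova-compute": "OS::TripleO::Services::NovaCompute",
        "centos-binary-keystone": "OS::TripleO::Services::Keystone",
        "centos-binary-manila-share": "OS::TripleO::Services::ManilaShare",
    }


    def images_to_upload(enabled_services):
        """Return only the images whose owning service is enabled."""
        return sorted(img for img, svc in FULL_IMAGE_LIST.items()
                      if svc in enabled_services)


    if __name__ == "__main__":
        enabled = {
            "OS::TripleO::Services::NovaCompute",
            "OS::TripleO::Services::Keystone",
        }
        # manila-share is skipped because that service isn't deployed here.
        print("\n".join(images_to_upload(enabled)))
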
> - Run TripleO upgrade scenarios as third-party CI in RDO Cloud or
> somewhere with resources and without timeout constraints.
>
> I would like some feedback on the proposal so we can move forward this week.
> Thanks.

Thanks for putting this together. If we really had to pick one, the O...P CI obviously has priority this week (!). I think the container/image-related issues for O...P are both expected teething issues from the huge amount of work done by the containerization team, and can hopefully be resolved quickly.

marios
<span class="gmail-HOEnZb"><font color="#888888">--<br>
Emilien Macchi<br>
<br>