<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 30, 2017 at 10:08 AM, Steven Hardy <span dir="ltr"><<a href="mailto:shardy@redhat.com" target="_blank">shardy@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="gmail-HOEnZb"><div class="gmail-h5">On Wed, Mar 29, 2017 at 10:07:24PM -0400, Paul Belanger wrote:<br>
> On Thu, Mar 30, 2017 at 09:56:59AM +1300, Steve Baker wrote:<br>
> > On Thu, Mar 30, 2017 at 9:39 AM, Emilien Macchi <<a href="mailto:emilien@redhat.com">emilien@redhat.com</a>> wrote:<br>
> ><br>
> > > On Mon, Mar 27, 2017 at 8:00 AM, Flavio Percoco <<a href="mailto:flavio@redhat.com">flavio@redhat.com</a>> wrote:<br>
> > > > On 23/03/17 16:24 +0100, Martin André wrote:<br>
> > > >><br>
> > > >> On Wed, Mar 22, 2017 at 2:20 PM, Dan Prince <<a href="mailto:dprince@redhat.com">dprince@redhat.com</a>> wrote:<br>
> > > >>><br>
> > > >>> On Wed, 2017-03-22 at 13:35 +0100, Flavio Percoco wrote:<br>
> > > >>>><br>
> > > >>>> On 22/03/17 13:32 +0100, Flavio Percoco wrote:<br>
> > > >>>> > On 21/03/17 23:15 -0400, Emilien Macchi wrote:<br>
> > > >>>> > > Hey,<br>
> > > >>>> > ><br>
> > > >>>> > > I've noticed that container jobs look pretty unstable lately; to me,<br>
> > > >>>> > > it sounds like a timeout:<br>
> > > >>>> > > <a href="http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-" rel="noreferrer" target="_blank">http://logs.openstack.org/19/<wbr>447319/2/check-tripleo/gate-<wbr>tripleo-</a><br>
> > > >>>> > > ci-centos-7-ovb-containers-<wbr>oooq-nv/bca496a/console.html#_<wbr>2017-03-<br>
> > > >>>> > > 22_00_08_55_358973<br>
> > > >>>> ><br>
> > > >>>> > There are different hypotheses on what is going on here. Some patches<br>
> > > >>>> > have landed to improve the write performance on containers by using<br>
> > > >>>> > hostpath mounts, but we think the real slowness is coming from the<br>
> > > >>>> > image downloads.<br>
> > > >>>> ><br>
> > > >>>> > This said, this is still under investigation and the containers squad<br>
> > > >>>> > will report back as soon as there are new findings.<br>
> > > >>>><br>
> > > >>>> Also, to be more precise, Martin André is looking into this. He also<br>
> > > >>>> fixed the gate in the last 2 weeks.<br>
> > > >>><br>
> > > >>><br>
> > > >>> I spoke w/ Martin on IRC. He seems to think this is the cause of some<br>
> > > >>> of the failures:<br>
> > > >>><br>
> > > >>> <a href="http://logs.openstack.org/32/446432/1/check-tripleo/gate-" rel="noreferrer" target="_blank">http://logs.openstack.org/32/<wbr>446432/1/check-tripleo/gate-</a><br>
> > > tripleo-ci-cen<br>
> > > >>> tos-7-ovb-containers-oooq-nv/<wbr>543bc80/logs/oooq/overcloud-<wbr>controller-<br>
> > > >>> 0/var/log/extra/docker/<wbr>containers/heat_engine/log/<wbr>heat/heat-<br>
> > > >>> engine.log.txt.gz#_2017-03-21_<wbr>20_26_29_697<br>
> > > >>><br>
> > > >>><br>
> > > >>> Looks like Heat isn't able to create Nova instances in the overcloud<br>
> > > >>> due to "Host 'overcloud-novacompute-0' is not mapped to any cell'. This<br>
> > > >>> means our cells initialization code for containers may not be quite<br>
> > > >>> right... or there is a race somewhere.<br>
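A side note on that error: a compute host normally only gets mapped to a cell once nova's cell_v2 host discovery runs after the nova-compute service has registered itself, so a race between those two steps would produce exactly this message. Purely as an illustration - the nova-manage command is the standard one, but the retry loop, timings and overall script are hypothetical, not what the deployment tooling does - a workaround could look like this:<br>
<pre>
# Illustrative only: re-run nova's cell_v2 host discovery a few times so a
# compute service that registers late still gets mapped to a cell.
import subprocess
import time

def discover_hosts_with_retry(retries=10, delay=30):
    for _ in range(retries):
        subprocess.run(["nova-manage", "cell_v2", "discover_hosts"], check=True)
        # A real check would verify the host mapping via nova rather than
        # sleeping blindly between runs.
        time.sleep(delay)

if __name__ == "__main__":
    discover_hosts_with_retry()
</pre>
The proper fix is of course to make sure discovery runs (or is re-run) at the right point in the deployment rather than relying on sleeps.<br>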
> > > >><br>
> > > >><br>
> > > >> Here are some findings. I've looked at timing measurements from CI for<br>
> > > >> <a href="https://review.openstack.org/#/c/448533/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/448533/</a>, which provided the most<br>
> > > >> recent results (times in minutes):<br>
> > > >><br>
> > > >> * gate-tripleo-ci-centos-7-ovb-ha [1]<br>
> > > >> undercloud install: 23<br>
> > > >> overcloud deploy: 72<br>
> > > >> total time: 125<br>
> > > >> * gate-tripleo-ci-centos-7-ovb-nonha [2]<br>
> > > >> undercloud install: 25<br>
> > > >> overcloud deploy: 48<br>
> > > >> total time: 122<br>
> > > >> * gate-tripleo-ci-centos-7-ovb-updates [3]<br>
> > > >> undercloud install: 24<br>
> > > >> overcloud deploy: 57<br>
> > > >> total time: 152<br>
> > > >> * gate-tripleo-ci-centos-7-ovb-containers-oooq-nv [4]<br>
> > > >> undercloud install: 28<br>
> > > >> overcloud deploy: 48<br>
> > > >> total time: 165 (timeout)<br>
> > > >><br>
> > > >> Looking at the undercloud & overcloud install times, the most<br>
> > > >> time-consuming tasks, the containers job isn't doing that badly compared<br>
> > > >> to other OVB jobs. But looking closer I could see that:<br>
> > > >> - the containers job pulls docker images from dockerhub; this process<br>
> > > >> takes roughly 18 min.<br>
> > > ><br>
> > > ><br>
> > > > I think we can optimize this a bit by having the script that populates<br>
> > > > the local registry in the overcloud job run in parallel. The docker<br>
> > > > daemon can do multiple pulls w/o problems.<br>
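For what it's worth, a minimal sketch of that parallel pull idea, assuming the job has a plain list of image names and shells out to the docker CLI (the image names and worker count below are placeholders, not what the script actually uses):<br>
<pre>
# Hypothetical sketch: pull a list of images concurrently instead of one by
# one. The docker daemon copes fine with several "docker pull" clients at
# once, so a small thread pool shelling out to the CLI is enough here.
import subprocess
from concurrent.futures import ThreadPoolExecutor

IMAGES = [
    "tripleoupstream/centos-binary-heat-engine",   # placeholder image names
    "tripleoupstream/centos-binary-nova-compute",
]

def pull(image):
    # docker pull is idempotent, so re-runs are harmless
    subprocess.run(["docker", "pull", image], check=True)
    return image

def pull_all(images, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for image in pool.map(pull, images):
            print("pulled", image)

if __name__ == "__main__":
    pull_all(IMAGES)
</pre>
Something along those lines could probably be folded into the script that seeds the local registry without changing anything else about the job.<br>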
> > > ><br>
> > > >> - the overcloud validate task takes 10 min more than it should because<br>
> > > >> of the bug Dan mentioned (a fix is in the queue at<br>
> > > >> <a href="https://review.openstack.org/#/c/448575/" rel="noreferrer" target="_blank">https://review.openstack.org/#<wbr>/c/448575/</a>)<br>
> > > ><br>
> > > ><br>
> > > > +A<br>
> > > ><br>
> > > >> - the postci takes a long time with quickstart, 13 min (4 min alone<br>
> > > >> spent on docker log collection) whereas it takes only 3 min when using<br>
> > > >> tripleo.sh<br>
> > > ><br>
> > > ><br>
> > > > mmh, does this have anything to do with ansible being in between? Or is<br>
> > > > that time specifically for the part that gets the logs?<br>
> > > ><br>
> > > >><br>
> > > >> Adding all these numbers, we're at about 40 min of additional time for<br>
> > > >> the oooq containers job, which is enough to cross the CI job limit.<br>
> > > >><br>
> > > >> There is certainly a lot of room for optimization here and there and<br>
> > > >> I'll explore how we can speed up the containers CI job over the next<br>
> > > ><br>
> > > ><br>
> > > > Thanks a lot for the update. The time breakdown is fantastic,<br>
> > > > Flavio<br>
> > ><br>
> > > TBH the problem is far from being solved:<br>
> > ><br>
> > > 1. Click on <a href="https://status-tripleoci.rhcloud.com/" rel="noreferrer" target="_blank">https://status-tripleoci.rhcloud.com/</a><br>
> > > 2. Select gate-tripleo-ci-centos-7-ovb-containers-oooq-nv<br>
> > ><br>
> > > The container job has been failing more than 55% of the time.<br>
> > ><br>
> > > As a reference,<br>
> > > gate-tripleo-ci-centos-7-ovb-nonha has a 90% success rate.<br>
> > > gate-tripleo-ci-centos-7-ovb-ha has a 64% success rate.<br>
> > ><br>
> > > It clearly means the ovb-containers job was not and is not ready to be run<br>
> > > in the check pipeline; it's not reliable enough.<br>
> > ><br>
> > > The current queue time in TripleO OVB is 11 hours. This is not<br>
> > > acceptable for TripleO developers and we need a short-term solution,<br>
> > > which is removing this job from the check pipeline:<br>
> > > <a href="https://review.openstack.org/#/c/451546/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/451546/</a><br>
> > ><br>
> > ><br>
> > Yes, given resource constraints I don't see an alternative in the short<br>
> > term.<br>
> ><br>
> ><br>
> > > In the long term, we need to:<br>
> > ><br>
> > > - Stabilize ovb-containers, which is AFAIK already WIP by Martin (kudos<br>
> > > to him). My hope is that Martin gets enough help from the Container squad<br>
> > > to work on this topic.<br>
> > > - Remove the ovb-nonha scenario from the check pipeline - and probably<br>
> > > keep it periodic. Dan Prince started some work on it:<br>
> > > <a href="https://review.openstack.org/#/c/449791/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/449791/</a> and<br>
> > > <a href="https://review.openstack.org/#/c/449785/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/449785/</a> - but not much progress on it<br>
> > > in recent days.<br>
> > > - Engage some work on getting multinode-scenario(001,002,003,004) jobs<br>
> > > for containers, so we don't need many OVB jobs (probably only one) for<br>
> > > container scenarios.<br>
> > ><br>
> > ><br>
> > Another work item in progress which should help with the stability of the<br>
> > ovb containers job is that Dan has set up a docker-distribution based<br>
> > registry on a node in rhcloud. Once jobs are pulling images from it there<br>
> > should be fewer timeouts due to image pull speed.<br>
> ><br>
> Before we go and stand up private infrastructure for tripleo to depend on, can<br>
> we please work on solving this for all openstack projects upstream? We do<br>
> want to run regional mirrors for docker things, however we need to address<br>
> the issues around integrating this with AFS.<br>
><br>
> We are trying to break the cycle of tripleo standing up private infrastructure<br>
> and to consume more community-based infrastructure instead. So far we are making<br>
> good progress, however I would see this effort as a step backwards, not forwards.<br>
<br>
</div></div>To be fair, we discussed this on IRC yesterday; everyone agreed an<br>
infra-supported docker cache/registry was a great idea, but you said there was no<br>
known timeline for it actually getting done.<br>
<br>
So while we all want to see that happen, and potentially help out with the<br>
effort, we're also trying to mitigate the fact that work isn't done by<br>
working around it in our OVB environment.<br>
<br>
FWIW I think we absolutely need multinode container jobs, e.g. using infra<br>
resources, as that has worked out great for our puppet-based CI, but we<br>
really need to work out how to optimize the container download speed in<br>
that environment before that will work well AFAIK.<br></blockquote><div><br></div><div>Gabriele has started working on this <a href="https://review.openstack.org/#/c/454152/">https://review.openstack.org/#/c/454152/</a></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
You referenced <a href="https://review.openstack.org/#/c/447524/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/447524/</a> in your other<br>
reply, which AFAICS is a spec about publishing to dockerhub. That sounds<br>
great, but we have the opposite problem: we need to consume those published<br>
images during our CI runs, and currently downloading them takes too long.<br>
So we ideally need some sort of local registry/pull-through-cache that<br>
speeds up that process.<br>
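One way that could look, purely as a sketch: docker-distribution (registry:2) can act as a pull-through cache of Docker Hub when its config sets proxy.remoteurl to https://registry-1.docker.io, and each test node's docker daemon then only needs a registry-mirrors entry pointing at it. The client-side piece might be as small as this (the mirror URL is a placeholder, and a real job would restart the docker daemon afterwards):<br>
<pre>
# Rough illustration: point a node's docker daemon at a pull-through cache.
# The mirror URL below is hypothetical; real CI would manage this with its
# existing config tooling and restart docker after changing the file.
import json

DAEMON_JSON = "/etc/docker/daemon.json"             # standard daemon config
MIRROR = "http://registry-mirror.example.com:5000"  # hypothetical mirror URL

def enable_registry_mirror(path=DAEMON_JSON, mirror=MIRROR):
    try:
        with open(path) as f:
            config = json.load(f)
    except (IOError, ValueError):
        config = {}
    mirrors = config.setdefault("registry-mirrors", [])
    if mirror not in mirrors:
        mirrors.append(mirror)
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

if __name__ == "__main__":
    enable_registry_mirror()
</pre>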
<br>
How can we move forward here? Is there anyone on the infra side we can work<br>
with to discuss further?<br>
<br>
Thanks!<br>
<br>
Steve<br>
<div class="gmail-HOEnZb"><div class="gmail-h5"><br>
</div></div></blockquote></div><br></div></div>