<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 30, 2017 at 9:39 AM, Emilien Macchi <span dir="ltr"><<a href="mailto:emilien@redhat.com" target="_blank">emilien@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On Mon, Mar 27, 2017 at 8:00 AM, Flavio Percoco <<a href="mailto:flavio@redhat.com">flavio@redhat.com</a>> wrote:<br>
> On 23/03/17 16:24 +0100, Martin André wrote:<br>
>><br>
>> On Wed, Mar 22, 2017 at 2:20 PM, Dan Prince <<a href="mailto:dprince@redhat.com">dprince@redhat.com</a>> wrote:<br>
>>><br>
>>> On Wed, 2017-03-22 at 13:35 +0100, Flavio Percoco wrote:<br>
>>>><br>
>>>> On 22/03/17 13:32 +0100, Flavio Percoco wrote:<br>
>>>> > On 21/03/17 23:15 -0400, Emilien Macchi wrote:<br>
>>>> > > Hey,<br>
>>>> > ><br>
>>>> > > I've noticed that container jobs look pretty unstable lately; to<br>
>>>> > > me,<br>
>>>> > > it sounds like a timeout:<br>
>>>> > > <a href="http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-" rel="noreferrer" target="_blank">http://logs.openstack.org/19/<wbr>447319/2/check-tripleo/gate-<wbr>tripleo-</a><br>
>>>> > > ci-centos-7-ovb-containers-<wbr>oooq-nv/bca496a/console.html#_<wbr>2017-03-<br>
>>>> > > 22_00_08_55_358973<br>
>>>> ><br>
>>>> > There are different hypotheses on what is going on here. Some<br>
>>>> > patches have<br>
>>>> > landed to improve the write performance on containers by using<br>
>>>> > hostpath mounts<br>
>>>> > but we think the real slowness is coming from the image downloads.<br>
>>>> ><br>
>>>> > This said, this is still under investigation and the containers<br>
>>>> > squad will<br>
>>>> > report back as soon as there are new findings.<br>
>>>><br>
>>>> Also, to be more precise, Martin André is looking into this. He has<br>
>>>> also been fixing the gate over the last two weeks.<br>
>>><br>
>>><br>
>>> I spoke w/ Martin on IRC. He seems to think this is the cause of some<br>
>>> of the failures:<br>
>>><br>
>>> <a href="http://logs.openstack.org/32/446432/1/check-tripleo/gate-tripleo-ci-cen" rel="noreferrer" target="_blank">http://logs.openstack.org/32/<wbr>446432/1/check-tripleo/gate-<wbr>tripleo-ci-cen</a><br>
>>> tos-7-ovb-containers-oooq-nv/<wbr>543bc80/logs/oooq/overcloud-<wbr>controller-<br>
>>> 0/var/log/extra/docker/<wbr>containers/heat_engine/log/<wbr>heat/heat-<br>
>>> engine.log.txt.gz#_2017-03-21_<wbr>20_26_29_697<br>
>>><br>
>>><br>
>>> Looks like Heat isn't able to create Nova instances in the overcloud<br>
>>> due to "Host 'overcloud-novacompute-0' is not mapped to any cell". This<br>
>>> means our cells initialization code for containers may not be quite<br>
>>> right... or there is a race somewhere.<br>
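For context, that error usually means the compute host had not yet been picked up by cell_v2 host discovery when Heat asked for the instance; a hedged sketch of the usual manual check and remedy follows (these are the standard nova-manage commands, not necessarily what the containerized cells initialization code runs, and running discovery before the compute service registers is exactly the kind of race being described):

```shell
# Sketch only: "Host ... is not mapped to any cell" typically clears once
# cell_v2 host discovery runs *after* nova-compute has registered itself.
nova-manage cell_v2 list_cells                # confirm cell0 and the main cell exist
nova-manage cell_v2 discover_hosts --verbose  # map newly registered compute hosts
```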
>><br>
>><br>
>> Here are some findings. I've looked at time measures from CI for<br>
>> <a href="https://review.openstack.org/#/c/448533/" rel="noreferrer" target="_blank">https://review.openstack.org/#<wbr>/c/448533/</a> which provided the most<br>
>> recent results:<br>
>><br>
>> * gate-tripleo-ci-centos-7-ovb-ha [1]<br>
>>   undercloud install: 23 min<br>
>>   overcloud deploy: 72 min<br>
>>   total time: 125 min<br>
>> * gate-tripleo-ci-centos-7-ovb-nonha [2]<br>
>>   undercloud install: 25 min<br>
>>   overcloud deploy: 48 min<br>
>>   total time: 122 min<br>
>> * gate-tripleo-ci-centos-7-ovb-updates [3]<br>
>>   undercloud install: 24 min<br>
>>   overcloud deploy: 57 min<br>
>>   total time: 152 min<br>
>> * gate-tripleo-ci-centos-7-ovb-containers-oooq-nv [4]<br>
>>   undercloud install: 28 min<br>
>>   overcloud deploy: 48 min<br>
>>   total time: 165 min (timeout)<br>
>><br>
>> Looking at the undercloud and overcloud install times, which are the<br>
>> most time-consuming tasks, the containers job isn't doing that badly<br>
>> compared to the other OVB jobs. But looking closer I could see that:<br>
>> - the containers job pulls docker images from dockerhub, this process<br>
>> takes roughly 18 min.<br>
><br>
><br>
> I think we can optimize this a bit by having the script that populates<br>
> the local registry in the overcloud job run its pulls in parallel. The<br>
> docker daemon can handle multiple concurrent pulls without problems.<br>
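For illustration, a minimal sketch of what a parallelized pull could look like, assuming a plain list of image names fanned out with `xargs -P`; the image names and the `PULL_CMD` override are placeholders, not the actual tripleo-common script:

```shell
# Sketch only: fan image pulls out across 4 workers instead of pulling
# serially; the docker daemon copes fine with concurrent pulls.
# PULL_CMD defaults to "docker pull" and is intentionally word-split so
# it can be overridden (e.g. for testing).
pull_images() {
    printf '%s\n' "$@" | xargs -r -n 1 -P 4 ${PULL_CMD:-docker pull}
}

# Example with placeholder image names:
# pull_images \
#     tripleoupstream/centos-binary-heat-engine \
#     tripleoupstream/centos-binary-nova-compute
```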
><br>
>> - the overcloud validate task takes 10 min more than it should because<br>
>> of the bug Dan mentioned (a fix is in the queue at<br>
>> <a href="https://review.openstack.org/#/c/448575/" rel="noreferrer" target="_blank">https://review.openstack.org/#<wbr>/c/448575/</a>)<br>
><br>
><br>
> +A<br>
><br>
>> - the postci takes a long time with quickstart, 13 min (4 min alone<br>
>> spent on docker log collection), whereas it takes only 3 min when<br>
>> using tripleo.sh<br>
><br>
><br>
> mmh, does this have anything to do with ansible being in between? Or is that<br>
> time specifically for the part that gets the logs?<br>
><br>
>><br>
>> Adding all these numbers, we're at about 40 min of additional time for<br>
>> the oooq containers job, which is enough to cross the CI job time limit.<br>
>><br>
>> There is certainly a lot of room for optimization here and there and<br>
>> I'll explore how we can speed up the containers CI job over the next<br>
><br>
><br>
> Thanks a lot for the update. The time breakdown is fantastic,<br>
> Flavio<br>
<br>
</div></div>TBH the problem is far from being solved:<br>
<br>
1. Click on <a href="https://status-tripleoci.rhcloud.com/" rel="noreferrer" target="_blank">https://status-tripleoci.<wbr>rhcloud.com/</a><br>
2. Select gate-tripleo-ci-centos-7-ovb-<wbr>containers-oooq-nv<br>
<br>
The container job has been failing more than 55% of the time.<br>
<br>
As a reference:<br>
gate-tripleo-ci-centos-7-ovb-nonha has a 90% success rate.<br>
gate-tripleo-ci-centos-7-ovb-ha has a 64% success rate.<br>
<br>
This clearly means the ovb-containers job was not, and is not, ready to<br>
run in the check pipeline; it's not reliable enough.<br>
<br>
The current queue time in TripleO OVB is 11 hours. This is not<br>
acceptable for TripleO developers, and we need a short-term solution,<br>
which is to disable this job in the check pipeline:<br>
<a href="https://review.openstack.org/#/c/451546/" rel="noreferrer" target="_blank">https://review.openstack.org/#<wbr>/c/451546/</a><br>
<br></blockquote><div><br></div><div>Yes, given resource constraints I don't see an alternative in the short term.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On the long-term, we need to:<br>
<br>
- Stabilize ovb-containers, which AFAIK is already WIP by Martin (kudos<br>
to him). My hope is that Martin gets enough help from the container<br>
squad to work on this topic.<br>
- Remove the ovb-nonha scenario from the check pipeline and probably<br>
keep it in the periodic pipeline. Dan Prince started some work on this:<br>
<a href="https://review.openstack.org/#/c/449791/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/449791/</a> and<br>
<a href="https://review.openstack.org/#/c/449785/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/449785/</a> - but there hasn't been much<br>
progress in recent days.<br>
- Start work on getting multinode-scenario(001,002,003,004) jobs for<br>
containers, so we don't need many OVB jobs (probably only one) for<br>
container scenarios.<br>
<br></blockquote><div><br></div><div>Another work item in progress that should help with the stability of the ovb containers job is that Dan has set up a docker-distribution based registry on a node in rhcloud. Once jobs are pulling images from it, there should be fewer timeouts due to image pull speed.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I know everyone is busy working on container support for composable<br>
services, but we might need to assign more resources to CI work here;<br>
otherwise I'm not sure how we're going to stabilize the CI.<br>
<br>
Any feedback is very welcome.<br>
<div class="HOEnZb"><div class="h5"><br>
><br>
>> weeks.<br>
>><br>
>> Martin<br>
>><br>
>> [1]<br>
>> <a href="http://logs.openstack.org/33/448533/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/d2c1b16/" rel="noreferrer" target="_blank">http://logs.openstack.org/33/<wbr>448533/2/check-tripleo/gate-<wbr>tripleo-ci-centos-7-ovb-ha/<wbr>d2c1b16/</a><br>
>> [2]<br>
>> <a href="http://logs.openstack.org/33/448533/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-nonha/d6df760/" rel="noreferrer" target="_blank">http://logs.openstack.org/33/<wbr>448533/2/check-tripleo/gate-<wbr>tripleo-ci-centos-7-ovb-nonha/<wbr>d6df760/</a><br>
>> [3]<br>
>> <a href="http://logs.openstack.org/33/448533/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-updates/3b1f795/" rel="noreferrer" target="_blank">http://logs.openstack.org/33/<wbr>448533/2/check-tripleo/gate-<wbr>tripleo-ci-centos-7-ovb-<wbr>updates/3b1f795/</a><br>
>> [4]<br>
>> <a href="http://logs.openstack.org/33/448533/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/b816f20/" rel="noreferrer" target="_blank">http://logs.openstack.org/33/<wbr>448533/2/check-tripleo/gate-<wbr>tripleo-ci-centos-7-ovb-<wbr>containers-oooq-nv/b816f20/</a><br>
>><br>
>>> Dan<br>
>>><br>
>>>><br>
>>>> Flavio<br>
>>>><br>
>>>><br>
>>>><br>
>>>> ___________________________________________________________________________<br>
>>>> OpenStack Development Mailing List (not for usage questions)<br>
>>>> Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
>>>> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
>>><br>
>>><br>
>>><br>
>><br>
>><br>
><br>
><br>
> --<br>
> @flaper87<br>
> Flavio Percoco<br>
><br>
><br>
<br>
<br>
<br>
--<br>
</div></div><span class="HOEnZb"><font color="#888888">Emilien Macchi<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
</div></div></blockquote></div><br></div></div>