[TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1
Ruslanas Gžibovskis
ruslanas at lpic.lt
Wed Apr 29 07:45:15 UTC 2020
I just realized that I have seen messages in some log about missing
puppet dependencies... I cannot find that log again...
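
In case it is useful, this is roughly how I am trying to find that
message again (a sketch, assuming the usual Train layout where
container stdout logs land under /var/log/containers/stdouts/; adjust
the paths if yours differ):

  # list container logs that mention a missing puppet module/dependency
  sudo grep -ril -e 'could not find' -e 'missing' /var/log/containers/stdouts/
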
On Wed, 29 Apr 2020 at 09:39, Ruslanas Gžibovskis <ruslanas at lpic.lt> wrote:
> podman ps -a = clean, no containers at all.
> I have this running: watch -d "sudo podman ps -a ; sudo podman images -a ; sudo df -h"
>
> paunch.log is empty. (I did several reinstallations).
>
> I found this in the image logs:
> 2020-04-29 08:52:49,854 140572 DEBUG urllib3.connectionpool [ ]
> https://registry-1.docker.io:443 "GET /v2/ HTTP/1.1" 401 87
> 2020-04-29 08:52:49,855 140572 DEBUG tripleo_common.image.image_uploader [
> ] https://registry-1.docker.io/v2/ status code 401
> 2020-04-29 08:52:49,855 140572 DEBUG tripleo_common.image.image_uploader [
> ] Token parameters: params {'scope':
> 'repository:tripleotrain/centos-binary-zaqar-wsgi:pull', 'service':
> 'registry.docker.io'}
> 2020-04-29 08:52:49,731 140572 DEBUG urllib3.connectionpool [ ]
> https://registry-1.docker.io:443 "GET /v2/ HTTP/1.1" 401 87
> 2020-04-29 08:52:49,732 140572 DEBUG tripleo_common.image.image_uploader [
> ] https://registry-1.docker.io/v2/ status code 401
> 2020-04-29 08:52:49,732 140572 DEBUG tripleo_common.image.image_uploader [
> ] Token parameters: params {'scope':
> 'repository:tripleotrain/centos-binary-rsyslog:pull', 'service':
> 'registry.docker.io'}
> 2020-04-29 08:52:49,583 140572 DEBUG urllib3.connectionpool [ ]
> https://registry-1.docker.io:443 "GET /v2/ HTTP/1.1" 401 87
> 2020-04-29 08:52:49,584 140572 DEBUG tripleo_common.image.image_uploader [
> ] https://registry-1.docker.io/v2/ status code 401
> 2020-04-29 08:52:49,584 140572 DEBUG tripleo_common.image.image_uploader [
> ] Token parameters: params {'scope':
> 'repository:tripleotrain/centos-binary-swift-proxy-server:pull', 'service':
> 'registry.docker.io'}
> 2020-04-29 08:52:49,586 140572 DEBUG urllib3.connectionpool [ ] Starting
> new HTTPS connection (1): auth.docker.io:443
> 2020-04-29 08:52:49,606 140572 DEBUG urllib3.connectionpool [ ]
> https://registry-1.docker.io:443 "GET /v2/ HTTP/1.1" 401 87
> 2020-04-29 08:52:49,607 140572 DEBUG tripleo_common.image.image_uploader [
> ] https://registry-1.docker.io/v2/ status code 401
> 2020-04-29 08:52:49,607 140572 DEBUG tripleo_common.image.image_uploader [
> ] Token parameters: params {'scope':
> 'repository:tripleotrain/centos-binary-swift-object:pull', 'service':
> 'registry.docker.io'}
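>
> (For reference, a 401 on GET /v2/ is the normal first step of the
> Docker Hub token handshake: the registry answers 401, and the client
> then fetches a bearer token from auth.docker.io using exactly the
> scope/service printed above. A rough way to replay that handshake by
> hand, using one of the scopes from the log, to check whether the
> registry side is healthy:
>
>   # ask auth.docker.io for an anonymous pull token
>   TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:tripleotrain/centos-binary-rsyslog:pull" \
>     | python -c 'import sys, json; print(json.load(sys.stdin)["token"])')
>   # the same GET /v2/ that returned 401 should now return 200
>   curl -s -o /dev/null -w '%{http_code}\n' \
>     -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/
>
> So the 401s by themselves are not necessarily errors; what matters is
> whether the follow-up token request succeeds.)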
>
> Later I saw urllib3.connectionpool retrying, but I did not see
> corresponding retries from tripleo_common.image.image_uploader.
>
> Every 2.0s: sudo podman ps -a ; sudo podman images -a ; sudo df -h
>
> Wed Apr 29 09:38:26 2020
>
> CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
> REPOSITORY                                                TAG              IMAGE ID      CREATED     SIZE
> docker.io/tripleotrain/centos-binary-nova-api             current-tripleo  e32831544953  2 days ago  1.39 GB
> docker.io/tripleotrain/centos-binary-glance-api           current-tripleo  edbb7dff6427  2 days ago  1.31 GB
> docker.io/tripleotrain/centos-binary-mistral-api          current-tripleo  bcb3e95028a3  2 days ago  1.54 GB
> docker.io/tripleotrain/centos-binary-ironic-pxe           current-tripleo  2f1eb1da3fa4  2 days ago  909 MB
> docker.io/tripleotrain/centos-binary-heat-api             current-tripleo  b425da0e0a89  2 days ago  947 MB
> docker.io/tripleotrain/centos-binary-ironic-api           current-tripleo  d0b670006bc6  2 days ago  903 MB
> docker.io/tripleotrain/centos-binary-swift-proxy-server   current-tripleo  73432aea0d63  2 days ago  895 MB
> docker.io/tripleotrain/centos-binary-neutron-server       current-tripleo  d7b8f19cc5ed  2 days ago  1.1 GB
> docker.io/tripleotrain/centos-binary-keystone             current-tripleo  8352bb3fd528  2 days ago  905 MB
> docker.io/tripleotrain/centos-binary-zaqar-wsgi           current-tripleo  49a7f0066616  2 days ago  894 MB
> docker.io/tripleotrain/centos-binary-placement-api        current-tripleo  096ce1da63d3  2 days ago  1 GB
> docker.io/tripleotrain/centos-binary-ironic-inspector     current-tripleo  4505c408a230  2 days ago  817 MB
> docker.io/tripleotrain/centos-binary-rabbitmq             current-tripleo  bee62aacf8fb  2 days ago  700 MB
> docker.io/tripleotrain/centos-binary-haproxy              current-tripleo  4b11e3d9c95f  2 days ago  692 MB
> docker.io/tripleotrain/centos-binary-mariadb              current-tripleo  16cc78bc1e94  2 days ago  845 MB
> docker.io/tripleotrain/centos-binary-keepalived           current-tripleo  67de7d2af948  2 days ago  568 MB
> docker.io/tripleotrain/centos-binary-memcached            current-tripleo  a1019d76359c  2 days ago  561 MB
> docker.io/tripleotrain/centos-binary-iscsid               current-tripleo  c62bc10064c2  2 days ago  527 MB
> docker.io/tripleotrain/centos-binary-cron                 current-tripleo  be0199eb5b89  2 days ago  522 MB
>
>
> On Tue, 28 Apr 2020 at 20:10, Alex Schultz <aschultz at redhat.com> wrote:
>
>> On Tue, Apr 28, 2020 at 11:57 AM Ruslanas Gžibovskis <ruslanas at lpic.lt>
>> wrote:
>> >
>> > Hi all,
>> >
>> > I am running a fresh install of RDO Train on CentOS 7.
>> > For almost a week I have been facing an error at this step:
>> > TASK [Run container-puppet tasks (generate config) during step 1]
>> >
>> > So I have ansible.log attached, but I cannot find where it is failing.
>> > From my understanding of Ansible, a task fails if it finds stderr
>> > output.
>> > I cannot find an error/failure or anything similar; I see Notices and
>> > Warnings, but I believe those do not go to stderr?
>> >
>> > I see containers running and then removed after some time
>> > (which is as it should be, I think)...
>> >
>> > Could you help me figure out where to dig?
>> >
>>
>> 2020-04-27 22:27:46,147 p=132230 u=root | TASK [Start containers for
>> step 1 using paunch]
>>
>> *****************************************************************************************************************************
>> 2020-04-27 22:27:46,148 p=132230 u=root | Monday 27 April 2020
>> 22:27:46 +0200 (0:00:00.137) 0:04:44.326 **********
>> 2020-04-27 22:27:46,816 p=132230 u=root | ok: [remote-u]
>> 2020-04-27 22:27:46,914 p=132230 u=root | TASK [Debug output for
>> task: Start containers for step 1]
>>
>> *******************************************************************************************************************
>> 2020-04-27 22:27:46,915 p=132230 u=root | Monday 27 April 2020
>> 22:27:46 +0200 (0:00:00.767) 0:04:45.093 **********
>> 2020-04-27 22:27:46,977 p=132230 u=root | fatal: [remote-u]: FAILED! => {
>> "failed_when_result": true,?
>> "outputs.stdout_lines | default([]) | union(outputs.stderr_lines |
>> default([]))": []
>>
>> Check /var/log/paunch.log. It probably has additional information as
>> to why the containers didn't start. You might also check the output
>> of 'sudo podman ps -a' to see if any containers exited with errors.
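>>
>> Something like this, as one possible way to run those checks:
>>
>>   # any containers that exited with a non-zero status?
>>   sudo podman ps -a --filter status=exited
>>   # tail the paunch log for the actual start-up errors
>>   sudo tail -50 /var/log/paunch.log
>>   # then inspect a specific failed container
>>   # (replace <container> with a name from the output above)
>>   sudo podman logs <container>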
>>
>> > --
>> > Ruslanas Gžibovskis
>> > +370 6030 7030
>>
>>
>
> --
> Ruslanas Gžibovskis
> +370 6030 7030
>
--
Ruslanas Gžibovskis
+370 6030 7030