[TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1

Ruslanas Gžibovskis ruslanas at lpic.lt
Wed Apr 29 07:47:42 UTC 2020


On another installation I have, I see a few more images:

REPOSITORY                                                       TAG               IMAGE ID       CREATED        SIZE
docker.io/tripleotrain/centos-binary-mistral-executor            current-tripleo   826aaf79409f   2 months ago   1.76 GB
docker.io/tripleotrain/centos-binary-nova-compute-ironic         current-tripleo   57f6cccea249   2 months ago   2.02 GB
docker.io/tripleotrain/centos-binary-neutron-l3-agent            current-tripleo   1c342a20093a   2 months ago   1.17 GB
docker.io/tripleotrain/centos-binary-neutron-dhcp-agent          current-tripleo   3a9849df6381   2 months ago   1.16 GB
docker.io/tripleotrain/centos-binary-neutron-server              current-tripleo   3f56961b1a67   2 months ago   1.1 GB
docker.io/tripleotrain/centos-binary-ironic-neutron-agent        current-tripleo   ffaea5a0eafb   2 months ago   951 MB
docker.io/tripleotrain/centos-binary-neutron-openvswitch-agent   current-tripleo   e0bb38de7810   2 months ago   951 MB
docker.io/tripleotrain/centos-binary-mistral-api                 current-tripleo   3d1388019c87   2 months ago   1.48 GB
docker.io/tripleotrain/centos-binary-glance-api                  current-tripleo   8e1d04aa8f46   2 months ago   1.31 GB
docker.io/tripleotrain/centos-binary-mistral-engine              current-tripleo   ede1273e60ac   2 months ago   1.44 GB
docker.io/tripleotrain/centos-binary-mistral-event-engine        current-tripleo   4e9f3446fa88   2 months ago   1.44 GB
docker.io/tripleotrain/centos-binary-nova-api                    current-tripleo   b38dce72601d   2 months ago   1.39 GB
docker.io/tripleotrain/centos-binary-zaqar-wsgi                  current-tripleo   7e9acff2d188   2 months ago   893 MB
docker.io/tripleotrain/centos-binary-nova-scheduler              current-tripleo   08121d755b68   2 months ago   1.45 GB
docker.io/tripleotrain/centos-binary-ironic-conductor            current-tripleo   d9810d76bacf   2 months ago   1.09 GB
docker.io/tripleotrain/centos-binary-nova-conductor              current-tripleo   f771753e6e8b   2 months ago   1.28 GB
docker.io/tripleotrain/centos-binary-placement-api               current-tripleo   318fa9e266da   2 months ago   914 MB
docker.io/tripleotrain/centos-binary-ironic-api                  current-tripleo   5c0167e4ca6c   2 months ago   903 MB
docker.io/tripleotrain/centos-binary-ironic-pxe                  current-tripleo   f08a3d35e1ee   2 months ago   908 MB
docker.io/tripleotrain/centos-binary-keystone                    current-tripleo   b876505251f9   2 months ago   905 MB
docker.io/tripleotrain/centos-binary-heat-api                    current-tripleo   7b147f4b215a   2 months ago   946 MB
docker.io/tripleotrain/centos-binary-swift-proxy-server          current-tripleo   34aa5292ac93   2 months ago   894 MB
docker.io/tripleotrain/centos-binary-heat-engine                 current-tripleo   515b7034dcd5   2 months ago   946 MB
docker.io/tripleotrain/centos-binary-swift-container             current-tripleo   e7bd1b5f50e5   2 months ago   846 MB
docker.io/tripleotrain/centos-binary-swift-account               current-tripleo   fa8f07aab6c1   2 months ago   846 MB
docker.io/tripleotrain/centos-binary-swift-object                current-tripleo   cdb70d74e5d8   2 months ago   846 MB
docker.io/tripleotrain/centos-binary-ironic-inspector            current-tripleo   8ded64b6dcec   2 months ago   817 MB
docker.io/tripleotrain/centos-binary-mariadb                     current-tripleo   949e61588879   2 months ago   846 MB
docker.io/tripleotrain/centos-binary-cron                        current-tripleo   3579f123aa33   2 months ago   522 MB
docker.io/tripleotrain/centos-binary-rabbitmq                    current-tripleo   75b4deddc0c3   2 months ago   700 MB
docker.io/tripleotrain/centos-binary-haproxy                     current-tripleo   af7d3eadd110   2 months ago   692 MB
docker.io/tripleotrain/centos-binary-keepalived                  current-tripleo   7fc292e41708   2 months ago   568 MB
docker.io/tripleotrain/centos-binary-memcached                   current-tripleo   bddc81718cfc   2 months ago   561 MB
docker.io/tripleotrain/centos-binary-iscsid                      current-tripleo   7456537e0c25   2 months ago   527 MB

I am new to containers. Can I somehow transfer these images between the two
hosts? As I understand it, that might help.
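Since the question is about moving images between hosts: podman can export an image to a tar archive and import it on another machine. A minimal sketch, assuming SSH access between the hosts (the host name and file paths are only illustrative):

```shell
# Export an already-pulled image to a tar archive on the healthy host
sudo podman save -o nova-api.tar \
    docker.io/tripleotrain/centos-binary-nova-api:current-tripleo

# Copy it over to the failing undercloud (destination is illustrative)
scp nova-api.tar stack@undercloud:/tmp/

# On the failing host, import the archive into local container storage
sudo podman load -i /tmp/nova-api.tar

# Confirm the image is now available locally
sudo podman images | grep nova-api
```

Even with the images present locally, the deploy may still try to contact the registry unless the container image prepare settings are adjusted, so this is a workaround for a registry/network problem, not a fix.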

On Wed, 29 Apr 2020 at 09:45, Ruslanas Gžibovskis <ruslanas at lpic.lt> wrote:

> I just now realized that I have seen messages in some log about missing
> puppet dependencies... I cannot find that log again...
>
> On Wed, 29 Apr 2020 at 09:39, Ruslanas Gžibovskis <ruslanas at lpic.lt>
> wrote:
>
>> podman ps -a is clean; no containers at all.
>> I have watch -d "sudo podman ps -a ; sudo podman images -a ; sudo df -h"
>> running.
>>
>> paunch.log is empty (I have done several reinstallations).
>>
>> I found this in the image upload logs:
>> 2020-04-29 08:52:49,854 140572 DEBUG urllib3.connectionpool [  ]
>> https://registry-1.docker.io:443 "GET /v2/ HTTP/1.1" 401 87
>> 2020-04-29 08:52:49,855 140572 DEBUG tripleo_common.image.image_uploader
>> [  ] https://registry-1.docker.io/v2/ status code 401
>> 2020-04-29 08:52:49,855 140572 DEBUG tripleo_common.image.image_uploader
>> [  ] Token parameters: params {'scope':
>> 'repository:tripleotrain/centos-binary-zaqar-wsgi:pull', 'service': '
>> registry.docker.io'}
>> 2020-04-29 08:52:49,731 140572 DEBUG urllib3.connectionpool [  ]
>> https://registry-1.docker.io:443 "GET /v2/ HTTP/1.1" 401 87
>> 2020-04-29 08:52:49,732 140572 DEBUG tripleo_common.image.image_uploader
>> [  ] https://registry-1.docker.io/v2/ status code 401
>> 2020-04-29 08:52:49,732 140572 DEBUG tripleo_common.image.image_uploader
>> [  ] Token parameters: params {'scope':
>> 'repository:tripleotrain/centos-binary-rsyslog:pull', 'service': '
>> registry.docker.io'}
>> 2020-04-29 08:52:49,583 140572 DEBUG urllib3.connectionpool [  ]
>> https://registry-1.docker.io:443 "GET /v2/ HTTP/1.1" 401 87
>> 2020-04-29 08:52:49,584 140572 DEBUG tripleo_common.image.image_uploader
>> [  ] https://registry-1.docker.io/v2/ status code 401
>> 2020-04-29 08:52:49,584 140572 DEBUG tripleo_common.image.image_uploader
>> [  ] Token parameters: params {'scope':
>> 'repository:tripleotrain/centos-binary-swift-proxy-server:pull', 'service':
>> 'registry.docker.io'}
>> 2020-04-29 08:52:49,586 140572 DEBUG urllib3.connectionpool [  ] Starting
>> new HTTPS connection (1): auth.docker.io:443
>> 2020-04-29 08:52:49,606 140572 DEBUG urllib3.connectionpool [  ]
>> https://registry-1.docker.io:443 "GET /v2/ HTTP/1.1" 401 87
>> 2020-04-29 08:52:49,607 140572 DEBUG tripleo_common.image.image_uploader
>> [  ] https://registry-1.docker.io/v2/ status code 401
>> 2020-04-29 08:52:49,607 140572 DEBUG tripleo_common.image.image_uploader
>> [  ] Token parameters: params {'scope':
>> 'repository:tripleotrain/centos-binary-swift-object:pull', 'service': '
>> registry.docker.io'}
>>
>> Later I saw connectionpool retrying, but I did not see matching
>> "tripleo_common.image.image_uploader" entries.
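The 401 responses above are not necessarily errors by themselves: Docker Hub answers every anonymous GET /v2/ with 401, and the client then fetches a pull token per repository, which is what the "Token parameters" lines show. One way to list which repositories were being requested is to extract the scope values from the log; a small sketch with two of the log lines inlined so the example is self-contained:

```shell
# Two of the log lines above, inlined for illustration
cat > /tmp/upload.log <<'EOF'
... 'scope': 'repository:tripleotrain/centos-binary-zaqar-wsgi:pull', 'service': 'registry.docker.io'
... 'scope': 'repository:tripleotrain/centos-binary-rsyslog:pull', 'service': 'registry.docker.io'
EOF

# Pull the repository name out of each scope entry
grep -o "repository:[^:']*" /tmp/upload.log | sed 's/^repository://' | sort -u
# -> tripleotrain/centos-binary-rsyslog
#    tripleotrain/centos-binary-zaqar-wsgi
```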
>>
>> Every 2.0s: sudo podman ps -a ; sudo podman images -a ; sudo df -h
>>
>> Wed Apr 29 09:38:26 2020
>>
>> CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
>> REPOSITORY                                                TAG               IMAGE ID       CREATED      SIZE
>> docker.io/tripleotrain/centos-binary-nova-api             current-tripleo   e32831544953   2 days ago   1.39 GB
>> docker.io/tripleotrain/centos-binary-glance-api           current-tripleo   edbb7dff6427   2 days ago   1.31 GB
>> docker.io/tripleotrain/centos-binary-mistral-api          current-tripleo   bcb3e95028a3   2 days ago   1.54 GB
>> docker.io/tripleotrain/centos-binary-ironic-pxe           current-tripleo   2f1eb1da3fa4   2 days ago   909 MB
>> docker.io/tripleotrain/centos-binary-heat-api             current-tripleo   b425da0e0a89   2 days ago   947 MB
>> docker.io/tripleotrain/centos-binary-ironic-api           current-tripleo   d0b670006bc6   2 days ago   903 MB
>> docker.io/tripleotrain/centos-binary-swift-proxy-server   current-tripleo   73432aea0d63   2 days ago   895 MB
>> docker.io/tripleotrain/centos-binary-neutron-server       current-tripleo   d7b8f19cc5ed   2 days ago   1.1 GB
>> docker.io/tripleotrain/centos-binary-keystone             current-tripleo   8352bb3fd528   2 days ago   905 MB
>> docker.io/tripleotrain/centos-binary-zaqar-wsgi           current-tripleo   49a7f0066616   2 days ago   894 MB
>> docker.io/tripleotrain/centos-binary-placement-api        current-tripleo   096ce1da63d3   2 days ago   1 GB
>> docker.io/tripleotrain/centos-binary-ironic-inspector     current-tripleo   4505c408a230   2 days ago   817 MB
>> docker.io/tripleotrain/centos-binary-rabbitmq             current-tripleo   bee62aacf8fb   2 days ago   700 MB
>> docker.io/tripleotrain/centos-binary-haproxy              current-tripleo   4b11e3d9c95f   2 days ago   692 MB
>> docker.io/tripleotrain/centos-binary-mariadb              current-tripleo   16cc78bc1e94   2 days ago   845 MB
>> docker.io/tripleotrain/centos-binary-keepalived           current-tripleo   67de7d2af948   2 days ago   568 MB
>> docker.io/tripleotrain/centos-binary-memcached            current-tripleo   a1019d76359c   2 days ago   561 MB
>> docker.io/tripleotrain/centos-binary-iscsid               current-tripleo   c62bc10064c2   2 days ago   527 MB
>> docker.io/tripleotrain/centos-binary-cron                 current-tripleo   be0199eb5b89   2 days ago   522 MB
>>
>>
>> On Tue, 28 Apr 2020 at 20:10, Alex Schultz <aschultz at redhat.com> wrote:
>>
>>> On Tue, Apr 28, 2020 at 11:57 AM Ruslanas Gžibovskis <ruslanas at lpic.lt>
>>> wrote:
>>> >
>>> > Hi all,
>>> >
>>> > I am running a fresh install of RDO Train on CentOS 7.
>>> > For almost a week I have been facing an error at this step:
>>> > TASK [Run container-puppet tasks (generate config) during step 1]
>>> >
>>> > I have ansible.log attached, but I cannot find where it is failing.
>>> > As far as I understand Ansible, it fails if it finds stderr output.
>>> > I cannot find an error/failure or similar; I see Notices and
>>> > Warnings, but I believe those are not stderr?
>>> >
>>> > I see containers running and then removed after some time
>>> > (as it should be, I think).
>>> >
>>> > Could you help me figure out where to dig?
>>> >
>>>
>>> 2020-04-27 22:27:46,147 p=132230 u=root |  TASK [Start containers for
>>> step 1 using paunch]
>>>
>>> *****************************************************************************************************************************
>>> 2020-04-27 22:27:46,148 p=132230 u=root |  Monday 27 April 2020
>>> 22:27:46 +0200 (0:00:00.137)       0:04:44.326 **********
>>> 2020-04-27 22:27:46,816 p=132230 u=root |  ok: [remote-u]
>>> 2020-04-27 22:27:46,914 p=132230 u=root |  TASK [Debug output for
>>> task: Start containers for step 1]
>>>
>>> *******************************************************************************************************************
>>> 2020-04-27 22:27:46,915 p=132230 u=root |  Monday 27 April 2020
>>> 22:27:46 +0200 (0:00:00.767)       0:04:45.093 **********
>>> 2020-04-27 22:27:46,977 p=132230 u=root |  fatal: [remote-u]: FAILED! =>
>>> {
>>>     "failed_when_result": true,
>>>     "outputs.stdout_lines | default([]) | union(outputs.stderr_lines |
>>> default([]))": []
>>>
>>> Check /var/log/paunch.log. It probably has additional information as
>>> to why the containers didn't start.  You might also check the output
>>> of 'sudo podman ps -a' to see if any containers exited with errors.
>>>
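To act on that suggestion concretely, a few podman commands for inspecting what failed (the container name below is only an illustrative example):

```shell
# List only containers that exited, together with their exit status
sudo podman ps -a --filter status=exited --format "{{.Names}} {{.Status}}"

# Dump the logs of any container that exited non-zero
# ("container-puppet-keystone" is an illustrative name)
sudo podman logs container-puppet-keystone

# Paunch keeps its own record of what it tried to start
sudo tail -n 50 /var/log/paunch.log
```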
>>> > --
>>> > Ruslanas Gžibovskis
>>> > +370 6030 7030
>>>
>>>
>>


-- 
Ruslanas Gžibovskis
+370 6030 7030