[openstack-ansible][magnum]

Alfredo De Luca alfredo.deluca at gmail.com
Sat Feb 2 16:36:37 UTC 2019


So if I run part-013 I get the following:

[root at freddo-5oyez3ot5pxi-master-0 scripts]# ./part-013
+ _prefix=docker.io/openstackmagnum/
+ atomic install --storage ostree --system --system-package no --set
REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name
heat-container-agent
docker.io/openstackmagnum/heat-container-agent:queens-stable
./part-013: line 8: atomic: command not found
+ systemctl start heat-container-agent
Failed to start heat-container-agent.service: Unit
heat-container-agent.service not found.
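
I guess I could also check whether the image ships the atomic CLI at all
(just a sketch; the package names below are assumptions, and it presumes a
Fedora Atomic based image is what this driver expects):

# is the atomic CLI there at all?
command -v atomic || echo "atomic CLI not found"
# which OS/variant is this image really? (Fedora Atomic should say so)
grep -i -e '^NAME=' -e '^VARIANT=' /etc/os-release
# package check is an assumption for an Atomic host
rpm -q atomic rpm-ostree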



On Sat, Feb 2, 2019 at 5:33 PM Alfredo De Luca <alfredo.deluca at gmail.com>
wrote:

> [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s
> http://169.254.169.254/latest/meta-data/local-ipv4
> 10.0.0.5[root at freddo-5oyez3ot5pxi-master-0 scripts]#
>
> [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s
> http://169.254.169.254/latest/meta-data/public-ipv4
> 172.29.249.112[root at freddo-5oyez3ot5pxi-master-0 scripts]#
>
> 172.29.249.112 is the Floating IP... which I use to connect to the master
>
>
>
>
> On Sat, Feb 2, 2019 at 5:26 PM Clemens <clemens.hardewig at crandale.de>
> wrote:
>
>> Hi Alfredo,
>>
>> These are the basics of OpenStack: curl -s
>> http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the
>> metadata service at its special IP address 169.254.169.254 to obtain the
>> local IP address; the second one gets the public IP address.
>> From remote it looks like your network is not properly configured, so
>> this information is not answered by the metadata service successfully.
>> What happens if you execute that command manually?
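>>
>> For example (a sketch, assuming the EC2-compatible metadata endpoint is
>> reachable from inside the instance), you can also list what the metadata
>> service exposes:
>>
>> # list the available EC2-style metadata keys
>> curl -s http://169.254.169.254/latest/meta-data/
>> # or fetch the native OpenStack metadata document
>> curl -s http://169.254.169.254/openstack/latest/meta_data.json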
>>
>> BR C
>>
>> On 02.02.2019 at 17:18, Alfredo De Luca
>> <alfredo.deluca at gmail.com> wrote:
>>
>> Hi Clemens. Yes... you are right, but I'm not sure why the IPs are not correct.
>>
>> if [ -z "${KUBE_NODE_IP}" ]; then
>>     KUBE_NODE_IP=$(curl -s
>> http://169.254.169.254/latest/meta-data/local-ipv4)
>> fi
>>
>> sans="IP:${KUBE_NODE_IP}"
>>
>> if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then
>>     KUBE_NODE_PUBLIC_IP=$(curl -s
>> http://169.254.169.254/latest/meta-data/public-ipv4)
>> fi
>>
>> I don't have that IP at all.
>>
>>
>> On Sat, Feb 2, 2019 at 2:20 PM Clemens <clemens.hardewig at crandale.de>
>> wrote:
>>
>>> Well - it seems that the failure of part-013 has its root cause in the
>>> failure of part-011:
>>>
>>> In part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore,
>>> the certificates for access to Etcd are created; this is a prerequisite
>>> for any kind of access authorization maintained by Etcd. The IP address
>>> config items require an appropriate definition as metadata. If that is
>>> not defined, then internet access fails and it also cannot install
>>> docker in part-013 ...
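>>>
>>> (As a sketch, one way to see whether part-011 got that far is to check
>>> whether the certificates were actually written; the path below is an
>>> assumption based on the fedora-atomic driver defaults:
>>>
>>> ls -l /etc/kubernetes/certs/ 2>/dev/null || echo "no certs written yet"
>>> )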
>>>
>>> On 01.02.2019 at 10:20, Alfredo De Luca
>>> <alfredo.deluca at gmail.com> wrote:
>>>
>>> Thanks Feilong, Clemens et al.
>>>
>>> I'm going to have a look later on today and see what I can do.
>>>
>>> Just a question:
>>> Does the kube master need internet access to download stuff or not?
>>>
>>> Cheers
>>>
>>>
>>> On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang <feilong at catalyst.net.nz>
>>> wrote:
>>>
>>>> I'm echoing Von's comments.
>>>>
>>>> From the cloud-init-output.log, you should be able to see the error
>>>> below:
>>>>
>>>> Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019
>>>> 08:33:41 +0000. Up 76.51 seconds.
>>>> 2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running
>>>> /var/lib/cloud/instance/scripts/part-011 [1]
>>>> + _prefix=docker.io/openstackmagnum/
>>>> + atomic install --storage ostree --system --system-package no --set
>>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name
>>>> heat-container-agent
>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable
>>>> The docker daemon does not appear to be running.
>>>> + systemctl start heat-container-agent
>>>> Failed to start heat-container-agent.service: Unit
>>>> heat-container-agent.service not found.
>>>> 2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running
>>>> /var/lib/cloud/instance/scripts/part-013 [5]
>>>>
>>>> Then please go to /var/lib/cloud/instances/<instance_id>/scripts to
>>>> find scripts 011 and 013 and run them manually to get the root cause.
>>>> And you're welcome to pop into the #openstack-containers IRC channel.
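>>>>
>>>> For example (a sketch; /var/lib/cloud/instance is normally a symlink to
>>>> the current instance directory):
>>>>
>>>> cd /var/lib/cloud/instance/scripts
>>>> # re-run with tracing and capture the exit codes
>>>> bash -x ./part-011; echo "part-011 exit code: $?"
>>>> bash -x ./part-013; echo "part-013 exit code: $?"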
>>>>
>>>>
>>>>
>>>> On 30/01/19 11:43 PM, Clemens Hardewig wrote:
>>>>
>>>> Read the cloud-init.log! There you can see that your
>>>> /var/lib/.../part-011 part of the config script finishes with an error.
>>>> Check why.
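>>>>
>>>> For example, something like this should surface the failing part quickly
>>>> (just a sketch):
>>>>
>>>> grep -iE 'warn|error|fail' /var/log/cloud-init.log /var/log/cloud-init-output.log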
>>>>
>>>> Sent from my iPhone
>>>>
>>>> On 30.01.2019 at 10:11, Alfredo De Luca <
>>>> alfredo.deluca at gmail.com> wrote:
>>>>
>>>> Here are also the cloud-init logs from the k8s master....
>>>>
>>>>
>>>>
>>>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca <
>>>> alfredo.deluca at gmail.com> wrote:
>>>>
>>>>> <image.png>
>>>>> In the meantime, this is my cluster template
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca <
>>>>> alfredo.deluca at gmail.com> wrote:
>>>>>
>>>>>> Hi Clemens and Ignazio, thanks for your support.
>>>>>> It must be network related, but apparently I'm not doing anything
>>>>>> special to create a simple k8s cluster.
>>>>>> I'll post the configuration and logs later on, as you suggested, Clemens.
>>>>>>
>>>>>>
>>>>>> Cheers
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens <clemens.hardewig at crandale.de>
>>>>>> wrote:
>>>>>>
>>>>>>> … and more important: check the other log, cloud-init.log, for error
>>>>>>> messages (not only cloud-init-output.log)
>>>>>>>
>>>>>>> On 29.01.2019 at 16:07, Alfredo De Luca <
>>>>>>> alfredo.deluca at gmail.com> wrote:
>>>>>>>
>>>>>>> Hi Ignazio and Clemens. I haven't configured the proxy, and all the
>>>>>>> logs on the kube master keep saying the following
>>>>>>>
>>>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not
>>>>>>> finished
>>>>>>> [+]poststarthook/extensions/third-party-resources ok
>>>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished
>>>>>>> healthz check failed' ']'
>>>>>>> + sleep 5
>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz
>>>>>>> + '[' ok = '' ']'
>>>>>>> + sleep 5
>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz
>>>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not
>>>>>>> finished
>>>>>>> [+]poststarthook/extensions/third-party-resources ok
>>>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished
>>>>>>> healthz check failed' ']'
>>>>>>> + sleep 5
>>>>>>>
>>>>>>> Not sure what to do.
>>>>>>> My configuration is ...
>>>>>>> eth0 - 10.1.8.113
>>>>>>>
>>>>>>> But the OpenStack configuration in terms of networking is the default
>>>>>>> from openstack-ansible, which is 172.29.236.100/22
>>>>>>>
>>>>>>> Maybe that's the problem?
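>>>>>>>
>>>>>>> In case it helps, a few local checks I could run on the master while
>>>>>>> it loops (a sketch; it assumes the API server is supposed to run as a
>>>>>>> kube-apiserver systemd unit and listen on 127.0.0.1:8080, as the wait
>>>>>>> loop above suggests):
>>>>>>>
>>>>>>> # is anything listening on 8080 at all?
>>>>>>> ss -tlnp | grep 8080
>>>>>>> # is the apiserver unit running, and what does it log?
>>>>>>> systemctl status kube-apiserver
>>>>>>> journalctl -u kube-apiserver --no-pager | tail -n 50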
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano <
>>>>>>> ignaziocassano at gmail.com> wrote:
>>>>>>>
>>>>>>>> Hello Alfredo,
>>>>>>>> is your external network using a proxy?
>>>>>>>> If you are using a proxy, and you configured it in the cluster template,
>>>>>>>> you must set no_proxy for 127.0.0.1.
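>>>>>>>>
>>>>>>>> For example (the proxy values here are hypothetical, and only relevant
>>>>>>>> if a proxy is actually in use):
>>>>>>>>
>>>>>>>> openstack coe cluster template create ... \
>>>>>>>>   --http-proxy http://proxy.example.com:3128 \
>>>>>>>>   --https-proxy http://proxy.example.com:3128 \
>>>>>>>>   --no-proxy 127.0.0.1,localhost,169.254.169.254
>>>>>>>>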
>>>>>>>> Ignazio
>>>>>>>>
>>>>>>>> On Tue, 29 Jan 2019 at 12:26, Clemens Hardewig <
>>>>>>>> clemens.hardewig at crandale.de> wrote:
>>>>>>>>
>>>>>>>>> At least on Fedora there is a second cloud-init log, as far as I
>>>>>>>>> remember - look into both.
>>>>>>>>>
>>>>>>>>> Br c
>>>>>>>>>
>>>>>>>>> Sent from my iPhone
>>>>>>>>>
>>>>>>>>> On 29.01.2019 at 12:08, Alfredo De Luca <
>>>>>>>>> alfredo.deluca at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> Thanks Clemens.
>>>>>>>>> I looked at the cloud-init-output.log on the master... and at the
>>>>>>>>> moment it is doing the following....
>>>>>>>>>
>>>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz
>>>>>>>>> + '[' ok = '' ']'
>>>>>>>>> + sleep 5
>>>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz
>>>>>>>>> + '[' ok = '' ']'
>>>>>>>>> + sleep 5
>>>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz
>>>>>>>>> + '[' ok = '' ']'
>>>>>>>>> + sleep 5
>>>>>>>>>
>>>>>>>>> Network... could be, but I'm not sure where to look.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig <
>>>>>>>>> clemens.hardewig at crandale.de> wrote:
>>>>>>>>>
>>>>>>>>>> Yes, you should check the cloud-init logs of your master. Without
>>>>>>>>>> having seen them, I would guess a network issue, or perhaps you have
>>>>>>>>>> selected a flavor using swap for your minion nodes ...
>>>>>>>>>> So, log files are the first step to dig into...
>>>>>>>>>> Br c
>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>
>>>>>>>>>> On 28.01.2019 at 15:34, Alfredo De Luca <
>>>>>>>>>> alfredo.deluca at gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Hi all.
>>>>>>>>>> I finally installed openstack-ansible (Queens) successfully, but
>>>>>>>>>> after creating a cluster template and creating a k8s cluster, it
>>>>>>>>>> gets stuck on
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> kube_masters
>>>>>>>>>> <https://10.1.8.113/project/stacks/stack/6221608c-e7f1-4d76-b694-cdd7ec22c386/kube_masters/>
>>>>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039
>>>>>>>>>> <https://10.1.8.113/project/stacks/stack/b7204f0c-b9d8-4ef2-8f0b-afe4c077d039/>
>>>>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state
>>>>>>>>>> changed "create in progress" ... and after around an hour it
>>>>>>>>>> says time out. The k8s master seems to be up... at least as a VM.
>>>>>>>>>>
>>>>>>>>>> any idea?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Alfredo*
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> *Alfredo*
>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *Alfredo*
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Alfredo*
>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> *Alfredo*
>>>>>
>>>>>
>>>>
>>>> --
>>>> *Alfredo*
>>>>
>>>> <cloud-init.log>
>>>>
>>>> <cloud-init-output.log>
>>>>
>>>> --
>>>> Cheers & Best regards,
>>>> Feilong Wang (王飞龙)
>>>> --------------------------------------------------------------------------
>>>> Senior Cloud Software Engineer
>>>> Tel: +64-48032246
>>>> Email: flwang at catalyst.net.nz
>>>> Catalyst IT Limited
>>>> Level 6, Catalyst House, 150 Willis Street, Wellington
>>>> --------------------------------------------------------------------------
>>>>
>>>>
>>>
>>> --
>>> *Alfredo*
>>>
>>>
>>>
>>
>> --
>> *Alfredo*
>>
>>
>>
>
> --
> *Alfredo*
>
>

-- 
*Alfredo*