[openstack-ansible][magnum]
Hi all. I finally installed OpenStack-Ansible (Queens) successfully, but after creating a cluster template and then creating a k8s cluster, it gets stuck on:

    kube_masters <https://10.1.8.113/project/stacks/stack/6221608c-e7f1-4d76-b694-cdd7ec22c386/kube_masters/>
    b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 <https://10.1.8.113/project/stacks/stack/b7204f0c-b9d8-4ef2-8f0b-afe4c077d039/>
    OS::Heat::ResourceGroup | 16 minutes | Create In Progress | state changed

It stays in "create in progress", and after around an hour it times out. The k8s master seems to be up... at least as a VM.
Any idea?
*Alfredo*
Hi all. Has anyone run into the same issue with Magnum? Cheers
On Mon, Jan 28, 2019 at 3:24 PM Alfredo De Luca alfredo.deluca@gmail.com wrote:
Yes, you should check the cloud-init logs of your master. Without having seen them, I would guess a network issue, or perhaps you have selected a flavor with swap for your minion nodes... So, the log files are the first step you could dig into...
Br c
Sent from my iPhone
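For example, checking both logs on the master (these are the standard cloud-init log locations; a minimal sketch):

    # Follow the user-data script output while the cluster bootstraps
    tail -f /var/log/cloud-init-output.log
    # The module-level log usually carries the warnings and stack traces
    less /var/log/cloud-init.log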
On 28.01.2019 at 15:34, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Thanks Clemens. I looked at cloud-init-output.log on the master... and at the moment it is doing the following:
    ++ curl --silent http://127.0.0.1:8080/healthz
    + '[' ok = '' ']'
    + sleep 5
    ++ curl --silent http://127.0.0.1:8080/healthz
    + '[' ok = '' ']'
    + sleep 5
    ++ curl --silent http://127.0.0.1:8080/healthz
    + '[' ok = '' ']'
    + sleep 5
Network... could be, but I'm not sure where to look.
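For context, the trace above matches a wait loop of roughly this shape (a sketch of the pattern, not the exact Magnum script):

    # Poll the kube-apiserver health endpoint until it answers "ok"
    while [ "ok" != "$(curl --silent http://127.0.0.1:8080/healthz)" ]; do
        sleep 5
    done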
On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < clemens.hardewig@crandale.de> wrote:
Yes, this means it is waiting for something... it will continue forever... Look at the last messages before this log sequence starts...
Sent from my iPhone
On 29.01.2019 at 12:08, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
At least on Fedora there is a second cloud-init log, as far as I remember. Look into both.
Br c
Sent from my iPhone
On 29.01.2019 at 12:08, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Hello Alfredo, is your external network using a proxy? If you are using a proxy and you configured it in the cluster template, you must set no_proxy for 127.0.0.1.
Ignazio
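If a proxy is in play, the cluster template is where it would be set; roughly like this (the template name, image, network, and proxy host are placeholders):

    openstack coe cluster template create k8s-template \
        --image fedora-atomic-latest \
        --coe kubernetes \
        --external-network public \
        --http-proxy http://proxy.example.com:3128 \
        --https-proxy http://proxy.example.com:3128 \
        --no-proxy 127.0.0.1,localhost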
On Tue, Jan 29, 2019 at 12:26 PM Clemens Hardewig <clemens.hardewig@crandale.de> wrote:
Hi Ignazio and Clemens. I haven't configured the proxy, and all the logs on the kube master keep saying the following:
    + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished
    [+]poststarthook/extensions/third-party-resources ok
    [-]poststarthook/rbac/bootstrap-roles failed: not finished
    healthz check failed' ']'
    + sleep 5
    ++ curl --silent http://127.0.0.1:8080/healthz
    + '[' ok = '' ']'
    + sleep 5
    ++ curl --silent http://127.0.0.1:8080/healthz
    + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished
    [+]poststarthook/extensions/third-party-resources ok
    [-]poststarthook/rbac/bootstrap-roles failed: not finished
    healthz check failed' ']'
    + sleep 5
Not sure what to do. My configuration is: eth0 - 10.1.8.113.
But the OpenStack configuration in terms of networking is the default from openstack-ansible, which is 172.29.236.100/22.
Maybe that's the problem?
On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano ignaziocassano@gmail.com wrote:
Hello, in OpenStack there are a lot of networks. Your 172.29.x network is probably the network where the OpenStack endpoints are exposed, right? If yes, that is not the network where the virtual machines are attached. In your OpenStack you must also have networks for virtual machines. When you create a Magnum cluster, you must specify an external network, which the virtual machines use to download packages from the internet and to be contacted. Magnum creates a private network (probably your 10.1.8 network) which is connected to the external network by a virtual router created by the Magnum Heat template. Try to look at your network topology in the OpenStack dashboard.
Ignazio
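The same topology can be inspected from the CLI, for example (resource names will differ per deployment; <router-name> is a placeholder):

    openstack network list
    openstack subnet list
    openstack router list
    # Which external network is the router gatewayed to?
    openstack router show <router-name> -c external_gateway_info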
On Tue, Jan 29, 2019 at 4:08 PM Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Well, as you only sent a small portion of the log, I can still only guess that the problem lies with your network config. As Ignazio said, the most straightforward way to install a k8s cluster based on the default template is to let Magnum create a new network and router for you. This is achieved by leaving the private network field empty in Horizon. It seems that your cluster has not started the basic Kubernetes services (etcd, kube-apiserver, kube-controller-manager, …) successfully.
I don't know how you started your cluster (CLI or Horizon), so perhaps you could share that.
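For reference, a CLI create that lets Magnum build the network and router itself looks roughly like this (cluster and template names are placeholders; note that no fixed network is passed):

    openstack coe cluster create my-k8s \
        --cluster-template k8s-fedora-atomic \
        --master-count 1 \
        --node-count 1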
Br c
On 29.01.2019 at 16:07, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
… and more important: check the other log, cloud-init.log, for error messages (not only cloud-init-output.log).
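A quick way to surface errors there (a sketch using the default log path):

    grep -iE 'warning|error|fail|traceback' /var/log/cloud-init.log | tail -n 40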
On 29.01.2019 at 16:07, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Hi Clemens and Ignazio, thanks for your support. It must be network related, but apparently I'm not doing anything special to create a simple k8s cluster. I'll post configurations and logs later on, as you suggested, Clemens.
Cheers
On Tue, Jan 29, 2019 at 9:16 PM Clemens clemens.hardewig@crandale.de wrote:
[image: image.png] In the meantime, this is my cluster template.
On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca alfredo.deluca@gmail.com wrote:
Here are also the cloud-init logs from the k8s master...
On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca alfredo.deluca@gmail.com wrote:
Read the cloud-init.log! There you can see that your /var/lib/.../part-011 part of the config script finishes with an error. Check why.
Sent from my iPhone
On 30.01.2019 at 10:11, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
I'm echoing Clemens's comments.
From cloud-init-output.log, you should be able to see the error below:
    Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 +0000. Up 76.51 seconds.
    2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-011 [1]
    + _prefix=docker.io/openstackmagnum/
    + atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable
    The docker daemon does not appear to be running.
    + systemctl start heat-container-agent
    Failed to start heat-container-agent.service: Unit heat-container-agent.service not found.
    2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-013 [5]
Then please go to /var/lib/cloud/instances/<instance_id>/scripts, find scripts 011 and 013, and run them manually to get the root cause. And feel free to pop into the #openstack-containers IRC channel.
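Running them manually would look roughly like this (a sketch; /var/lib/cloud/instance is the standard cloud-init symlink to the current instance directory):

    cd /var/lib/cloud/instance/scripts
    sudo bash -ex part-011    # -x traces each command, -e stops at the first failure
    sudo bash -ex part-013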
On 30/01/19 11:43 PM, Clemens Hardewig wrote:
Thanks Feilong, Clemens et al.
I'm going to have a look later today and see what I can do.
Just a question: Does the kube master need internet access to download stuff or not?
Cheers
On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang feilong@catalyst.net.nz wrote:
Yes, it needs internet access.
Ignazio
On Fri, Feb 1, 2019 at 1:20 PM Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Alfredo, if you configured your template to use floating IPs, you can connect to the master and check whether it can reach the internet.
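For example (the floating IP and login user are assumptions; Fedora Atomic images usually use the "fedora" user):

    ssh fedora@<master-floating-ip>
    # On the master:
    ping -c 3 8.8.8.8                              # raw IP connectivity
    curl -sI https://www.google.com | head -n 1    # name resolution + HTTPS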
On Fri, Feb 1, 2019 at 1:20 PM Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Hi Ignazio. I've already done that; that's why I can connect to the master. I can ping 8.8.8.8 or any other IP on the Internet, but not domain names such as google.com or yahoo.com: name resolution fails.
The server has neither dig nor nslookup, and I can't install them because of the name-resolution problem. I changed the repo domain name to an IP, but it's still the same issue...
[root@freddo-5oyez3ot5pxi-master-0 ~]# yum repolist
Fedora Modular 29 - x86_64    0.0 B/s |   0 B   00:20
Error: Failed to synchronize cache for repo 'fedora-modular'
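A few resolver checks that need no extra packages, assuming a standard Fedora userland (getent ships with glibc):

    cat /etc/resolv.conf      # which nameserver did the instance get from Neutron?
    getent hosts google.com   # resolves through the system resolver, no dig/nslookup needed
    python3 -c 'import socket; print(socket.gethostbyname("google.com"))'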
On Sat, Feb 2, 2019 at 9:38 AM Ignazio Cassano ignaziocassano@gmail.com wrote:
Well, it seems that the failure of part-013 has its root cause in the failure of part-011:
In part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore, the certificates for access to etcd are created; this is a prerequisite for any kind of access authorization maintained by etcd. The IP address config items require an appropriate definition in the instance metadata. If that definition is missing, Internet access fails and part-013 cannot install Docker either ...
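One way to see what metadata the instance actually received is the OpenStack-format endpoint of the same service (python3 -m json.tool just pretty-prints the response):

    curl -s http://169.254.169.254/openstack/latest/meta_data.json | python3 -m json.tool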
On Feb 1, 2019 at 10:20, Alfredo De Luca alfredo.deluca@gmail.com wrote:
Hi Clemens. Yes... you are right, but I'm not sure why the IPs are not correct:
if [ -z "${KUBE_NODE_IP}" ]; then KUBE_NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4) fi
sans="IP:${KUBE_NODE_IP}"
if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then KUBE_NODE_PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
I don't have that IP at all.
On Sat, Feb 2, 2019 at 2:20 PM Clemens clemens.hardewig@crandale.de wrote:
Hi Alfredo,
This is OpenStack basics: curl -s http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the metadata service at its special IP address 169.254.169.254 to obtain the local IP address; the second command fetches the public IP address. From here it looks like your network is not configured properly, so the metadata service does not answer those requests successfully. What happens if you execute the commands manually?
BR C
On Feb 2, 2019 at 17:18, Alfredo De Luca alfredo.deluca@gmail.com wrote:
[root@freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/local-ipv4
10.0.0.5
[root@freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/public-ipv4
172.29.249.112
172.29.249.112 is the floating IP, which I use to connect to the master.
On Sat, Feb 2, 2019 at 5:26 PM Clemens clemens.hardewig@crandale.de wrote:
So if I run part-013 I get the following:
[root@freddo-5oyez3ot5pxi-master-0 scripts]# ./part-013
+ _prefix=docker.io/openstackmagnum/
+ atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable
./part-013: line 8: atomic: command not found
+ systemctl start heat-container-agent
Failed to start heat-container-agent.service: Unit heat-container-agent.service not found.
On Sat, Feb 2, 2019 at 5:33 PM Alfredo De Luca alfredo.deluca@gmail.com wrote:
One thing after the other: first of all, part-011 needs to run successfully. Were your certificates created successfully? What is in /etc/kubernetes/certs? Or did you already run part-011 successfully?
On Feb 2, 2019 at 17:36, Alfredo De Luca alfredo.deluca@gmail.com wrote:
part-011 ran successfully:

+ set -o errexit
+ set -o nounset
+ set -o pipefail
+ '[' True == True ']'
+ exit 0

But what I think is wrong is the floating IP. It's not the IP that goes out to the Internet; that is eth0 on my machine, which has 10.1.8.113... anyway, here is the network image:
[image: image.png]
On Sat, Feb 2, 2019 at 5:47 PM Clemens clemens.hardewig@crandale.de wrote:
Nope, this looks OK: when a cluster is created, it creates a private network for you (in your case 10.0.0.0/24) and connects this network via a router to your public network. The floating IP is then assigned to your machine accordingly.
So, if your part-011 now runs OK, do you also have all the etcd certificates/keys in /etc/kubernetes/certs?
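A quick way to answer that; the exact file names depend on the Magnum version, so the names in the comment are typical examples rather than a fixed list:

    ls -l /etc/kubernetes/certs/
    # expect files along the lines of ca.crt, server.crt and server.key if part-011 created them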
On Feb 2, 2019 at 17:55, Alfredo De Luca alfredo.deluca@gmail.com wrote:
Now to the failure of your part-013: are you sure that you used the Glance image 'fedora-atomic-latest' and not some other Fedora image? Your error message below suggests that your image does not ship the 'atomic' tool …

+ _prefix=docker.io/openstackmagnum/
+ atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable
./part-013: line 8: atomic: command not found
+ systemctl start heat-container-agent
Failed to start heat-container-agent.service: Unit heat-container-agent.service not found.
On Feb 2, 2019 at 17:36, Alfredo De Luca alfredo.deluca@gmail.com wrote:
Hi Clemens. So the image I downloaded is this: https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-2... which I think is the latest. But you are right, and I noticed that too: it doesn't have the atomic binary. The os-release is:
NAME=Fedora
VERSION="29 (Cloud Edition)"
ID=fedora
VERSION_ID=29
PLATFORM_ID="platform:f29"
PRETTY_NAME="Fedora 29 (Cloud Edition)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:29"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=29
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=29
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Cloud Edition"
VARIANT_ID=cloud
So I'm not sure why I don't have atomic, though.
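That VARIANT_ID=cloud line is the giveaway: this is Fedora Cloud Edition, not Fedora Atomic Host. A quick way to tell the two apart on any booted image (the expected Atomic values are an assumption based on Fedora's os-release conventions):

    grep '^VARIANT' /etc/os-release   # Atomic Host shows VARIANT="Atomic Host", VARIANT_ID=atomic
    command -v atomic rpm-ostree      # both present on Atomic Host, missing on Cloud Edition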
On Sat, Feb 2, 2019 at 7:53 PM Clemens clemens.hardewig@crandale.de wrote:
I used fedora-magnum-27-4 and it works
On Mon, Feb 4, 2019 at 9:42 AM Alfredo De Luca alfredo.deluca@gmail.com wrote:
Thanks Ignazio. Where can I get it from?
On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano ignaziocassano@gmail.com wrote:
I used fedora-magnum-27-4 and it works
-- Alfredo
wget https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20...
On Mon, Feb 4, 2019 at 12:39, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Thanks Ignazio. Where can I get it from?
Then upload it with:
openstack image create \
  --disk-format=qcow2 \
  --container-format=bare \
  --file=Fedora-Atomic-27-20180212.2.x86_64.qcow2 \
  --property os_distro='fedora-atomic' \
  fedora-atomic-latest
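To confirm the property landed before pointing a cluster template at it, something like this should do (same image name as above):

    openstack image show -c properties -c status fedora-atomic-latest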
I also suggest changing the DNS in the external network used by Magnum. Using the OpenStack dashboard you can change it to 8.8.8.8 (if I remember correctly, you wrote that you can ping 8.8.8.8 from the kube master).
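From the CLI this would be roughly the following (a sketch; 'external-subnet' is a placeholder for whatever subnet your external network actually uses):

    # Append 8.8.8.8 to the subnet's DNS servers; Neutron hands it out via DHCP
    openstack subnet set --dns-nameserver 8.8.8.8 external-subnet

Existing instances typically only pick the change up on DHCP lease renewal or reboot.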
Hi Ignazio. Thanks for the link... so:
Now at least atomic is present on the system. Also, I already had 8.8.8.8 on the system. So I can connect over the floating IP to the kube master; then I can ping 8.8.8.8:

[root@my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=12.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=118 time=12.2 ms
but if I ping google.com it doesn't resolve. I can't find dig or nslookup on Fedora either to check. resolv.conf has:

search openstacklocal my-last-wdikr74tynij-master-0.novalocal
nameserver 8.8.8.8
It's all so weird.
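Even without dig or nslookup, the resolver can still be exercised with tools that should already be on the image (a sketch; python3 being present is an assumption):

    # getent goes through the glibc resolver, i.e. it honours /etc/resolv.conf
    getent hosts google.com

    # alternatively, if python3 is available
    python3 -c 'import socket; print(socket.gethostbyname("google.com"))'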
-- Alfredo
Alfredo, try to check the security group linked to your kube master.
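Roughly like this (a sketch; the server name is taken from your earlier shell prompt and may differ):

    # Which security groups is the master using?
    openstack server show my-last-wdikr74tynij-master-0 -c security_groups

    # Then inspect the rules of each group it lists
    openstack security group rule list <group-name>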
Hi Ignazio. Sorry for the late reply. The security group is fine; it's not blocking the network traffic.
Not sure why, but with this Fedora release I can finally find atomic, yet there is no yum, nslookup, dig, or host command... why is it so different from the other version (latest), which had yum but not atomic?
It's all weird
Cheers
On Mon, Feb 4, 2019 at 5:46 PM Ignazio Cassano ignaziocassano@gmail.com wrote:
Alfredo, try to check the security group linked to your kube master.
-- Alfredo
Alfredo, it is very strange that you can ping 8.8.8.8 but cannot resolve names. I think the atomic command uses names to finish the master installation. Curl is installed on the master...
Hi Ignazio. Unfortunately it doesn't resolve with either ping or curl... but what is also strange is that it doesn't have yum or dnf or any installer... unless it only uses atomic...
I think in the end it's an issue with the network, as I found out my all-in-one deployment doesn't have the br-ex bridge, which is supposed to be the external network interface.
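A quick way to check that on the host (a sketch; whether br-ex should exist at all depends on which Neutron driver openstack-ansible configured, since Linux bridge deployments name their bridges differently):

    # Does the external bridge exist at all?
    ip link show br-ex

    # If Neutron uses Open vSwitch, list all of its bridges
    ovs-vsctl list-br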
I installed OpenStack with openstack-ansible.
Cheers
-- Alfredo
Hi Alfredo, I know some utilities are not installed in the Fedora image, but on my installation that is not a problem. As you wrote, there are some issues with networking. I've never used openstack-ansible, so I cannot help you. I am sorry. Ignazio
OK - and your floating IP 172.29.249.112 has access to the internet?
On 02.02.2019 at 17:33, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
[root@freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/local-ipv4
10.0.0.5
[root@freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/public-ipv4
172.29.249.112
172.29.249.112 is the floating IP, which I use to connect to the master.
On Sat, Feb 2, 2019 at 5:26 PM Clemens <clemens.hardewig@crandale.de> wrote:
Hi Alfredo,
This is OpenStack basics: curl -s http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the metadata service at its special IP address 169.254.169.254 to obtain the local IP address; the second one gets the public IP address. From remote it looks like your network is not properly configured, so this information is not being answered by the metadata service successfully. What happens if you execute those commands manually?
BR C
On 02.02.2019 at 17:18, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Hi Clemens. Yes... you are right, but I'm not sure why the IPs are not correct:
if [ -z "${KUBE_NODE_IP}" ]; then
    KUBE_NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
fi

sans="IP:${KUBE_NODE_IP}"

if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then
    KUBE_NODE_PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
fi
I don't have that IP at all.
participants (5)

- Alfredo De Luca
- Clemens
- Clemens Hardewig
- Feilong Wang
- Ignazio Cassano