Well - as you only sent a small portion of the log, I can still only guess that the problem lies with your network config. As Ignazio said, the most straightforward way to install a K8s cluster based on the default template is to let Magnum create a new network and router for you. This is achieved by leaving the private network field empty in Horizon. It seems that your cluster has not started the basic Kubernetes services (etcd, kube-apiserver, kube-controller-manager, …) successfully. I don't know how you started your cluster (CLI or Horizon), so perhaps you could share that. Br c
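If you are creating the cluster from the CLI, the equivalent of leaving the private network field empty is simply not passing a fixed network. A minimal sketch (the cluster and template names are placeholders, not from this thread):

```shell
# Let Magnum create the private network and router itself:
# do NOT pass --fixed-network / --fixed-subnet here.
openstack coe cluster create my-k8s-cluster \
  --cluster-template my-k8s-template \
  --master-count 1 \
  --node-count 1
```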
On 29.01.2019 at 16:07, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Hi Ignazio and Clemens. I haven't configured the proxy, and all the logs on the kube master keep saying the following:
+ '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished
[+]poststarthook/extensions/third-party-resources ok
[-]poststarthook/rbac/bootstrap-roles failed: not finished
healthz check failed' ']'
+ sleep 5
++ curl --silent http://127.0.0.1:8080/healthz
+ '[' ok = '' ']'
+ sleep 5
++ curl --silent http://127.0.0.1:8080/healthz
+ '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished
[+]poststarthook/extensions/third-party-resources ok
[-]poststarthook/rbac/bootstrap-roles failed: not finished
healthz check failed' ']'
+ sleep 5
Not sure what to do. My configuration is: eth0 - 10.1.8.113
But the OpenStack configuration in terms of networking is the default from openstack-ansible, which is 172.29.236.100/22
Maybe that's the problem?
On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote: Hello Alfredo, is your external network using a proxy? If you are using a proxy and you configured it in the cluster template, you must set no_proxy for 127.0.0.1. Ignazio
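For reference, a sketch of how proxy settings are passed when creating a cluster template (all values below are placeholders, and this only applies if your site actually sits behind a proxy):

```shell
# Proxy settings live in the cluster template; note 127.0.0.1
# and localhost in --no-proxy so the healthz check is not proxied.
openstack coe cluster template create k8s-template-proxy \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --external-network public \
  --http-proxy http://proxy.example.com:3128 \
  --https-proxy http://proxy.example.com:3128 \
  --no-proxy 127.0.0.1,localhost
```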
On Tue, 29 Jan 2019 at 12:26, Clemens Hardewig <clemens.hardewig@crandale.de> wrote: At least on Fedora there is a second cloud-init log, as far as I remember. Look into both.
Br c
Sent from my iPhone
On 29.01.2019 at 12:08, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
thanks Clemens. I looked at the cloud-init-output.log on the master... and at the moment it is doing the following....
++ curl --silent http://127.0.0.1:8080/healthz
+ '[' ok = '' ']'
+ sleep 5
++ curl --silent http://127.0.0.1:8080/healthz
+ '[' ok = '' ']'
+ sleep 5
++ curl --silent http://127.0.0.1:8080/healthz
+ '[' ok = '' ']'
+ sleep 5
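That loop is just the setup script polling the apiserver's /healthz endpoint until it reports "ok". To see why it never becomes healthy, you could log into the master and check the services directly; a sketch, assuming the systemd unit names used on the Fedora Atomic Magnum images:

```shell
# On the kube master (unit names assumed from Fedora Atomic images):
sudo systemctl status etcd kube-apiserver kube-controller-manager
# Recent apiserver logs usually show the underlying error:
sudo journalctl -u kube-apiserver --no-pager -n 50
# The same health check the wait loop performs:
curl --silent http://127.0.0.1:8080/healthz
```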
Network issue... could be, but I'm not sure where to look.
On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig <clemens.hardewig@crandale.de> wrote: Yes, you should check the cloud-init logs of your master. Without having seen them, I would guess a network issue, or perhaps you have selected a flavor using swap for your minion nodes... So, log files are the first step you could dig into... Br c Sent from my iPhone
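On a Fedora master there are typically two cloud-init logs worth reading (the paths below are the usual defaults; adjust if your image differs):

```shell
# Module-level log of what cloud-init itself did:
less /var/log/cloud-init.log
# Captured stdout/stderr of the user-data scripts
# (the healthz wait loops show up here):
less /var/log/cloud-init-output.log
```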
On 28.01.2019 at 15:34, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Hi all. I finally installed OpenStack-Ansible (Queens) successfully, but after creating a cluster template and then a k8s cluster, it gets stuck on
kube_masters <https://10.1.8.113/project/stacks/stack/6221608c-e7f1-4d76-b694-cdd7ec22c386/kube_masters/> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 <https://10.1.8.113/project/stacks/stack/b7204f0c-b9d8-4ef2-8f0b-afe4c077d039/> OS::Heat::ResourceGroup 16 minutes Create In Progress state changed
...and after around an hour it times out. The k8s master seems to be up... at least as a VM.
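One way to find out which resource inside kube_masters is actually failing is to walk the Heat stack from the CLI; a sketch, where the stack name/ID is a placeholder for the one shown in Horizon:

```shell
# List nested resources and their states:
openstack stack resource list --nested-depth 2 my-k8s-cluster-stack
# Show the failed resource(s) and their error messages:
openstack stack failures list my-k8s-cluster-stack
```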
any idea?
Alfredo
-- Alfredo