Hello, in OpenStack there are a lot of networks. Your 172.29.x.x network is probably the network where the OpenStack endpoints are exposed, right? If so, that is not the network the virtual machines are attached to; your OpenStack must also have networks for virtual machines. When you create a Magnum cluster you must specify an external network that the virtual machines use to download packages from the internet and to be reachable. Magnum creates a private network (probably your 10.1.8.x network) which is connected to the external network by a virtual router created by the Magnum Heat template. Try to look at your network topology in the OpenStack dashboard.
Ignazio
On Tue, 29 Jan 2019 at 16:08, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
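A rough way to check the same topology from the CLI, assuming python-openstackclient and the Magnum plugin are available (the router and template names below are only placeholders):

    # List networks and routers to see what Magnum created
    openstack network list
    openstack router list

    # Inspect the router that should connect the 10.1.8.x private
    # network to the external network (name is a placeholder)
    openstack router show k8s-cluster-router

    # Check which external network the cluster template points at
    openstack coe cluster template show k8s-template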
Hi Ignazio and Clemens. I haven't configured a proxy, and the logs on the kube master keep saying the following:
    + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished
    [+]poststarthook/extensions/third-party-resources ok
    [-]poststarthook/rbac/bootstrap-roles failed: not finished
    healthz check failed' ']'
    + sleep 5
    ++ curl --silent http://127.0.0.1:8080/healthz
    + '[' ok = '' ']'
    + sleep 5
    ++ curl --silent http://127.0.0.1:8080/healthz
    + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished
    [+]poststarthook/extensions/third-party-resources ok
    [-]poststarthook/rbac/bootstrap-roles failed: not finished
    healthz check failed' ']'
    + sleep 5
Not sure what to do. My configuration is ... eth0 - 10.1.8.113
But the OpenStack configuration in terms of networking is the default from openstack-ansible, which is 172.29.236.100/22.
Maybe that's the problem?
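For what it's worth, a hedged way to poke at that loop directly on the master (assuming the same insecure 127.0.0.1:8080 endpoint the script polls, and that the API server runs as a systemd unit on this image) would be:

    # Ask the API server which post-start hooks are still failing
    curl --silent http://127.0.0.1:8080/healthz

    # If the API server runs as a systemd unit on this image, its logs
    # may say why the bootstrap hooks never finish (unit name may differ)
    sudo journalctl -u kube-apiserver --no-pager | tail -n 50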
On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote:
Hello Alfredo, does your external network use a proxy? If you are using a proxy and you configured it in the cluster template, you must set no_proxy for 127.0.0.1.
Ignazio
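If a proxy is in play, a hedged sketch of how those settings look on a Magnum cluster template (template name, image, network and proxy values below are placeholders) is:

    # Proxy settings on a Magnum cluster template (placeholder values)
    openstack coe cluster template create k8s-template \
        --coe kubernetes \
        --image fedora-atomic-latest \
        --external-network public \
        --http-proxy http://proxy.example.com:3128 \
        --https-proxy http://proxy.example.com:3128 \
        --no-proxy 127.0.0.1,localhost,10.1.8.113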
On Tue, 29 Jan 2019 at 12:26, Clemens Hardewig <clemens.hardewig@crandale.de> wrote:
At least on Fedora there is a second cloud-init log, as far as I remember. Look into both.
Br c
Sent from my iPhone
On 29 Jan 2019 at 12:08, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Thanks Clemens. I looked at the cloud-init-output.log on the master... and at the moment it is doing the following:
    ++ curl --silent http://127.0.0.1:8080/healthz
    + '[' ok = '' ']'
    + sleep 5
    ++ curl --silent http://127.0.0.1:8080/healthz
    + '[' ok = '' ']'
    + sleep 5
    ++ curl --silent http://127.0.0.1:8080/healthz
    + '[' ok = '' ']'
    + sleep 5
Network... could be, but I'm not sure where to look.
On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig <clemens.hardewig@crandale.de> wrote:
Yes, you should check the cloud-init logs of your master. Without having seen them, I would guess a network issue, or perhaps you have selected a flavor that uses swap for your minion nodes... So, the log files are the first step you could dig into.
Br c
Sent from my iPhone
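A minimal sketch of that first step, assuming a Fedora Atomic based image (whose default login user is 'fedora'; the master IP below is a placeholder):

    # Find the master VM and log in (IP is a placeholder)
    openstack server list
    ssh fedora@<master-ip>

    # Fedora keeps two cloud-init logs; check both
    sudo less /var/log/cloud-init.log
    sudo less /var/log/cloud-init-output.log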
On 28 Jan 2019 at 15:34, Alfredo De Luca <alfredo.deluca@gmail.com> wrote:
Hi all. I finally installed openstack-ansible (Queens) successfully, but after creating a cluster template and then creating a k8s cluster, it got stuck on
kube_masters <https://10.1.8.113/project/stacks/stack/6221608c-e7f1-4d76-b694-cdd7ec22c386/kube_masters/> | b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 <https://10.1.8.113/project/stacks/stack/b7204f0c-b9d8-4ef2-8f0b-afe4c077d039/> | OS::Heat::ResourceGroup | 16 minutes | Create In Progress | state changed
Create stays in progress... and after around an hour it says... time out. The k8s master seems to be up... at least as a VM.
any idea?
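A hedged way to see which nested Heat resource is actually failing, rather than waiting for the timeout (the cluster name below is a placeholder; the stack id is the one from the dashboard URL above):

    # Cluster status and reason as Magnum sees it (name is a placeholder)
    openstack coe cluster show k8s-cluster

    # Drill into the Heat stack behind it
    openstack stack resource list -n 2 6221608c-e7f1-4d76-b694-cdd7ec22c386
    openstack stack failures list 6221608c-e7f1-4d76-b694-cdd7ec22c386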
*Alfredo*
-- *Alfredo*
-- *Alfredo*