Hi all,

I have changed cert_manager_type from x509keypair to barbican and enabled TLS, and I have a cluster running now. Unfortunately, `openstack coe cluster list` shows an unhealthy cluster :(

Jake, running `openstack stack list -n5 <stack_id>` gives an error; `openstack stack list` doesn't seem to take any extra arguments. Yes, I can SSH to the nodes; I'll check the kube* services you mentioned.

Thanks,
Jaime

On 23/04/2024 15:48, Jake Yip wrote:
Hi Jaime,
You can check whether all the resources in the heat stack were created properly.
$ openstack coe cluster show <cluster>
to get the stack ID, then
$ openstack stack list -n5 <stack_id>
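A sketch of those two steps chained together, assuming an authenticated OpenStack CLI; the cluster name is a placeholder, and note that it is `openstack stack resource list` (with `-n`/`--nested-depth`) that walks a stack's resources, which may be the source of the error reported above:

```shell
#!/bin/sh
CLUSTER=my-k8s-cluster   # placeholder: your cluster name

# Get the heat stack backing the cluster.
STACK_ID=$(openstack coe cluster show "$CLUSTER" -f value -c stack_id)

# Recursively list all resources in the stack and its nested stacks.
openstack stack resource list -n 5 "$STACK_ID"

# Resources that are not CREATE_COMPLETE point at the failing step.
openstack stack resource list -n 5 "$STACK_ID" -f value \
    -c resource_name -c resource_status | grep -v CREATE_COMPLETE
```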
If you can SSH to the nodes, check whether the services came up with `systemctl`; kubelet, kube-apiserver, etc. should be up.
You can also check out the heat log on worker(s), /var/log/heat-config/*
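A sketch of those checks on a node; the exact unit names and log layout are assumptions and vary by Magnum driver and node role (workers typically run only kubelet):

```shell
#!/bin/sh
# Control-plane services on a master node (workers: usually just kubelet).
sudo systemctl --no-pager status kubelet kube-apiserver \
    kube-controller-manager kube-scheduler

# Recent kubelet logs often name the failing dependency.
sudo journalctl -u kubelet --no-pager -n 50

# Heat agent logs: one file per deployment script, with its full output.
sudo tail -n 100 /var/log/heat-config/heat-config-script/*
```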
Regards, Jake
On 23/4/2024 3:45 am, Jaime Ibar wrote:
Hi all,
I have installed magnum with no issues. After the installation I tried to deploy a Kubernetes cluster, but after some time the creation process times out, and this always happens during the kube_cluster_deploy stage. I'm running 17.0.0 (Bobcat), so I tried Fedora CoreOS fedora-coreos-38.20230806.3.0 and kube_tag v1.26.8-rancher1 as stated in the documentation, but no joy. I also tried different combinations of Fedora CoreOS and kube_tag, but same, no joy.

I can SSH into the VMs and podman exec into the containers, but if I run `kubectl version` I get the following error:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

and tailing /var/log/heat-config/heat-config-script/ I get the following message:

++ kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Any idea what might be the problem?
TIA Jaime
-- salu2
Jaime