<div dir="ltr">After some spelunking I found some error messages on instance in journalctl. Why error logs showing podman? <div><br></div><div><a href="https://paste.opendev.org/show/bp1iEBV2meihZmRtH2M1/">https://paste.opendev.org/show/bp1iEBV2meihZmRtH2M1/</a> <br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Aug 1, 2023 at 5:20 PM Satish Patel <<a href="mailto:satish.txt@gmail.com">satish.txt@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Folks,<br><div><br></div><div>I am running the Xena release and fedora-coreos-31.X image. My cluster is always throwing an error kube_masters CREATE_FAILED. </div><div><br></div><div>This is my template:</div><div><br></div><div>openstack coe cluster template create --coe kubernetes --image "fedora-coreos-35.20220116" --flavor gen.medium --master-flavor gen.medium --docker-storage-driver overlay2 --keypair jmp1-key --external-network net_eng_vlan_39 --network-driver flannel --dns-nameserver 8.8.8.8 --labels="container_runtime=containerd,cinder_csi_enabled=false" --labels kube_tag=v1.21.11-rancher1,hyperkube_prefix=<a href="http://docker.io/rancher/" target="_blank">docker.io/rancher/</a> k8s-new-template-31<br></div><div><br></div><div>Command to create cluster:</div><div><br></div><div>openstack coe cluster create --cluster-template k8s-new-template-31 --master-count 1 --node-count 2 --keypair jmp1-key mycluster31<br></div><div><br></div><div>Here is the output of heat stack </div><div><br></div><div>[root@ostack-eng-osa images]# heat resource-list mycluster31-bw5yi3lzkw45<br>WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead<br>+-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+<br>| resource_name | physical_resource_id | resource_type | resource_status | updated_time |<br>+-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+<br>| api_address_floating_switch | | Magnum::FloatingIPAddressSwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |<br>| api_address_lb_switch | | Magnum::ApiGatewaySwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |<br>| api_lb | 99e0f887-fbe2-4b2f-b3a1-b1834c9a21c2 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_api.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |<br>| etcd_address_lb_switch | | Magnum::ApiGatewaySwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |<br>| etcd_lb | d4ba15f3-8862-4f2b-a2cf-53eafd36d286 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_etcd.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |<br>| kube_cluster_config | | OS::Heat::SoftwareConfig | INIT_COMPLETE | 2023-08-01T20:55:49Z |<br>| kube_cluster_deploy | | OS::Heat::SoftwareDeployment | INIT_COMPLETE | 2023-08-01T20:55:49Z |<br>| kube_masters | 9ac8fc3e-a7d8-4eca-90c6-f66a8e0c43f0 | OS::Heat::ResourceGroup | CREATE_FAILED | 2023-08-01T20:55:49Z |<br>| kube_minions | | OS::Heat::ResourceGroup | INIT_COMPLETE | 2023-08-01T20:55:49Z |<br>| master_nodes_server_group | 19c9b300-f655-4db4-b03e-ea1479c541db | 
On Tue, Aug 1, 2023 at 5:20 PM Satish Patel <satish.txt@gmail.com> wrote:

> Folks,
>
> I am running the Xena release with the fedora-coreos-31.X image. My
> cluster creation always fails with kube_masters in CREATE_FAILED.
>
> This is my template:
>
> openstack coe cluster template create --coe kubernetes --image "fedora-coreos-35.20220116" --flavor gen.medium --master-flavor gen.medium --docker-storage-driver overlay2 --keypair jmp1-key --external-network net_eng_vlan_39 --network-driver flannel --dns-nameserver 8.8.8.8 --labels="container_runtime=containerd,cinder_csi_enabled=false" --labels kube_tag=v1.21.11-rancher1,hyperkube_prefix=docker.io/rancher/ k8s-new-template-31
>
> Command to create the cluster:
>
> openstack coe cluster create --cluster-template k8s-new-template-31 --master-count 1 --node-count 2 --keypair jmp1-key mycluster31
>
> Here is the output of the heat stack:
>
> [root@ostack-eng-osa images]# heat resource-list mycluster31-bw5yi3lzkw45
> WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead
> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+
> | resource_name | physical_resource_id | resource_type | resource_status | updated_time |
> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+
> | api_address_floating_switch | | Magnum::FloatingIPAddressSwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |
> | api_address_lb_switch | | Magnum::ApiGatewaySwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |
> | api_lb | 99e0f887-fbe2-4b2f-b3a1-b1834c9a21c2 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_api.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
> | etcd_address_lb_switch | | Magnum::ApiGatewaySwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |
> | etcd_lb | d4ba15f3-8862-4f2b-a2cf-53eafd36d286 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_etcd.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
> | kube_cluster_config | | OS::Heat::SoftwareConfig | INIT_COMPLETE | 2023-08-01T20:55:49Z |
> | kube_cluster_deploy | | OS::Heat::SoftwareDeployment | INIT_COMPLETE | 2023-08-01T20:55:49Z |
> | kube_masters | 9ac8fc3e-a7d8-4eca-90c6-f66a8e0c43f0 | OS::Heat::ResourceGroup | CREATE_FAILED | 2023-08-01T20:55:49Z |
> | kube_minions | | OS::Heat::ResourceGroup | INIT_COMPLETE | 2023-08-01T20:55:49Z |
> | master_nodes_server_group | 19c9b300-f655-4db4-b03e-ea1479c541db | OS::Nova::ServerGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
> | network | a908f229-fe8f-4ab8-b245-e8cf90c1b233 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/network.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
> | secgroup_kube_master | 79e6b233-1a18-48c4-8a4f-766819eb945f | OS::Neutron::SecurityGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
> | secgroup_kube_minion | 2a908ffb-15bf-45c5-adad-6930b0313e94 | OS::Neutron::SecurityGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
> | secgroup_rule_tcp_kube_minion | 95779e79-a8bc-4ed4-b035-fc21758bd241 | OS::Neutron::SecurityGroupRule | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
> | secgroup_rule_udp_kube_minion | 2a630b3e-51ca-4504-9013-353cbe7c581b | OS::Neutron::SecurityGroupRule | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
> | worker_nodes_server_group | d14b0630-95fa-46dc-81e3-2f90e62c7943 | OS::Nova::ServerGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+
>
> I can ssh into the instances, but I am not sure which logs I should be
> chasing to find the real issue. Any kind of help is appreciated.
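For anyone picking this up, the next step I have in mind is to drill into the failed kube_masters resource group with heat itself (a sketch; the stack name comes from the listing above, and the nested depth of 5 is just a guess at how deep Magnum's nested stacks go):

# show the error message attached to the failed nested resource
openstack stack failures list --long mycluster31-bw5yi3lzkw45

# walk the nested stacks and list only failed resources
openstack stack resource list -n 5 mycluster31-bw5yi3lzkw45 | grep -i failed

# Magnum's own view of the failure
openstack coe cluster show mycluster31 -c status -c status_reason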
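Related, since the template create passes --labels twice, it may also be worth confirming the two sets merged rather than one overriding the other (a sketch; -c labels just narrows the output to the labels field):

openstack coe cluster template show k8s-new-template-31 -c labels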