[magnum][openstack-ansible][k8s] kube_masters CREATE_FAILED
Satish Patel
satish.txt at gmail.com
Tue Aug 1 21:30:17 UTC 2023
Hmm, what the heck is going on here? Wallaby? (I am running OpenStack Xena;
am I using the wrong image?)
[root@mycluster31-bw5yi3lzkw45-master-0 ~]# podman ps
CONTAINER ID  IMAGE                                                            COMMAND               CREATED         STATUS             PORTS  NAMES
e8b9a439194e  docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1  /usr/bin/start-he...  30 minutes ago  Up 30 minutes ago         heat-container-agent
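Since all of the node provisioning on Fedora CoreOS Magnum nodes is driven by that heat-container-agent container, its own logs are usually the first thing to read. A minimal sketch, assuming ssh access to the master and the container name shown in the `podman ps` output above (the tag itself comes from the `heat_container_agent_tag` label, which by default can lag the cloud release, so wallaby-stable-1 on a Xena cloud is not necessarily a mismatch on its own):

```shell
# Container name taken from the 'podman ps' output above.
AGENT=heat-container-agent

# Tail the agent's logs; guarded so this is a no-op on machines
# without podman installed.
if command -v podman >/dev/null 2>&1; then
  sudo podman logs --tail 200 "$AGENT"
fi
```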
On Tue, Aug 1, 2023 at 5:27 PM Satish Patel <satish.txt at gmail.com> wrote:
> After some spelunking I found some error messages on the instance in
> journalctl. Why are the error logs showing podman?
>
> https://paste.opendev.org/show/bp1iEBV2meihZmRtH2M1/
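Rather than spelunking journalctl on the instance, the failure reason can usually be pulled straight out of Magnum/Heat by walking the nested stacks. A sketch, using the cluster and stack names from this thread (substitute your own; the commands are guarded so the snippet is a no-op without an OpenStack CLI and cloud):

```shell
# Names taken from this thread; replace with your own.
CLUSTER=mycluster31
STACK=mycluster31-bw5yi3lzkw45

if command -v openstack >/dev/null 2>&1; then
  # Magnum records per-node-group failure reasons in the 'faults' field.
  openstack coe cluster show "$CLUSTER" -c status -c faults

  # Walk the nested Heat stacks and keep only the failed resources.
  openstack stack resource list -n 5 "$STACK" --filter status=CREATE_FAILED

  # Ask Heat directly why kube_masters failed.
  openstack stack resource show "$STACK" kube_masters -c resource_status_reason
fi
```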
>
> On Tue, Aug 1, 2023 at 5:20 PM Satish Patel <satish.txt at gmail.com> wrote:
>
>> Folks,
>>
>> I am running the Xena release and the fedora-coreos-31.X image. My cluster
>> always fails with kube_masters CREATE_FAILED.
>>
>> This is my template:
>>
>> openstack coe cluster template create \
>>   --coe kubernetes \
>>   --image "fedora-coreos-35.20220116" \
>>   --flavor gen.medium \
>>   --master-flavor gen.medium \
>>   --docker-storage-driver overlay2 \
>>   --keypair jmp1-key \
>>   --external-network net_eng_vlan_39 \
>>   --network-driver flannel \
>>   --dns-nameserver 8.8.8.8 \
>>   --labels "container_runtime=containerd,cinder_csi_enabled=false" \
>>   --labels kube_tag=v1.21.11-rancher1,hyperkube_prefix=docker.io/rancher/ \
>>   k8s-new-template-31
>>
>> Command to create cluster:
>>
>> openstack coe cluster create --cluster-template k8s-new-template-31 \
>>   --master-count 1 --node-count 2 --keypair jmp1-key mycluster31
>>
>> Here is the output of the Heat stack resource list:
>>
>> [root@ostack-eng-osa images]# heat resource-list mycluster31-bw5yi3lzkw45
>> WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead
>>
>> | resource_name                 | physical_resource_id                 | resource_type | resource_status | updated_time |
>> |-------------------------------+--------------------------------------+---------------+-----------------+--------------|
>> | api_address_floating_switch   |                                      | Magnum::FloatingIPAddressSwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>> | api_address_lb_switch         |                                      | Magnum::ApiGatewaySwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>> | api_lb                        | 99e0f887-fbe2-4b2f-b3a1-b1834c9a21c2 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_api.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>> | etcd_address_lb_switch        |                                      | Magnum::ApiGatewaySwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>> | etcd_lb                       | d4ba15f3-8862-4f2b-a2cf-53eafd36d286 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_etcd.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>> | kube_cluster_config           |                                      | OS::Heat::SoftwareConfig | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>> | kube_cluster_deploy           |                                      | OS::Heat::SoftwareDeployment | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>> | kube_masters                  | 9ac8fc3e-a7d8-4eca-90c6-f66a8e0c43f0 | OS::Heat::ResourceGroup | CREATE_FAILED | 2023-08-01T20:55:49Z |
>> | kube_minions                  |                                      | OS::Heat::ResourceGroup | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>> | master_nodes_server_group     | 19c9b300-f655-4db4-b03e-ea1479c541db | OS::Nova::ServerGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>> | network                       | a908f229-fe8f-4ab8-b245-e8cf90c1b233 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/network.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>> | secgroup_kube_master          | 79e6b233-1a18-48c4-8a4f-766819eb945f | OS::Neutron::SecurityGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>> | secgroup_kube_minion          | 2a908ffb-15bf-45c5-adad-6930b0313e94 | OS::Neutron::SecurityGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>> | secgroup_rule_tcp_kube_minion | 95779e79-a8bc-4ed4-b035-fc21758bd241 | OS::Neutron::SecurityGroupRule | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>> | secgroup_rule_udp_kube_minion | 2a630b3e-51ca-4504-9013-353cbe7c581b | OS::Neutron::SecurityGroupRule | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>> | worker_nodes_server_group     | d14b0630-95fa-46dc-81e3-2f90e62c7943 | OS::Nova::ServerGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>>
>>
>> I can ssh into the instance but am not sure which logs I should be
>> chasing to find the actual issue. Any help is appreciated.
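On Fedora CoreOS Magnum nodes the deployment scripts log to a couple of predictable places. A sketch of where to look once ssh'd in; the paths are the conventional heat-agent locations and may differ slightly between driver versions, so treat them as assumptions:

```shell
# Heat software-deployment scripts usually log per-deployment here.
for f in /var/log/heat-config/heat-config-script/*.log; do
  [ -e "$f" ] || continue            # skip silently if the path is absent
  echo "== $f =="
  grep -inE 'error|fail' "$f" | tail -n 20
done

# The agent also runs as a systemd unit on these images; read its journal
# (run as root on the node; errors are silenced here if unprivileged).
if command -v journalctl >/dev/null 2>&1; then
  journalctl -u heat-container-agent --no-pager 2>/dev/null | tail -n 50
fi
```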
>>
>>
More information about the openstack-discuss mailing list