[magnum][openstack-ansible][k8s] kube_masters CREATE_FAILED
Satish Patel
satish.txt at gmail.com
Thu Aug 3 14:03:08 UTC 2023
Thank you for the replies, folks.
After trying many images, I settled on
fedora-coreos-31.20200517.3.0-openstack.x86_64.qcow2 for the Xena release.
Now everything is working fine. It looks like Magnum, OpenStack and Fedora
CoreOS all need to be aligned on specific versions.
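
For reference, this is roughly how I had registered the image (a sketch
from memory, so treat the Glance image name as illustrative; the os_distro
property is what Magnum uses to pick the right driver):

openstack image create fedora-coreos-31 \
  --disk-format qcow2 \
  --container-format bare \
  --property os_distro='fedora-coreos' \
  --file fedora-coreos-31.20200517.3.0-openstack.x86_64.qcow2
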
On Thu, Aug 3, 2023 at 3:51 AM Oliver Weinmann <oliver.weinmann at me.com>
wrote:
> Hi Satish,
>
> For me it is working fine when using the following template:
>
> openstack coe cluster template create k8s-flan-small-35-1.21.11 \
> --image Fedora-CoreOS-35 \
> --keypair mykey \
> --external-network ext-net \
> --dns-nameserver 8.8.8.8 \
> --flavor m1.small \
> --master-flavor m1.small \
> --volume-driver cinder \
> --docker-volume-size 10 \
> --network-driver flannel \
> --docker-storage-driver overlay2 \
> --coe kubernetes \
> --labels kube_tag=v1.21.11-rancher1,hyperkube_prefix=docker.io/rancher/
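>
> A cluster can then be created from that template along these lines (the
> cluster name and node counts here are just an example):
>
> openstack coe cluster create \
> --cluster-template k8s-flan-small-35-1.21.11 \
> --master-count 1 \
> --node-count 1 \
> --keypair mykey \
> my-test-cluster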
>
>
> I just recently deployed Antelope 2023.1 with Kolla-Ansible, and there
> Magnum works much better out of the box with the default settings. I have
> never managed to get containerd working in Yoga or Zed.
>
> You can find more info on my blog:
>
> https://www.roksblog.de/deploy-kubernetes-clusters-in-openstack-within-minutes-with-magnum/
>
> Cheers,
> Oliver
>
> Sent from my iPhone
>
> On 03.08.2023 at 02:04, Nguyễn Hữu Khôi <nguyenhuukhoinw at gmail.com> wrote:
>
>
> Hello Satish,
> You need to install k8s from tarballs by using the labels below. I think
> our Magnum is too old. Just my experience.
>
> containerd_tarball_url
> containerd_tarball_sha256
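>
> For example, something like this in the cluster template (the URL and
> checksum here are placeholders, not tested values; use the containerd
> release you actually want and its real sha256):
>
> --labels containerd_tarball_url=https://github.com/containerd/containerd/releases/download/v1.6.20/containerd-1.6.20-linux-amd64.tar.gz,containerd_tarball_sha256=<sha256 of that tarball>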
>
>
>
> Nguyen Huu Khoi
>
>
> On Wed, Aug 2, 2023 at 5:23 AM Satish Patel <satish.txt at gmail.com> wrote:
>
>> Hmm, what the heck is going on here. Wallaby? (I am running OpenStack
>> Xena. Am I using the wrong image?)
>>
>> [root@mycluster31-bw5yi3lzkw45-master-0 ~]# podman ps
>> CONTAINER ID  IMAGE                                                             COMMAND               CREATED         STATUS             PORTS  NAMES
>> e8b9a439194e  docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1  /usr/bin/start-he...  30 minutes ago  Up 30 minutes ago         heat-container-agent
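>>
>> If I understand it correctly, that wallaby-stable-1 tag comes from
>> Magnum's heat_container_agent_tag label rather than from the guest
>> image, so it can be pinned in the cluster template if needed (the value
>> below just mirrors what podman shows above; I have not verified it is
>> the right tag for Xena):
>>
>> --labels heat_container_agent_tag=wallaby-stable-1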
>>
>>
>>
>> On Tue, Aug 1, 2023 at 5:27 PM Satish Patel <satish.txt at gmail.com> wrote:
>>
>>> After some spelunking I found some error messages on the instance in
>>> journalctl. Why are the error logs showing podman?
>>>
>>> https://paste.opendev.org/show/bp1iEBV2meihZmRtH2M1/
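>>>
>>> On Fedora CoreOS the heat agent itself runs as a podman container,
>>> which I assume is why podman shows up in the journal. Something along
>>> these lines should pull its output directly, assuming the container
>>> and its systemd unit are both named heat-container-agent as above:
>>>
>>> sudo podman logs heat-container-agent
>>> sudo journalctl -u heat-container-agent --no-pager | tail -n 100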
>>>
>>> On Tue, Aug 1, 2023 at 5:20 PM Satish Patel <satish.txt at gmail.com>
>>> wrote:
>>>
>>>> Folks,
>>>>
>>>> I am running the Xena release with a fedora-coreos-31.X image. My
>>>> cluster always fails with kube_masters CREATE_FAILED.
>>>>
>>>> This is my template:
>>>>
>>>> openstack coe cluster template create \
>>>> --coe kubernetes \
>>>> --image "fedora-coreos-35.20220116" \
>>>> --flavor gen.medium \
>>>> --master-flavor gen.medium \
>>>> --docker-storage-driver overlay2 \
>>>> --keypair jmp1-key \
>>>> --external-network net_eng_vlan_39 \
>>>> --network-driver flannel \
>>>> --dns-nameserver 8.8.8.8 \
>>>> --labels="container_runtime=containerd,cinder_csi_enabled=false" \
>>>> --labels kube_tag=v1.21.11-rancher1,hyperkube_prefix=docker.io/rancher/ \
>>>> k8s-new-template-31
>>>>
>>>> Command to create cluster:
>>>>
>>>> openstack coe cluster create --cluster-template k8s-new-template-31
>>>> --master-count 1 --node-count 2 --keypair jmp1-key mycluster31
>>>>
>>>> Here is the output of heat stack
>>>>
>>>> [root@ostack-eng-osa images]# heat resource-list mycluster31-bw5yi3lzkw45
>>>> WARNING (shell) "heat resource-list" is deprecated, please use
>>>> "openstack stack resource list" instead
>>>>
>>>> | resource_name | physical_resource_id | resource_type | resource_status | updated_time |
>>>> | api_address_floating_switch | | Magnum::FloatingIPAddressSwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | api_address_lb_switch | | Magnum::ApiGatewaySwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | api_lb | 99e0f887-fbe2-4b2f-b3a1-b1834c9a21c2 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_api.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | etcd_address_lb_switch | | Magnum::ApiGatewaySwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | etcd_lb | d4ba15f3-8862-4f2b-a2cf-53eafd36d286 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_etcd.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | kube_cluster_config | | OS::Heat::SoftwareConfig | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | kube_cluster_deploy | | OS::Heat::SoftwareDeployment | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | kube_masters | 9ac8fc3e-a7d8-4eca-90c6-f66a8e0c43f0 | OS::Heat::ResourceGroup | CREATE_FAILED | 2023-08-01T20:55:49Z |
>>>> | kube_minions | | OS::Heat::ResourceGroup | INIT_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | master_nodes_server_group | 19c9b300-f655-4db4-b03e-ea1479c541db | OS::Nova::ServerGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | network | a908f229-fe8f-4ab8-b245-e8cf90c1b233 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/network.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | secgroup_kube_master | 79e6b233-1a18-48c4-8a4f-766819eb945f | OS::Neutron::SecurityGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | secgroup_kube_minion | 2a908ffb-15bf-45c5-adad-6930b0313e94 | OS::Neutron::SecurityGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | secgroup_rule_tcp_kube_minion | 95779e79-a8bc-4ed4-b035-fc21758bd241 | OS::Neutron::SecurityGroupRule | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | secgroup_rule_udp_kube_minion | 2a630b3e-51ca-4504-9013-353cbe7c581b | OS::Neutron::SecurityGroupRule | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>>>> | worker_nodes_server_group | d14b0630-95fa-46dc-81e3-2f90e62c7943 | OS::Nova::ServerGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z |
>>>>
>>>>
>>>> I can SSH into the instances, but I am not sure which logs I should be
>>>> chasing to find the actual issue. Any kind of help is appreciated.
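>>>>
>>>> For what it's worth, these are the places I have been poking at so far
>>>> (not sure these are even the right spots to look):
>>>>
>>>> sudo journalctl -u heat-container-agent --no-pager
>>>> sudo podman logs heat-container-agent
>>>> ls /var/log/heat-config/heat-config-script/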
>>>>
>>>>