K8s 1.24 needs containerd, so you have to add an additional label. You also need to change the hyperkube prefix.
labels:
container_runtime: containerd
kube_tag: v1.24.16-rancher1
The following template works just fine for me under Antelope 2023.1 deployed with Kolla-Ansible:
openstack coe cluster template create k8s-flan-small-37-v1.24.16-containerd \
  --image Fedora-CoreOS-37 \
  --keypair mykey \
  --external-network public \
  --fixed-network demo-net \
  --fixed-subnet demo-subnet \
  --dns-nameserver <ip_of_your_dns> \
  --flavor m1.kubernetes.small \
  --master-flavor m1.kubernetes.small \
  --volume-driver cinder \
  --docker-volume-size 10 \
  --network-driver flannel \
  --docker-storage-driver overlay2 \
  --coe kubernetes \
  --labels kube_tag=v1.24.16-rancher1,hyperkube_prefix=docker.io/rancher/,container_runtime=containerd
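The --labels argument is easy to mistype, so I find it helps to keep it in a shell variable before running the create command. A minimal sketch (the variable name LABELS is my own choice):

```shell
# The three Magnum labels needed for K8s 1.24 with containerd.
# LABELS is just a helper variable, not anything Magnum requires.
LABELS="kube_tag=v1.24.16-rancher1,hyperkube_prefix=docker.io/rancher/,container_runtime=containerd"
echo "$LABELS"
```

The create command can then simply end with --labels "$LABELS".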
(2023.1) [vagrant@seed ~]$ openstack coe cluster template show k8s-flan-small-37-v1.24.16-containerd -f yaml
insecure_registry: '-'
labels:
container_runtime: containerd
kube_tag: v1.24.16-rancher1
updated_at: '2023-08-14T07:25:09+00:00'
floating_ip_enabled: true
fixed_subnet: demo-subnet
master_flavor_id: m1.kubernetes.small
uuid: bce946ef-6cf7-4153-b858-72b943c499a2
no_proxy: '-'
https_proxy: '-'
tls_disabled: false
keypair_id: mykey
public: false
http_proxy: '-'
docker_volume_size: 10
server_type: vm
external_network_id: 60335752-0c01-40b0-b152-365b23576309
cluster_distro: fedora-coreos
image_id: Fedora-CoreOS-37
volume_driver: cinder
registry_enabled: false
docker_storage_driver: overlay2
apiserver_port: '-'
name: k8s-flan-small-37-v1.24.16-containerd
created_at: '2023-08-10T15:16:00+00:00'
network_driver: flannel
fixed_network: demo-net
coe: kubernetes
flavor_id: m1.kubernetes.small
master_lb_enabled: false
dns_nameserver:
hidden: false
tags: '-'
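Once the template shows the expected labels, you can create a cluster from it. A sketch; the cluster name and node counts here are my own choices and assume the template above exists in your project:

```shell
# Hypothetical cluster name; requires an authenticated OpenStack session.
openstack coe cluster create k8s-flan-demo \
  --cluster-template k8s-flan-small-37-v1.24.16-containerd \
  --master-count 1 \
  --node-count 1
```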
(2023.1) [vagrant@seed ~]$ kubectl get nodes -o wide
NAME                               STATUS   ROLES    AGE   VERSION    INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                        KERNEL-VERSION          CONTAINER-RUNTIME
containerd-j5ob3gd2dvqo-master-0   Ready    master   78m   v1.24.16   10.0.0.125    192.168.4.134   Fedora CoreOS 37.20221127.3.0   6.0.9-300.fc37.x86_64   containerd://1.4.4
containerd-j5ob3gd2dvqo-node-0     Ready    <none>   71m   v1.24.16   10.0.0.71     192.168.4.132   Fedora CoreOS 37.20221127.3.0   6.0.9-300.fc37.x86_64   containerd://1.4.4
(2023.1) [vagrant@seed ~]$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f4bcd98d7-9vjzk 1/1 Running 0 77m
kube-system coredns-7f4bcd98d7-wnc7h 1/1 Running 0 77m
kube-system csi-cinder-controllerplugin-dc7889b4f-vk4jd 6/6 Running 0 77m
kube-system csi-cinder-nodeplugin-l8dqq 3/3 Running 0 71m
kube-system csi-cinder-nodeplugin-zdjg6 3/3 Running 0 77m
kube-system dashboard-metrics-scraper-7866c78b8-d66mg 1/1 Running 0 77m
kube-system k8s-keystone-auth-c9xjs 1/1 Running 0 77m
kube-system kube-dns-autoscaler-8f9cf4c99-kq6j5 1/1 Running 0 77m
kube-system kube-flannel-ds-qbw9l 1/1 Running 0 71m
kube-system kube-flannel-ds-xrbmp 1/1 Running 0 77m
kube-system kubernetes-dashboard-d78dc6f78-2qklq 1/1 Running 0 77m
kube-system magnum-metrics-server-564c9cdd6d-2rxpc 1/1 Running 0 77m
kube-system npd-8xkbk 1/1 Running 0 70m
kube-system openstack-cloud-controller-manager-46qsl 1/1 Running 0 78m
I have a blog where I write about OpenStack-related topics. One of my older posts also covers deploying K8s via Magnum:
Cheers,
Oliver