Hi Jay,

Sorry, I had probably overlooked that you could already deploy a working v1.24 cluster. The versions of the other images and charts are indeed not in line with v1.24, but they seem to be working just fine. Magnum is certified for v1.24, but it is hard to find detailed information on what this actually means. I have not yet tried to deploy pods in my v1.24 test cluster, but I will definitely do so in the next few days. I cloned the Magnum git repo and looked for versions in the latest code, but they seem to be the same as in 2023.1.

Jay Rhine <jay.rhine@rumble.com> wrote on 14 Aug 2023 at 15:35:

Oliver,

Thanks for the feedback! Changing kube_tag and container_runtime is in line with what I have done so far. I agree this will successfully build a cluster. However, it leaves all the other image and chart tags at their original level. For example, that leaves us running the cinder CSI plugin with tag "v1.23.0", where I think it would probably be more appropriate to run v1.24.6 (the latest) or at least one of the v1.24.x versions. Going through and increasing each tag and other version-type parameter should be possible, but I was hoping that this is already being tracked somewhere, so that the community might have an idea of what good supported combinations are.

From what I can see in the existing Heat templates for 2023.1, these are the default values that you will get if you don't specify a label override. This is probably not a complete list, because I just pulled out the _tag values.
It's probably also necessary to track at least containerd_version and kube_dashboard_version.

metrics_server_chart_tag: v3.7.0
traefik_ingress_controller_tag: v1.7.28
kube_tag: v1.23.3-rancher1
master_kube_tag: v1.23.3-rancher1
minion_kube_tag: v1.23.3-rancher1
cloud_provider_tag: v1.23.1
etcd_tag: v3.4.6
coredns_tag: 1.6.6
flannel_tag: v0.15.1
flannel_cni_tag: v0.3.0
metrics_scraper_tag: v1.0.4
calico_tag: v3.21.2
calico_kube_controllers_tag: v1.0.3
octavia_ingress_controller_tag: v1.18.0
prometheus_tag: v1.8.2
grafana_tag: 5.1.5
heat_container_agent_tag: wallaby-stable-1
k8s_keystone_auth_tag: v1.18.0
prometheus_operator_chart_tag: v8.12.13
prometheus_adapter_chart_tag: 1.4.0
tiller_tag: "v2.16.7"
helm_client_tag: "v3.2.1"
magnum_auto_healer_tag: v1.18.0
cinder_csi_plugin_tag: v1.23.0
csi_attacher_tag: v3.3.0
csi_provisioner_tag: v3.0.0
csi_snapshotter_tag: v4.2.1
csi_resizer_tag: v1.3.0
csi_node_driver_registrar_tag: v2.4.0
csi_liveness_probe_tag: v2.5.0
node_problem_detector_tag: v0.6.2
nginx_ingress_controller_tag: 0.32.0
nginx_ingress_controller_chart_tag: 4.0.17
draino_tag: abf028a
autoscaler_tag: v1.18.1

Any help is appreciated.

Thank you,
Jay

On Aug 14, 2023, at 4:59 AM, Oliver Weinmann <oliver.weinmann@me.com> wrote:

Hi Jay,

K8s 1.24 needs containerd, so you need to add an additional label.
You also need to change the hyperkube prefix:

labels:
  container_runtime: containerd
  hyperkube_prefix: docker.io/rancher/
  kube_tag: v1.24.16-rancher1

The following template works just fine for me under Antelope 2023.1 deployed with Kolla-Ansible:

openstack coe cluster template create k8s-flan-small-37-v1.24.16-containerd \
  --image Fedora-CoreOS-37 \
  --keypair mykey \
  --external-network public \
  --fixed-network demo-net \
  --fixed-subnet demo-subnet \
  --dns-nameserver <ip_of_your_dns> \
  --flavor m1.kubernetes.small \
  --master-flavor m1.kubernetes.small \
  --volume-driver cinder \
  --docker-volume-size 10 \
  --network-driver flannel \
  --docker-storage-driver overlay2 \
  --coe kubernetes \
  --labels kube_tag=v1.24.16-rancher1,hyperkube_prefix=docker.io/rancher/,container_runtime=containerd

(2023.1) [vagrant@seed ~]$ openstack coe cluster template show k8s-flan-small-37-v1.24.16-containerd -f yaml
insecure_registry: '-'
labels:
  container_runtime: containerd
  hyperkube_prefix: docker.io/rancher/
  kube_tag: v1.24.16-rancher1
updated_at: '2023-08-14T07:25:09+00:00'
floating_ip_enabled: true
fixed_subnet: demo-subnet
master_flavor_id: m1.kubernetes.small
uuid: bce946ef-6cf7-4153-b858-72b943c499a2
no_proxy: '-'
https_proxy: '-'
tls_disabled: false
keypair_id: mykey
public: false
http_proxy: '-'
docker_volume_size: 10
server_type: vm
external_network_id: 60335752-0c01-40b0-b152-365b23576309
cluster_distro: fedora-coreos
image_id: Fedora-CoreOS-37
volume_driver: cinder
registry_enabled: false
docker_storage_driver: overlay2
apiserver_port: '-'
name: k8s-flan-small-37-v1.24.16-containerd
created_at: '2023-08-10T15:16:00+00:00'
network_driver: flannel
fixed_network: demo-net
coe: kubernetes
flavor_id: m1.kubernetes.small
master_lb_enabled: false
dns_nameserver:
hidden: false
tags: '-'

(2023.1) [vagrant@seed ~]$ kubectl get nodes -o wide
NAME                               STATUS   ROLES    AGE   VERSION    INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                        KERNEL-VERSION          CONTAINER-RUNTIME
containerd-j5ob3gd2dvqo-master-0   Ready    master   78m   v1.24.16   10.0.0.125    192.168.4.134   Fedora CoreOS 37.20221127.3.0   6.0.9-300.fc37.x86_64   containerd://1.4.4
containerd-j5ob3gd2dvqo-node-0     Ready    <none>   71m   v1.24.16   10.0.0.71     192.168.4.132   Fedora CoreOS 37.20221127.3.0   6.0.9-300.fc37.x86_64   containerd://1.4.4

(2023.1) [vagrant@seed ~]$ kubectl get pods -A
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f4bcd98d7-9vjzk                      1/1     Running   0          77m
kube-system   coredns-7f4bcd98d7-wnc7h                      1/1     Running   0          77m
kube-system   csi-cinder-controllerplugin-dc7889b4f-vk4jd   6/6     Running   0          77m
kube-system   csi-cinder-nodeplugin-l8dqq                   3/3     Running   0          71m
kube-system   csi-cinder-nodeplugin-zdjg6                   3/3     Running   0          77m
kube-system   dashboard-metrics-scraper-7866c78b8-d66mg     1/1     Running   0          77m
kube-system   k8s-keystone-auth-c9xjs                       1/1     Running   0          77m
kube-system   kube-dns-autoscaler-8f9cf4c99-kq6j5           1/1     Running   0          77m
kube-system   kube-flannel-ds-qbw9l                         1/1     Running   0          71m
kube-system   kube-flannel-ds-xrbmp                         1/1     Running   0          77m
kube-system   kubernetes-dashboard-d78dc6f78-2qklq          1/1     Running   0          77m
kube-system   magnum-metrics-server-564c9cdd6d-2rxpc        1/1     Running   0          77m
kube-system   npd-8xkbk                                     1/1     Running   0          70m
kube-system   openstack-cloud-controller-manager-46qsl      1/1     Running   0          78m

I have a blog where I write about OpenStack-related topics. One of my older blog posts is also about deploying K8s via Magnum:
https://www.roksblog.de/deploy-kubernetes-clusters-in-openstack-within-minut...
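Until the community settles on a tracked set of known-good combinations, the per-component tags Jay describes can only be overridden one by one via --labels. A minimal sketch of building such an override string: kube_tag, hyperkube_prefix, and container_runtime are taken from Oliver's working template above, cinder_csi_plugin_tag=v1.24.6 is the version Jay suggests, and any further *_tag overrides would be appended the same way once a compatible version is identified (the values here are illustrative, not a validated combination):

```shell
# Build the label-override string for a v1.24 cluster template.
# Values below are from the thread; they are NOT a certified combination.
labels="kube_tag=v1.24.16-rancher1"
labels="$labels,hyperkube_prefix=docker.io/rancher/"
labels="$labels,container_runtime=containerd"
# Lift the cinder CSI plugin from the 2023.1 default (v1.23.0) to v1.24.6:
labels="$labels,cinder_csi_plugin_tag=v1.24.6"

echo "$labels"
# kube_tag=v1.24.16-rancher1,hyperkube_prefix=docker.io/rancher/,container_runtime=containerd,cinder_csi_plugin_tag=v1.24.6
```

The string is then passed as a single --labels argument to the template-create command shown earlier, e.g. `openstack coe cluster template create ... --labels "$labels"`.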