Oliver,

Thanks again. Yes, I'm pretty much in the same state as you. I've seen that Magnum is certified with v1.24, so I was hoping more information would be available on what that actually corresponds to in terms of the individual packages. If you do find anything, I would appreciate it if you shared it.

Thanks!

Jay
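P.S. In the meantime, one way to at least see which component versions a given cluster is actually running is the standard "list all container images" one-liner; nothing Magnum-specific here:

# List every container image (and tag) currently running in the cluster,
# one per line, de-duplicated:
kubectl get pods -A -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s ' ' '\n' | sort -u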
On Aug 15, 2023, at 3:56 AM, Oliver Weinmann <oliver.weinmann@me.com> wrote:

Hi Jay,

Sorry, I had probably overlooked that you could already deploy a working v1.24 cluster. The versions of the other images and charts are indeed not in line with v1.24, but they seem to be working just fine. Magnum is certified for v1.24, but it is hard to find any detailed information on what this actually means. I have not yet tried to deploy pods in my v1.24 test cluster, but I will definitely do so in the next few days. I cloned the Magnum git repo and looked for versions in the latest code, but they seem to be the same as in 2023.1.
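For that pod test, a minimal smoke check might be enough to start with. The deployment name and image below are arbitrary placeholders:

# Create a throwaway deployment, wait for it to become ready, inspect, clean up:
kubectl create deployment nginx-smoke --image=nginx:1.25 --replicas=2
kubectl rollout status deployment/nginx-smoke --timeout=120s
kubectl get pods -l app=nginx-smoke -o wide
kubectl delete deployment nginx-smoke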
Jay Rhine <jay.rhine@rumble.com> wrote on Aug 14, 2023, at 15:35:

Oliver,

Thanks for the feedback! Changing the kube_tag and container_runtime is in line with what I have done so far. I agree this will successfully build a cluster. However, it leaves all the other image and chart tags at their original versions. For example, we would still be running the Cinder CSI plugin at tag "v1.23.0", where it would probably be more appropriate to run v1.24.6 (the latest) or at least one of the v1.24.x versions. Going through and bumping each tag and other version-type parameter individually should be possible (a rough sketch follows the list below), but I was hoping this is already being tracked somewhere, so that the community has an idea of what good, supported combinations are.

From what I can see in the existing Heat templates for 2023.1, these are the default values you get if you don't specify a label override. This is probably not a complete list, since I only pulled out the _tag values; it's probably also necessary to track at least containerd_version and kube_dashboard_version.

metrics_server_chart_tag: v3.7.0
traefik_ingress_controller_tag: v1.7.28
kube_tag: v1.23.3-rancher1
master_kube_tag: v1.23.3-rancher1
minion_kube_tag: v1.23.3-rancher1
cloud_provider_tag: v1.23.1
etcd_tag: v3.4.6
coredns_tag: 1.6.6
flannel_tag: v0.15.1
flannel_cni_tag: v0.3.0
metrics_scraper_tag: v1.0.4
calico_tag: v3.21.2
calico_kube_controllers_tag: v1.0.3
octavia_ingress_controller_tag: v1.18.0
prometheus_tag: v1.8.2
grafana_tag: 5.1.5
heat_container_agent_tag: wallaby-stable-1
k8s_keystone_auth_tag: v1.18.0
prometheus_operator_chart_tag: v8.12.13
prometheus_adapter_chart_tag: 1.4.0
tiller_tag: "v2.16.7"
helm_client_tag: "v3.2.1"
magnum_auto_healer_tag: v1.18.0
cinder_csi_plugin_tag: v1.23.0
csi_attacher_tag: v3.3.0
csi_provisioner_tag: v3.0.0
csi_snapshotter_tag: v4.2.1
csi_resizer_tag: v1.3.0
csi_node_driver_registrar_tag: v2.4.0
csi_liveness_probe_tag: v2.5.0
node_problem_detector_tag: v0.6.2
nginx_ingress_controller_tag: 0.32.0
nginx_ingress_controller_chart_tag: 4.0.17
draino_tag: abf028a
autoscaler_tag: v1.18.1
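For illustration, bumping a few of these at template-creation time would look roughly like the following. The v1.24.x values here are guesses on my part, not a validated combination:

# Override individual component tags via --labels (values are examples only):
openstack coe cluster template create k8s-v1.24-bumped \
  --image Fedora-CoreOS-37 --keypair mykey --external-network public \
  --flavor m1.kubernetes.small --master-flavor m1.kubernetes.small \
  --network-driver flannel --docker-storage-driver overlay2 --coe kubernetes \
  --labels kube_tag=v1.24.16-rancher1,hyperkube_prefix=docker.io/rancher/,container_runtime=containerd,cinder_csi_plugin_tag=v1.24.6,cloud_provider_tag=v1.24.6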
Any help is appreciated.

Thank you,

Jay

On Aug 14, 2023, at 4:59 AM, Oliver Weinmann <oliver.weinmann@me.com> wrote:

Hi Jay,

K8s 1.24 needs containerd, so you need to add an additional label. You also need to change the hyperkube prefix.

labels:
  container_runtime: containerd
  hyperkube_prefix: docker.io/rancher/
  kube_tag: v1.24.16-rancher1

The following template works just fine for me under Antelope 2023.1 deployed with Kolla-Ansible:

openstack coe cluster template create k8s-flan-small-37-v1.24.16-containerd --image Fedora-CoreOS-37 --keypair mykey --external-network public --fixed-network demo-net --fixed-subnet demo-subnet --dns-nameserver <ip_of_your_dns> --flavor m1.kubernetes.small --master-flavor m1.kubernetes.small --volume-driver cinder --docker-volume-size 10 --network-driver flannel --docker-storage-driver overlay2 --coe kubernetes --labels kube_tag=v1.24.16-rancher1,hyperkube_prefix=docker.io/rancher/,container_runtime=containerd

(2023.1) [vagrant@seed ~]$ openstack coe cluster template show k8s-flan-small-37-v1.24.16-containerd -f yaml
insecure_registry: '-'
labels:
  container_runtime: containerd
  hyperkube_prefix: docker.io/rancher/
  kube_tag: v1.24.16-rancher1
updated_at: '2023-08-14T07:25:09+00:00'
floating_ip_enabled: true
fixed_subnet: demo-subnet
master_flavor_id: m1.kubernetes.small
uuid: bce946ef-6cf7-4153-b858-72b943c499a2
no_proxy: '-'
https_proxy: '-'
tls_disabled: false
keypair_id: mykey
public: false
http_proxy: '-'
docker_volume_size: 10
server_type: vm
external_network_id: 60335752-0c01-40b0-b152-365b23576309
cluster_distro: fedora-coreos
image_id: Fedora-CoreOS-37
volume_driver: cinder
registry_enabled: false
docker_storage_driver: overlay2
apiserver_port: '-'
name: k8s-flan-small-37-v1.24.16-containerd
created_at: '2023-08-10T15:16:00+00:00'
network_driver: flannel
fixed_network: demo-net
coe: kubernetes
flavor_id: m1.kubernetes.small
master_lb_enabled: false
dns_nameserver:
hidden: false
tags: '-'

(2023.1) [vagrant@seed ~]$ kubectl get nodes -o wide
NAME                               STATUS   ROLES    AGE   VERSION    INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                        KERNEL-VERSION          CONTAINER-RUNTIME
containerd-j5ob3gd2dvqo-master-0   Ready    master   78m   v1.24.16   10.0.0.125    192.168.4.134   Fedora CoreOS 37.20221127.3.0   6.0.9-300.fc37.x86_64   containerd://1.4.4
containerd-j5ob3gd2dvqo-node-0     Ready    <none>   71m   v1.24.16   10.0.0.71     192.168.4.132   Fedora CoreOS 37.20221127.3.0   6.0.9-300.fc37.x86_64   containerd://1.4.4

(2023.1) [vagrant@seed ~]$ kubectl get pods -A
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f4bcd98d7-9vjzk                      1/1     Running   0          77m
kube-system   coredns-7f4bcd98d7-wnc7h                      1/1     Running   0          77m
kube-system   csi-cinder-controllerplugin-dc7889b4f-vk4jd   6/6     Running   0          77m
kube-system   csi-cinder-nodeplugin-l8dqq                   3/3     Running   0          71m
kube-system   csi-cinder-nodeplugin-zdjg6                   3/3     Running   0          77m
kube-system   dashboard-metrics-scraper-7866c78b8-d66mg     1/1     Running   0          77m
kube-system   k8s-keystone-auth-c9xjs                       1/1     Running   0          77m
kube-system   kube-dns-autoscaler-8f9cf4c99-kq6j5           1/1     Running   0          77m
kube-system   kube-flannel-ds-qbw9l                         1/1     Running   0          71m
kube-system   kube-flannel-ds-xrbmp                         1/1     Running   0          77m
kube-system   kubernetes-dashboard-d78dc6f78-2qklq          1/1     Running   0          77m
kube-system   magnum-metrics-server-564c9cdd6d-2rxpc        1/1     Running   0          77m
kube-system   npd-8xkbk                                     1/1     Running   0          70m
kube-system   openstack-cloud-controller-manager-46qsl      1/1     Running   0          78m
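Since all of the csi-cinder pods are up, a quick way to confirm that the plugin actually provisions volumes is to create a StorageClass, a PVC, and a pod that mounts it. This is only a sketch: all the names are placeholders, and the one load-bearing detail is the provisioner name, cinder.csi.openstack.org, which is the standard cloud-provider-openstack one:

cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-test
provisioner: cinder.csi.openstack.org
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: cinder-test
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: cinder-test-pod
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo ok > /data/ok && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: cinder-test-pvc
EOF
# The PVC should reach Bound once a Cinder volume has been created:
kubectl get pvc cinder-test-pvc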
I have a blog where I write about OpenStack-related topics. One of my older posts covers deploying K8s via Magnum:

https://www.roksblog.de/deploy-kubernetes-clusters-in-openstack-within-minutes-with-magnum/

Cheers,
Oliver