Oliver,
Thanks again. Yes, I'm pretty much in the same state as you. I've seen that Magnum is certified with v1.24, so I was hoping more information would be available on what that actually corresponds to in terms of the individual packages. If you do find anything, I would appreciate it if you shared it.
Thanks!
Jay
On Aug 15, 2023, at 3:56 AM, Oliver Weinmann <oliver.weinmann@me.com> wrote:
Hi Jay,
Sorry, I had probably overlooked that you could already deploy a working v1.24 cluster. The versions of the other images and charts are indeed not in line with v1.24, but they seem to be working just fine. Magnum is certified for v1.24, but it is hard to find any detailed info on what this actually means. I have not yet tried to deploy pods in my v1.24 test cluster but will definitely do so in the next few days. I cloned the Magnum git repo and looked for versions in the latest code, but they seem to be the same as in 2023.1.
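In case you want to dig yourself, this is roughly how I looked the versions up, assuming a fresh checkout from opendev (the paths below are from the 2023.1 layout, so adjust if your tree differs):

git clone https://opendev.org/openstack/magnum
cd magnum
# find where the default for a specific label lives, e.g. the cinder CSI plugin tag
grep -rn 'cinder_csi_plugin_tag' magnum/drivers/
# or list all *_tag / *_version parameters defined in the fedora-coreos driver template
# (path as of the 2023.1 layout)
grep -n -E '_(tag|version):' magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml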
On Aug 14, 2023, at 3:35 PM, Jay Rhine <jay.rhine@rumble.com> wrote:
Oliver,
Thanks for the feedback! Changing the kube_tag and container_runtime is in line with what I have done so far. I agree this will successfully build a cluster. However, it leaves all the other image and chart tags at their original versions. For example, that leaves us running the cinder csi plugin with tag "v1.23.0", where I think it would be more appropriate to run v1.24.6 (the latest) or at least one of the v1.24.x versions. Going through and bumping each tag and other version-type parameter individually should be possible, but I was hoping this is already being tracked somewhere, so that the community has an idea of what the known-good combinations are.
From what I can see in the existing heat templates for 2023.1, these are the default values you will get if you don't specify a label override. This is probably not a complete list, because I only pulled out the _tag values; it's probably also necessary to track at least containerd_version and kube_dashboard_version:
metrics_server_chart_tag: v3.7.0
traefik_ingress_controller_tag: v1.7.28
kube_tag: v1.23.3-rancher1
master_kube_tag: v1.23.3-rancher1
minion_kube_tag: v1.23.3-rancher1
cloud_provider_tag: v1.23.1
etcd_tag: v3.4.6
coredns_tag: 1.6.6
flannel_tag: v0.15.1
flannel_cni_tag: v0.3.0
metrics_scraper_tag: v1.0.4
calico_tag: v3.21.2
calico_kube_controllers_tag: v1.0.3
octavia_ingress_controller_tag: v1.18.0
prometheus_tag: v1.8.2
grafana_tag: 5.1.5
heat_container_agent_tag: wallaby-stable-1
k8s_keystone_auth_tag: v1.18.0
prometheus_operator_chart_tag: v8.12.13
prometheus_adapter_chart_tag: 1.4.0
tiller_tag: "v2.16.7"
helm_client_tag: "v3.2.1"
magnum_auto_healer_tag: v1.18.0
cinder_csi_plugin_tag: v1.23.0
csi_attacher_tag: v3.3.0
csi_provisioner_tag: v3.0.0
csi_snapshotter_tag: v4.2.1
csi_resizer_tag: v1.3.0
csi_node_driver_registrar_tag: v2.4.0
csi_liveness_probe_tag: v2.5.0
node_problem_detector_tag: v0.6.2
nginx_ingress_controller_tag: 0.32.0
nginx_ingress_controller_chart_tag: 4.0.17
draino_tag: abf028a
autoscaler_tag: v1.18.1
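For illustration, what I have in mind is pinning the add-ons explicitly via labels when creating the template, reusing your flavor and network names, something like this (the v1.24.x values are only my guesses at matching releases, not a combination I have verified):

# NOTE: the v1.24.6 values for cloud_provider_tag, cinder_csi_plugin_tag and
# k8s_keystone_auth_tag below are guesses at matching releases, not verified
openstack coe cluster template create k8s-flan-small-37-v1.24.16-pinned \
  --image Fedora-CoreOS-37 --keypair mykey \
  --external-network public --fixed-network demo-net --fixed-subnet demo-subnet \
  --flavor m1.kubernetes.small --master-flavor m1.kubernetes.small \
  --volume-driver cinder --docker-volume-size 10 \
  --network-driver flannel --docker-storage-driver overlay2 --coe kubernetes \
  --labels kube_tag=v1.24.16-rancher1,hyperkube_prefix=docker.io/rancher/,container_runtime=containerd,cloud_provider_tag=v1.24.6,cinder_csi_plugin_tag=v1.24.6,k8s_keystone_auth_tag=v1.24.6

But doing that for every tag in the list above is exactly the guesswork I was hoping to avoid.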
Any help is appreciated.
Thank you,
Jay
On Aug 14, 2023, at 4:59 AM, Oliver Weinmann <oliver.weinmann@me.com> wrote:
Hi Jay,
K8s 1.24 needs containerd, so you need to add an additional label. You also need to change the hyperkube prefix.
labels:
  container_runtime: containerd
  hyperkube_prefix: docker.io/rancher/
  kube_tag: v1.24.16-rancher1
The following template works just fine for me under Antelope 2023.1 deployed with Kolla-Ansible:
openstack coe cluster template create k8s-flan-small-37-v1.24.16-containerd \
  --image Fedora-CoreOS-37 --keypair mykey \
  --external-network public --fixed-network demo-net --fixed-subnet demo-subnet \
  --dns-nameserver <ip_of_your_dns> \
  --flavor m1.kubernetes.small --master-flavor m1.kubernetes.small \
  --volume-driver cinder --docker-volume-size 10 \
  --network-driver flannel --docker-storage-driver overlay2 --coe kubernetes \
  --labels kube_tag=v1.24.16-rancher1,hyperkube_prefix=docker.io/rancher/,container_runtime=containerd
(2023.1) [vagrant@seed ~]$ openstack coe cluster template show k8s-flan-small-37-v1.24.16-containerd -f yaml
insecure_registry: '-'
labels:
  container_runtime: containerd
  hyperkube_prefix: docker.io/rancher/
  kube_tag: v1.24.16-rancher1
updated_at: '2023-08-14T07:25:09+00:00'
floating_ip_enabled: true
fixed_subnet: demo-subnet
master_flavor_id: m1.kubernetes.small
uuid: bce946ef-6cf7-4153-b858-72b943c499a2
no_proxy: '-'
https_proxy: '-'
tls_disabled: false
keypair_id: mykey
public: false
http_proxy: '-'
docker_volume_size: 10
server_type: vm
external_network_id: 60335752-0c01-40b0-b152-365b23576309
cluster_distro: fedora-coreos
image_id: Fedora-CoreOS-37
volume_driver: cinder
registry_enabled: false
docker_storage_driver: overlay2
apiserver_port: '-'
name: k8s-flan-small-37-v1.24.16-containerd
created_at: '2023-08-10T15:16:00+00:00'
network_driver: flannel
fixed_network: demo-net
coe: kubernetes
flavor_id: m1.kubernetes.small
master_lb_enabled: false
dns_nameserver:
hidden: false
tags: '-'
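For completeness, the cluster shown below was created from that template along these lines (I'm reconstructing the command, so take the counts as an example):

# (master/node counts and keypair reconstructed for illustration)
openstack coe cluster create containerd \
  --cluster-template k8s-flan-small-37-v1.24.16-containerd \
  --master-count 1 --node-count 1 \
  --keypair mykey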
(2023.1) [vagrant@seed ~]$ kubectl get nodes -o wide
NAME                               STATUS   ROLES    AGE   VERSION    INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                        KERNEL-VERSION          CONTAINER-RUNTIME
containerd-j5ob3gd2dvqo-master-0   Ready    master   78m   v1.24.16   10.0.0.125    192.168.4.134   Fedora CoreOS 37.20221127.3.0   6.0.9-300.fc37.x86_64   containerd://1.4.4
containerd-j5ob3gd2dvqo-node-0     Ready    <none>   71m   v1.24.16   10.0.0.71     192.168.4.132   Fedora CoreOS 37.20221127.3.0   6.0.9-300.fc37.x86_64   containerd://1.4.4
(2023.1) [vagrant@seed ~]$ kubectl get pods -A
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f4bcd98d7-9vjzk                      1/1     Running   0          77m
kube-system   coredns-7f4bcd98d7-wnc7h                      1/1     Running   0          77m
kube-system   csi-cinder-controllerplugin-dc7889b4f-vk4jd   6/6     Running   0          77m
kube-system   csi-cinder-nodeplugin-l8dqq                   3/3     Running   0          71m
kube-system   csi-cinder-nodeplugin-zdjg6                   3/3     Running   0          77m
kube-system   dashboard-metrics-scraper-7866c78b8-d66mg     1/1     Running   0          77m
kube-system   k8s-keystone-auth-c9xjs                       1/1     Running   0          77m
kube-system   kube-dns-autoscaler-8f9cf4c99-kq6j5           1/1     Running   0          77m
kube-system   kube-flannel-ds-qbw9l                         1/1     Running   0          71m
kube-system   kube-flannel-ds-xrbmp                         1/1     Running   0          77m
kube-system   kubernetes-dashboard-d78dc6f78-2qklq          1/1     Running   0          77m
kube-system   magnum-metrics-server-564c9cdd6d-2rxpc        1/1     Running   0          77m
kube-system   npd-8xkbk                                     1/1     Running   0          70m
kube-system   openstack-cloud-controller-manager-46qsl      1/1     Running   0          78m
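When I get around to testing workloads, I'll probably start by exercising the cinder CSI plugin with a small PVC, roughly like this (the storage class name is just a placeholder, check kubectl get storageclass for what your deployment actually created):

kubectl get storageclass
# request a 1Gi test volume through the cinder CSI provisioner
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cinder-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: cinder-csi   # assumed name, adjust to your storage class
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc csi-cinder-test

The PVC should go to Bound once the provisioner picks it up (or stay Pending until a pod consumes it, depending on the volume binding mode of the storage class).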
I have a blog where I write about OpenStack-related topics. One of my older posts is about deploying K8s via Magnum:
https://www.roksblog.de/deploy-kubernetes-clusters-in-openstack-within-minut...
Cheers, Oliver