Kolla-ansible Yoga Magnum csi-cinder-controllerplugin-0 CrashLoopBackOff
Oliver Weinmann
oliver.weinmann at me.com
Tue Jan 3 13:57:10 UTC 2023
Dear all,
I have a strange issue with Magnum on Yoga deployed by kolla-ansible. I
noticed it on a prod cluster, so I deployed a fresh cluster from scratch,
and I'm facing the very same issue there: when deploying a new K8s
cluster, the csi-cinder-controllerplugin-0 pod ends up in
CrashLoopBackOff state.
kubectl get pods -A
NAMESPACE     NAME                                         READY   STATUS             RESTARTS         AGE
kube-system   coredns-56448757b9-62lxw                     1/1     Running            0                11d
kube-system   coredns-56448757b9-6mvtr                     1/1     Running            0                11d
kube-system   csi-cinder-controllerplugin-0                4/5     CrashLoopBackOff   3339 (92s ago)   11d
kube-system   csi-cinder-nodeplugin-88ttn                  2/2     Running            0                11d
kube-system   csi-cinder-nodeplugin-tnxpt                  2/2     Running            0                11d
kube-system   dashboard-metrics-scraper-67f57ff746-2vhgb   1/1     Running            0                11d
kube-system   k8s-keystone-auth-5djxh                      1/1     Running            0                11d
kube-system   kube-dns-autoscaler-6d5b5dc777-mm7qs         1/1     Running            0                11d
kube-system   kube-flannel-ds-795hj                        1/1     Running            0                11d
kube-system   kube-flannel-ds-p76rf                        1/1     Running            0                11d
kube-system   kubernetes-dashboard-7b88d986b4-5bhnf        1/1     Running            0                11d
kube-system   magnum-metrics-server-6c4c77844b-jbpml       1/1     Running            0                11d
kube-system   npd-sqsbr                                    1/1     Running            0                11d
kube-system   openstack-cloud-controller-manager-c5prb     1/1     Running            0                11d
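For reference, the failing container can be identified by describing the pod and pulling its previous logs, along these lines (container name assumed from the stock cloud-provider-openstack manifests):

kubectl -n kube-system describe pod csi-cinder-controllerplugin-0
kubectl -n kube-system logs csi-cinder-controllerplugin-0 -c csi-snapshotter --previous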
I searched the mailing list and noticed that I reported the very same issue last year:
https://lists.openstack.org/pipermail/openstack-discuss/2022-May/028517.html
but back then I either fixed it or could no longer reproduce it. Searching the web, I found other users hitting the same issue:
https://storyboard.openstack.org/#!/story/2010023
https://github.com/kubernetes/cloud-provider-openstack/issues/1845
They suggest changing the csi_snapshotter_tag to v4.0.0. I haven't tried it yet, but I have the feeling that something in my deployment is not right, since the cluster pulls csi-snapshotter v1.2.2, which is a very old version.
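If I do try the suggested fix, I assume it would be applied as a label override on a new cluster, something like this (untested on my side; csi_snapshotter_tag is a documented Magnum label, and --merge-labels, where the client supports it, keeps the template's other labels intact):

openstack coe cluster create k8s-admin-test-new \
    --cluster-template <template> \
    --merge-labels \
    --labels csi_snapshotter_tag=v4.0.0

For the record, the pod events showing the old tag being pulled: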
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Normal   Pulling  26m (x3382 over 12d)    kubelet  Pulling image "quay.io/k8scsi/csi-snapshotter:v1.2.2"
  Warning  BackOff  81s (x79755 over 12d)   kubelet  Back-off restarting failed container
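To confirm which tags actually ended up in the deployed manifest, the images can be read straight from the StatefulSet spec:

kubectl -n kube-system get statefulset csi-cinder-controllerplugin \
    -o jsonpath='{range .spec.template.spec.containers[*]}{.image}{"\n"}{end}'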
kubectl get nodes -o wide
NAME                                       STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP    OS-IMAGE                        KERNEL-VERSION            CONTAINER-RUNTIME
k8s-admin-test-old-ygahdfhciitx-master-0   Ready    master   12d   v1.23.3   10.0.0.132    172.28.4.124   Fedora CoreOS 35.20220410.3.1   5.16.18-200.fc35.x86_64   docker://20.10.12
k8s-admin-test-old-ygahdfhciitx-node-0     Ready    <none>   12d   v1.23.3   10.0.0.160    172.28.4.128   Fedora CoreOS 35.20220410.3.1   5.16.18-200.fc35.x86_64   docker://20.10.12
I deployed on Rocky Linux 8.7 using the latest Kolla-ansible release for Yoga:
[vagrant@seed ~]$ kolla-ansible --version
14.7.1
[vagrant@seed ~]$ cat /etc/os-release
NAME="Rocky Linux"
VERSION="8.7 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.7"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.7 (Green Obsidian)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-8"
ROCKY_SUPPORT_PRODUCT_VERSION="8.7"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.7"
Can you please help, or clarify what might be wrong in my deployment?
Best Regards,
Oliver