Magnum CSI Cinder Plugin broken in Yoga?
Hi,

I just updated to Yoga in order to fix the problem with the broken metrics-server, but now I had problems deploying GitLab and the errors led to problems with mounting PVs.

So I had a look at the csi-cinder-plugin and saw this:

(kolla-yoga) [oliweilocal@gedasvl99 images]$ kubectl get pods -n kube-system | grep -i csi
csi-cinder-controllerplugin-0   4/5   CrashLoopBackOff   168 (4m53s ago)   14h
csi-cinder-nodeplugin-7kh9q     2/2   Running            0                 14h
csi-cinder-nodeplugin-q5bfq     2/2   Running            0                 14h
csi-cinder-nodeplugin-x4vrk     2/2   Running            0                 14h

I re-deployed the cluster and the error stays. Is there anything known about this? I have a small Ceph Pacific cluster. I can do some testing with an NFS backend as well and see if the problem goes away.

Best Regards,
Oliver
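A quick way to spot the failing pod mechanically, rather than eyeballing the READY column: filter for rows where the ready count does not match the container count. This is a sketch that runs the filter over a saved sample of the listing above (the pod names are taken from that output).

```shell
# Sample rows from the `kubectl get pods` listing above, saved to a file
# so the filter can be demonstrated locally.
cat > /tmp/csi-pods.txt <<'EOF'
csi-cinder-controllerplugin-0   4/5   CrashLoopBackOff   168 (4m53s ago)   14h
csi-cinder-nodeplugin-7kh9q     2/2   Running            0                 14h
csi-cinder-nodeplugin-q5bfq     2/2   Running            0                 14h
csi-cinder-nodeplugin-x4vrk     2/2   Running            0                 14h
EOF

# Print only pods whose READY column (x/y) is not fully ready.
# Against a live cluster you would pipe `kubectl get pods -n kube-system`
# into the same awk filter instead of the sample file.
awk '{ split($2, r, "/"); if (r[1] != r[2]) print $1, $3 }' /tmp/csi-pods.txt
```

The next step for a pod like this is usually `kubectl describe pod` and `kubectl logs -c <container>` to see which of the five containers is crashing.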
Hi,

Are you using containerd or default docker as CRI?

Ammad

On Thu, May 12, 2022 at 10:13 AM Oliver Weinmann <oliver.weinmann@me.com> wrote:
Hi,
I just updated to Yoga in order to fix the problem with the broken metrics-server, but now I had problems deploying GitLab and the errors led to problems with mounting PVs.
So I had a look at the csi-cinder-plugin and saw this:
(kolla-yoga) [oliweilocal@gedasvl99 images]$ kubectl get pods -n kube-system | grep -i csi
csi-cinder-controllerplugin-0   4/5   CrashLoopBackOff   168 (4m53s ago)   14h
csi-cinder-nodeplugin-7kh9q     2/2   Running            0                 14h
csi-cinder-nodeplugin-q5bfq     2/2   Running            0                 14h
csi-cinder-nodeplugin-x4vrk     2/2   Running            0                 14h
I re-deployed the cluster and the error stays. Is there anything known about this? I have a small Ceph Pacific cluster. I can do some testing with an NFS backend as well and see if the problem goes away.
Best Regards,
Oliver
-- Regards,
Syed Ammad Ali
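The CRI question above can be answered without scanning the wide node listing: each node reports its container runtime in the Node API. A sketch, assuming kubectl access to the cluster; the local parsing of the sample value is what actually runs here.

```shell
# Per-node runtime via JSONPath (containerRuntimeVersion is part of the
# Node API's status.nodeInfo). Sketch, requires a live cluster:
#   kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'

# The reported value looks like "docker://20.10.12"; the part before "://"
# names the runtime, which answers the containerd-vs-docker question.
runtime='docker://20.10.12'
echo "${runtime%%://*}"
```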
Hi Ammad,

Thanks for your quick reply. I deployed OpenStack Yoga using kolla-ansible. I did a standard Magnum k8s cluster deploy:

(kolla-yoga) [oliweilocal@gedasvl99 ~]$ kubectl get nodes -o wide
NAME                                STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP    OS-IMAGE                        KERNEL-VERSION            CONTAINER-RUNTIME
k8s-test-35-lr3ysuuiolme-master-0   Ready    master   15h   v1.23.3   10.0.0.230    172.28.4.128   Fedora CoreOS 35.20220410.3.1   5.16.18-200.fc35.x86_64   docker://20.10.12
k8s-test-35-lr3ysuuiolme-node-0     Ready    <none>   15h   v1.23.3   10.0.0.183    172.28.4.120   Fedora CoreOS 35.20220410.3.1   5.16.18-200.fc35.x86_64   docker://20.10.12
k8s-test-35-lr3ysuuiolme-node-1     Ready    <none>   15h   v1.23.3   10.0.0.49     172.28.4.125   Fedora CoreOS 35.20220410.3.1   5.16.18-200.fc35.x86_64   docker://20.10.12

Seems to be docker.

It seems it is failing to pull an image:

(kolla-yoga) [oliweilocal@gedasvl99 ~]$ kubectl get events -n kube-system
LAST SEEN   TYPE      REASON           OBJECT                                              MESSAGE
17m         Normal    LeaderElection   configmap/cert-manager-cainjector-leader-election   gitlab-certmanager-cainjector-75f8fbb78d-xvm8s_61655618-fbe8-4070-b179-64e60a1ad067 became leader
17m         Normal    LeaderElection   lease/cert-manager-cainjector-leader-election       gitlab-certmanager-cainjector-75f8fbb78d-xvm8s_61655618-fbe8-4070-b179-64e60a1ad067 became leader
17m         Normal    LeaderElection   configmap/cert-manager-controller                   gitlab-certmanager-774db6b45f-nkmck-external-cert-manager-controller became leader
17m         Normal    LeaderElection   lease/cert-manager-controller                       gitlab-certmanager-774db6b45f-nkmck-external-cert-manager-controller became leader
39s         Warning   BackOff          pod/csi-cinder-controllerplugin-0                   Back-off restarting failed container
30m         Normal    BackOff          pod/csi-cinder-controllerplugin-0                   Back-off pulling image "quay.io/k8scsi/csi-snapshotter:v1.2.2"

I'm not 100% sure yet whether the problem with the csi-plugin affects my GitLab deployment, but I just installed an NFS provisioner and the GitLab deployment was successful. I will now try the very same thing again using the csi provisioner.
Cheers,
Oliver

On 12.05.2022 at 07:39, Ammad Syed wrote:
Hi,
Are you using containerd or default docker as CRI ?
Ammad
On Thu, May 12, 2022 at 10:13 AM Oliver Weinmann <oliver.weinmann@me.com> wrote:
Hi,
I just updated to Yoga in order to fix the problem with the broken metrics-server, but now I had problems deploying GitLab and the errors led to problems with mounting PVs.
So I had a look at the csi-cinder-plugin and saw this:
(kolla-yoga) [oliweilocal@gedasvl99 images]$ kubectl get pods -n kube-system | grep -i csi
csi-cinder-controllerplugin-0   4/5   CrashLoopBackOff   168 (4m53s ago)   14h
csi-cinder-nodeplugin-7kh9q     2/2   Running            0                 14h
csi-cinder-nodeplugin-q5bfq     2/2   Running            0                 14h
csi-cinder-nodeplugin-x4vrk     2/2   Running            0                 14h
I re-deployed the cluster and the error stays. Is there anything known about this? I have a small Ceph Pacific cluster. I can do some testing with an NFS backend as well and see if the problem goes away.
Best Regards,
Oliver
-- Regards,
Syed Ammad Ali
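The events in the message above point at a pull back-off for quay.io/k8scsi/csi-snapshotter:v1.2.2. The quay.io/k8scsi organisation was deprecated some time ago and its images moved to the sig-storage registry, which would explain a failing pull on a fresh cluster. A sketch of isolating the failing image from the events output; the kubectl workaround at the end is hypothetical (the container name csi-snapshotter and the replacement tag are assumptions, not confirmed by the thread).

```shell
# Sample BackOff event line from the `kubectl get events` output above.
cat > /tmp/events.txt <<'EOF'
30m   Normal   BackOff   pod/csi-cinder-controllerplugin-0   Back-off pulling image "quay.io/k8scsi/csi-snapshotter:v1.2.2"
EOF

# Extract the quoted image reference that is failing to pull.
failing=$(grep -o '"[^"]*"' /tmp/events.txt | tr -d '"')
echo "$failing"

# Hypothetical workaround sketch, assuming the sidecar container is named
# csi-snapshotter (quay.io/k8scsi images were deprecated in favour of the
# sig-storage registry):
#   kubectl -n kube-system set image statefulset/csi-cinder-controllerplugin \
#     csi-snapshotter=k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.1
```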
Hi Ammad,

sorry for the late response. I reinstalled my Yoga cluster and now all seems to be fine:

kubectl get pods -n kube-system | grep -i csi
csi-cinder-controllerplugin-0   5/5   Running   0   4h50m
csi-cinder-nodeplugin-b7pcq     2/2   Running   0   4h50m
csi-cinder-nodeplugin-bd5sb     2/2   Running   0   4h46m
csi-cinder-nodeplugin-kghsb     2/2   Running   0   4h45m

Cheers,
Oliver

On 12.05.2022 at 07:04, Oliver Weinmann wrote:
participants (2)
- Ammad Syed
- Oliver Weinmann