<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Hi Ammad,</p>
    <p>Thanks for your quick reply. I deployed OpenStack Yoga using
      kolla-ansible and did a standard Magnum Kubernetes cluster deployment:</p>
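    <p>(For context, the cluster was created with a plain Magnum call
      roughly along these lines; the template and keypair names below are
      placeholders rather than my exact values:)</p>
    <pre>
# Rough sketch of the standard Magnum cluster create I used
# (cluster template and keypair names are placeholders)
openstack coe cluster create k8s-test-35 \
  --cluster-template k8s-fcos-template \
  --master-count 1 \
  --node-count 2 \
  --keypair mykey
</pre>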
    <pre>
(kolla-yoga) [oliweilocal@gedasvl99 ~]$ kubectl get nodes -o wide
NAME                                STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP    OS-IMAGE                        KERNEL-VERSION            CONTAINER-RUNTIME
k8s-test-35-lr3ysuuiolme-master-0   Ready    master   15h   v1.23.3   10.0.0.230    172.28.4.128   Fedora CoreOS 35.20220410.3.1   5.16.18-200.fc35.x86_64   docker://20.10.12
k8s-test-35-lr3ysuuiolme-node-0     Ready    &lt;none&gt;   15h   v1.23.3   10.0.0.183    172.28.4.120   Fedora CoreOS 35.20220410.3.1   5.16.18-200.fc35.x86_64   docker://20.10.12
k8s-test-35-lr3ysuuiolme-node-1     Ready    &lt;none&gt;   15h   v1.23.3   10.0.0.49     172.28.4.125   Fedora CoreOS 35.20220410.3.1   5.16.18-200.fc35.x86_64   docker://20.10.12
</pre>
    <p>It seems to be docker (see the CONTAINER-RUNTIME column above). The
      csi-cinder-controllerplugin also seems to be failing to pull an image:</p>
    <pre>
(kolla-yoga) [oliweilocal@gedasvl99 ~]$ kubectl get events -n kube-system
LAST SEEN   TYPE      REASON           OBJECT                                               MESSAGE
17m         Normal    LeaderElection   configmap/cert-manager-cainjector-leader-election   gitlab-certmanager-cainjector-75f8fbb78d-xvm8s_61655618-fbe8-4070-b179-64e60a1ad067 became leader
17m         Normal    LeaderElection   lease/cert-manager-cainjector-leader-election       gitlab-certmanager-cainjector-75f8fbb78d-xvm8s_61655618-fbe8-4070-b179-64e60a1ad067 became leader
17m         Normal    LeaderElection   configmap/cert-manager-controller                   gitlab-certmanager-774db6b45f-nkmck-external-cert-manager-controller became leader
17m         Normal    LeaderElection   lease/cert-manager-controller                       gitlab-certmanager-774db6b45f-nkmck-external-cert-manager-controller became leader
39s         Warning   BackOff          pod/csi-cinder-controllerplugin-0                   Back-off restarting failed container
30m         Normal    BackOff          pod/csi-cinder-controllerplugin-0                   Back-off pulling image "quay.io/k8scsi/csi-snapshotter:v1.2.2"
</pre>
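    <p>To narrow it down, my next step is probably to look at the failing
      container directly, along these lines (the container name is my guess
      based on the image name):</p>
    <pre>
# Which container inside the pod is failing, and why
kubectl describe pod csi-cinder-controllerplugin-0 -n kube-system

# Logs of the suspected container (name assumed to be "csi-snapshotter")
kubectl logs csi-cinder-controllerplugin-0 -n kube-system -c csi-snapshotter --previous

# Check whether the image can be pulled at all from one of the nodes
sudo docker pull quay.io/k8scsi/csi-snapshotter:v1.2.2
</pre>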
    <p>I'm not 100% sure yet whether the csi-plugin problem affects my
      GitLab deployment, but I just installed an NFS provisioner and the
      GitLab deployment succeeded. I will now try the very same deployment
      again using the CSI provisioner.</p>
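    <p>Before retrying the full GitLab deployment, I'll probably also test
      the cinder CSI path in isolation with a small PVC, roughly like this
      (the storage class name "cinder-csi" is an assumption; whatever
      kubectl get sc reports on the cluster is what I'll use):</p>
    <pre>
# Sketch: create a 1Gi test PVC against the cinder CSI storage class
# (storage class name "cinder-csi" is an assumption; check with: kubectl get sc)
cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-csi-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: cinder-csi
  resources:
    requests:
      storage: 1Gi
EOF

# Watch whether the claim actually binds
kubectl get pvc cinder-csi-test -w
</pre>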
    <p>Cheers,</p>
    <p>Oliver<br>
    </p>
    <div class="moz-cite-prefix">Am 12.05.2022 um 07:39 schrieb Ammad
      Syed:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAKOoz50q8C1sKyC4BhtbpX283nUGZ1vPrX9--1oK2KYEurpiLw@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">Hi,
        <div><br>
        </div>
        <div>Are you using containerd or the default docker as CRI?</div>
        <div><br>
        </div>
        <div>Ammad</div>
      </div>
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">On Thu, May 12, 2022 at 10:13
          AM Oliver Weinmann <<a href="mailto:oliver.weinmann@me.com"
            moz-do-not-send="true" class="moz-txt-link-freetext">oliver.weinmann@me.com</a>>
          wrote:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px
          0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
          <br>
          I just updated to Yoga in order to fix the problem with the broken
          metrics-server, but now I'm having problems deploying GitLab, and
          the errors point to problems with mounting PVs.<br>
          <br>
          So I had a look at the csi-cinder-plugin and saw this:<br>
          <br>
          <pre>
(kolla-yoga) [oliweilocal@gedasvl99 images]$ kubectl get pods -n kube-system | grep -i csi
csi-cinder-controllerplugin-0   4/5   CrashLoopBackOff   168 (4m53s ago)   14h
csi-cinder-nodeplugin-7kh9q     2/2   Running            0                 14h
csi-cinder-nodeplugin-q5bfq     2/2   Running            0                 14h
csi-cinder-nodeplugin-x4vrk     2/2   Running            0                 14h
</pre>
          <br>
          I re-deployed the cluster and the error persists. Is this a known
          issue? I have a small Ceph Pacific cluster. I can also do some
          testing with an NFS backend and see if the problem goes away.<br>
          <br>
          Best Regards,<br>
          <br>
          Oliver<br>
          <br>
          <br>
        </blockquote>
      </div>
      <br clear="all">
      <div><br>
      </div>
      -- <br>
      <div dir="ltr" class="gmail_signature">Regards,
        <div><br>
        </div>
        <div><br>
        </div>
        <div>Syed Ammad Ali</div>
      </div>
    </blockquote>
  </body>
</html>