[kuryr][tacker] fails to bump the k8s version to v1.26
Michał Dulko
mdulko at redhat.com
Thu Jul 13 10:52:57 UTC 2023
On Thu, 2023-07-13 at 10:16 +0000, Ayumu Ueha (Fujitsu) wrote:
> Hi Michał,
>
> Thanks for your reply,
>
> I tried it on my local machine and checked the logs, but I am not
> familiar with troubleshooting kubelet/kubeadm, so I do not yet know
> why it errored.
> We will continue to investigate, but it would be helpful if you could
> share any findings, however small.
>
> For your reference, I will attach the kubelet.service log from when
> it failed in my local environment.
Looks like kube-apiserver and kube-scheduler are failing to start.
Checking their logs would be my next step. You might want to use
`crictl` to find them.
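
For example, something along these lines should surface them (a minimal
sketch, assuming the CRI-O socket path quoted in the kubeadm output
below; the container ID is just a placeholder):

  # List all Kubernetes containers, including ones that have already exited.
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep -E 'kube-apiserver|kube-scheduler'

  # Inspect the logs of a failing container using its ID from the output above.
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINER_ID>
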
> Thank you.
>
> Best Regards,
> Ueha
>
> -----Original Message-----
> From: Michał Dulko <mdulko at redhat.com>
> Sent: Thursday, July 13, 2023 3:48 PM
> To: openstack-discuss at lists.openstack.org
> Subject: Re: [kuryr][tacker] fails to bump the k8s version to v1.26
>
> On Tue, 2023-07-04 at 07:53 +0000, Ayumu Ueha (Fujitsu) wrote:
> >
> >
> >
> > Hi kuryr-kubernetes team,
> >
> > Tacker uses kuryr-kubernetes to set up its Kubernetes environment.
> > In the Bobcat release we will bump the supported Kubernetes version
> > to 1.26.6, and when I tried to bump the version in DevStack's
> > local.conf with patch [1], the following error occurred (please see
> > the Zuul log [2] for details).
> > ==============================
> > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
> > [kubelet-check] Initial timeout of 40s passed.
> >
> > Unfortunately, an error has occurred:
> >         timed out waiting for the condition
> >
> > This error is likely caused by:
> >         - The kubelet is not running
> >         - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
> >
> > If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
> >         - 'systemctl status kubelet'
> >         - 'journalctl -xeu kubelet'
> >
> > Additionally, a control plane component may have crashed or exited when started by the container runtime.
> > To troubleshoot, list all containers using your preferred container runtimes CLI.
> > Here is one example how you may list all running Kubernetes containers by using crictl:
> >         - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
> >         Once you have found the failing container, you can inspect its logs with:
> >         - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
> > error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
> > ==============================
> >
> > I know kuryr-kubernetes does not yet support K8s 1.26, but do you
> > know of any way to avoid the above error?
> > I suspect this is caused by a change introduced since version 1.25,
> > but I'm not sure which one is affecting it...
> >
> > Also, when will kuryr-kubernetes support K8s v1.26? Will the Bobcat
> > release support it?
> > Please kindly let me know if you know anything. Thank you.
>
> Kuryr-kubernetes itself supports K8s 1.26 and 1.27, as we test it with
> these versions as part of OpenShift. What doesn't seem to work is the
> DevStack plugin, I guess due to some breaking changes in kubelet or
> kubeadm.
>
> My advice would be to try deploying DevStack with these settings on
> your local machine; then you should have access to all the logs needed
> to figure out what is causing the problem with the version bump.
>
> > [1] https://review.opendev.org/c/openstack/tacker/+/886935
> > Change the parameters:
> > KURYR_KUBERNETES_VERSION: 1.25.6 to 1.26.6
> > CRIO_VERSION: 1.25 to 1.26
> > [2]
> > https://zuul.opendev.org/t/openstack/build/1a4061da74e640368da133ba219b54d9/log/controller-k8s/logs/devstacklog.txt#9761-9816
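
For reference, a minimal local.conf change along those lines would look
roughly like the snippet below. The enable_plugin line is just my
assumption about how the plugin is loaded in your setup; the two
variables are the ones you changed in [1]:

  [[local|localrc]]
  # Assumed: load the kuryr-kubernetes DevStack plugin.
  enable_plugin kuryr-kubernetes https://opendev.org/openstack/kuryr-kubernetes

  # Version bump as in [1].
  KURYR_KUBERNETES_VERSION=1.26.6
  CRIO_VERSION=1.26
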
> >
> > Best Regards,
> > Ueha
>
>