Yoga Magnum: csi-cinder-controllerplugin-0 fails to initialize properly because of obsolete API
Hi,

We recently upgraded our cluster to Yoga and since then we cannot successfully start pods in clusters using K8s 1.23 that require a volume. The volume is properly created but attachment fails because it tries to use v1beta1.CSINode and v1beta1.VolumeAttachment, which no longer exist. I found a reference to this in https://github.com/kubernetes/cloud-provider-openstack/issues/1845 but the way to fix it is unclear. I tried to use the latest versions of the CSI-related images from registry.k8s.io (playing with labels and source), but I then got another problem which may be related (it is my guess) to the fact that I'm using too recent versions.

Is somebody successfully using the Magnum Yoga / K8s 1.23 combination, and what is the trick to do it?

Thanks in advance for any help. Best regards,

Michel
Hi Oliver,

Thanks for your excellent post! You described very well everything that needs to be done... and all the steps I went through... But I have not seen how you fixed the CSI problem. Is it enough to define the csi_snapshotter_tag? I tried this this morning but was not able to find the version I was supposed to use.

BTW, I see that you are using flannel as the network driver. I'm using calico; not sure it makes any difference for this problem anyway.

Cheers, Michel

On 15/03/2024 at 17:09, Oliver Weinmann wrote:
Hi Michel,
Maybe my old blogpost can help you:
* https://www.roksblog.de/deploy-kubernetes-clusters-in-openstack-within-minutes-with-magnum/
Best regards, Oliver
Sent from my iPhone
Oliver,

Ah OK, so we are at the same point! There was one sentence saying that you managed to fix the problems and that we should keep reading; I probably misread/misunderstood... One of the problems with tags that are compatible with the new API is that they are not in quay.io, so you need to patch the default location for the tags you modify, unless you have your own registry with everything, pointed to by container_infra_prefix (but that is another story!). I'm still interested to hear from somebody who is able to run something more recent than 1.21 with Magnum, and which OpenStack version they use...

Thanks for your answers. Michel

On 15/03/2024 at 18:38, Oliver Weinmann wrote:
Hi Michel,
Sorry, I just read my post again. I never managed to fix 1.23 under Yoga. I remember asking the same question on the mailing list back then and the solutions provided didn't work.
I think you need to change a lot more tags than just snapshotter.
Sent from my iPhone
On 15.03.2024 at 18:32, Oliver Weinmann <oliver.weinmann@me.com> wrote:
Hi Michel,
It’s been quite some time but as far as I can remember I only changed the snapshotter tag.
What Fedora CoreOS version and k8s are you using?
Cheers, Oliver
Sent from my iPhone
Oliver,

Thanks for this helpful information. We have always delayed setting up a local registry, but I think you are right: we probably need to do it to make it easier to work around problems like the ones we have with Magnum.

Michel

On 16/03/2024 at 07:21, Oliver Weinmann wrote:
Hi Michel,
I found my old post again:
* https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032149.html
I think you need to create a local registry where you pull all the Docker images upfront, since Magnum tries to pull the csi-snapshotter image from a wrong source. Also your k8s cluster might need containerd.
For the local registry you can check my other blogposts:
* https://www.roksblog.de/openstack-magnum-insecure-registry/
Hope this helps. Unfortunately I don't have a Yoga cluster up and running at the moment to test it.
Sent from my iPhone
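[As a sketch of the pre-pulling approach suggested above: the registry host and the image/tag list here are assumptions to be adjusted to whatever your cluster template labels actually reference, not a tested recipe.]

```shell
# Mirror the CSI sidecar images into a local registry so clusters
# no longer depend on the upstream source Magnum tries to pull from.
REG=myregistry.example.com:5000   # hypothetical local registry
for img in csi-attacher:v4.2.0 csi-provisioner:v3.2.2 \
           csi-snapshotter:v6.2.1 csi-resizer:v1.8.0; do
  skopeo copy "docker://registry.k8s.io/sig-storage/${img}" \
              "docker://${REG}/sig-storage/${img}"
done
```

The cluster template would then be pointed at the mirror via the container_infra_prefix label.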
Hi,

For the record, it seems I managed to fix the CSI issue and to create K8s 1.23 clusters with Yoga Magnum (K8s 1.23 being the supported version, according to the release notes). With the information provided by Oliver and a few other posts, I understood that I should try to run much newer CSI sidecars than those provided by Yoga Magnum, as sidecar versions tend to be bound more to K8s versions than to Magnum ones.

As I didn't have a local registry already set up, I decided to patch /usr/lib/python3.6/site-packages/magnum/drivers/common/templates/kubernetes/fragments/enable-cinder-csi.sh to use "registry.k8s.io/sig-storage/" as the default registry source for sidecars, to be able to use recent versions. The versions I use (defined as labels in the cluster template) are:

cinder_csi_plugin_tag=v1.26.4 csi_snapshotter_tag=v6.2.1 csi_attacher_tag=v4.2.0 csi_node_driver_registrar_tag=v2.10.0 csi_provisioner_tag=v3.2.2 csi_resizer_tag=v1.8.0

With these CSI sidecar versions, I was able to get the CSI-related pods running, but I had to fix the cluster role csi-snapshotter-role (in the .sh file mentioned above) to add the right to patch "volumeattachments/status" (without this, the error message is explicit in the csi-cinder-controllerplugin-0 pod log).

With these mods, I have been able to create several ephemeral and persistent volumes, so it seems to be working. There may be additional details I have not seen, but I'm confident that it will be possible to solve them, should they arise...

If it helps, I attach the patch for enable-cinder-csi.sh, without any warranty that it doesn't hurt in some situations (in particular, it probably breaks templates using older versions of K8s without the appropriate labels, as the old versions of the sidecars will not be found in registry.k8s.io)... If you plan to use it, be aware that it is at your own risk...
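[For reference, the missing permission amounts to an RBAC rule along the lines of the fragment below: a sketch to merge into the clusterrole rules defined in enable-cinder-csi.sh; verify the exact role name and the rule it needs against your Magnum version and the error in the csi-cinder-controllerplugin-0 log.]

```yaml
# Extra rule granting patch on volumeattachments/status, which the
# newer sidecars require and the Yoga manifest does not include.
- apiGroups: ["storage.k8s.io"]
  resources: ["volumeattachments/status"]
  verbs: ["patch"]
```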
Note that the only required part of the patch is the one about the clusterrole: a cleaner solution for installing the latest versions of the sidecars would be to have a local registry and copy the required versions into it.

Best regards, Michel

On 16/03/2024 at 14:45, Oliver Weinmann wrote:
Hi Michel,
Yes, a local registry can be helpful. I highly recommend upgrading to a newer OpenStack release, or trying to patch Magnum in Yoga to use CAPI. It is so much better and just works.
I was able to patch it in 2023.1 and 2023.2.
* https://www.roksblog.de/openstack-magnum-cluster-api-driver/
This will not help you with existing clusters but it is absolutely helpful for any new cluster that you deploy.
You can easily migrate your pods from the old clusters to the new ones deployed with CAPI using Velero.
* https://www.roksblog.de/kubernetes-backup-with-velero/
Have a nice weekend
Sent from my iPhone
Hi Michel,

Yeah, a local registry will be helpful; we started running our own registry after images disappeared from the provider's repo. There is really no way around that except pinning your own images.

By the way, if you are running K8s 1.23.x you may want to use the 1.23.x plugins; Magnum tests those [1]. There isn't a Yoga label but Antelope uses the same, so you can refer to the Antelope labels, e.g. cloud_provider_tag=v1.23.4, cinder_csi_plugin_tag=v1.23.4, k8s_keystone_auth_tag=v1.23.4, ...

By the way, anything <v1.25 is EOL at this point, and there are so many changes between 1.21 -> 1.23 -> 1.25 that I really recommend >1.25 (probably 1.26 or 1.27) with Bobcat/Caracal.

Hope that helps.

[1] https://docs.openstack.org/magnum/latest/user/#supported-versions

Regards, Jake

On 18/3/2024 9:59 am, Michel Jouvin wrote:
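[A sketch of applying those version-matched labels when creating a template: the template name, image, flavors and network driver below are placeholders for illustration; only the three tag values are the Antelope-tested ones mentioned above.]

```shell
openstack coe cluster template create k8s-1.23-calico \
  --coe kubernetes --image fedora-coreos-35 \
  --external-network public \
  --master-flavor m1.medium --flavor m1.medium \
  --network-driver calico \
  --labels cloud_provider_tag=v1.23.4,cinder_csi_plugin_tag=v1.23.4,k8s_keystone_auth_tag=v1.23.4
```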
Hi Jake,

Thanks for this information. I was not aware of the labels referring to a K8s version; it's handy, I'll test them. I'm convinced we should be running something more recent, but for historical reasons we have been very late with the OpenStack upgrades (we basically did none for 3 years) and we are catching up now. In 6 months we managed to get from Train to Yoga, including the EL7 -> EL8 upgrade (which was in fact the big part, as it requires live-migrating all VMs before reinstalling computes if you want an impact-less upgrade; in fact we moved from Ussuri to Yoga in 1 month, one upgrade a week, without service interruption!). We now have to plan the EL8 -> EL9 upgrade to move further, but our goal is to have Bobcat this spring to be able to run 1.26 or more...

Cheers, Michel

On 19/03/2024 at 08:12, Jake Yip wrote:
participants (3)
- Jake Yip
- Michel Jouvin
- Oliver Weinmann