Hey Oliver:

It seems that the error message you're sharing is caused by the deletion of a node group, not by the creation. Feel free to join our Slack channel for help there too.

Thanks,
Mohammed

________________________________
From: Oliver Weinmann <oliver.weinmann@me.com>
Sent: Monday, December 18, 2023 2:30:20 AM
To: kmceliker@gmail.com <kmceliker@gmail.com>
Cc: openstack-discuss@lists.openstack.org <openstack-discuss@lists.openstack.org>
Subject: Re: about magnum capi for production

Hi all,

I'm also trying to get the Vexxhost CAPI driver working under Kolla-Ansible. Many thanks to Nguyen Huu Khoi's GitHub page; it was a very good starting point. My goal is to collect all the info needed to get this working in a single place (my blog), since it is currently scattered across different websites. I managed to create the cluster template (a rough sketch of the command is included after the traceback below), but the cluster creation fails immediately. This is the error I get in magnum-conductor.log:

==> /var/log/kolla/magnum/magnum-conductor.log <==
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib64/python3.9/site-packages/magnum/service/periodic.py", line 100, in update_status
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall     ng.destroy()
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib64/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall     return fn(self, *args, **kwargs)
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib64/python3.9/site-packages/magnum/objects/nodegroup.py", line 175, in destroy
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall     self.dbapi.destroy_nodegroup(self.cluster_id, self.uuid)
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib64/python3.9/site-packages/magnum/db/sqlalchemy/api.py", line 832, in destroy_nodegroup
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall     raise exception.NodeGroupNotFound(nodegroup=nodegroup_id)
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall magnum.common.exception.NodeGroupNotFound: Nodegroup 4277e9e6-5c3e-4cce-a1cf-1f5e8c2f0689 could not be found.
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall
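For reference, the cluster template was created roughly like the sketch below. The image, external network, network driver and label values are placeholders rather than my exact settings (the flavor is the one shown in the node group listing further down):

openstack coe cluster template create k8s-v1.24.16 \
    --coe kubernetes \
    --image <ubuntu-2204-kubernetes-image> \
    --external-network <external-network> \
    --master-flavor m1.kubernetes.small \
    --flavor m1.kubernetes.small \
    --network-driver calico \
    --labels kube_tag=v1.24.16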
(2023.1) [vagrant@seed ~]$ openstack coe cluster list
+--------------------------------------+--------------------------------------+---------+------------+--------------+-----------------+---------------+
| uuid                                 | name                                 | keypair | node_count | master_count | status          | health_status |
+--------------------------------------+--------------------------------------+---------+------------+--------------+-----------------+---------------+
| b4ce540f-78a9-4c5d-a687-e992b3bd19a7 | k8s-flan-small-37-v1.23.3-containerd | mykey   | 2          | 1            | CREATE_COMPLETE | HEALTHY       |
| e8acc6da-f937-4e8f-9df8-1728a8079ed0 | k8s-v1.24.16                         | mykey   | 2          | 1            | CREATE_FAILED   | None          |
+--------------------------------------+--------------------------------------+---------+------------+--------------+-----------------+---------------+

(2023.1) [vagrant@seed ~]$ openstack coe nodegroup list k8s-v1.24.16
+--------------------------------------+----------------+---------------------+--------------------------------------+------------+--------------------+--------+
| uuid                                 | name           | flavor_id           | image_id                             | node_count | status             | role   |
+--------------------------------------+----------------+---------------------+--------------------------------------+------------+--------------------+--------+
| 21c10537-e3d3-44cf-8e58-731cfeb5b9fe | default-master | m1.kubernetes.small | 9d989f56-359b-4d6a-a914-926e0ea938d7 | 1          | CREATE_IN_PROGRESS | master |
| 12dec017-38cc-42d9-b944-649ae356907d | default-worker | m1.kubernetes.small | 9d989f56-359b-4d6a-a914-926e0ea938d7 | 2          | CREATE_IN_PROGRESS | worker |
+--------------------------------------+----------------+---------------------+--------------------------------------+------------+--------------------+--------+

Cheers,
Oliver

On 17. Dec 2023, at 22:43, kmceliker@gmail.com wrote:

Here is an example of CAPI deployment code for OpenStack, mate, using the clusterctl tool and a cluster-template.yaml file. This code will create a cluster named capi-openstack with one control plane node and three worker nodes, using the ubuntu-2204 image and the m1.medium flavor. You need to replace the placeholders with your own values:

# Install clusterctl
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.0.1/clus... \
    -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl

# Set environment variables
export OPENSTACK_CLOUD=<openstack-cloud>
export OPENSTACK_USERNAME=<openstack-username>
export OPENSTACK_PASSWORD=<openstack-password>
export OPENSTACK_DOMAIN_NAME=<openstack-domain-name>
export OPENSTACK_PROJECT_ID=<openstack-project-id>
export OPENSTACK_SSH_KEY_NAME=<openstack-ssh-key-name>
export OPENSTACK_DNS_NAMESERVERS=<openstack-dns-nameservers>
export OPENSTACK_EXTERNAL_NETWORK_ID=<openstack-external-network-id>
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=m1.medium
export OPENSTACK_NODE_MACHINE_FLAVOR=m1.medium
export OPENSTACK_IMAGE_NAME=ubuntu-2204
export KUBERNETES_VERSION=v1.23.15

# Initialize clusterctl (pinning the provider versions here)
clusterctl init --infrastructure openstack:v0.6.0 --bootstrap kubeadm:v0.4.4 --control-plane kubeadm:v0.4.4

# Generate the cluster manifest
clusterctl config cluster capi-openstack \
    --kubernetes-version $KUBERNETES_VERSION \
    --control-plane-machine-count=1 \
    --worker-machine-count=3 > cluster-template.yaml

# Create the cluster by applying the generated manifest
kubectl --kubeconfig ~/.kube/config apply -f cluster-template.yaml --namespace default

Also, you can enable Magnum debug logging on your OpenStack deployment and share the logs with us (a rough sketch is in the P.S. below), or take a deeper look at the following link:
https://github.com/kubernetes-sigs/cluster-api-provider-openstack

Best,
Kerem Çeliker
Head of Cloud Architecture
tr.linkedin.com/in/keremceliker
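P.S. A minimal sketch of enabling Magnum debug logging. This assumes magnum.conf lives at /etc/magnum/magnum.conf, the crudini helper is installed and the services run under systemd; the config path, tooling and service names will differ on containerised deployments, so adjust for your environment:

# Set debug = True in the [DEFAULT] section of magnum.conf
sudo crudini --set /etc/magnum/magnum.conf DEFAULT debug True
# Restart the Magnum services so the change takes effect
sudo systemctl restart magnum-api magnum-conductor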