I am getting this error now:

2023-12-18 14:48:32.912 18 ERROR oslo_messaging.rpc.server     resource = self.get_object()
2023-12-18 14:48:32.912 18 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/magnum_cluster_api/resources.py", line 163, in get_object
2023-12-18 14:48:32.912 18 ERROR oslo_messaging.rpc.server     assert CONF.cluster_template.kubernetes_allowed_network_drivers == ["calico"]
2023-12-18 14:48:32.912 18 ERROR oslo_messaging.rpc.server AssertionError
2023-12-18 14:48:32.912 18 ERROR oslo_messaging.rpc.server

On Mon, Dec 18, 2023 at 5:11 AM Nguyễn Hữu Khôi <nguyenhuukhoinw@gmail.com> wrote:
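For what it's worth, the assertion in that traceback compares the cluster template's network driver against ["calico"], so a guess (not verified against your environment) is that the template was created with a different network driver. Recreating the template with calico set explicitly might get past it; the image, network, and flavor names below are placeholders you would replace with your own:

openstack coe cluster template create k8s-capi-calico \
  --coe kubernetes \
  --image ubuntu-2204 \
  --external-network public \
  --master-flavor m1.medium \
  --flavor m1.medium \
  --network-driver calico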
This is not really about documentation. We have too many different cloud environments; even with a good blog post, are you sure that what you write will work in all of them? Nguyen Huu Khoi
On Mon, Dec 18, 2023 at 5:01 PM Satish Patel <satish.txt@gmail.com> wrote:
Hi Oliver,
I am 100% with you. There isn't any good technical blog or document about integration with the magnum-capi driver. There is good information out there, but no good technical details with simple steps to follow, or guidance on how to troubleshoot the components. For the last few days I have been struggling and still have no luck; I have tried all possible combinations. If my setup works, I will surely write a good blog post.
On Mon, Dec 18, 2023 at 3:33 AM Oliver Weinmann <oliver.weinmann@me.com> wrote:
Hi all,
I’m also trying to get the Vexxhost CAPI driver working under Kolla-Ansible. Many thanks to Nguyen Huu Khoi for his GitHub page; it was a very good starting point. My goal is to collect all the info needed to get this working in a single place (my blog), since the info is currently scattered across different websites. I managed to create the cluster template, but cluster creation fails immediately. This is the error that I get in magnum-conductor.log:
==> /var/log/kolla/magnum/magnum-conductor.log <==
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib64/python3.9/site-packages/magnum/service/periodic.py", line 100, in update_status
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall     ng.destroy()
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib64/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall     return fn(self, *args, **kwargs)
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib64/python3.9/site-packages/magnum/objects/nodegroup.py", line 175, in destroy
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall     self.dbapi.destroy_nodegroup(self.cluster_id, self.uuid)
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib64/python3.9/site-packages/magnum/db/sqlalchemy/api.py", line 832, in destroy_nodegroup
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall     raise exception.NodeGroupNotFound(nodegroup=nodegroup_id)
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall magnum.common.exception.NodeGroupNotFound: Nodegroup 4277e9e6-5c3e-4cce-a1cf-1f5e8c2f0689 could not be found.
2023-12-17 23:27:03.482 7 ERROR oslo.service.loopingcall
(2023.1) [vagrant@seed ~]$ openstack coe cluster list
+--------------------------------------+--------------------------------------+---------+------------+--------------+-----------------+---------------+
| uuid                                 | name                                 | keypair | node_count | master_count | status          | health_status |
+--------------------------------------+--------------------------------------+---------+------------+--------------+-----------------+---------------+
| b4ce540f-78a9-4c5d-a687-e992b3bd19a7 | k8s-flan-small-37-v1.23.3-containerd | mykey   | 2          | 1            | CREATE_COMPLETE | HEALTHY       |
| e8acc6da-f937-4e8f-9df8-1728a8079ed0 | k8s-v1.24.16                         | mykey   | 2          | 1            | CREATE_FAILED   | None          |
+--------------------------------------+--------------------------------------+---------+------------+--------------+-----------------+---------------+

(2023.1) [vagrant@seed ~]$ openstack coe nodegroup list k8s-v1.24.16
+--------------------------------------+----------------+---------------------+--------------------------------------+------------+--------------------+--------+
| uuid                                 | name           | flavor_id           | image_id                             | node_count | status             | role   |
+--------------------------------------+----------------+---------------------+--------------------------------------+------------+--------------------+--------+
| 21c10537-e3d3-44cf-8e58-731cfeb5b9fe | default-master | m1.kubernetes.small | 9d989f56-359b-4d6a-a914-926e0ea938d7 | 1          | CREATE_IN_PROGRESS | master |
| 12dec017-38cc-42d9-b944-649ae356907d | default-worker | m1.kubernetes.small | 9d989f56-359b-4d6a-a914-926e0ea938d7 | 2          | CREATE_IN_PROGRESS | worker |
+--------------------------------------+----------------+---------------------+--------------------------------------+------------+--------------------+--------+
Cheers, Oliver
On 17. Dec 2023, at 22:43, kmceliker@gmail.com wrote:
Here is an example of a CAPI deployment for OpenStack, mate, using the clusterctl tool and a cluster-template.yaml file. This will create a cluster named capi-openstack with one control plane node and three worker nodes, using the ubuntu-2204 image and the m1.medium flavor. You need to replace the placeholders with your own values:
# Install clusterctl
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.0.1/clus... -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl
# Set environment variables
export OPENSTACK_CLOUD=<openstack-cloud>
export OPENSTACK_USERNAME=<openstack-username>
export OPENSTACK_PASSWORD=<openstack-password>
export OPENSTACK_DOMAIN_NAME=<openstack-domain-name>
export OPENSTACK_PROJECT_ID=<openstack-project-id>
export OPENSTACK_SSH_KEY_NAME=<openstack-ssh-key-name>
export OPENSTACK_DNS_NAMESERVERS=<openstack-dns-nameservers>
export OPENSTACK_EXTERNAL_NETWORK_ID=<openstack-external-network-id>
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=m1.medium
export OPENSTACK_NODE_MACHINE_FLAVOR=m1.medium
export OPENSTACK_IMAGE_NAME=ubuntu-2204
export KUBERNETES_VERSION=v1.23.15
# Initialize clusterctl
clusterctl init --infrastructure openstack
# Create cluster
clusterctl config cluster capi-openstack --kubernetes-version $KUBERNETES_VERSION --control-plane-machine-count=1 --worker-machine-count=3 > cluster-template.yaml
clusterctl create cluster --kubeconfig ~/.kube/config --infrastructure openstack:v0.6.0 --bootstrap kubeadm:v0.4.4 --control-plane kubeadm:v0.4.4 --cluster capi-openstack --namespace default --from cluster-template.yaml
Also, you can enable Magnum debug logging in OpenStack and share the output with us, or take a deeper look at the following link: https://github.com/kubernetes-sigs/cluster-api-provider-openstack
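In a Kolla-Ansible deployment, the usual way to turn on the debug logging mentioned above is a service config override (a sketch; the override path follows standard Kolla-Ansible conventions, adjust to your setup):

# /etc/kolla/config/magnum.conf
[DEFAULT]
debug = True

then push it out with:

kolla-ansible -i <inventory> reconfigure --tags magnum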
Best,
Kerem Çeliker
Head of Cloud Architecture
tr.linkedin.com/in/keremceliker