[octavia] [octavia-ingress-controller][k8s]
Hi,

We are following https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/octavia-ingress-controller/using-octavia-ingress-controller.md to create the octavia ingress controller. We are on the OpenStack Yoga release, with octavia-ingress-controller version registry.k8s.io/provider-os/octavia-ingress-controller:v1.28.1.

We are getting the following error from the octavia-ingress-controller pod:

===========
❯ k logs octavia-ingress-controller-0 -n kube-system
2024/01/10 17:47:16 Running command:
Command env: (log-file=, also-stdout=false, redirect-stderr=true)
Run from directory:
Executable path: /bin/octavia-ingress-controller
Args (comma-delimited): /bin/octavia-ingress-controller,--config=/etc/config/octavia-ingress-controller-config.yaml
2024/01/10 17:47:16 Now listening for interrupts
INFO [2024-01-10T17:47:16Z] Using config file  file=/etc/config/octavia-ingress-controller-config.yaml
W0110 17:47:16.597288      12 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
FATAL [2024-01-10T17:47:16Z] failed to initialize openstack client  error="Get \"/\": unsupported protocol scheme \"\""
2024/01/10 17:47:16 running command: exit status 1
=======

The following are our service account / ConfigMap / StatefulSet YAML files:

==============
serviceaccount.yaml
-----
kind: ServiceAccount
apiVersion: v1
metadata:
  name: octavia-ingress-controller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: octavia-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: octavia-ingress-controller
    namespace: kube-system
-----

config.yaml
----
kind: ConfigMap
apiVersion: v1
metadata:
  name: octavia-ingress-controller-config
  namespace: kube-system
data:
  config: |
    cluster-name: cedev17
    openstack:
      auth_url: https://srelab501.wpc.az1.eng.pdx.wd:5000
      project_domain_name: Default
      user_domain_name: Default
      project_name: cedev17.t501.eng.pdx.wd
      project_id: 94ba42c68e1346189b666f17e49e22f5
      username: admin
      user-id: b0f2b611d99e444cbe1c1fa068940411
      password: password
      region_name: RegionOne
      cacert: /etc/pki/tls/certs/ca-bundle.crt
    octavia:
      subnet-id: 37abc0d8-f5ab-4109-8864-622ab4b47b1f
      floating-network-id: 42d4ba58-ccd6-407b-a887-5727ee7fe275
      manage-security-groups: false
      provider: amphora
-----

---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: octavia-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: octavia-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: octavia-ingress-controller
  serviceName: octavia-ingress-controller
  template:
    metadata:
      labels:
        k8s-app: octavia-ingress-controller
    spec:
      serviceAccountName: octavia-ingress-controller
      tolerations:
        - effect: NoSchedule # Make sure the pod can be scheduled on master kubelet.
          operator: Exists
        - key: CriticalAddonsOnly # Mark the pod as a critical add-on for rescheduling.
          operator: Exists
        - effect: NoExecute
          operator: Exists
      imagePullSecrets:
        - name: regcred
      containers:
        - name: octavia-ingress-controller
          image: docker-dev-artifactory.workday.com/wpc5/dev/octavia-ingress-controller:v1.28.1
          imagePullPolicy: IfNotPresent
          args:
            - /bin/octavia-ingress-controller
            - --config=/etc/config/octavia-ingress-controller-config.yaml
          volumeMounts:
            - mountPath: /etc/kubernetes
              name: kubernetes-config
              readOnly: true
            - name: ingress-config
              mountPath: /etc/config
      hostNetwork: true
      volumes:
        - name: kubernetes-config
          hostPath:
            path: /etc/kubernetes
            type: Directory
        - name: ingress-config
          configMap:
            name: octavia-ingress-controller-config
            items:
              - key: config
                path: octavia-ingress-controller-config.yaml
------

Thank you in advance for help!

-Johnny
Hi,

This looks like an issue with the pod's connectivity to the OpenStack API, and Keystone in particular. I'd try debugging it by replacing the StatefulSet's command with `sleep inf`, logging into the pod and investigating connectivity to the OpenStack API from there.

In general you'll get more help by raising an issue on GitHub: cloud-provider-openstack belongs to K8s, not OpenStack.

Thanks,
Michał
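A minimal sketch of that debugging approach, for reference (the commands below are illustrative and assume the manifests quoted above; `sh`, `curl` and a `sleep` that accepts "inf" may or may not exist in the image — with a busybox `sleep`, use a large number such as 86400 instead):

  # Override the entrypoint so the pod idles instead of crash-looping
  # (strategic merge patch; the StatefulSet rolls the pod automatically).
  kubectl -n kube-system patch statefulset octavia-ingress-controller \
    -p '{"spec":{"template":{"spec":{"containers":[{"name":"octavia-ingress-controller","command":["sleep","inf"],"args":[]}]}}}}'

  # Open a shell in the recreated pod.
  kubectl -n kube-system exec -it octavia-ingress-controller-0 -- sh

  # From inside the pod, check connectivity to Keystone using the auth_url
  # from the ConfigMap. Use --cacert if the CA bundle exists inside the
  # container, or -k for a pure connectivity check.
  curl -v --cacert /etc/pki/tls/certs/ca-bundle.crt https://srelab501.wpc.az1.eng.pdx.wd:5000/v3

If Keystone is reachable, curl should return a JSON version document; a DNS failure, timeout or TLS error would instead point at the pod's networking or the CA bundle.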
participants (2)
- Johnny Yang
- Michał Dulko