As Robson mentioned, please make sure you have the correct labels applied to your nodes. Depending on how you're deploying Kubernetes, you may have to add these labels manually. The default node selector key/value labels for the ingress controller can be found here: https://github.com/openstack/openstack-helm-infra/blob/master/ingress/values...

From what I see, you likely don't have these labels applied. Please apply them and see if this fixes your issue.
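For reference, a minimal sketch of applying the missing label, assuming the usual openstack-helm default of openstack-control-plane=enabled as the node selector key/value (confirm against the values.yaml linked above; <node-name> is a placeholder for one of your nodes):

# key/value here are the assumed chart defaults; check the chart's values.yaml
kubectl label nodes <node-name> openstack-control-plane=enabled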
Steve

On Wed, May 29, 2019 at 2:38 PM Robson Ramos Barreto <robson.rbarreto@gmail.com> wrote:
Hi,
I'm trying with Helm too, and I have these node labels:
kubectl label nodes --all openstack-control-plane=enabled
kubectl label nodes --all openstack-compute-node=enabled
kubectl label nodes --all openvswitch=enabled
kubectl label nodes --all linuxbridge=enabled
kubectl label nodes --all ceph-mon=enabled
kubectl label nodes --all ceph-osd=enabled
kubectl label nodes --all ceph-mds=enabled
kubectl label nodes --all ceph-rgw=enabled
kubectl label nodes --all ceph-mgr=enabled
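A quick way to confirm the labels were applied (plain kubectl; the label in the second command is just the control-plane one from the list above):

# show every node with its labels
kubectl get nodes --show-labels

# list only the nodes matching the ingress pods' selector
kubectl get nodes -l openstack-control-plane=enabled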
From kubectl describe pod you can see the node selector the pod needs.
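For example, with one of the pending pods from the original mail (add -n <namespace> if the pods live outside your current namespace):

# the Node-Selectors field shows which labels the scheduler is matching on
kubectl describe pod ingress-error-pages-899888c7-pnxhv | grep -i node-selector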
Regards
On Wed, May 29, 2019 at 3:57 PM Akki yadav <yadav.akshay58@gmail.com> wrote:
Hello team,

I am running the following command in order to deploy the ingress controller:

./tools/deployment/multinode/020-ingress.sh

It creates two pods, ingress-error-pages-899888c7-pnxhv and ingress-error-pages-899888c7-w5d5b, which remain in the Pending state, and the script ends after 900 seconds. kubectl describe on the pods reports:

Warning  FailedScheduling  60s (x3 over 2m12s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
As only the default labels are associated with the cluster nodes, do I need to add any specific ones? If yes, please tell me where and how to add them.
Please guide me on how to resolve this issue. Thanks in advance.
Regards, Akshay