Hi Mohammed,

After digging into it, I found that the autoscaler pod is not getting scheduled on any node. I am getting the following error. Full output is here - https://paste.opendev.org/show/bmJY3ZrRf07S1Q6OTaeA/

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  40m                default-scheduler  0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
  Warning  FailedScheduling  15m (x5 over 35m)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

Do you think it is because of the following node selector?

Node-Selectors:  openstack-control-plane=enabled
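If that selector is the culprit, one way to confirm might be to check whether any node in the management cluster actually carries that label, and add it if missing (a sketch; `<node-name>` is a placeholder for one of your actual nodes):

```shell
# List nodes that carry the label the autoscaler pod requires
kubectl get nodes -l openstack-control-plane=enabled

# If nothing is listed, label one of the management-cluster nodes
# (replace <node-name> with a real node from `kubectl get nodes`)
kubectl label node <node-name> openstack-control-plane=enabled
```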


On Thu, Jan 4, 2024 at 9:39 AM Satish Patel <satish.txt@gmail.com> wrote:
Hi Mohammed,

Yes, I am using the CAPI driver. In the k8s management cluster my autoscaler pods are stuck in Pending status. Did I miss something?

root@os2-capi-01:~# kubectl get deploy -A
NAMESPACE                           NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager       1/1     1            1           8d
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager   1/1     1            1           8d
capi-system                         capi-controller-manager                         1/1     1            1           8d
capo-system                         capo-controller-manager                         1/1     1            1           8d
cert-manager                        cert-manager                                    1/1     1            1           8d
cert-manager                        cert-manager-cainjector                         1/1     1            1           8d
cert-manager                        cert-manager-webhook                            1/1     1            1           8d
kube-system                         calico-kube-controllers                         1/1     1            1           8d
kube-system                         coredns                                         2/2     2            2           8d
kube-system                         dns-autoscaler                                  1/1     1            1           8d
magnum-system                       kube-2lke8-autoscaler                           0/1     1            0           4h25m
magnum-system                       kube-d6n1t-autoscaler                           0/1     1            0           14h
magnum-system                       kube-dmmks-autoscaler                           0/1     1            0           14h

## Pods are in Pending status

root@os2-capi-01:~# kubectl get pod -n magnum-system
NAME                                     READY   STATUS    RESTARTS   AGE
kube-2lke8-autoscaler-77bd94cc6f-5xbjg   0/1     Pending   0          4h25m
kube-d6n1t-autoscaler-7486955bd4-fwfc9   0/1     Pending   0          14h
kube-dmmks-autoscaler-596f9d48c-wzlbk    0/1     Pending   0          14h

## Logs are empty.. 

root@os2-capi-01:~# kubectl logs kube-2lke8-autoscaler-77bd94cc6f-5xbjg -n magnum-system
root@os2-capi-01:~# kubectl logs kube-dmmks-autoscaler-596f9d48c-wzlbk -n magnum-system

On Thu, Jan 4, 2024 at 1:16 AM Mohammed Naser <mnaser@vexxhost.com> wrote:
Hi Satish:

If this is based on the CAPI driver, it provisions a cluster-autoscaler instance which deploys new nodes when there are pods in Pending state.
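Under the hood, the cluster-autoscaler's Cluster API provider discovers which node groups it may scale through annotations on the MachineDeployment, roughly like this (a sketch; the exact min/max values depend on your cluster template):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  annotations:
    # Scaling bounds read by cluster-autoscaler's clusterapi provider
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
```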


Thanks
Mohammed

From: Satish Patel <satish.txt@gmail.com>
Sent: Wednesday, January 3, 2024 3:43:40 PM
To: OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: [magnum] autoscaling feature not working
 
Folks,

I am trying to enable the autoscaling feature in Magnum ("auto_scaling_enabled" = true) but somehow autoscaling is not working. Truthfully, I don't understand the workflow for scaling worker nodes up and down.

How does Magnum know that it needs to scale up or down without monitoring? On what basis will it trigger that action?