I think this is a bug that we fixed in the latest version of the Cluster API driver for Magnum.
I suggest updating to the latest version and trying on a new cluster.
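In case it helps, on a pip-based install the update is usually along these lines (the package is magnum-cluster-api on PyPI; the conductor restart step depends on how Magnum is deployed, so treat this as a sketch):

# pip install --upgrade magnum-cluster-api
# systemctl restart magnum-conductor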
From: Satish Patel <satish.txt@gmail.com>
Sent: Wednesday, January 10, 2024 11:51:24 PM
To: Mohammed Naser <mnaser@vexxhost.com>
Cc: OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: Re: [magnum] autoscaling feature not working

Hi Mohammed,
Yes! "kubectl label node/<node-name> openstack-control-plane=enabled" fixed the autoscaler scheduling issue, and now I can see the deployment is running. Thank you for the command.
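For anyone following along, a quick way to confirm it scheduled (this assumes the autoscaler runs in the magnum-system namespace, which matches the log lines below):

# kubectl -n magnum-system get pods -o wide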
For testing, I deployed a sample nginx app and created hundreds of replicas to see whether the autoscaler adds more worker nodes.
# kubectl scale deployment --replicas=150 nginx-deployment
As soon as I noticed my pods stuck in Pending, the autoscaler started creating new Nova VMs, but as soon as a VM came up it was suddenly deleted (this is happening in a loop: create VM, delete VM).
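I am watching the loop with plain kubectl, nothing cluster-specific:

# kubectl get pods --field-selector=status.phase=Pending
# kubectl get nodes -w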
I0111 04:35:13.427995 1 orchestrator.go:310] Final scale-up plan: [{MachineDeployment/magnum-system/kube-al0rs-default-worker-7cxtx 1->2 (max: 4)}]
I0111 04:35:13.428019 1 orchestrator.go:582] Scale-up: setting group MachineDeployment/magnum-system/kube-al0rs-default-worker-7cxtx size to 2
I0111 04:35:49.576173 1 static_autoscaler.go:405] 1 unregistered nodes present
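If I read that last line right, "unregistered nodes" means the VM booted but its node never joined the cluster, so the autoscaler removes it once the provision timeout expires. On the management cluster I am inspecting the Machine objects to see where they get stuck (assuming the magnum-system namespace and the kube-al0rs cluster name from the log above):

# kubectl -n magnum-system get machines
# clusterctl describe cluster kube-al0rs -n magnum-system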
Here are the full autoscaler logs: https://pastebin.com/AMQbGhRj
Do you know what could be wrong here, or do you have any clue where I should look for the culprit?
On Wed, Jan 10, 2024 at 4:09 PM Mohammed Naser <mnaser@vexxhost.com> wrote:
Yes, this is an assumption we’re making since we’re running things with Atmosphere.
Can you try running `kubectl label node/<node-name> openstack-control-plane=enabled`?
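That label matches the nodeSelector the autoscaler deployment uses in our setup; you can confirm it is applied with standard kubectl (-L just prints the label as a column):

# kubectl get nodes -L openstack-control-plane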
Thanks