[magnum] [kolla-ansible] [kayobe] [Victoria] Magnum Kubernetes cluster failure recovery

Tony Pearce tonyppe at gmail.com
Thu Sep 2 03:50:44 UTC 2021


Thanks Feilong and Sven.

> If so, cluster resize should be able to bring the cluster back. And you
can just resize the cluster to the current node number.  For that case,
magnum should be able to fix the heat stack.

I thought this too. But when I try to run "check stack" under Heat, it
fails. The log for this failure says the resource is missing, i.e. one of
the nodes is no longer there (which I already know about).

I also tried the cluster resize from Horizon, resizing the cluster to its
current/valid size (i.e. without the failed node that no longer exists),
and Horizon immediately fails with a red error in the corner of the web
page. Nothing is printed in the Magnum or Heat logs at all, and the Horizon
error is not really helpful: "Error: Unable to resize given cluster id:
1a8e1ed9-64b3-41b1-ab11-0f01e66da1d7.".
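
For reference, the Heat-side check was along these lines (the stack name
here is a placeholder for the cluster's Heat stack in my environment):

    openstack stack check <stack-name-or-id>
    openstack stack event list <stack-name-or-id>

The event list is where the "resource is missing" failure shows up.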

> Are you using the magnum auto healing feature by chance?

The "repair unhealthy nodes" option was chosen for this I believe. But I
didnt set up the cluster so I am not sure.
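
If it's useful: I understand the auto healing setting can be checked via
the cluster labels (the cluster name is a placeholder, and I'm assuming the
standard auto_healing_enabled / auto_healing_controller labels here):

    openstack coe cluster show <cluster-name> -c labels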

Based on your replies, I worked out how to initiate the cluster resize
using the CLI. After issuing the command, the missing node was rebuilt
immediately, so this appears to be an issue with Horizon only.
I wanted to get the resized cluster operating successfully before I
replied, but although the resize re-deployed the missing node, the resize
operation itself timed out and failed. Aside from a quick 30-minute
investigation I've not been able to do much more with it, and it has been
abandoned.
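
For the archive, the resize command was roughly this (the node count is a
placeholder for the cluster's current/valid size):

    openstack coe cluster resize 1a8e1ed9-64b3-41b1-ab11-0f01e66da1d7 <node-count>

There is also a --nodes-to-remove option for targeting a specific node for
removal.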

Thanks all the same for your help.

Tony Pearce

On Thu, 12 Aug 2021 at 05:06, feilong <feilong at catalystcloud.nz> wrote:

> Let me try to explain it from a design perspective:
>
> 1. Auto scaler: The cluster auto scaler now talks to the Magnum resize
> API directly to scale; see
>
> https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/magnum/magnum_manager_impl.go#L399
>
> 2. Auto healer: As you know, the auto scaler only cares about the worker
> nodes; it won't scale the master nodes. However, the auto healer can
> repair both master nodes and worker nodes. For worker node repair, the
> Magnum auto healer uses the Magnum resize API. But because the Magnum
> resize API doesn't support resizing master nodes, master node repair is
> done by a Heat stack update: the Magnum auto healer marks some resources
> of the master node as unhealthy, then calls Heat stack update to rebuild
> those resources.
>
>
> On 11/08/21 10:25 pm, Sven Kieske wrote:
> > On Mi, 2021-08-11 at 10:16 +0000, Sven Kieske wrote:
> >> the problem is that the kubernetes autoscaler talks directly to the
> >> openstack api, e.g. nova, for creating and destroying instances.
> > Never mind, I got that wrong.
> >
> > The autoscaler talks to heat, so there should be no problem (but heat
> > trips itself up on some error conditions).
> > I was in fact talking about the magnum auto healer (
> > https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/magnum-auto-healer/using-magnum-auto-healer.md
> > ) which seems to circumvent heat and talks directly with nova.
> >
> > Are you using the magnum auto healing feature by chance?
> >
> > HTH
> >
> --
> Cheers & Best regards,
>
> ------------------------------------------------------------------------------
> Feilong Wang (王飞龙) (he/him)
> Head of Research & Development
>
> Catalyst Cloud
> Aotearoa's own
>
> Mob: +64 21 0832 6348 | www.catalystcloud.nz
> Level 6, 150 Willis Street, Wellington 6011, New Zealand
>
> CONFIDENTIALITY NOTICE: This email is intended for the named recipients
> only. It may contain privileged, confidential or copyright information.
> If you are not the named recipient, any use, reliance upon, disclosure or
> copying of this email or its attachments is unauthorised. If you have
> received this email in error, please reply via email or call +64 21 0832
> 6348.
>
> ------------------------------------------------------------------------------
>
>
>
>