Again,

Why not deploy kind to "kickstart" a multi-master k8s cluster in OpenStack and then turn this cluster into a management cluster for CAPI?

At least this is what I understand from the docs. 

kind can be used for creating a local Kubernetes cluster for development environments or for the creation of a temporary bootstrap cluster used to provision a target management cluster on the selected infrastructure provider.
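The bootstrap-and-pivot flow the docs describe can be sketched roughly as follows (a hedged sketch: cluster names, the Kubernetes version, and the generated manifest are illustrative, not from this thread):

```shell
# Illustrative sketch of bootstrap-and-pivot; names/versions are placeholders.
# 1. Create a temporary local bootstrap cluster with kind.
kind create cluster --name bootstrap

# 2. Initialize Cluster API on it with the OpenStack infrastructure provider.
clusterctl init --infrastructure openstack

# 3. Use the bootstrap cluster to provision the target multi-master cluster,
#    then initialize CAPI on that target cluster as well.
clusterctl generate cluster mgmt --kubernetes-version v1.28.0 > mgmt.yaml
kubectl apply -f mgmt.yaml
clusterctl get kubeconfig mgmt > mgmt.kubeconfig
clusterctl init --kubeconfig mgmt.kubeconfig --infrastructure openstack

# 4. Pivot: move the CAPI objects to the new cluster, which becomes the
#    permanent management cluster; the kind cluster can then be deleted.
clusterctl move --to-kubeconfig mgmt.kubeconfig
kind delete cluster --name bootstrap
```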


Sent from my iPhone

On 04.01.2024 at 14:11, Satish Patel <satish.txt@gmail.com> wrote:


Agreed with Michal, 

I read the same, which is why I decided to go with Kubespray with 3 nodes for production and HA for CAPI.

On Thu, Jan 4, 2024 at 7:38 AM Michal Arbet <michal.arbet@ultimum.io> wrote:
Hi,

Because from https://kind.sigs.k8s.io/  i understood that it is for testing only or for CI 

"""
kind is a tool for running local Kubernetes clusters using Docker container “nodes”.
kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
"""

So, what about production usage? Is it OK? What about HA?
I would like to be able to turn nodes on and off and have everything keep working every time ... is kind ready for that?

Thanks 

Michal Arbet
Openstack Engineer

Ultimum Technologies a.s.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic

+420 604 228 897 
michal.arbet@ultimum.io
https://ultimum.io



On Wed, 3 Jan 2024 at 11:00, Oliver Weinmann <oliver.weinmann@me.com> wrote:
Hi,

Why use Kubespray? Wouldn't it be easiest to use the kind cluster to create a new cluster with Cluster API and then turn that cluster into the new management cluster?

Cheers,
Oliver

Sent from my iPhone

On 03.01.2024 at 10:41, Michal Arbet <michal.arbet@ultimum.io> wrote:


Hi, can you share your kubespray group vars?

And by the way, what about k3s?

On Tue, Jan 2, 2024, 17:45 Satish Patel <satish.txt@gmail.com> wrote:
Bumping this up, just in case folks missed it during the holidays.

On Tue, Dec 26, 2023 at 11:14 PM Satish Patel <satish.txt@gmail.com> wrote:
Folks,

In the lab I have deployed the magnum-capi driver with a single-node kind cluster and it works great. Now I want to take it to production and make sure my CAPI management cluster runs in an HA k8s environment and survives all kinds of failures.

I have deployed a 3-node k8s cluster using Kubespray and deployed the CAPI components on top of it using the clusterctl init command, but by default it created a replica count of 1 for all pods. How do I tell clusterctl to create 3 replicas for HA, and how does data replication work?
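For what it's worth, one approach (a hedged sketch, assuming the default namespaces and Deployment names that clusterctl init creates): clusterctl init itself has no replica flag, but the provider Deployments it installs can be scaled afterwards with kubectl. The CAPI controllers use leader election, so extra replicas are passive standbys (active-passive HA) rather than data replicas; the actual state lives as custom resources in the management cluster's etcd, so HA of the data comes from the cluster itself.

```shell
# Hedged sketch: scale the CAPI controller Deployments after clusterctl init.
# Namespace/Deployment names below are the usual clusterctl defaults; verify
# them in your cluster first (kubectl get deploy -A | grep cap).
kubectl -n capi-system scale deployment capi-controller-manager --replicas=3
kubectl -n capi-kubeadm-bootstrap-system scale deployment \
  capi-kubeadm-bootstrap-controller-manager --replicas=3
kubectl -n capi-kubeadm-control-plane-system scale deployment \
  capi-kubeadm-control-plane-controller-manager --replicas=3
```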

How do I back up and restore the data in the k8s cluster using clusterctl for disaster recovery?
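On the backup question, recent clusterctl versions can dump all Cluster API objects to a directory and restore them later (the older dedicated backup/restore subcommands were folded into `clusterctl move`). A hedged sketch; the paths are illustrative:

```shell
# Hedged sketch: offline backup of CAPI objects (path is a placeholder).
clusterctl move --to-directory /backup/capi-objects

# Restore into a (possibly new) management cluster after clusterctl init:
clusterctl move --from-directory /backup/capi-objects
```

Note this only covers the Cluster API objects themselves; a full disaster-recovery story for the management cluster (etcd snapshots, or a tool like Velero) is a separate concern.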

I have checked the official documents, but they are a little confusing, so I'm asking here to clear up my doubts.

Cheers! 
~S