Import existing K8S cluster to KubeOne

Running kubeone install --force -m (...) -v --debug in an attempt to import an existing Kubernetes cluster into KubeOne fails. The install task looks like it's going into a loop at:

[192.168.3.12] I0924 12:25:21.515103   61499 round_trippers.go:438] GET https://<ip>:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-2y7k90 404 Not Found in 20 milliseconds
[192.168.3.12] I0924 12:25:21.527354   61499 round_trippers.go:438] POST https://<ip>:6443/api/v1/namespaces/kube-system/secrets 201 Created in 11 milliseconds
[192.168.3.12] 2y7k90.u355d8vud7l86u03
[192.168.3.12] + exit 0
INFO[12:25:21 UTC] Building Kubernetes clientset…               
INFO[12:25:21 UTC] Check if cluster needs any repairs…          
WARN[12:25:26 UTC] Task failed, error was: context deadline exceeded 
WARN[12:25:31 UTC] Retrying task…                               
INFO[12:25:31 UTC] Check if cluster needs any repairs…          
WARN[12:25:36 UTC] Task failed, error was: context deadline exceeded 
WARN[12:25:46 UTC] Retrying task…                               
INFO[12:25:46 UTC] Check if cluster needs any repairs…          
WARN[12:25:51 UTC] Task failed, error was: context deadline exceeded 
WARN[12:26:11 UTC] Retrying task…                               
INFO[12:26:11 UTC] Check if cluster needs any repairs…          
WARN[12:26:17 UTC] Task failed, error was: context deadline exceeded 
WARN[12:26:57 UTC] Retrying task…                               
INFO[12:26:57 UTC] Check if cluster needs any repairs…          
WARN[12:27:02 UTC] Task failed, error was: context deadline exceeded

Could someone help me understand what's going on there? The KubeOne output is not very clear to me about what exactly is failing.

Please note that I am running a KubeOne version compiled from the following branch: https://github.com/kubermatic/kubeone/compare/rhel-own-docker

Hello,

The task that's failing is an attempt to contact etcd.
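A quick way to confirm is to query etcd health directly from one of the control plane nodes. This sketch assumes the default kubeadm certificate paths under /etc/kubernetes/pki/etcd/ and etcd listening on 127.0.0.1:2379; adjust if your layout differs:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  endpoint health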

Please check that there is no firewall in the way and that every node can reach every other node's etcd.
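For example, a rough connectivity check from each control plane node to the others (2379 is the etcd client port, 2380 the peer port; <peer-ip> is a placeholder for another node's address):

# run on every control plane node, once per peer
nc -zv <peer-ip> 2379
nc -zv <peer-ip> 2380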

Another reason the etcd connection can time out is bad TLS certificates that don't match.
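You can inspect the certificates on each node with openssl; the paths below are the kubeadm defaults, so adjust them if your cluster keeps the etcd certs elsewhere:

# check which IPs/hostnames the etcd server cert is valid for
openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -noout -text | grep -A1 'Subject Alternative Name'

# confirm the cert chains back to the etcd CA the other nodes trust
openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/server.crt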

Indeed, the issue seems related to etcd. Looking at a fresh deploy of KubeOne, etcd is deployed as a pod in the kube-system namespace.
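On a fresh deploy you should see something like this (component=etcd is the label kubeadm puts on its static etcd pods):

kubectl -n kube-system get pods -l component=etcd -o wide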

In my particular scenario, where the Kubernetes cluster is already in place, the etcd deployment type is host, and it seems that kubeone install --force ... does not deploy etcd. Looking at the current state of my 'transition' cluster, there is no etcd pod in the kube-system namespace.
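A way to see how etcd is actually running on the existing control plane nodes, and which topology the cluster expects, is roughly the following (commands assume a kubeadm-based cluster):

# stacked etcd shows up as a static pod manifest
ls /etc/kubernetes/manifests/

# host/external etcd usually runs as a systemd service instead
systemctl status etcd

# the kubeadm-config ConfigMap records whether the cluster was set up with local or external etcd
kubectl -n kube-system get configmap kubeadm-config -o yaml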