I am currently evaluating a setup that uses AKS clusters to run KubeEdge. KubeEdge brings Kubernetes to your IoT devices by registering each device as a Kubernetes node, so you can then deploy workloads onto those nodes.
What I did so far:

- Created two AKS clusters: main and seed.
- In the main cluster I installed the Kubermatic Kubernetes Platform. I then created a new project and manually added the kubeconfig of the seed cluster to it.
- I installed KubeEdge on the seed cluster. The seed cluster has one AKS worker node running the cloudcore of KubeEdge.
$ kubectl get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-agentpool-23303655-vmss000000   Ready    agent   41h   v1.19.11

$ kubectl get service
NAME        TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                           AGE
cloudcore   LoadBalancer   10.0.41.116   x.x.x.x       10002:30002/TCP,10000:30000/TCP   114m
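For completeness, cloudcore can be installed with KubeEdge's `keadm` tool. This is only a sketch of a typical install, not necessarily the exact commands I ran; the advertise address is a placeholder:

```shell
# Sketch: install KubeEdge cloudcore on the seed cluster with keadm.
# x.x.x.x is a placeholder for the LoadBalancer IP shown above.
keadm init --advertise-address=x.x.x.x --kubeedge-version=1.7.1

# Print the token that edge nodes later use to join the cloudcore.
keadm gettoken
```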
I’m able to connect a KubeEdge device to the cloudcore and thus to the cluster.
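On the device side this registration is typically done with `keadm join`; again a hedged sketch, where the IP, node name, and token are placeholders:

```shell
# Sketch: register an edge device with the cloudcore via keadm.
# x.x.x.x is the cloudcore LoadBalancer IP; the token is the output
# of `keadm gettoken` run against the seed cluster (placeholder here).
keadm join --cloudcore-ipport=x.x.x.x:10000 \
           --edgenode-name=node0 \
           --token=<token-from-keadm-gettoken>
```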
However, all my KubeEdge nodes get the role master, meaning that some master-related components are running on my edge devices. I don’t want this.
$ kubectl get nodes
NAME                                STATUS   ROLES               AGE   VERSION
aks-agentpool-23303655-vmss000000   Ready    agent               41h   v1.19.11
node0                               Ready    agent,edge,master   60m   v1.19.3-kubeedge-v1.7.1
node1                               Ready    agent,edge,master   27m   v1.19.3-kubeedge-v1.7.1
node2                               Ready    agent,edge,master   27m   v1.19.3-kubeedge-v1.7.1
node3                               Ready    agent,edge,master   27m   v1.19.3-kubeedge-v1.7.1

$ kubectl get pods -o wide
NAME                                               READY   STATUS    RESTARTS   AGE    IP            NODE                                NOMINATED NODE   READINESS GATES
coredns-66c464876b-pn6jv-x-kube-system-x-tenant1   0/1     Running   0          64m    172.18.0.2    node0                               <none>           <none>
tenant1-0                                          2/2     Running   0          131m   10.244.0.24   aks-agentpool-23303655-vmss000000   <none>           <none>
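As background for anyone debugging the same thing: the ROLES column is just a rendering of `node-role.kubernetes.io/<role>` labels on the node, so a first step is to check which labels are present (node name taken from my output above):

```shell
# The ROLES column is derived from node-role.kubernetes.io/<role> labels.
# Inspect the labels on one of the edge nodes:
kubectl get node node0 --show-labels

# Removing the label clears the displayed role, but this is cosmetic:
# it does not by itself stop any master components already running there.
kubectl label node node0 node-role.kubernetes.io/master-
```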
Can anyone tell me what is going wrong and how to prevent these nodes from becoming masters?