New Kubermatic CLI

Currently the documentation details a manual installation process for Kubermatic and its components. This works pretty well, but there is some friction on every minor release, as we usually have some small mandatory migration steps (for example 2.13 to 2.14).

We want to improve this process by providing a program, similar to KubeOne, that handles installation, upgrades, and possibly some miscellaneous tasks.

Proposal

We propose to add a new binary called kubermatictl that provides:

  • a mostly automated installation (a few things will always require manual intervention, like setting up more complex storage systems or integrating customized LoadBalancer support);
  • upgrade handling between Kubermatic releases;
  • a single deploy command that performs both steps (similar to KubeOne's apply command);
  • the few utility functions that kubermatic-operator-util currently provides (defaulting configuration files and such);
  • maybe the functionality of the noderesource-documentor as well.

If everything is set up, the workflow would be:

  • kubermatictl installs nginx, cert-manager, and the Kubermatic Operator into a cluster;
  • the Kubermatic Operator then manages/reconciles the actual Kubermatic master and seed installations.
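
To make this more concrete, day-to-day usage could look roughly like the following. The flags, the file name and the defaults subcommand are only illustrative assumptions, not a finalized interface:

# install or upgrade everything described in the given configuration (hypothetical flags)
kubermatictl deploy --config kubermatic.yaml

# hypothetical home for the defaulting logic absorbed from kubermatic-operator-util
kubermatictl defaults --config kubermatic.yaml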

To prevent confusion, we should avoid the following names:

  • operator-installer
  • installer-operator

Some of these have been floating around in various channels and have caused miscommunication already. Naming is apparently harder than expected :wink:

Implementation

There is an open Pull Request https://github.com/kubermatic/kubermatic/pull/5685 that would move our old kubermatic-installer into the Kubermatic repository. This will make it much, much easier to keep it in sync (the old installer’s Achilles’ heel was that it was pretty much always outdated) and allow us to use it during all e2e tests (i.e. hopefully remove most of the ci-setup-kubermatic-in-kind.sh logic).

The code is far from perfect or complete, but it’s a starting point. It still has some DNA from when it was “only” an installer, so it will take some time to slowly merge other things into it.

The short-term goal is to merge it and start using it for our e2e tests ASAP. Once it is part of the tests, it becomes an integral part of our tooling and can no longer silently fall out of date.

What do you think? Is it worth combining these things into a single binary? Do we really want to call it that or kubermatic-cli or k8ctl or something else?


kubermatictl sounds like a great way to operate the whole life-cycle of a Kubermatic-enabled cluster. I suppose we can find a lot of inspiration in projects like linkerd (the CLI tool to manage Linkerd) and istioctl.

I wonder what the plan is for handling different versions of Kubermatic?
If the CLI matches a single Kubermatic version, it might also make sense to provide a Docker image, so an end user could quickly do something like:
docker run -it -v $(pwd):/k8c kubermatic/k8ctl deploy /install

Another point that came to my mind is that we need some way to hook into certain steps. Maybe we could provide generic hooks for the different phases, where e.g. a custom chart or a custom shell script could be executed.
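
Purely to illustrate the idea (nothing like this exists yet; the directory layout and hook names are made up), one option would be a convention-based hooks directory that the installer runs around each phase:

# hypothetical layout, scripts executed by the installer before/after the matching phase
hooks/
  pre-deploy.sh        # e.g. create namespaces or secrets that the charts depend on
  post-cert-manager.sh # e.g. install a company-specific ClusterIssuer
  post-deploy.sh       # e.g. deploy an additional custom Helm chart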

For example, istioctl and linkerd have a 1:1 mapping of versions, i.e. new version = new binary.

That would be fine for me. A matching Docker image for quickly switching between versions would be awesome :slight_smile:
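
Assuming the image would be tagged per Kubermatic release (the image name and tags below are just an assumption), switching versions would then only be a matter of picking a different tag:

# hypothetical image name and tags, one per Kubermatic release
docker run -it -v $(pwd):/k8c kubermatic/kubermatictl:v2.14.0 deploy /k8c
docker run -it -v $(pwd):/k8c kubermatic/kubermatictl:v2.15.0 deploy /k8c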