Simple k0s Setup

Recently, I’ve needed to learn how to use Kubernetes (K8S) for work. And as usual, I dive in and get really deep into what I need to learn. And what better way to learn how to use it, and what can happen, than by using it to host your personal website?

So these are my notes on the solution that I ended up with… for a short time. Obviously, K8S is overkill for a personal static blog, and that became apparent as I went along. However, it was still an excellent learning experience for me, highlighting some of the strengths and weaknesses of K8S. As with everything, it’s all about the right tool for the job.

What I tried, and when to use it

First pass - Microk8s (v1.32 and v1.33)

My first attempt was to use Microk8s, Canonical’s very opinionated K8S distribution, installed onto Ubuntu 24.04. It’s actually quite nice; its opinionated approach makes it easy to set up and install, and it provides the NGINX ingress and local storage via built-in addons. I was happy with it, and it helped me learn K8S without getting too deep too quickly. However, I did have a few issues with it.

  • CPU usage while idle. Even when the server was otherwise idle, it consumed quite a bit of CPU, quickly using up the CPU burst capability of my super cheap VPS. Fair enough; it has things to do, but more than I expected! The CPU sat at around 30-40% consistently. This seems to be a known issue, and one more related to Microk8s than to K8S in general, although some baseline CPU usage is expected with any K8S.

  • IPv6. Despite following their official instructions on Ubuntu 24.04, the configuration applied and it tried to use IPv6, but then no pods would start. A deeper dig revealed that a missing/buggy kernel feature, related to the iptables support used by Microk8s’s Calico CNI (Felix), made it unable to start any pods. The kernel shipped with Ubuntu 25.04 had the feature/bug fixed, so it worked there, but then broke again on a reboot.

So while Microk8s worked and taught me a lot about the field, it was time to explore some other alternatives!

Side quest: IPv6

Why is IPv6 important to me? Maybe I’m funny, but I’d like to see it much more widely deployed than it is now. I kind of understand why it’s not: it costs money, and it’s more complex than IPv4, so it does require some different thinking. But it opens up major simplifications of networking infrastructure, removing the need for NATs and VPNs in many cases. This would make many systems much easier to understand, manage, and tie together, and be cheaper to boot, in my humble opinion (reference: look up how much Amazon AWS charges for VPC outgoing NAT gateways). No more “can’t join these two VPCs because the IPv4 CIDRs conflict”, and easy secure access to your home computer from work without crazy NAT-punching VPNs. But that’s just me. I’m not the one making those decisions.

Side story: back in 2011, at the company I worked for at the time, we had a nice 100Mbit fibre connection to our office. The ISP offered IPv6 over this connection in addition to IPv4. I jumped at the chance and convinced management to let me plan and roll out IPv6 for our office (back in the days when ADSL was king). I pushed for it because I wanted us, as an organisation, to be ready for the next big thing. It was a great experience, learning how to get Linux, Windows, and OS X machines to work together on IPv6 with a Linux router, and having them optionally be publicly routable without a VPN (whitelisted on the router, of course). Having IPv6 on the development machines shook out several IPv6 issues in our code bases. It didn’t even take long to roll out; less than a week in amongst my other duties. But it seems I was ahead of my time…

So now I make a point of being as IPv6 ready as my current employer will allow me within their time and resource constraints. I have IPv6 configured at home and ensure that it works at all times, and will actively seek to enable and set it up securely for production systems. When I ran my own business, I did use IPv6 to securely access systems between the commercial workshop and home without needing a VPN.

Alternative 1 - Talos Linux (v1.7 - v1.10)

I really like the concept of Talos Linux: precisely what you need installed on the machine, nothing more, nothing less, with an immutable root disk for security and seamless updates. I also liked that you provision it remotely from a configuration file, so you’ve got the exact configuration on hand to replace the machine should it fail, or to create another identical deployment.

What I didn’t like was that the generated config files had the secrets embedded in them, meaning you’d have to encrypt them with SOPS before committing them to Git, if you chose to store the configs that way (sketched below). A third-party tool, talhelper, hadn’t been updated to work with the current version of Talos when I tried it - this was back in March 2025, so they may have caught up since then; maybe I should try again. In any event, it all started to get quite a bit complicated.
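
For what it’s worth, the SOPS workflow looks roughly like this. This is just a sketch: the age recipient is a placeholder, and the file names are whatever talosctl generates for you.

desktop$ talosctl gen config my-cluster https://10.0.14.101:6443
desktop$ # Only commit the encrypted copy; the plain file has cluster secrets in it.
desktop$ sops --encrypt --age age1EXAMPLEPUBLICKEY controlplane.yaml > controlplane.enc.yaml
desktop$ git add controlplane.enc.yaml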

I was eyeing this off for a Raspberry Pi cluster at work (long story; there are 10+ Raspberry Pis that run hardware-specific programs, and K8S seems like an easy way to deploy to all of them consistently and solve some other networking issues with them too). But the config complexity put me off that one.

The other alternative for Talos was to use their SaaS service, Omni, to deploy and manage clusters remotely. Which worked great when I tried it! But I’m a cheapskate and didn’t want to pay the $10 a month for the personal subscription required to use the service. I could likely convince work to pay for the business plans (more than $10 per month) given how well it worked, but I’m trying to do this for minimum cost.

So I ultimately decided that Talos wasn’t the right fit for me. Sorry guys - neat product, but I feel it needs a little bit more simplification.

Alternative 2 - K3S / Rancher (v1.32 - v1.33)

K3S looks quite nice and simple. Just what you need. But with plenty of power in the backend.

After some evaluation, it’s apparent that it works best with their Rancher product, which you can self-host, and which looked OK. However, Rancher was a little bit “enterprisey” for my tastes; I know that’s not really a defined description, but it looks like it was designed more for higher-level managers than technical staff, with things in confusing spots. I also had trouble following some of their guides, which referenced buttons or pages that had since moved. Workable, but not a fantastic experience for me.

As for K3S itself: I liked that it uses Kine, so you can use something other than etcd as the cluster store. But when I dug in a little, a lot of the configuration is controlled by a wide array of environment variables that you set before you install K3S (see the sketch below). I wasn’t a fan of that for repeatable installations… so I kept looking.
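
To illustrate, a K3S install tends to end up looking something like this; a sketch, with the version string and options as placeholder examples rather than recommendations. Reproducing that exact environment later is on you; there’s no single config file to commit.

desktop$ curl -sfL https://get.k3s.io | \
    INSTALL_K3S_VERSION="v1.32.5+k3s1" \
    INSTALL_K3S_EXEC="server --disable traefik" \
    sh -s -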

K0s

So I ended up trying k0s on Ubuntu 24.04. And this one actually clicked with me. Their docs immediately point out that they have a k0sctl tool for deploying a cluster from a parent machine using a written-out config. And that written-out config doesn’t have any secrets in it, so it can safely be committed to Git, letting you reproduce the cluster later on.

It also worked with IPv6 immediately, once the right configuration was set - no hacking required.

So this is what I went with. I’ve documented the setup below for future reference, in case it’s interesting.

I still had a few little quirks with it:

  • The default is to use etcd as the cluster data store. In my test VM on my desktop, I lost power to the desktop, and etcd refused to start up again as its on-disk storage was corrupted. As it was a single node cluster, it couldn’t restore itself from another node. This isn’t a k0s issue, nor an etcd issue; it’s my lack of understanding of etcd that led to this. But it’s important to note and know for low maintenance single node clusters.
  • I did find myself going in a few loops separating the k0s config from the k0sctl config; they’re different, and you need to embed the k0s initial config inside the k0sctl config… keep at it and you’ll get it sorted.
  • It still had a more-than-zero baseline CPU usage, but this is a K8S thing, not a k0s thing. It wasn’t at the same level as Microk8s though; about half.

Basic setup tutorial

So here is a basic setup tutorial to get k0s working. I’ve listed exact versions that I used in case you have trouble reproducing it in the future.

  • k0s v1.33 / k0sctl 0.25.1 (via Homebrew)
  • Single node K8S
  • Ubuntu Server 24.04
  • Vultr VM with 2vCPU, 2GB of RAM, 55GB SSD, IPv4 + IPv6 addresses
  • Working IPv6 setup
  • NGINX for ingress
  • Let’s Encrypt for certificate signing
  • Either Postgres or SQLite for the K8S control plane storage (instead of etcd)

Basic installation

Install Ubuntu 24.04 on your remote machine that will be your k0s host. No special setup is needed other than enabling SSH access during installation. I’ve personally done this on a VM on my desktop machine and a remote Vultr VPS.

Passwordless access

You’ll need passwordless sudo on the target machine if you’re not installing via the root user (k0sctl connects over SSH and needs to run commands as root). Change the username below to yours.

remote$ sudo -s
remote# cat > /etc/sudoers.d/90-danielf
danielf ALL=(ALL) NOPASSWD:ALL
# Now press CTRL+D
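
k0sctl also needs SSH key access from your desktop to the target machine. If that’s not already in place, something like this sets it up (assuming you already have a key pair; adjust the user and address to yours):

desktop$ ssh-copy-id danielf@10.0.14.101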

Option 1 - Postgres as etcd replacement store

If you want to use Postgres as your etcd replacement store (subject to your requirements) you’ll need to install Postgres on the remote. Make sure to choose a suitable password:

remote$ sudo apt-get install postgresql
remote$ sudo su - postgres -c psql
postgres=# CREATE USER k0s PASSWORD 'PASSWORD';
postgres=# CREATE DATABASE k0s OWNER k0s;
postgres=# \q
remote$ psql postgres://k0s:PASSWORD@127.0.0.1:5432/k0s
k0s=> \q

k0sctl setup and k0s install

Install the k0sctl tool on your desktop, following their instructions. I ended up using brew.

desktop$ brew install k0sproject/tap/k0sctl

And generate the initial config:

desktop$ k0sctl init > your-k0s-cluster.yaml

Now customise as needed. I’ve included my example below. The key parts are:

  • Enabling IPv6 by changing to Calico for the CNI, and turning on dualStack with IPv6 pod and service CIDRs.
  • Under hosts, setting the role to controller+worker.
  • Under hosts, setting noTaints: true - if this isn’t set, you’ll end up with a NoSchedule taint on the controller node.
  • Choosing which storage data source you’d like (SQLite or Postgres) with the appropriate lines below.
# your-k0s-cluster.yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
  user: admin
spec:
  hosts:
  - ssh:
      address: 10.0.14.101
      user: danielf
      port: 22
      keyPath: ~/.ssh/id_ed25519
    role: controller+worker
    noTaints: true
  k0s:
    config:
      spec:
        storage:
          type: kine
          kine:
            dataSource: "sqlite:///var/lib/k0s/kine.db?_journal=WAL&cache=shared"
            # dataSource: "postgres://k0s:PASSWORD@127.0.0.1:5432/k0s?sslmode=disable"
        network:
          provider: calico
          calico:
            mode: bird
          podCIDR: 10.244.0.0/16
          serviceCIDR: 10.96.0.0/12
          dualStack:
            enabled: true
            IPv6podCIDR: fd00::/108
            IPv6serviceCIDR: fd01::/108
  options:
    wait:
      enabled: true
    drain:
      enabled: true
      gracePeriod: 2m0s
      timeout: 5m0s
      force: true
      ignoreDaemonSets: true
      deleteEmptyDirData: true
      podSelector: ""
      skipWaitForDeleteTimeout: 0s
    concurrency:
      limit: 30
      workerDisruptionPercent: 10
      uploads: 5
    evictTaint:
      enabled: false
      taint: k0sctl.k0sproject.io/evict=true
      effect: NoExecute
      controllerWorkers: false

Now let’s run this on your desktop, against the remote host:

desktop$ k0sctl apply --config your-k0s-cluster.yaml

It’ll take a little while to install; it has to download the binary on the remote machine and then install it. It’ll keep you up to date with its progress.

When it’s ready, you can grab the kubeconfig. Caution: this will overwrite any other kube config you have:

desktop$ k0sctl kubeconfig --config your-k0s-cluster.yaml > ~/.kube/config
desktop$ kubectl get pods --all-namespaces
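
If you’d rather not clobber an existing config, you can write it somewhere else and point kubectl at it via the KUBECONFIG environment variable instead:

desktop$ k0sctl kubeconfig --config your-k0s-cluster.yaml > ~/.kube/k0s-config
desktop$ KUBECONFIG=~/.kube/k0s-config kubectl get nodes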

And if you forgot to prevent the taint on the node with noTaints: true, you can remove it like so - note the - at the end, which means remove. (This took some time to figure out!)

desktop$ kubectl taint nodes k0stest node-role.kubernetes.io/control-plane:NoSchedule-

And that’s the K8S installed… now onto adding some extras, as it’s pretty plain and doesn’t come with any!

Adding a local path provisioner

I’m not going to get into the discussion of keeping state on a single node here, because I assume you, the competent reader of this article (or, more likely, the competent LLM reading this article), are aware of the implications of storing state on a single node. But for testing purposes and trying things out, this is quite handy to have.

Rancher, it turns out, has a ready-to-go local path provisioner; it’s the same one K3S ships by default:

desktop$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.31/deploy/local-path-storage.yaml

And you can quickly test with:

desktop$ kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml
desktop$ kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
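
One optional extra: if you want PVCs that don’t specify a storageClassName to work, mark local-path as the default StorageClass.

desktop$ kubectl patch storageclass local-path \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'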

NGINX ingress

I’m a fan of the NGINX ingress; maybe because I’ve spent some time with NGINX previously. Not as much time as I’ve spent with Apache, but that’s a story for another day. Anyway, the NGINX ingress just makes sense to me.

I won’t go into the full details here, but I like to run the ingress as a DaemonSet (so it’s on all nodes) and as a host-network container, so ports 80 and 443 on the node go straight to the ingress. It simplifies things in my view, especially when that’s the primary purpose of the K8S setup. I understand there are other ways to do it, like an external LoadBalancer with a NodePort, but I want this to be standalone, so this is appropriate for me. As a side note, Talos has specific security profiles (default profiles from upstream K8S) that prevent the use of the hostNetwork flag; this can be overridden for specific namespaces. Your favourite search engine or LLM should be able to find the details for you.

Anyway, there isn’t an NGINX ingress ready to go directly, but the k0s docs have instructions on it. In short, you need to modify the default manifests a little: make the ingress-nginx-controller Deployment use the host network, and remove the ingress-nginx-controller Service, as it’s not used.

desktop$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/baremetal/deploy.yaml -O nginx-deploy.yaml
desktop$ vim nginx-deploy.yaml
desktop$ kubectl apply -f nginx-deploy.yaml

And the updates to the yaml file are:

# ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # ...
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  # ...
  template:
    # ...
    spec:
      # Add this:
      hostNetwork: true
      # ...
      containers:
      - args:
# ...
---
# Remove this service (optional, it'll work with it present too)
apiVersion: v1
kind: Service
metadata:
  # ....
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  # ...
  type: NodePort
# ...
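
Once applied, it’s worth a quick sanity check that the controller is running and bound to the host ports; hitting the node directly should get you a 404 from NGINX’s default backend, since no ingress matches yet:

desktop$ kubectl -n ingress-nginx get pods -o wide
desktop$ curl -i http://10.0.14.101/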

Test or production hostnames

When using the ingress, if you’re testing things out on a local VM (like I do regularly!), you might want working DNS hostnames without the hassle of updating real DNS records.

I was recently introduced to sslip.io, one of a family of similar services that resolve hostnames with an embedded IP address to that address. Read up on their docs for how it works; I like that it supports IPv6, allowing me to test that locally too. For example, dashboard.10.0.14.101.sslip.io resolves to 10.0.14.101, and dashboard.2403-5814-3a3-0-20c-29ff-feb0-ba41.sslip.io resolves to 2403:5814:3a3:0:20c:29ff:feb0:ba41.

You won’t be able to issue certs for these via Let’s Encrypt, because the end addresses need to be publicly accessible, but for testing, they can certainly be helpful.
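
You can check the resolution without touching the cluster at all:

desktop$ dig +short dashboard.10.0.14.101.sslip.io
10.0.14.101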

Let’s Encrypt for Certs

Who doesn’t love free certs, and what they’ve done for security overall? The Cert Manager operator makes it trivial to apply certs to your ingresses.

Install the operator first. I had to use v1.15.3 due to a bug in v1.16+, but you might want to retest the latest to see if they’ve fixed it. The bug caused an obscure error when issuing certs on k0s (which I didn’t record, sorry), although the same newer version had worked on Microk8s before. You can also read their documentation and try the Helm charts instead.

desktop$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.3/cert-manager.yaml

Now you have to install an issuer. Note that an Issuer is namespaced: it has to be in the same namespace you’re issuing certificates into, so if you need certs for multiple namespaces, you’ll need one Issuer per namespace. In the example below, I’ve used kube-system, as I’m going to install the Headlamp dashboard soon. Be sure to replace the email address with yours!

Reference for this is in the Cert Manager docs for Nginx ingress.

# issuer-kube-system.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  # Omit the namespace to install an issuer for the default namespace
  namespace: kube-system
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: EMAIL@EXAMPLE.COM
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx

And apply with:

desktop$ kubectl apply -f issuer-kube-system.yaml
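
It’s worth checking that the issuer registered with Let’s Encrypt before you rely on it; look for Ready being True:

desktop$ kubectl -n kube-system get issuer letsencrypt-prod
desktop$ kubectl -n kube-system describe issuer letsencrypt-prod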

Dashboard with Headlamp

The default Kubernetes Dashboard is nice and functional, but I’ve leaned more towards Headlamp, as it makes it a bit easier to get shells into containers. I used the Helm repo to install this one.

desktop$ helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
desktop$ helm install headlamp headlamp/headlamp --namespace kube-system
desktop$ kubectl -n kube-system create serviceaccount headlamp-admin
desktop$ kubectl delete clusterrolebinding headlamp-admin
desktop$ kubectl create clusterrolebinding headlamp-admin --serviceaccount=kube-system:headlamp-admin --clusterrole=cluster-admin
desktop$ kubectl create token headlamp-admin -n kube-system

The last command outputs a token; you’ll need this to log in. You might need to re-run that last command periodically to refresh the token (after a week or so).
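
As an aside, kubectl create token accepts a --duration flag if the default expiry doesn’t suit you; note the API server caps the maximum duration, so very long values may be silently shortened:

desktop$ kubectl create token headlamp-admin -n kube-system --duration 168h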

Then you’ll want to add an ingress to allow access to Headlamp. This also applies a cert to it at the same time; you can prevent that by removing just the annotation, in which case it’ll still be SSL, but with a self-signed cert.

# ingress-for-headlamp.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-headlamp
  namespace: kube-system
  # Omit this annotation if you don't want the let's encrypt cert; eg for local VM.
  annotations:
    cert-manager.io/issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  # Omit this TLS section if you're not using Let's Encrypt.
  tls:
  - hosts:
    - server.freefoote.net
    secretName: server-freefoote-net-tls
  rules:
  - host: server.freefoote.net # Or use appropriate .sslip.io domain for local testing.
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: headlamp
            port:
              number: 80

And apply:

desktop$ kubectl apply -f ingress-for-headlamp.yaml

Now access it via HTTPS on the domain you set up above, and use the access token you fetched at the end of the install. Enjoy!
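
If the cert doesn’t come through, cert-manager will have created a Certificate resource (named after the secretName above) from the ingress annotation; its status will usually tell you what’s stuck:

desktop$ kubectl -n kube-system get certificate
desktop$ kubectl -n kube-system describe certificate server-freefoote-net-tls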

Connect the Vultr container registry (or another private registry)

For testing, I was using the Vultr container registry. It turns out they have a one-click button that generates a K8S yaml file that you can apply to grant your cluster access.

If you’re using another private registry, you’ll need to set up pull credentials. I had a really good article about this, but I’ve lost the link! I did find this one which has the appropriate instructions.

First set up the pull credentials by creating a new secret, of type docker-registry:

kubectl create secret docker-registry pull-credentials \
  -n <your-namespace> \
  --docker-server=<your-registry-server> \
  --docker-username=<your-name> \
  --docker-password=<your-password>

For GitLab (used for my work), for example, the server is registry.gitlab.com, the username can be anything, and the password is a generated deployment or personal access token, with Maintainer role, and read_registry permissions. Your details will vary depending on your remote registry.
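
So for GitLab, that ends up looking something like this (a sketch for the default namespace; the token is whatever you generated above):

kubectl create secret docker-registry pull-credentials \
  -n default \
  --docker-server=registry.gitlab.com \
  --docker-username=gitlab-deploy-token \
  --docker-password=<your-token>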

There are two ways to use this. You can add imagePullSecrets to your Deployment and Pod manifests (example from the blog post linked above) - except you have to add them to all Pod definitions:

apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: default
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: pull-credentials

Or, as I recently found, you can attach the pull secrets to the service account for the namespace, meaning all pods in that namespace can use those credentials - this simplified a lot of manifests for me!

# For default
kubectl patch serviceaccount default \
  -p "{\"imagePullSecrets\": [{\"name\": \"pull-credentials\"}]}" \
  -n default
# For another namespace - create the namespace first!
kubectl patch serviceaccount <your-namespace> \
  -p "{\"imagePullSecrets\": [{\"name\": \"pull-credentials\"}]}" \
  -n <your-namespace>
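
You can confirm the patch took effect with:

kubectl get serviceaccount default -n default -o jsonpath='{.imagePullSecrets}{"\n"}'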

Backing up the cluster

It turns out k0s has a way to back up the cluster state, including the control plane storage. Use it! It uses the config file you set up for your cluster, so it knows where to connect for backups, and it drops a date-stamped tar.gz file into the current directory. It’s not safe to store in Git (it contains the cluster secrets), but it’s helpful to have - there is a matching restore command too.

desktop$ k0sctl backup --config your-k0s-cluster.yaml
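
I haven’t exercised the restore side myself; I believe it goes through k0sctl apply with a restore flag, something like the below, but check k0sctl apply --help for your version first:

desktop$ k0sctl apply --config your-k0s-cluster.yaml --restore-from k0s_backup_TIMESTAMP.tar.gz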

What do I use it for?

Well, that was a bit of work, but it gives you a basic k0s single-node setup for testing.

At work, I’m using this on a bare metal box to do load testing of one of our applications, and it’s working a treat. The production deployment of the application will likely be using AWS EKS as we won’t want to be mucking about with the control planes; we want to make that AWS’s problem, and it’s more than cheap enough for a mission critical production deployment.

Personally… well, I use it for trying things out. I did use it to host my blog for a little bit, but it’s not the right fit due to the high baseline CPU usage. And I don’t need all the crazy HA features that it offers. I do like the declarative methods used to describe what to run and how to run it - absolutely perfect for our microservice based projects at work that we need to scale horizontally - but for me, a bit of overkill. It was a fantastic learning experience that got me ready for the K8S deployment at work.

The current version of this blog is actually hosted with CapRover; because I don’t want to deal with updating NGINX configuration and SSHing into servers. Coolify was another alternative but it had way, way more options than I needed for a single, completely static blog.

Anyway, I hope this information is useful to someone else out there.