Basic microk8s setup with addons
Elijah Scheele (@zanycadence)
Introduction
Kubernetes, aka k8s, is a great container orchestration tool that I've used for personal and professional projects. I frequently end up with several containerized applications: some are long-running services like web apps or databases, and others are on-demand jobs. Most cloud providers offer a managed k8s service (EKS from Amazon, GKE from Google, AKS from Microsoft, among others), and there are a variety of solutions for running k8s locally, including minikube, k3s, and microk8s.
As most of the nodes I'll add to the cluster aren't too resource-constrained and I tend to default to Ubuntu Linux, microk8s is my go-to for setting up a local k8s cluster. In this post, I'll walk through installing microk8s and setting up some useful addons, including persistent storage with NFS, a local container registry, and GPU support. I'll also go over adding and removing nodes in a microk8s cluster.
Installing microk8s
Installing microk8s is done using the snap package manager. I'm running Ubuntu 22.04 on my main node and snap is enabled by default. To install microk8s, run
sudo snap install microk8s --classic --channel=1.28/stable
I'm running 1.28/stable as it supports the addons I want to use, but if you need a different version, you can find the available channels with
snap info microk8s
By default, your user probably won't belong to the microk8s group, so you'll need to run a few commands to be able to run microk8s commands without sudo.
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
newgrp microk8s
The first command adds your user to the microk8s group, the second changes ownership of the ~/.kube directory to your user, and the third applies the group change without requiring you to log out and back in. I also tend to add alias kubectl='microk8s kubectl' and alias helm='microk8s helm' to my ~/.bashrc if I'm only working with a local k8s cluster. If you don't set these aliases, you'll need to prefix the kubectl and helm commands below with microk8s to run them.
Once the permissions are set properly, you can check the progress of the install with
microk8s status --wait-ready
This can take a few minutes to complete, but once the install is done, you can add addons to the cluster. Addons are pre-packaged, commonly used k8s features and services that can be enabled with minimal effort. For local development, I tend to just have the dns, helm3, gpu, and registry addons enabled. The dns and helm3 addons are typically enabled by default, and I add the others manually. I then use flux to manage the remaining infrastructure as code, but that's a topic for another post.
NFS for Persistent Volumes
By default, microk8s doesn't have a default storage class for persistent volumes. There is a hostpath storage addon that can be enabled, but it doesn't work across multiple nodes in a microk8s cluster. Instead, I use NFS to provide persistent storage for my clusters. On my main node, I have multiple NFS shares that correspond to different types of storage drives, including NVMe, SSDs, and some slower spinning disks. Configuring NFS is outside the scope of this article, but it's pretty straightforward. Just make sure that /etc/exports has entries that allow the nodes in your cluster to access the network shares.
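As an illustration (the share paths and subnet here are hypothetical, so adjust them to your setup), the /etc/exports entries might look something like:

```
# /etc/exports — export each share to the cluster's subnet
/srv/nfs/nvme  192.168.1.0/24(rw,sync,no_subtree_check)
/srv/nfs/ssd   192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing the file, run sudo exportfs -ra on the NFS server to apply the changes.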
Installing the NFS CSI Driver
To enable NFS support for microk8s, you'll first need to add the helm chart repository for the csi-driver-nfs chart. To do so, run
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
Once the helm chart repository is added and updated, install the chart with
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
--namespace kube-system \
--set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet
To watch the status of the chart installation, run
kubectl --namespace=kube-system get pods \
--selector="app.kubernetes.io/instance=csi-driver-nfs" \
--watch
and once the pods are running, you can verify that the CSI driver is registered with
kubectl get csidriver
Creating the Storage Class(es)
Once the driver is installed, you can create the storage class(es) that will be used as persistent volumes. A sample storage class yaml file follows:
# sc-nfs-nvme.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nvme-nfs-csi # Name of the storage class
provisioner: nfs.csi.k8s.io
parameters:
  server: XXX.XXX.XXX.XXX # IP address of the NFS server
  share: /XXX/XXX/XXX # Path to the NFS share
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1
You'll want to substitute the name, server IP, and share path with appropriate values. Apply it with
kubectl apply -f sc-nfs-nvme.yaml
Of course, substitute the appropriate filename for your storage class config. kubectl get sc should then show the new storage class as available. Rinse and repeat for any other NFS shares. Adding storageclass.kubernetes.io/is-default-class: "true" to the metadata.annotations section of the storage class yaml will make that class the cluster default, which is useful when a PersistentVolumeClaim doesn't specify a storage class; note that you can only have one default storage class per cluster.
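As a quick sketch of how the class gets used, a PersistentVolumeClaim referencing it might look like this (the claim name and size are placeholders):

```yaml
# pvc-example.yaml — hypothetical claim against the nvme-nfs-csi class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteMany  # NFS supports many pods mounting the same share
  resources:
    requests:
      storage: 10Gi
  storageClassName: nvme-nfs-csi
```

After kubectl apply -f pvc-example.yaml, the CSI driver provisions a PersistentVolume on the NFS share automatically.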
Enabling the Local Registry
Having a local registry is useful for development and testing. To enable the registry addon and specify the StorageClass the registry will use, run
microk8s enable registry --storageclass=sdb-nfs-csi --size=200Gi
Modify the --storageclass and --size flags to match the storage class you want to use and the amount of disk space you want to dedicate to the registry. To monitor the progress of the registry installation, run
kubectl get all -n container-registry
One thing to note is that this registry is insecure by default. You'll likely need to configure whatever tool you use to build images so it can push to and pull from an insecure registry. I'm using podman and needed to add the registry to /etc/containers/registries.conf with the following:
[[registry]]
location="XXX.XXX.XXX.XXX:32000"
insecure=true
where XXX.XXX.XXX.XXX is the IP address of the node running the registry. Running podman info shows the insecure registry as available, and I can successfully push and pull images from it.
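As a sketch of the push workflow (the image name is hypothetical; XXX.XXX.XXX.XXX is your registry node's IP as above):

```shell
# build an image, tag it for the local registry, and push it
podman build -t myapp:latest .
podman tag myapp:latest XXX.XXX.XXX.XXX:32000/myapp:latest
podman push XXX.XXX.XXX.XXX:32000/myapp:latest
```

Pods in the cluster can then reference the image as XXX.XXX.XXX.XXX:32000/myapp:latest.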
Enable the GPU Addon
This used to be much harder, but now it works pretty smoothly. I have NVIDIA GPUs on a few of my nodes, and to make them available to the k8s cluster, we just need to enable the gpu addon.
microk8s enable gpu
This takes a little bit of time, but once it's done you can verify that it's deployed with
kubectl logs -n gpu-operator-resources -l app=nvidia-operator-validator -c nvidia-operator-validator
Once the validations are successful, you can run a test workload with
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1
EOF
It takes a couple of minutes to pull the image, but once the pod has run to completion, you can view the logs with
kubectl logs -f cuda-vector-add
Add Nodes to the Cluster
Adding nodes to a microk8s cluster is straightforward. On your head node, run
microk8s add-node
This will print out a command to run on the node you want to add to the cluster. If your node is resource-constrained, it's better to add the --worker flag so it doesn't run the control plane components. Run the join command on the node you want to add, then on your main node run
kubectl get nodes
to monitor the progress of the node joining the cluster. GPU discovery takes a little bit of time, but once it's complete, it'll add annotations to the node that show the available GPUs. To remove a node, run microk8s leave on the node you want to remove, then run microk8s remove-node <node-name> on the main node.