Adding flux to microk8s for GitOps
Author: Elijah Scheele (@zanycadence)
Introduction
Flux is a GitOps tool that automates a number of continuous deployment tasks. One of the main ways I've used flux is to automate the deployment and management of applications across Kubernetes clusters. Flux is incredibly powerful and integrates with a number of tools - one of my favorite features is the ability to configure a local cluster in a git repository and then, once the local cluster is working as intended, make a few changes so that the same configuration can be replicated across multiple cloud k8s services. In this post, I'll walk through the process of installing flux into a local microk8s cluster and adding a resource for flux to manage.
Flux CLI Installation
The Flux CLI is the first thing that needs to be installed. Luckily, the installation instructions are quite thorough and easy to follow. Since I'm using an Ubuntu machine, I'll use the bash script to install the CLI.
curl -s https://fluxcd.io/install.sh | sudo bash
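If you'd rather look the script over before handing it to sudo, downloading it first works just as well (the local filename here is arbitrary):
curl -sL https://fluxcd.io/install.sh -o flux-install.sh
less flux-install.sh   # review the script before running it
sudo bash flux-install.sh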
I've never had an issue running it directly, but the two-step variant above lets you inspect the script first. Once the CLI is installed, adding autocompletions to the shell is as simple as adding the following line to your shell's configuration file (in my case, ~/.bashrc).
. <(flux completion bash)
Run source ~/.bashrc to apply the changes to your current shell session. After installing the CLI, the next step is to bootstrap flux using one of the provided bootstrap methods. I'm going to use a GitHub repository to store the flux configuration.
Bootstrapping Flux with GitHub
Bootstrapping flux with GitHub is fairly straightforward. A Personal Access Token (PAT) is required for flux to authenticate with GitHub - a classic token with the repo scope works. Alternatively, you can bootstrap flux over SSH with a private key by following the instructions for a generic git server.
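For reference, a minimal sketch of the SSH route - the repository URL and key path here are placeholders, not something from this setup:
flux bootstrap git \
  --url=ssh://git@github.com/$GITHUB_USER/flux_cluster \
  --branch=main \
  --path=clusters/microk8s \
  --private-key-file=$HOME/.ssh/flux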
I'm going to use a PAT for simplicity. Once I've got the PAT, I can export it (along with my GitHub username) as environment variables, either in my current shell session or in my ~/.bashrc file, with
export GITHUB_TOKEN=<gh-token>
export GITHUB_USER=<gh-username>
After setting the environment variables, you can check the pre-installation requirements with
flux check --pre
On a fresh microk8s installation, I had to create a ~/.kube/config file with the cluster configuration on the main node with
kubectl config view --raw > ~/.kube/config
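If plain kubectl isn't configured yet on a fresh node, microk8s can export the same kubeconfig itself (assuming the standard microk8s snap):
microk8s config > ~/.kube/config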
Once I did that, the flux check --pre command passed. On the main node, run
flux bootstrap github \
  --token-auth \
  --owner=$GITHUB_USER \
  --repository=flux_cluster \
  --branch=main \
  --path=clusters/microk8s \
  --personal
The docs for flux bootstrap github are pretty comprehensive - the main flag I want to highlight is --path. With this, I can specify a directory to store the cluster's configuration, which is handy for multi-tenancy setups: on a cloud k8s instance, I can simply bootstrap flux with a different path and reuse most of my local cluster configuration (see the sketch after the bootstrap output below). Once the bootstrap process is complete, it should output something like
► connecting to github.com
► cloning branch "main" from Git repository "https://github.com/zanycadence/flux_cluster.git"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ committed component manifests to "main" ("fca490d480fe22e792d515e1c28c965888ee268a")
► pushing component manifests to "https://github.com/zanycadence/flux_cluster.git"
► installing components in "flux-system" namespace
✔ installed components
✔ reconciled components
► determining if source secret "flux-system/flux-system" exists
► generating source secret
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
► generating sync manifests
✔ generated sync manifests
✔ committed sync manifests to "main" ("0945e65bf6938610d9b14bc24b2f6016a9dca5d4")
► pushing sync manifests to "https://github.com/zanycadence/flux_cluster.git"
► applying sync manifests
✔ reconciled sync configuration
◎ waiting for GitRepository "flux-system/flux-system" to be reconciled
✔ GitRepository reconciled successfully
◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy
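To illustrate the --path point from above, bootstrapping a second, hypothetical cloud cluster against the same repository would only change the path (clusters/cloud is a made-up name):
flux bootstrap github \
  --token-auth \
  --owner=$GITHUB_USER \
  --repository=flux_cluster \
  --branch=main \
  --path=clusters/cloud \
  --personal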
Running kubectl get all -n flux-system should show the flux components installed and running in the microk8s cluster without any issues. At this point, flux is installed and ready to manage resources in the cluster. There is a good tutorial explaining how to add a repository to flux, but I'll go through the process of setting up customizable and reproducible resources with the NFS configuration I used in my earlier microk8s post.
Adding NFS StorageClass(es) to Flux
The first step to adding resources to a flux-managed cluster is to clone the repository locally. Once that's done, we can add two top-level directories to the repository: apps/ and infra/. The apps/ directory will contain the various resource configurations and kustomizations for the applications that will be deployed on the cluster, while the infra/ directory will contain the infrastructure components (StorageClasses, Ingresses, Secrets Providers, etc.) that the clusters will use. In each directory, I'll create a base directory that contains the common configurations for the resources, and a directory for each cluster that will apply kustomizations to the base resources. So cd into the flux repository and create the directories with
mkdir -p apps/{base,microk8s} infra/{base,microk8s}
touch apps/{base,microk8s}/.keep
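At this point the repository layout should look roughly like this (clusters/microk8s and its flux-system directory were created earlier by flux bootstrap):
flux_cluster/
├── apps/
│   ├── base/
│   │   └── .keep
│   └── microk8s/
│       └── .keep
├── clusters/
│   └── microk8s/
│       └── flux-system/
└── infra/
    ├── base/
    └── microk8s/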
I touch the blank .keep files to ensure that the otherwise-empty apps directories can be committed. We'll create an apps.yaml and infra.yaml file in the clusters/microk8s directory - these files will be used by flux to apply the resources in the respective directories. The apps.yaml file contains the following content
# clusters/microk8s/apps.yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 5m
  retryInterval: 1m
  timeout: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/microk8s
  prune: true
  wait: true
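As an aside, these manifests don't have to be written by hand - the flux CLI can generate them. A sketch (the exact fields it emits may differ slightly from the file above):
flux create kustomization apps \
  --source=GitRepository/flux-system \
  --path="./apps/microk8s" \
  --prune=true \
  --interval=5m \
  --export > clusters/microk8s/apps.yaml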
The infra.yaml file is identical apart from the name and path:
# clusters/microk8s/infra.yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: infra
  namespace: flux-system
spec:
  interval: 5m
  retryInterval: 1m
  timeout: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infra/microk8s
  prune: true
  wait: true
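Once these files are committed and pushed (we'll do that at the end), flux applies them on each sync interval. To trigger a sync immediately rather than waiting out the interval, the CLI can force a reconciliation:
flux reconcile kustomization infra --with-source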
Create a directory to store the NFS StorageClass(es) with
mkdir -p infra/base/sc/nfs
In the nfs directory, create a Kubernetes config file sc-nfs.yaml with the appropriate content
# infra/base/sc/nfs/sc-nfs.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nvme-nfs-csi # Name of the storage class
provisioner: nfs.csi.k8s.io
parameters:
  server: XXX.XXX.XXX.XXX # IP address of the NFS server
  share: /XXX/XXX/XXX # Path to the NFS share
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1
and a kustomization.yaml file in the same directory with the following content
# infra/base/sc/nfs/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sc-nfs.yaml
Next, create a kustomization.yaml file in the infra/microk8s directory with the following content
# infra/microk8s/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base/sc/nfs
Add all of the files to the git repository and push the changes to the remote repository with
git add . && git commit -m "Add NFS StorageClasses"
git push
Run flux get kustomizations --watch and you should see the infra and apps kustomizations get applied! If you were to manually delete any one of the flux-managed NFS resources, flux would automatically re-apply the resource to the cluster on its next update, as the quick test below shows.
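A quick way to watch that self-healing happen - delete the StorageClass by hand, force a reconcile, and it comes back from git:
kubectl delete storageclass nvme-nfs-csi         # simulate manual drift
flux reconcile kustomization infra --with-source
kubectl get storageclass nvme-nfs-csi            # re-created from the repository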
Going Further
Admittedly, this example is a little too simple - ideally, I would've added the csi-driver-nfs helm chart as a flux-managed resource, installed before the StorageClass kustomizations are applied; a rough sketch of that follows below. However, this post has gotten a little long, and I plan on describing how to use flux to manage more complex resources in a later post.
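For the curious, here is roughly what that would look like - a HelmRepository plus HelmRelease under infra/base, with the chart URL taken from the csi-driver-nfs project; treat the file path, apiVersions, and target namespace as assumptions to verify against your flux install:
# infra/base/csi-driver-nfs/release.yaml (hypothetical)
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: csi-driver-nfs
  namespace: flux-system
spec:
  interval: 1h
  url: https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: csi-driver-nfs
  namespace: kube-system
spec:
  interval: 10m
  chart:
    spec:
      chart: csi-driver-nfs
      sourceRef:
        kind: HelmRepository
        name: csi-driver-nfs
        namespace: flux-system
The infra Kustomization could then be split in two, with the StorageClass layer using a dependsOn entry pointing at the driver layer so flux guarantees the install order.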