GitOps With Flux

Flux is a collection of tools for keeping Kubernetes clusters in sync with sources of configuration such as Git repositories. In essence, it is desired state configuration for Kubernetes. Out of the box it integrates with tools such as Kustomize and Helm, with source control platforms such as GitHub and GitLab, and with notification and monitoring systems.

Installing Flux

Flux offers installation steps for a number of platforms. Below are my suggested ways of installing it.

Homebrew

Homebrew is probably the easiest way to install Flux. Simply run the following command:

brew install fluxcd/tap/flux

Nix-Shell

If you are using my Nix setup, you can install Flux by changing your shell.nix to resemble the following:

let
  sources = import ./nix/sources.nix { };
  pkgs = import sources.nixpkgs { };
in
pkgs.mkShell {
  buildInputs = [
    pkgs.fluxcd
  ];
}
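
Whichever method you used, you can verify the installation and check that your cluster meets Flux's prerequisites with the CLI itself:

# Print the installed client version
flux --version

# Verify that the target cluster meets the prerequisites
flux check --pre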

Getting Things Ready

As we will be using GitHub in this example, the first step is to create a new personal access token. As per the Flux documentation, if you are using a new repository, the token will require full repo permissions. However, if you are using an existing repository, the token will require admin permissions so that it can create a deploy key.

Once you have the token, run the following commands to export some configuration that we will be using later:

export GITHUB_TOKEN=<your-token>
export GITHUB_USER=<your-username>
export GITHUB_REPO=<repository-name>

Bootstrapping Our Cluster

To bootstrap our cluster we only need to run the following command:

flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository=$GITHUB_REPO \
  --branch=main \
  --path=./clusters/flux-is-awesome \
  --personal

Once all the fancy output is done, you should see a new deploy key and some files added to your repository. On the Kubernetes side of things, your cluster should have a new namespace called flux-system. In that namespace you should have the following deployments:

  • source-controller
  • kustomize-controller
  • helm-controller
  • notification-controller

Note: The above controllers are not the only controllers that Flux offers, but they are the defaults. Check the documentation for more information on the other available controllers.
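
To confirm the bootstrap succeeded, a quick sanity check (assuming your kubeconfig points at the bootstrapped cluster) is to list the controllers and let Flux verify itself:

# List the controllers installed into the flux-system namespace
kubectl -n flux-system get deployments

# Verify that all Flux components are up and healthy
flux check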

Source Controller

The source controller is in charge of retrieving all the source files from your repositories. It also provides a couple of CRDs for interacting with the configured Git and Helm repositories.

It provides the following features:

  • Validate source definitions
  • Authenticate to sources (SSH, user/password, API token)
  • Validate source authenticity (PGP)
  • Detect source changes based on update policies (semver)
  • Fetch resources on-demand and on-a-schedule
  • Package the fetched resources into a well-known format (tar.gz, yaml)
  • Make the artifacts addressable by their source identifier (sha, version, ts)
  • Make the artifacts available in-cluster to interested 3rd parties
  • Notify interested 3rd parties of source changes and availability (status conditions, events, hooks)
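
You can see the source controller at work by listing the sources it is tracking. After the bootstrap it should show at least the flux-system Git repository; the Helm repository we add later in this post will show up here too:

# List the Git repositories watched by the source controller
flux get sources git

# List the Helm repositories watched by the source controller
flux get sources helm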

Kustomize Controller

The role of the Kustomize controller is to run continuous delivery pipelines for infrastructure and workloads defined in Kubernetes manifests and assembled with Kustomize.

It provides the following features:

  • Reconciles the cluster state from multiple sources (provided by source-controller)
  • Generates manifests with Kustomize (from plain Kubernetes yamls or Kustomize overlays)
  • Validates manifests against Kubernetes API
  • Impersonates service accounts (multi-tenancy RBAC)
  • Health assessment of the deployed workloads
  • Runs pipelines in a specific order (depends-on relationship)
  • Prunes objects removed from source (garbage collection)
  • Reports cluster state changes (alerting provided by notification-controller)
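
The easiest way to observe the Kustomize controller is to list its reconciliations. After the bootstrap this only shows flux-system; the infrastructure and apps Kustomizations we create later will appear here as well:

# List all Kustomizations and their reconciliation status
flux get kustomizations --all-namespaces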

Helm Controller

The Helm controller allows you to declaratively manage Helm chart sources and releases.

It provides the following features:

  • Watches for HelmRelease objects and generates HelmChart objects
  • Supports HelmChart artifacts produced from HelmRepository and GitRepository sources
  • Fetches artifacts produced by source-controller from HelmChart objects
  • Watches HelmChart objects for revision changes (including semver ranges for charts from HelmRepository sources)
  • Performs automated Helm actions, including Helm tests, rollbacks and uninstalls
  • Offers extensive configuration options for automated remediation (rollback, uninstall, retry) on failed Helm install, upgrade or test actions
  • Runs Helm install/upgrade in a specific order, taking into account the depends-on relationship defined in a set of HelmRelease objects
  • Prunes Helm releases removed from cluster (garbage collection)
  • Reports Helm releases statuses (alerting provided by notification-controller)
  • Built-in Kustomize compatible Helm post renderer, providing support for strategic merge, JSON 6902 and images patches
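
The CLI offers matching commands for inspecting the releases the Helm controller manages. As a sketch, once the kong release from later in this post exists you can check on it like this:

# List HelmReleases across all namespaces
flux get helmreleases --all-namespaces

# Inspect the events and status of a specific release
kubectl -n kong describe helmrelease kong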

Notification Controller

The notification controller specializes in handling inbound and outbound events.

The controller can handle events from external systems like GitHub, GitLab, Harbor, and Jenkins. It also dispatches to external systems like Slack, Microsoft Teams, and Discord.
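
As a hedged sketch of the outbound side, the CLI can create a notification Provider and an Alert. The webhook URL and channel below are placeholders you would replace with your own:

# Create a Slack provider (the webhook URL is a placeholder)
flux create alert-provider slack \
  --type slack \
  --channel general \
  --address https://hooks.slack.com/services/YOUR/WEBHOOK/URL

# Forward info-level events from the apps Kustomization to Slack
flux create alert apps-alert \
  --provider-ref slack \
  --event-severity info \
  --event-source Kustomization/apps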

Let’s Deploy Something

Clone the repo you used in the above bootstrap command. We will be adding additional configuration to the same repo.

For this demo we will be configuring Kong Ingress Controller and HTTPBin. I have created a GitHub repository (geevcookie/gitops-with-flux) with all the config used for reference.

Folder Structure

After you clone the repo you should have the following folder structure:

└── clusters
    └── local

Not a lot happening here. We are going to aim for something more like this:

├── apps
│   ├── base        # base configuration for all apps
│   └── local       # "local" cluster specific configuration for apps
├── infrastructure
│   ├── base        # base configuration for all infrastructure
│   ├── local       # "local" cluster specific configuration for infrastructure
│   └── sources     # source configuration for helm charts and git repos
└── clusters
    └── local       # all system configuration for local cluster

Create the folders above in preparation for our config files.
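
One way to create them all in a single command, assuming you are in the root of the cloned repository:

# Create the folder structure for apps, infrastructure, and clusters
mkdir -p apps/base apps/local \
  infrastructure/base infrastructure/local infrastructure/sources \
  clusters/local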

Using Helm to Install Kong

Before we can get Flux to install the Helm chart for Kong, we need to configure a source for it. Flux supports Helm deployments from both HelmRepository and GitRepository sources; for Kong we will use a HelmRepository.

Add the following content to ./infrastructure/sources/kong.yaml:

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: kong
  namespace: flux-system
spec:
  interval: 10m0s
  url: https://charts.konghq.com

The above manifest tells the Source Controller to fetch the Helm repository index every 10 minutes from the official Kong chart repository at https://charts.konghq.com. We also need to configure the Kustomization, so add the following to ./infrastructure/sources/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: flux-system
resources:
  - kong.yaml

Now that the source is configured we can get started on the deployment. I prefer to create base configurations and then override them per environment. Create the following files for the base Kong configuration:

./infrastructure/base/kong/namespace.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: kong

./infrastructure/base/kong/release.yaml:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kong
  namespace: kong
spec:
  chart:
    spec:
      chart: kong
      version: "2.6.4"
      sourceRef:
        kind: HelmRepository
        name: kong
        namespace: flux-system
  interval: 1m0s
  targetNamespace: kong

And lastly, ./infrastructure/base/kong/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - release.yaml

Next, we need to tie all of the above together. The last infrastructure file required for our local cluster is the Kustomization. Add the following content to ./infrastructure/local/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../sources
  - ../base/kong

Now that all the config for Kong is in place, we need to include it with our local cluster configuration. As with most things in Flux, this will require another Kustomization. Create a new file at ./clusters/local/infrastructure.yaml and add the following content:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/local
  prune: true

The above Kustomization has a new prune property. When set to true it ensures that resources are deleted from the cluster if they are removed from the configuration.

Commit and push all the new changes and files. If you did everything right, the new resources will start spinning up within a few seconds. Run the following command to test if Kong is up and accessible:

curl localhost

If Kong is reachable, the response should be similar to: {"message":"no Route matched with those values"}
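
Two things worth noting here. First, instead of waiting for the sync interval you can force a reconciliation with the CLI. Second, curl localhost assumes Kong's proxy service got a LoadBalancer address on localhost (as it does on local clusters such as Docker Desktop); if not, you can port-forward instead. The kong-kong-proxy service name below is an assumption based on the release and chart names, so check the actual name with kubectl -n kong get svc:

# Force Flux to sync the repository and apply the infrastructure immediately
flux reconcile kustomization infrastructure --with-source

# Fallback if no LoadBalancer is available: forward the proxy locally
kubectl -n kong port-forward svc/kong-kong-proxy 8080:80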

Deploy HTTPBin

Unlike Kong, there is no official Helm chart for HTTPBin, so we will be using standard Kubernetes manifests. As there is nothing special about the configuration, simply create the following files:

./apps/base/httpbin/namespace.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: httpbin

./apps/base/httpbin/deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: httpbin
spec:
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP

./apps/base/httpbin/service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: httpbin
spec:
  selector:
    app: httpbin
  type: ClusterIP
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80

./apps/base/httpbin/ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  namespace: httpbin
  annotations:
    konghq.com/strip-path: "true"
    kubernetes.io/ingress.class: kong
spec:
  rules:
    - http:
        paths:
          - path: /httpbin
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  number: 80

./apps/base/httpbin/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml

And finally, ./apps/local/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base/httpbin

Like before, we need to tie all of this together and add it to our local cluster configuration. Add the following to ./clusters/local/apps.yaml:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: infrastructure
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/local
  prune: true

Commit the above changes and new files and push them to your repository. It shouldn’t take too long before the application is deployed. As per ingress.yaml, we have configured Kong to forward all traffic with a prefix of /httpbin to our HTTPBin deployment. You can test if the application is running with the following command:

curl localhost/httpbin/get

You will receive a response similar to this:

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Connection": "keep-alive",
    "Host": "localhost",
    "User-Agent": "curl/7.82.0",
    "X-Forwarded-Host": "localhost",
    "X-Forwarded-Path": "/httpbin/get",
    "X-Forwarded-Prefix": "/httpbin/"
  },
  "origin": "172.17.0.1",
  "url": "http://localhost/get"
}

Upgrading Kong

To show off the power of what we just configured, let’s bump the version of Kong and see how Flux handles it. Update ./infrastructure/base/kong/release.yaml so that the spec section matches the following:

spec:
  chart:
    spec:
      chart: kong
      version: "2.7.0"

Once committed and pushed, monitor the pods in the kong namespace. You should see that Flux automatically upgrades the Helm release to the new version.
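
Two handy ways to watch the rollout:

# Watch the pods roll as the new chart version is applied
kubectl -n kong get pods --watch

# Confirm the HelmRelease has moved to the new chart version
flux get helmreleases --all-namespaces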

Sharing Deployment Keys

If you are planning on using this configuration on more than one cluster, you will quickly notice that the bootstrap command overwrites the deploy key on the repository each time it runs, leaving the other clusters unable to sync. Luckily, there is a way to specify your own deploy key instead of letting Flux generate one for you.

First, let’s generate some new keys:

ssh-keygen -t ecdsa -b 521 -f ./identity

We also need GitHub’s SSH host keys so that Flux can verify the server it is talking to. The easiest way to fetch them is by running the following command:

ssh-keyscan github.com > known_hosts

Lastly, run the following command to create the secret that Flux will use to communicate with GitHub:

kubectl -n flux-system create secret generic flux-system --from-file=./identity --from-file=./identity.pub --from-file=./known_hosts

I highly recommend storing these keys somewhere secure, such as Azure Key Vault, and creating a script to handle downloading them and recreating the secret.
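
As a rough sketch of such a script, assuming the three files were uploaded to an Azure Key Vault beforehand (the vault and secret names below are placeholders for your own):

#!/usr/bin/env bash
set -euo pipefail

# Download the key material from Azure Key Vault (all names are placeholders)
az keyvault secret download --vault-name my-vault --name flux-identity --file ./identity
az keyvault secret download --vault-name my-vault --name flux-identity-pub --file ./identity.pub
az keyvault secret download --vault-name my-vault --name flux-known-hosts --file ./known_hosts

# Recreate the flux-system secret on the current cluster
kubectl -n flux-system create secret generic flux-system \
  --from-file=./identity --from-file=./identity.pub --from-file=./known_hosts

# Remove the local copies of the key material
rm ./identity ./identity.pub ./known_hosts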

Metrics

Flux includes Prometheus integration and offers official Grafana dashboards.

[Image: Flux Cluster Dashboard]

Review the official documentation for a guide on how to install the monitoring stack.

Conclusion

Flux makes it really easy to start using GitOps on your clusters. It offers an expansive set of features that cover almost all use cases. The built-in monitoring features, together with the integrations offered by the notification controller, also address the observability problems commonly associated with GitOps.

The true power of Flux shows once you use it on more than one cluster. The ability to deploy an application on multiple clusters with a single configuration and a simple git push is incredibly powerful.

Check back for future posts on additional Flux features and configuration tips.