CD Tricks for Kubernetes Deployment + ConfigMap

It is common to extract the application configuration into a separate file that the container image holding the application binary consumes as a runtime dependency. As a result, the same image can be used (and thus "promoted") across deployment environments, from dev to staging to prod. Kubernetes offers native support to do exactly that, but not without some caveats that I hope to call out for you.

The Kubernetes Deployment is an API object that manages a replicated set of Pods. A Pod is a collection of one or more containers and is the smallest atomic unit to be provisioned. A ReplicaSet holds a set of identical Pods and ensures the number of running replicas matches the desired state. The Deployment enables rolling upgrades of your application with zero downtime by gradually scaling up a new ReplicaSet running the new version while scaling down the old one. Hence, the Deployment object has become the de facto way to manage application life cycles.

ConfigMap is another Kubernetes object; it is essentially a set of key-value pairs that represents a configuration. It can also represent a file, with the key as the file name and the value as its content. To make it accessible to the application, a ConfigMap can be mounted into a Pod as a volume in the container's file system. You may create or update a ConfigMap with

kubectl create configmap myconfig \
    --from-file=/path/to/config.yaml \
    --dry-run=client -o yaml \
    | kubectl apply -f -

The first trick to share is piping the dry run into apply. If a ConfigMap with the same name already exists, kubectl create will fail, yet there is no way to update a ConfigMap directly from a file the way kubectl create builds one. The dry-run pipe makes such updates possible.
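
For reference, the manifest that the dry run emits (and that gets piped into apply) looks roughly like the following. This is just a sketch: the data entry mirrors whatever /path/to/config.yaml contains, keyed by the file name, and the log_level line is only a placeholder.

apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
data:
  config.yaml: |
    # verbatim contents of /path/to/config.yaml
    log_level: info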

And then the deployment manifest may look something like this.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0.0-alpine
          args: ["--config", "etc/myapp/config.yaml"]
          volumeMounts:
            - name: myconfig
              mountPath: /etc/myapp
      volumes:
        - name: myconfig
          configMap:
            name: myconfig

But what if we only want to update the ConfigMap while keeping the same container image? Although such an update is eventually propagated to the container file system (the kubelet periodically refreshes mounted ConfigMap volumes, so re-reading the config file after the update retrieves the latest content), most applications only load the config file during initialization. The challenge becomes how to instruct the application deployment to pick up the latest config file with zero downtime.

Recall that the Deployment object manages the replica set of application containers. The key here is to trigger another Deployment rollout, so that the newly created Pods pick up the latest config file. Updating the ConfigMap alone will NOT trigger a rollout, because the Pod template has not changed. The trick is to include a CONFIG_HASH environment variable in the Pod template: whenever its value changes, a Deployment rollout is triggered.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0.0-alpine
          args: ["--config", "etc/myapp/config.yaml"]
          env:
            - name: CONFIG_HASH
              value: ${CONFIG_HASH}
          volumeMounts:
            - name: myconfig
              mountPath: /etc/myapp
      volumes:
        - name: myconfig
          configMap:
            name: myconfig

The final deployment script becomes

kubectl create configmap myconfig \
    --from-file=/path/to/config.yaml \
    --dry-run=client -o yaml \
    | kubectl apply -f -

export CONFIG_HASH=$( \
    cat /path/to/config.yaml \
    | shasum | cut -d' ' -f 1)

envsubst < deploy.yaml | kubectl apply -f -