Using Kaniko for Container Builds on Kubernetes

CI/CD with Kubernetes is really at least two pipelines, potentially using multiple tools to get the job done.

  • CI: a pipeline to test, build, and push an image to a remote image registry
  • CD: a pipeline to deploy that image to Kubernetes, either by creating or updating manifests or by changing Helm charts

This post focuses on the CI part, specifically building an image. The details vary from company to company depending on the application, but the high-level steps are the same: the code changed in version control must be tested, a Dockerfile is used to build a container image, and (assuming all tests pass) the resulting image is tagged and pushed to a container image registry. I recommend using multi-stage builds whenever possible to keep images small, and maintaining semantic versioning of images (e.g. 1.0.2-GIT_SHA). If you’re planning on building on Kubernetes, there are several ways to go.
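
As a concrete sketch of that tagging scheme, a CI step might derive the tag from the app version and the commit SHA (the version number and image name here are placeholders):

# Sketch: build a VERSION-GIT_SHA tag in a CI shell step
VERSION="1.0.2"                          # the application's semantic version
GIT_SHA="$(git rev-parse --short HEAD)"  # short commit SHA from version control
TAG="${VERSION}-${GIT_SHA}"              # e.g. 1.0.2-a1b2c3d

# The full image reference is then handed to whatever builds and pushes the image,
# e.g. Kaniko's --destination flag shown later in this post:
IMAGE="registry/repo/container:${TAG}"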

Mount the Docker socket

One option: mount the host Docker socket (/var/run/docker.sock) into your Pod and use the host’s Docker daemon to execute your builds. Don’t do this. Container images you build might interfere with things already running on the host, and it’s a security risk: anything that can talk to that socket effectively has root on the node.
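
For reference, this is the pattern to avoid: a hostPath volume that hands the node’s Docker socket to your build container (the pod and image names here are only illustrative):

# Anti-pattern: don't do this
apiVersion: v1
kind: Pod
metadata:
  name: build-with-host-docker      # illustrative name
spec:
  containers:
  - name: builder
    image: docker:stable            # docker CLI talking to the host daemon
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock    # the host's Docker daemon socket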

Docker-In-Docker & Docker-outside-of-Docker

Another option: run a sidecar container in your build pod that exposes a Docker daemon on localhost, which your build container uses. Since the host is already running Docker, you end up running a Docker daemon in a container on top of the host’s Docker.

This is slightly better than mounting the host’s Docker directly: storage for this Docker is ephemeral, container images won’t conflict with anything potentially running on the host, and you have networking. However, to get this to work, the docker-in-docker container has to run as privileged. That’s usually a non-starter for highly secure environments, or if you’re using strict PodSecurityPolicies (which you should).
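
For comparison, the docker-in-docker sidecar pattern looks roughly like this; the privileged flag on the sidecar is exactly what strict PodSecurityPolicies will reject (names and image tags are illustrative):

# Docker-in-Docker: the build container talks to a daemon running as a sidecar
apiVersion: v1
kind: Pod
metadata:
  name: build-with-dind             # illustrative name
spec:
  containers:
  - name: builder
    image: docker:stable            # docker CLI only
    env:
    - name: DOCKER_HOST             # point the CLI at the sidecar daemon
      value: tcp://localhost:2375
  - name: dind
    image: docker:stable-dind       # the sidecar Docker daemon
    securityContext:
      privileged: true              # required for the daemon to work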

Kaniko

Let’s be real. Friends don’t let friends mount docker.sock in Kubernetes pods to build images through a CI pipeline. Enter Kaniko! Kaniko is the best thing I’ve found, and it’s backed by Google. Kaniko doesn’t depend on a Docker daemon and executes each command within a Dockerfile completely in userspace. Similar tools include img and buildah.

For some reason Kaniko’s documentation at the time of this writing recommends running Kaniko in a pod, but I think it makes more sense as a Kubernetes Job, especially as part of a build pipeline.

Sample App

You need something to build, so here’s the sample code we’ll use: a small Go binary and a Dockerfile to build and package it. The final image is about 6MB.

// hello.go
package main

import "fmt"

func main() {
    fmt.Println("Hello FE")
}

# Dockerfile
FROM golang:1.10 AS build-env
COPY . /app
RUN cd /app && go build -o hello hello.go

FROM alpine
WORKDIR /app
COPY --from=build-env /app/hello /app/hello
RUN chown nobody:nogroup /app
USER nobody
ENTRYPOINT ["/app/hello"]
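
If you want to sanity-check the multi-stage build locally before wiring it into a cluster, a plain Docker build on your workstation works fine (the image name is arbitrary):

# Local sanity check, outside the Kubernetes pipeline
docker build -t hello-test .
docker run --rm hello-test    # prints "Hello FE"
docker images hello-test      # confirm the small final image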

Some Kubernetes YAML

So much YAML. For Kaniko to push to a registry (in this case Docker Hub), you’ll need registry credentials stored as a Kubernetes secret in the same namespace as the build Job (kanikotest here).

kubectl create secret docker-registry docker-secret \
  --namespace kanikotest \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
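
You can verify the secret holds a usable Docker config by decoding it:

# Inspect the generated Docker config (note the escaped dot in the key name)
kubectl get secret docker-secret --namespace kanikotest \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d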

Here’s Kaniko run as a job:

apiVersion: batch/v1
kind: Job
metadata:
  name: kanikojob
  namespace: kanikotest
spec:
  completions: 1
  template:
    metadata:
      name: kanikojob
      namespace: kanikotest
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-clone-repo
        image: alpine/git
        args:
        - clone
        - --single-branch
        - --
        - https://github.com/repo/code.git
        - /context
        volumeMounts:
        - name: context
          mountPath: /context
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args: ["--dockerfile=/context/Dockerfile",
              "--context=/context",
              "--destination=registry/repo/container:latest"]
        volumeMounts:
          - name: context
            mountPath: /context
          - name: registry-creds
            mountPath: /root/
      volumes:
        - name: registry-creds
          projected:
            sources:
            - secret:
                name: docker-secret
                items:
                - key: .dockerconfigjson
                  path: .docker/config.json
        - name: context
          emptyDir: {}

So what’s going on in all this? I’m using an initContainer to clone the repository where my sample code lives, then sharing a volume with the Kaniko container to pass in the build context (the Dockerfile and code). I’m also mounting my docker-secret registry credentials, which Kaniko uses to push the completed image to Docker Hub.

This Kubernetes Job runs once to completion, and its pod is not restarted if it fails (add backoffLimit: 0 to the spec if you also don’t want the Job controller to retry with a new pod). If you were running this as part of a pipeline, you could create the Job and then issue a command to wait for the Job condition to be “complete”:

kubectl wait --for=condition=complete job/kanikojob --namespace kanikotest --timeout=300s
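
Once it completes, you can pull the Kaniko build log and delete the Job so the next pipeline run can create it again:

# After the Job completes, inspect the build output and clean up
kubectl logs job/kanikojob --namespace kanikotest      # Kaniko's build and push log
kubectl delete job/kanikojob --namespace kanikotest    # remove it so the next run can re-create it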