Using Kaniko for Container Builds on Kubernetes

CI/CD with Kubernetes is really at least two pipelines, potentially using multiple tools to get the job done.

  • CI: A pipeline to test, build, and push an image to a remote image registry
  • CD: A pipeline to deploy that image to Kubernetes by either creating or updating manifests or changing Helm charts

This post focuses on the CI part, specifically building an image. The details will vary from company to company based on their applications, but the high-level steps are the same: the code changed in version control must be tested, a Dockerfile is used to build a container image, and the resulting image (assuming all tests pass) is tagged and pushed to a container image registry. I recommend using multi-stage builds whenever possible to produce smaller images, and maintaining semantic versioning of images (e.g. 1.0.2-GIT_SHA). If you're planning on building on Kubernetes, there are several ways to go.
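As a quick illustration of the tagging scheme above, here's a minimal shell sketch a CI step might use to derive an image tag. The version string, repository name, and fallback SHA are hypothetical, for illustration only.

```shell
#!/bin/sh
# Sketch: build an image tag in the 1.0.2-GIT_SHA scheme.
# VERSION, the repo name, and the fallback SHA are hypothetical.
VERSION="1.0.2"
# In a real pipeline this would be: SHA=$(git rev-parse --short HEAD)
SHA="${GIT_SHA:-abc1234}"
TAG="${VERSION}-${SHA}"
echo "myrepo/hello:${TAG}"
```

The resulting tag is both human-readable (the semantic version) and traceable back to an exact commit (the SHA suffix).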

Mount the Docker socket

One option: Mount the host Docker socket (/var/run/docker.sock) into your Pod and use the host Docker daemon to execute your builds. Don't do this. Container images you build might interfere with things running on the host, and it's a security risk: anything with access to the socket effectively has root on the node.

Docker-In-Docker & Docker-outside-of-Docker

Another option: use a sidecar container in your build Pod that exposes a Docker daemon on localhost, which is then used to build your container. Since your host is also running Docker, this means you're running a Docker daemon in a container on top of the host's Docker.

This is slightly better than mounting the host's Docker socket directly: storage for this Docker daemon is ephemeral, container images won't conflict with anything running on the host, and you have networking. However, to get this to work, the docker-in-docker container has to run as privileged. This is usually a non-starter in highly secure environments, or if you're using strict PodSecurityPolicies (which you should).

Kaniko

Let's be real: friends don't let friends mount docker.sock in Kubernetes Pods to build images in a CI pipeline. Enter Kaniko! Kaniko is the best option I've found, and it's backed by Google. Kaniko doesn't depend on a Docker daemon and executes each command in a Dockerfile entirely in userspace. Similar tools include img and buildah.

For some reason Kaniko’s documentation at the time of this writing recommends running Kaniko in a pod, but I think it makes more sense as a Kubernetes Job, especially as part of a build pipeline.

Sample App

You need something to build, so here's the sample code we'll use: a small Go binary and a Dockerfile to build and package it. The resulting image is about 6MB.

package main

import "fmt"

func main() {
	fmt.Println("Hello FE")
}

FROM golang:1.10 AS build-env
COPY . /app
RUN cd /app && go build -o hello hello.go

FROM alpine
COPY --from=build-env /app/hello /app/hello
RUN chown nobody:nogroup /app
USER nobody
ENTRYPOINT ["/app/hello"]

Some Kubernetes YAML

So much YAML. For Kaniko to push to a registry (in this case Docker Hub), you’ll need registry credentials stored as a secret.

kubectl create secret docker-registry docker-secret \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD
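Under the hood, that secret stores a .dockerconfigjson entry, which is just the standard Docker config.json format with a base64-encoded user:password pair. Here's a sketch of what Kaniko ends up reading, built with hypothetical credentials for illustration only.

```shell
#!/bin/sh
# Sketch: the config.json payload a docker-registry secret carries.
# The username and password here are hypothetical.
DOCKER_USER="demo"
DOCKER_PASSWORD="s3cret"
AUTH=$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASSWORD" | base64)
cat > config.json <<EOF
{"auths": {"https://index.docker.io/v1/": {"auth": "${AUTH}"}}}
EOF
cat config.json
```

This is the file the Job below projects into the Kaniko container as .docker/config.json.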

Here’s Kaniko run as a job:

apiVersion: batch/v1
kind: Job
metadata:
  name: kanikojob
  namespace: kanikotest
spec:
  completions: 1
  template:
    metadata:
      name: kanikojob
      namespace: kanikotest
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-clone-repo
        image: alpine/git
        args:
        - clone
        - --single-branch
        - --
        - https://github.com/YOUR_ORG/YOUR_REPO.git  # replace with your repository
        - /context
        volumeMounts:
        - name: context
          mountPath: /context
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args: ["--dockerfile=/context/Dockerfile",
               "--context=/context",
               "--destination=YOUR_USER/hello:1.0.2"]  # replace with your image
        volumeMounts:
        - name: context
          mountPath: /context
        - name: registry-creds
          mountPath: /root/
      volumes:
      - name: registry-creds
        projected:
          sources:
          - secret:
              name: docker-secret
              items:
              - key: .dockerconfigjson
                path: .docker/config.json
      - name: context
        emptyDir: {}

So what's going on in all this? I'm using an initContainer to clone the repository where my sample code lives, then using a volume shared with the Kaniko container to pass in the build context (the Dockerfile and source). I'm also mounting the docker-secret registry credentials so Kaniko can push the completed image to Docker Hub.

This Kubernetes Job runs once (to completion) and never restarts if it fails. If you were running this as part of a pipeline, you could create this Job then issue a command to wait for the Job condition to be “complete”:

kubectl wait --for=condition=complete job/kanikojob --timeout=300s
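Putting it together, a pipeline step might wrap the create/wait/cleanup sequence in a small function. This is a sketch with hypothetical file and namespace names; it assumes kubectl is already configured against your cluster.

```shell
#!/bin/sh
# Sketch of a CI step: create the Kaniko Job, block until it completes,
# and surface logs on failure. kaniko-job.yaml is a hypothetical filename.
run_kaniko_build() {
  kubectl apply -f kaniko-job.yaml || return 1
  if kubectl wait --for=condition=complete job/kanikojob \
      --namespace=kanikotest --timeout=300s; then
    echo "build succeeded"
  else
    echo "build failed, dumping logs" >&2
    kubectl logs job/kanikojob --namespace=kanikotest >&2
    return 1
  fi
  # Completed Jobs are not garbage-collected by default; delete it so the
  # name is free for the next pipeline run.
  kubectl delete job/kanikojob --namespace=kanikotest
}
```

Note the delete at the end: since the Job name is fixed, a second pipeline run would otherwise fail to create it.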