`cluster-docker` mode, and the (optional) in-cluster image registry, support for `PersistentVolumeClaim`s is required, with enough disk space for layer caches and built images. The in-cluster registry also requires support for `hostPort`, and for reaching `hostPort`s from the node/Kubelet. This should work out of the box in most standard setups, but clusters using Cilium for networking, for example, may need to configure this specifically.
`cluster-docker` — (Deprecated) A single Docker daemon installed in the `garden-system` namespace and shared between users/deployments. It is no longer recommended and we will remove it in future releases.
`local-docker` — Build using the local Docker daemon on the developer/CI machine before pushing to the cluster/registry.
`local-docker` mode is set by default. You should use it with Docker Desktop, Minikube and most other local development clusters.
`kaniko` is a solid choice for most cases and is currently our first recommendation. It is battle-tested among Garden's most demanding users (including the Garden team itself). It also scales horizontally and elastically, since individual Pods are created for each build. It doesn't require privileged containers to run and requires no shared cluster-wide services.
`cluster-buildkit` is a new addition and replaces the older `cluster-docker` mode. A BuildKit Deployment is dynamically created in each project namespace and, much like Kaniko, requires no other cluster-wide services. This mode also offers a rootless option, which runs without any elevated privileges, in clusters that support it.
For ephemeral namespaces, `kaniko` is generally the better option, since the persistent BuildKit deployment won't have a warm cache anyway. For long-lived namespaces, like the ones a developer uses while working, `cluster-buildkit` may be a more performant option.
Enable this build mode by setting `buildMode: kaniko` in your `kubernetes` provider configuration.
We also recommend setting `kaniko.namespace: null` in the `kubernetes` provider configuration, so that builder pods are started in the project namespace instead of the `garden-system` namespace, which is the current default. This will become the default in Garden v0.13.
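Putting the two settings above together, a provider entry could look like the following sketch (the field placement mirrors the `buildMode` and `kaniko.namespace` settings described above; environment and project layout are up to you):

```yaml
# Project-level Garden provider configuration (sketch)
providers:
  - name: kubernetes
    buildMode: kaniko
    kaniko:
      # Start builder pods in the project namespace instead of garden-system
      namespace: null
```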
You can provide additional arguments to Kaniko via the `extraFlags` field. Users with projects with a large number of files should take a look at the `--use-new-run` option, as it can provide significant performance improvements. Please refer to the official docs for the full list of available flags.
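As a sketch, the flag mentioned above would be passed under the `kaniko` section like this:

```yaml
providers:
  - name: kubernetes
    buildMode: kaniko
    kaniko:
      # Extra arguments passed through to the Kaniko executor
      extraFlags:
        - --use-new-run
```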
Enable this build mode by setting `buildMode: cluster-buildkit` in your `kubernetes` provider configuration.
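A minimal sketch of this configuration, including the rootless option mentioned earlier (the `clusterBuildkit.rootless` field name is an assumption to verify against the provider reference):

```yaml
providers:
  - name: kubernetes
    buildMode: cluster-buildkit
    clusterBuildkit:
      # Run BuildKit without elevated privileges, in clusters
      # that support it (field name assumed)
      rootless: true
```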
The `cluster-docker` mode installs a standalone Docker daemon into your cluster, which is then used for builds across all users of the cluster, along with a handful of other supporting services.
Enable it by setting `buildMode: cluster-docker` in your `kubernetes` provider configuration.
You then need to run `garden plugins kubernetes cluster-init --env=<env-name>` for each applicable environment, in order to install the required cluster-wide services. Those services include the Docker daemon itself, as well as an image registry, a sync service for receiving build contexts, two persistent volumes, an NFS volume provisioner for one of those volumes, and a couple of small utility services.
An NFS volume provisioner is automatically installed in `garden-system`, in order to be able to efficiently synchronize build sources to the cluster and then attach those to the Kaniko pods. You can also specify a `storageClass` to provide another ReadWriteMany-capable storage class to use instead of NFS. This may be advisable if your cloud provider provides a good alternative, or if you already have such a provisioner installed.
`cluster-buildkit` build mode, which doesn't use Docker at all. In most cases, this should work well and offer a bit of added performance, but it remains optional for now. If you have `cluster-docker` set as your `buildMode`, you can enable BuildKit for an environment by adding the following to your `kubernetes` provider configuration:
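As a sketch of what that addition could look like (the `clusterDocker.enableBuildKit` field name is an assumption to check against the provider reference):

```yaml
providers:
  - name: kubernetes
    buildMode: cluster-docker
    clusterDocker:
      # Use BuildKit inside the shared Docker daemon (field name assumed)
      enableBuildKit: true
```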
You can leave the `deploymentRegistry` field on the `kubernetes` provider config undefined, and run `garden plugins kubernetes cluster-init --env=<env-name>` to install the registry. This is nice and convenient, but is not a particularly good approach for clusters with many users or lots of builds. When using the in-cluster registry you need to take care of cleaning it up routinely, and it may become a performance and redundancy bottleneck with many users and frequent (or heavy) builds.
Specify the `deploymentRegistry` field on your `kubernetes` provider, and in many cases you also need to provide a Secret in order to authenticate with the registry via the `imagePullSecrets` field.
If you set `namespace: my-project-id` for the `deploymentRegistry` field, and you have a container module named `some-module` in your project, it will be tagged and pushed to `my-registry.com/my-project-id/some-module:v-<module-version>` after building. That image ID will then be used in Kubernetes manifests when running containers.
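As a sketch of the configuration implied by the example above (`hostname` and `namespace` are the assumed field names; `my-registry.com` and `my-project-id` are example values):

```yaml
providers:
  - name: kubernetes
    deploymentRegistry:
      hostname: my-registry.com
      namespace: my-project-id
```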
an `imagePullSecret` for your ECR repository. To do that, create a `config.json` somewhere with the following contents (`<region>` is a placeholder that you need to replace for your repo):
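A sketch of such a `config.json`, assuming the standard Docker credential-helper setup for ECR (the `<aws_account_id>` placeholder and the `ecr-login` helper are assumptions beyond the text above):

```json
{
  "credHelpers": {
    "<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
  }
}
```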
`gcr.io` with the correct registry hostname (e.g. `eu.gcr.io` or `asia.gcr.io`).
`docker.pkg.dev` with the correct registry hostname (e.g. `us-central1-docker.pkg.dev`).
When using the `cluster-docker` build mode, we additionally untag, in the Docker daemon, all images that are no longer in the registry, and then clean up the dangling image layers by running `docker image prune`.