`cluster-docker` mode, and the (optional) in-cluster image registry, support for `PersistentVolumeClaim`s is required, with enough disk space for layer caches and built images. The in-cluster registry also requires support for `hostPort`, and for reaching `hostPort`s from the node/Kubelet. This should work out-of-the-box in most standard setups, but clusters using Cilium for networking, for example, may need specific configuration.
- `cluster-docker` — (Deprecated) A single Docker daemon installed in the `garden-system` namespace and shared between users/deployments. It is no longer recommended and we will remove it in future releases.
- `local-docker` — Build using the local Docker daemon on the developer/CI machine before pushing to the cluster/registry.

`local-docker` mode is set by default, and is the right choice when using Docker for Desktop, Minikube and most other local development clusters.
`kaniko` is a solid choice for most cases and is currently our first recommendation. It is battle-tested among Garden's most demanding users (including the Garden team itself). It also scales horizontally and elastically, since an individual Pod is created for each build. It doesn't require privileged containers to run, and needs no shared cluster-wide services.

`cluster-buildkit` is a newer addition that replaces the older `cluster-docker` mode. A BuildKit Deployment is dynamically created in each project namespace and, much like Kaniko, requires no other cluster-wide services. This mode also offers a rootless option, which runs without any elevated privileges, in clusters that support it.

For short-lived namespaces, `kaniko` is generally the better option, since the persistent BuildKit deployment won't have a warm cache anyway. For long-lived namespaces, like the ones a developer uses while working, `cluster-buildkit` may be a more performant option.
The `kaniko` build mode no longer requires shared system services or an NFS provisioner, nor running `cluster-init` ahead of usage. Enable it by setting `buildMode: kaniko` in your `kubernetes` provider configuration.
We also recommend setting `kaniko.namespace: null` in the `kubernetes` provider configuration, so that builder pods are started in the project namespace instead of the `garden-system` namespace, which is the current default. This will become the default in Garden v0.13.
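Combined, the two settings above might look like this in a project configuration (a sketch; the project and environment names are placeholders):

```yaml
kind: Project
name: my-project        # placeholder project name
environments:
  - name: dev
providers:
  - name: kubernetes
    buildMode: kaniko
    kaniko:
      namespace: null   # run builder pods in the project namespace
```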
Note that if you're using ECR, you need to create a cache repository for Kaniko manually: for an image `my-org/my-image`, you need to manually create a repository next to it called `my-org/my-image/cache`. AWS ECR supports immutable image tags (see the announcement and documentation); make sure to set the cache repository's image tag mutability setting to `mutable`. By default, Kaniko's TTL on old cache layers is two weeks, and every layer of the image cache must be rebuilt after that if the image tags are `immutable`.
.--cache-repo
flag, which you can set on the extraFlags
field. See this GitHub comment in the Kaniko repo for more details.extraFlags
Additional arguments can be passed to Kaniko via the `extraFlags` field. Users with projects with a large number of files should take a look at the `--snapshotMode=redo` and `--use-new-run` options, as these can provide significant performance improvements. Please refer to the official docs for the full list of available flags.
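For example, assuming the flags are passed through the provider's `kaniko.extraFlags` field (check the exact field name against your Garden version's provider reference):

```yaml
providers:
  - name: kubernetes
    buildMode: kaniko
    kaniko:
      extraFlags:
        - --snapshotMode=redo   # faster snapshotting for projects with many files
        - --use-new-run
```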
A `nodeSelector` can also be used to serve the same purpose.
Like `kaniko` (and unlike `cluster-docker`), this mode requires no cluster-wide services or permissions to be managed, and thus no permissions outside of a single namespace for each user/project. Enable it by setting `buildMode: cluster-buildkit` in your `kubernetes` provider configuration.
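A sketch of the provider entry, with the rootless option shown as an optional extra (verify the `clusterBuildkit.rootless` field against your Garden version's reference):

```yaml
providers:
  - name: kubernetes
    buildMode: cluster-buildkit
    clusterBuildkit:
      rootless: true   # optional: run BuildKit without elevated privileges
```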
Here, too, a `nodeSelector` can be used to serve the same purpose.
The `cluster-docker` build mode has been deprecated and will be removed in an upcoming release. Please use `kaniko` or `cluster-buildkit` instead.

The `cluster-docker` mode installs a standalone Docker daemon into your cluster, which is then used for builds across all users of the cluster, along with a handful of other supporting services.
Enable it by setting `buildMode: cluster-docker` in your `kubernetes` provider configuration. Then run `garden plugins kubernetes cluster-init --env=<env-name>` for each applicable environment, in order to install the required cluster-wide services. Those services include the Docker daemon itself, as well as an image registry, a sync service for receiving build contexts, two persistent volumes, an NFS volume provisioner for one of those volumes, and a couple of small utility services.
The NFS volume provisioner is installed into `garden-system` in order to be able to efficiently synchronize build sources to the cluster and then attach those to the Kaniko pods. You can also specify a `storageClass` to provide another ReadWriteMany-capable storage class to use instead of NFS. This may be advisable if your cloud provider provides a good alternative, or if you already have such a provisioner installed.

You can optionally enable BuildKit in the in-cluster Docker daemon. Note that this is different from the `cluster-buildkit` build mode, which doesn't use Docker at all. In most cases, enabling BuildKit here should work well and offer a bit of added performance, but it remains optional for now. If you have `cluster-docker` set as your `buildMode`, you can enable BuildKit for an environment by adding the following to your `kubernetes` provider configuration:
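Assuming the `clusterDocker.enableBuildKit` field from the provider reference, the addition is:

```yaml
clusterDocker:
  enableBuildKit: true
```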
To use the (optional) in-cluster registry, leave the `deploymentRegistry` field on the `kubernetes` provider config undefined, and run `garden plugins kubernetes cluster-init --env=<env-name>` to install the registry. This is convenient, but not a particularly good approach for clusters with many users or lots of builds: you need to clean the registry up routinely, and it may become a performance and redundancy bottleneck with many users and frequent (or heavy) builds.
To use an external registry instead, set the `deploymentRegistry` field on your `kubernetes` provider, and in many cases also provide a Secret in order to authenticate with the registry, via the `imagePullSecrets` field:
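For example (a sketch; the hostname, namespace and Secret name are placeholders):

```yaml
providers:
  - name: kubernetes
    deploymentRegistry:
      hostname: my-registry.com     # placeholder registry hostname
      namespace: my-project-id      # registry namespace/prefix for this project
    imagePullSecrets:
      - name: my-registry-secret    # hypothetical Kubernetes Secret with registry credentials
        namespace: default
```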
If you specify `hostname: my-registry.com` and `namespace: my-project-id` for the `deploymentRegistry` field, and you have a container module named `some-module` in your project, it will be tagged and pushed to `my-registry.com/my-project-id/some-module:v-<module-version>` after building. That image ID will then be used in Kubernetes manifests when running containers.
Note that when using the `kaniko` or `cluster-docker` build mode, you need to re-run `garden plugins kubernetes cluster-init` any time you add or modify `imagePullSecrets`, for them to work.
For AWS ECR, you need to create an `imagePullSecret` for your ECR repository. First, create a `config.json` somewhere with the following contents (`<aws_account_id>` and `<region>` are placeholders that you need to replace for your repo):
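A sketch of the file, assuming the standard `ecr-login` credential helper handles authentication for the registry:

```json
{
  "credHelpers": {
    "<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
  }
}
```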
Then create a Secret from that file and reference it under `imagePullSecrets` in your `kubernetes` provider configuration.
For GCR, create a Secret with your service account's JSON key, replacing `gcr.io` with the correct registry hostname (e.g. `eu.gcr.io` or `asia.gcr.io`):
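One way to create such a Secret is with `kubectl`'s `docker-registry` secret type (a sketch; the Secret name `gcr-config`, the namespace and the key-file path are placeholders):

```sh
kubectl --namespace default create secret docker-registry gcr-config \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat keyfile.json)"
```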
Then add the Secret reference to your `kubernetes` provider configuration.
For Google Artifact Registry, follow the same steps, replacing `docker.pkg.dev` with the correct registry hostname (e.g. `southamerica-east1-docker.pkg.dev` or `australia-southeast1-docker.pkg.dev`).
Then add the Secret reference to your `kubernetes` provider configuration.

You can publish built images using the `garden publish` command. See the Publishing images section in the Container Modules guide for details.
When using the `kaniko` or `cluster-docker` build modes, the `kubernetes` provider exposes a utility command for cleaning up the in-cluster registry:
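The invocation looks like this (from the Garden 0.12 `kubernetes` plugin; check `garden plugins kubernetes --help` for the commands available in your version):

```sh
garden plugins kubernetes cleanup-cluster-registry --env=<env-name>
```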
When using the `cluster-docker` build mode, we additionally untag, in the Docker daemon, all images that are no longer in the registry, and then clean up the dangling image layers by running `docker image prune`. No such additional step applies to the `cluster-buildkit` build mode.
If your Dockerfiles pull base images from a private registry (e.g. where `my-private-registry.com` requires authorization), add the corresponding Secret under `imagePullSecrets` in your `kubernetes` provider configuration.
Note that when using the `kaniko` or `cluster-docker` build mode, you need to re-run `garden plugins kubernetes cluster-init` any time you add or modify `imagePullSecrets`, for them to work when pulling base images!