Troubleshooting
This section could (obviously) use more work. Contributions are most appreciated!
When running Garden commands against an Azure AKS cluster with RBAC enabled, an error like the following may appear:
This happens because Azure uses a different authentication mechanism than the ones the Kubernetes client library supports. A common workaround is to convert your kubeconfig to a supported authentication method, e.g. with the kubelogin tool.
This issue often comes up on Linux, and in other scenarios where the filesystem doesn't support event-based file watching.
Thankfully, in most cases you can avoid this problem by using the modules.exclude
field in your project config, and/or the exclude
field in your individual module configs. See the section on including and excluding files in our Configuration Files guide for details.
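For illustration, a sketch of what this might look like (the module name and paths are placeholders):

```yaml
# garden.yml — project-level excludes apply to every module in the project
kind: Project
name: my-project
modules:
  exclude:
    - "node_modules/**/*"
    - "**/*.log"
---
# Module-level excludes only narrow this module's own file set
kind: Module
type: container
name: my-service
exclude:
  - "tmp/**/*"
```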
This is a known issue with Windows and may affect many Node.js applications (and possibly others). To fix it, you can open the Windows Defender Security Center and either
a) disable Real-time protection; or
b) click "Add or remove exclusions" and add "$HOME\.garden" to the list of exclusions.
Ingresses are not shown for helm and kubernetes modules.
Pinging the service will still work, and you'll see the Ingress resource if you run kubectl get ingress --namespace <my-namespace>.
<release-name> has no deployed releases
This is likely because they're being excluded somewhere, e.g. in .gitignore
or .gardenignore
. Garden currently respects .gitignore
but we plan to change that in our next major release.
ErrImagePull when referencing an image from a container module in a helm module.
Make sure to use the outputs field from the container module being referenced.
For example:
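As a sketch (module names are placeholders, and the exact output keys may differ between Garden versions), referencing the container module's outputs from a helm module might look like this:

```yaml
kind: Module
type: container
name: my-image
---
kind: Module
type: helm
name: my-chart
values:
  image:
    # Reference the built image via the container module's outputs,
    # rather than hard-coding the image name and tag
    repository: ${modules.my-image.outputs.deployment-image-name}
    tag: ${modules.my-image.version}
```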
garden-build-sync and garden-docker-daemon pods stuck in ContainerCreating on EKS or AKS.
This may be due to the NFS provisioner not playing well with EKS and AKS.
On EKS, you can use efs instead, which may be more stable and scalable than the default NFS storage.
On AKS, you can use azurefile.
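As a hedged sketch (field names and storage class names can differ between Garden versions and clusters — check which classes exist with kubectl get storageclass):

```yaml
# project garden.yml — kubernetes provider section
providers:
  - name: kubernetes
    context: my-cluster-context
    storage:
      sync:
        storageClass: efs   # on AKS, use e.g. azurefile instead
```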
garden-nginx times out when using the local-kubernetes provider.
This can occur if nginx is not able to bind to its default port, port 80. Stopping the process that occupies the port should solve the issue.
You can also skip the nginx installation if you already have a separate ingress controller installed, by setting setupIngressController: null in your local-kubernetes provider configuration.
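For example, in your project configuration:

```yaml
providers:
  - name: local-kubernetes
    # Skip installing the bundled nginx ingress controller;
    # this assumes a separate ingress controller is already running
    setupIngressController: null
```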
If this error came up when running the garden
binary from inside your ~/Downloads
directory, try moving it outside the ~/Downloads
directory before running it again.
If you're still getting this error, a workaround is to find the garden
binary in Finder, CTRL-click it and choose Open. This should prevent this error message from coming up again.
See also: https://support.apple.com/en-gb/guide/mac-help/mh40616/mac
Error response from daemon: experimental session with v1 builder is no longer supported, use builder version v2 (BuildKit) instead
In some container repositories, you may need to create the cache repo manually.
This can occur if you re-install the Garden Nginx Ingress Controller, for example because you ran garden plugins kubernetes uninstall-garden-services
and then garden plugins kubernetes cluster-init
when upgrading the system services.
When the Ingress Controller gets re-installed, it may be assigned a new IP address by your cloud provider, meaning that hostnames pointing to the previous one will no longer work.
To fix this, run kubectl get svc -n garden-system, look for the EXTERNAL-IP of the garden-nginx-nginx-ingress-controller service, and update your DNS records with the new value.
You need to set tmux to use 256 colors. As per the tmux FAQ, you can do that by adding set -g default-terminal "screen-256color"
or set -g default-terminal "tmux-256color"
to your ~/.tmux.conf
file.
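For example, in ~/.tmux.conf (restart tmux, or run tmux source-file ~/.tmux.conf, to apply the change):

```
# ~/.tmux.conf — enable 256-color support
set -g default-terminal "tmux-256color"
```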
This could be because Garden is scanning the project files. Make sure you exclude things like node_modules
and other large vendor directories, using the exclude fields in your project and module configs.
Garden does create the ingress at the Kubernetes level. However, it does not print the ingresses in the CLI output, and the garden call command won't work. This is a known limitation.
This is a well-known Helm issue. You'll need to delete the release manually with helm -n <namespace> uninstall <release-name>
and then deploy again. There's an open issue for a fix.
You'll need to install the provisioners yourself and override the corresponding storageClass field in the kubernetes
provider config.
This is a bug in Docker Desktop (Docker CE), version 2.4.x.y
. See the Docker issue tracker for a fix and more details.
See the relevant section of our docs for more details.