The `kubernetes` provider allows you to deploy `container` modules to Kubernetes clusters, and adds the `helm` and `kubernetes` module types.
For usage information, please refer to the guides section. A good place to start is the Remote Kubernetes guide if you're connecting to remote clusters. The demo-project example project and guide are also helpful as an introduction.
Note that if you're using a local Kubernetes cluster (e.g. minikube or Docker Desktop), the local-kubernetes provider simplifies (and automates) the configuration and setup quite a bit.
Below is the full schema reference for the provider configuration. For an introduction to configuring a Garden project with providers, please look at our configuration guide.
The reference is divided into two sections. The first section contains the complete YAML schema, and the second section describes each schema key.
The values in the schema below are the default values.
```yaml
providers:
  - # List other providers that should be resolved before this one.
    dependencies: []

    # If specified, this provider will only be used in the listed environments. Note that an empty array effectively
    # disables the provider. To use a provider in all environments, omit this field.
    environments:

    # Choose the mechanism for building container images before deploying. By default it uses the local Docker
    # daemon, but you can set it to `cluster-docker` or `kaniko` to sync files to a remote Docker daemon,
    # installed in the cluster, and build container images there. This removes the need to run Docker or
    # Kubernetes locally, and allows you to share layer and image caches between multiple developers, as well
    # as between your development and CI workflows.
    #
    # This is currently experimental and sometimes not desired, so it's not enabled by default. For example when using
    # the `local-kubernetes` provider with Docker for Desktop and Minikube, we directly use the in-cluster docker
    # daemon when building. You might also be deploying to a remote cluster that isn't intended as a development
    # environment, so you'd want your builds to happen elsewhere.
    #
    # Functionally, both `cluster-docker` and `kaniko` do the same thing, but use different underlying mechanisms
    # to build. The former uses a normal Docker daemon in the cluster. Because this has to run in privileged mode,
    # this is less secure than Kaniko, but in turn it is generally faster. See the
    # [Kaniko docs](https://github.com/GoogleContainerTools/kaniko) for more information on Kaniko.
    buildMode: local-docker

    # Configuration options for the `cluster-docker` build mode.
    clusterDocker:
      # Enable [BuildKit](https://github.com/moby/buildkit) support. This should in most cases work well and be more
      # performant, but we're opting to keep it optional until it's enabled by default in Docker.
      enableBuildKit: false

    # Configuration options for the `kaniko` build mode.
    kaniko:
      # Change the kaniko image (repository/image:tag) to use when building in kaniko mode.
      image: 'gcr.io/kaniko-project/executor:debug-v0.23.0'

      # Specify extra flags to use when building the container image with kaniko. Flags set on container module take
      # precedence over these.
      extraFlags:

    # A default hostname to use when no hostname is explicitly configured for a service.
    defaultHostname:

    # Defines the strategy for deploying the project services.
    # Default is "rolling update" and there is experimental support for "blue/green" deployment.
    # The feature only supports modules of type `container`: other types will just deploy using the default strategy.
    deploymentStrategy: rolling

    # Require SSL on all `container` module services. If set to true, an error is raised when no certificate is
    # available for a configured hostname on a `container` module.
    forceSsl: false

    # References to `docker-registry` secrets to use for authenticating with remote registries when pulling
    # images. This is necessary if you reference private images in your module configuration, and is required
    # when configuring a remote Kubernetes environment with buildMode=local.
    imagePullSecrets:
      - # The name of the Kubernetes secret.
        name:

        # The namespace where the secret is stored. If necessary, the secret may be copied to the appropriate
        # namespace before use.
        namespace: default

    # Resource requests and limits for the in-cluster builder, container registry and code sync service (which are
    # automatically installed and used when `buildMode` is `cluster-docker` or `kaniko`).
    resources:
      # Resource requests and limits for the in-cluster builder.
      #
      # When `buildMode` is `cluster-docker`, this refers to the Docker Daemon that is installed and run
      # cluster-wide. This is shared across all users and builds, so it should be resourced accordingly, factoring
      # in how many concurrent builds you expect and how heavy your builds tend to be.
      #
      # When `buildMode` is `kaniko`, this refers to _each instance_ of Kaniko, so you'd generally use lower
      # limits/requests, but you should evaluate based on your needs.
      builder:
        limits:
          # CPU limit in millicpu.
          cpu: 4000

          # Memory limit in megabytes.
          memory: 8192

        requests:
          # CPU request in millicpu.
          cpu: 200

          # Memory request in megabytes.
          memory: 512

      # Resource requests and limits for the in-cluster image registry. Built images are pushed to this registry,
      # so that they are available to all the nodes in your cluster.
      #
      # This is shared across all users and builds, so it should be resourced accordingly, factoring
      # in how many concurrent builds you expect and how large your images tend to be.
      registry:
        limits:
          # CPU limit in millicpu.
          cpu: 2000

          # Memory limit in megabytes.
          memory: 4096

        requests:
          # CPU request in millicpu.
          cpu: 200

          # Memory request in megabytes.
          memory: 512

      # Resource requests and limits for the code sync service, which we use to sync build contexts to the cluster
      # ahead of building images. This generally is not resource intensive, but you might want to adjust the
      # defaults if you have many concurrent users.
      sync:
        limits:
          # CPU limit in millicpu.
          cpu: 500

          # Memory limit in megabytes.
          memory: 512

        requests:
          # CPU request in millicpu.
          cpu: 100

          # Memory request in megabytes.
          memory: 64

    # Storage parameters to set for the in-cluster builder, container registry and code sync persistent volumes
    # (which are automatically installed and used when `buildMode` is `cluster-docker` or `kaniko`).
    #
    # These are all shared cluster-wide across all users and builds, so they should be resourced accordingly,
    # factoring in how many concurrent builds you expect and how large your images and build contexts tend to be.
    storage:
      # Storage parameters for the data volume for the in-cluster Docker Daemon.
      #
      # Only applies when `buildMode` is set to `cluster-docker`, ignored otherwise.
      builder:
        # Volume size in megabytes.
        size: 20480

        # Storage class to use for the volume.
        storageClass: null

      # Storage parameters for the NFS provisioner, which we automatically create for the sync volume, _unless_
      # you specify a `storageClass` for the sync volume. See the below `sync` parameter for more.
      #
      # Only applies when `buildMode` is set to `cluster-docker` or `kaniko`, ignored otherwise.
      nfs:
        # Storage class to use as backing storage for NFS.
        storageClass: null

      # Storage parameters for the in-cluster Docker registry volume. Built images are stored here, so that they
      # are available to all the nodes in your cluster.
      #
      # Only applies when `buildMode` is set to `cluster-docker` or `kaniko`, ignored otherwise.
      registry:
        # Volume size in megabytes.
        size: 20480

        # Storage class to use for the volume.
        storageClass: null

      # Storage parameters for the code sync volume, which build contexts are synced to ahead of running
      # in-cluster builds.
      #
      # Important: The storage class configured here has to support _ReadWriteMany_ access.
      # If you don't specify a storage class, Garden creates an NFS provisioner and provisions an
      # NFS volume for the sync data volume.
      #
      # Only applies when `buildMode` is set to `cluster-docker` or `kaniko`, ignored otherwise.
      sync:
        # Volume size in megabytes.
        size: 10240

        # Storage class to use for the volume.
        storageClass: null

    # One or more certificates to use for ingress.
    tlsCertificates:
      - # A unique identifier for this certificate.
        name:

        # A list of hostnames that this certificate should be used for. If you don't specify these, they will be
        # automatically read from the certificate.
        hostnames:

        # A reference to the Kubernetes secret that contains the TLS certificate and key for the domain.
        secretRef:
          # The name of the Kubernetes secret.
          name:

          # The namespace where the secret is stored. If necessary, the secret may be copied to the appropriate
          # namespace before use.
          namespace: default

        # Set to `cert-manager` to configure [cert-manager](https://github.com/jetstack/cert-manager) to manage this
        # certificate. See our
        # [cert-manager integration guide](https://docs.garden.io/advanced/cert-manager-integration) for details.
        managedBy:

    # cert-manager configuration, for creating and managing TLS certificates. See the
    # [cert-manager guide](https://docs.garden.io/advanced/cert-manager-integration) for details.
    certManager:
      # Automatically install `cert-manager` on initialization. See the
      # [cert-manager integration guide](https://docs.garden.io/advanced/cert-manager-integration) for details.
      install: false

      # The email to use when requesting Let's Encrypt certificates.
      email:

      # The type of issuer for the certificate (only ACME is supported for now).
      issuer: acme

      # Specify which ACME server to request certificates from. Currently Let's Encrypt staging and prod servers are
      # supported.
      acmeServer: letsencrypt-staging

      # The type of ACME challenge used to validate hostnames and generate the certificates (only HTTP-01 is supported
      # for now).
      acmeChallengeType: HTTP-01

    # Exposes the `nodeSelector` field on the PodSpec of system services. This allows you to constrain
    # the system services to only run on particular nodes. [See
    # here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/) for the official Kubernetes guide to
    # assigning Pods to nodes.
    systemNodeSelector: {}

    # For setting tolerations on the registry-proxy when using in-cluster building.
    # The registry-proxy is a DaemonSet that proxies connections to the docker registry service on each node.
    #
    # Use this only if you're doing in-cluster building and the nodes in your cluster
    # have [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).
    registryProxyTolerations:
      - # "Effect" indicates the taint effect to match. Empty means match all taint effects. When specified,
        # allowed values are "NoSchedule", "PreferNoSchedule" and "NoExecute".
        effect:

        # "Key" is the taint key that the toleration applies to. Empty means match all taint keys.
        # If the key is empty, operator must be "Exists"; this combination means to match all values and all keys.
        key:

        # "Operator" represents a key's relationship to the value. Valid operators are "Exists" and "Equal". Defaults
        # to "Equal". "Exists" is equivalent to wildcard for value, so that a pod can tolerate all taints of a
        # particular category.
        operator: Equal

        # "TolerationSeconds" represents the period of time the toleration (which must be of effect "NoExecute",
        # otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate
        # the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately)
        # by the system.
        tolerationSeconds:

        # "Value" is the taint value the toleration matches to. If the operator is "Exists", the value should be
        # empty, otherwise just a regular string.
        value:

    # The name of the provider plugin to use.
    name: kubernetes

    # The kubectl context to use to connect to the Kubernetes cluster.
    context:

    # The registry where built containers should be pushed to, and then pulled to the cluster when deploying services.
    #
    # Important: If you specify this in combination with `buildMode: cluster-docker` or `buildMode: kaniko`, you must
    # make sure `imagePullSecrets` includes authentication with the specified deployment registry, that has the
    # appropriate write privileges (usually full write access to the configured `deploymentRegistry.namespace`).
    deploymentRegistry:
      # The hostname (and optionally port, if not the default port) of the registry.
      hostname:

      # The port where the registry listens on, if not the default.
      port:

      # The namespace in the registry where images should be pushed.
      namespace: _

    # The ingress class to use on configured Ingresses (via the `kubernetes.io/ingress.class` annotation)
    # when deploying `container` services. Use this if you have multiple ingress controllers in your cluster.
    ingressClass:

    # The external HTTP port of the cluster's ingress controller.
    ingressHttpPort: 80

    # The external HTTPS port of the cluster's ingress controller.
    ingressHttpsPort: 443

    # Path to kubeconfig file to use instead of the system default. Must be a POSIX-style path.
    kubeconfig:

    # Specify which namespace to deploy services to. Defaults to `<project name>-<environment namespace>`.
    #
    # Note that the framework may generate other namespaces as well with this name as a prefix.
    namespace:

    # Set this to `nginx` to install/enable the NGINX ingress controller.
    setupIngressController: false
```
providers

| Type | Default | Required |
| ---- | ------- | -------- |
| `array[object]` | `[]` | No |
providers > dependencies
List other providers that should be resolved before this one.
| Type | Default | Required |
| ---- | ------- | -------- |
| `array[string]` | `[]` | No |

Example:

```yaml
providers:
  - dependencies:
      - exec
```
providers > environments
If specified, this provider will only be used in the listed environments. Note that an empty array effectively disables the provider. To use a provider in all environments, omit this field.
| Type | Required |
| ---- | -------- |
| `array[string]` | No |

Example:

```yaml
providers:
  - environments:
      - dev
      - stage
```
providers > buildMode
Choose the mechanism for building container images before deploying. By default it uses the local Docker daemon, but you can set it to `cluster-docker` or `kaniko` to sync files to a remote Docker daemon, installed in the cluster, and build container images there. This removes the need to run Docker or Kubernetes locally, and allows you to share layer and image caches between multiple developers, as well as between your development and CI workflows.

This is currently experimental and sometimes not desired, so it's not enabled by default. For example, when using the `local-kubernetes` provider with Docker for Desktop and Minikube, we directly use the in-cluster docker daemon when building. You might also be deploying to a remote cluster that isn't intended as a development environment, so you'd want your builds to happen elsewhere.

Functionally, both `cluster-docker` and `kaniko` do the same thing, but use different underlying mechanisms to build. The former uses a normal Docker daemon in the cluster. Because this has to run in privileged mode, this is less secure than Kaniko, but in turn it is generally faster. See the [Kaniko docs](https://github.com/GoogleContainerTools/kaniko) for more information on Kaniko.

| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"local-docker"` | No |
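For remote clusters you would typically set this in your project configuration. Below is a minimal sketch, assuming a project named `my-project` and a kubectl context named `my-remote-context` (both hypothetical names):

```yaml
kind: Project
name: my-project
environments:
  - name: remote
providers:
  - name: kubernetes
    environments: [remote]
    context: my-remote-context  # hypothetical kubectl context
    # Build in the cluster with kaniko instead of the local Docker daemon.
    buildMode: kaniko
```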
providers > clusterDocker
Configuration options for the `cluster-docker` build mode.

| Type | Required |
| ---- | -------- |
| `object` | No |
providers > clusterDocker > enableBuildKit
Enable [BuildKit](https://github.com/moby/buildkit) support. This should in most cases work well and be more performant, but we're opting to keep it optional until it's enabled by default in Docker.

| Type | Default | Required |
| ---- | ------- | -------- |
| `boolean` | `false` | No |
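If you are using the `cluster-docker` build mode, BuildKit can be switched on like this (a sketch; whether it helps depends on your Dockerfiles):

```yaml
providers:
  - name: kubernetes
    buildMode: cluster-docker
    clusterDocker:
      # Opt in to BuildKit for in-cluster Docker builds.
      enableBuildKit: true
```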
providers > kaniko
Configuration options for the `kaniko` build mode.

| Type | Required |
| ---- | -------- |
| `object` | No |
providers > kaniko > image

Change the kaniko image (repository/image:tag) to use when building in kaniko mode.

| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"gcr.io/kaniko-project/executor:debug-v0.23.0"` | No |
providers > kaniko > extraFlags
Specify extra flags to use when building the container image with kaniko. Flags set on the `container` module take precedence over these.

| Type | Required |
| ---- | -------- |
| `array[string]` | No |
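As a sketch, you might pin a different executor image and pass extra flags through to kaniko. `--cache=true` is a standard kaniko executor flag; check the kaniko docs for the flags your executor version supports:

```yaml
providers:
  - name: kubernetes
    buildMode: kaniko
    kaniko:
      # Pin the kaniko executor image used for in-cluster builds.
      image: 'gcr.io/kaniko-project/executor:debug-v0.23.0'
      # Extra flags passed to the kaniko executor (module-level flags take precedence).
      extraFlags: ["--cache=true"]
```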
providers > defaultHostname
A default hostname to use when no hostname is explicitly configured for a service.
| Type | Required |
| ---- | -------- |
| `string` | No |

Example:

```yaml
providers:
  - defaultHostname: "api.mydomain.com"
```
providers > deploymentStrategy
⚠️ Experimental: this is an experimental feature and the API might change in the future.
Defines the strategy for deploying the project services. Default is "rolling update" and there is experimental support for "blue/green" deployment. The feature only supports modules of type `container`: other types will just deploy using the default strategy.

| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"rolling"` | No |
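As a sketch, opting into the experimental blue/green strategy might look like the following. The exact value name used here (`blue-green`) is an assumption; check the reference for your Garden version:

```yaml
providers:
  - name: kubernetes
    # Experimental; only affects `container` modules. Value name assumed to be `blue-green`.
    deploymentStrategy: blue-green
```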
providers > forceSsl
Require SSL on all `container` module services. If set to true, an error is raised when no certificate is available for a configured hostname on a `container` module.

| Type | Default | Required |
| ---- | ------- | -------- |
| `boolean` | `false` | No |
providers > imagePullSecrets
References to `docker-registry` secrets to use for authenticating with remote registries when pulling images. This is necessary if you reference private images in your module configuration, and is required when configuring a remote Kubernetes environment with `buildMode=local`.

| Type | Default | Required |
| ---- | ------- | -------- |
| `array[object]` | `[]` | No |
providers > imagePullSecrets > name
The name of the Kubernetes secret.
| Type | Required |
| ---- | -------- |
| `string` | Yes |

Example:

```yaml
providers:
  - imagePullSecrets:
      - name: "my-secret"
```
providers > imagePullSecrets > namespace
The namespace where the secret is stored. If necessary, the secret may be copied to the appropriate namespace before use.
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"default"` | No |
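Putting the two fields together, a provider referencing a pre-created registry secret might look like this sketch. The secret itself is created with standard `kubectl` tooling; the secret name, namespace and registry below are hypothetical:

```yaml
# Create the docker-registry secret first, e.g.:
#   kubectl --namespace default create secret docker-registry my-registry-secret \
#     --docker-server=registry.example.com --docker-username=ci --docker-password=<token>
providers:
  - name: kubernetes
    imagePullSecrets:
      - name: my-registry-secret   # hypothetical secret name
        namespace: default
```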
providers > resources
Resource requests and limits for the in-cluster builder, container registry and code sync service (which are automatically installed and used when `buildMode` is `cluster-docker` or `kaniko`).

| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | | No |
providers > resources > builder
Resource requests and limits for the in-cluster builder.
When `buildMode` is `cluster-docker`, this refers to the Docker Daemon that is installed and run cluster-wide. This is shared across all users and builds, so it should be resourced accordingly, factoring in how many concurrent builds you expect and how heavy your builds tend to be.

When `buildMode` is `kaniko`, this refers to _each instance_ of Kaniko, so you'd generally use lower limits/requests, but you should evaluate based on your needs.

| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | | No |
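For example, when building with kaniko you might scale the per-build limits down from the cluster-wide Docker daemon defaults. This is only a sketch; the right numbers depend on your builds:

```yaml
providers:
  - name: kubernetes
    buildMode: kaniko
    resources:
      builder:
        # Applies to each kaniko pod, so keep these modest.
        limits:
          cpu: 2000     # millicpu
          memory: 4096  # megabytes
        requests:
          cpu: 200
          memory: 512
```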
providers > resources > builder > limits
| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{"cpu":4000,"memory":8192}` | No |
providers > resources > builder > limits > cpu
CPU limit in millicpu.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `4000` | No |

Example:

```yaml
providers:
  - resources:
      ...
      builder:
        ...
        limits:
          ...
          cpu: 4000
```
providers > resources > builder > limits > memory
Memory limit in megabytes.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `8192` | No |

Example:

```yaml
providers:
  - resources:
      ...
      builder:
        ...
        limits:
          ...
          memory: 8192
```
providers > resources > builder > requests
| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{"cpu":200,"memory":512}` | No |
providers > resources > builder > requests > cpu
CPU request in millicpu.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `200` | No |

Example:

```yaml
providers:
  - resources:
      ...
      builder:
        ...
        requests:
          ...
          cpu: 200
```
providers > resources > builder > requests > memory
Memory request in megabytes.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `512` | No |

Example:

```yaml
providers:
  - resources:
      ...
      builder:
        ...
        requests:
          ...
          memory: 512
```
providers > resources > registry
Resource requests and limits for the in-cluster image registry. Built images are pushed to this registry, so that they are available to all the nodes in your cluster.
This is shared across all users and builds, so it should be resourced accordingly, factoring in how many concurrent builds you expect and how large your images tend to be.
| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | | No |
providers > resources > registry > limits
| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{"cpu":2000,"memory":4096}` | No |
providers > resources > registry > limits > cpu
CPU limit in millicpu.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `2000` | No |

Example:

```yaml
providers:
  - resources:
      ...
      registry:
        ...
        limits:
          ...
          cpu: 2000
```
providers > resources > registry > limits > memory
Memory limit in megabytes.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `4096` | No |

Example:

```yaml
providers:
  - resources:
      ...
      registry:
        ...
        limits:
          ...
          memory: 4096
```
providers > resources > registry > requests
| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{"cpu":200,"memory":512}` | No |
providers > resources > registry > requests > cpu
CPU request in millicpu.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `200` | No |

Example:

```yaml
providers:
  - resources:
      ...
      registry:
        ...
        requests:
          ...
          cpu: 200
```
providers > resources > registry > requests > memory
Memory request in megabytes.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `512` | No |

Example:

```yaml
providers:
  - resources:
      ...
      registry:
        ...
        requests:
          ...
          memory: 512
```
providers > resources > sync

Resource requests and limits for the code sync service, which we use to sync build contexts to the cluster ahead of building images. This generally is not resource intensive, but you might want to adjust the defaults if you have many concurrent users.

| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | | No |
providers > resources > sync > limits
| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{"cpu":500,"memory":512}` | No |
providers > resources > sync > limits > cpu
CPU limit in millicpu.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `500` | No |

Example:

```yaml
providers:
  - resources:
      ...
      sync:
        ...
        limits:
          ...
          cpu: 500
```
providers > resources > sync > limits > memory
Memory limit in megabytes.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `512` | No |

Example:

```yaml
providers:
  - resources:
      ...
      sync:
        ...
        limits:
          ...
          memory: 512
```
providers > resources > sync > requests
| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{"cpu":100,"memory":64}` | No |
providers > resources > sync > requests > cpu
CPU request in millicpu.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `100` | No |

Example:

```yaml
providers:
  - resources:
      ...
      sync:
        ...
        requests:
          ...
          cpu: 100
```
providers > resources > sync > requests > memory
Memory request in megabytes.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `64` | No |

Example:

```yaml
providers:
  - resources:
      ...
      sync:
        ...
        requests:
          ...
          memory: 64
```
providers > storage
Storage parameters to set for the in-cluster builder, container registry and code sync persistent volumes (which are automatically installed and used when `buildMode` is `cluster-docker` or `kaniko`).

These are all shared cluster-wide across all users and builds, so they should be resourced accordingly, factoring in how many concurrent builds you expect and how large your images and build contexts tend to be.

| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | | No |
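A sketch of overriding the default volume sizes and classes (the `fast-ssd` storage class is hypothetical and must exist in your cluster):

```yaml
providers:
  - name: kubernetes
    buildMode: cluster-docker
    storage:
      builder:
        size: 40960             # megabytes
        storageClass: fast-ssd  # hypothetical storage class
      registry:
        size: 40960
      sync:
        size: 10240
```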
providers > storage > builder
Storage parameters for the data volume for the in-cluster Docker Daemon.
Only applies when `buildMode` is set to `cluster-docker`, ignored otherwise.

| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{"size":20480,"storageClass":null}` | No |
providers > storage > builder > size
Volume size in megabytes.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `20480` | No |
providers > storage > builder > storageClass
Storage class to use for the volume.
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `null` | No |
providers > storage > nfs

Storage parameters for the NFS provisioner, which we automatically create for the sync volume, _unless_ you specify a `storageClass` for the sync volume. See the below `sync` parameter for more.

Only applies when `buildMode` is set to `cluster-docker` or `kaniko`, ignored otherwise.

| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{"storageClass":null}` | No |
providers > storage > nfs > storageClass
Storage class to use as backing storage for NFS.

| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `null` | No |
providers > storage > registry
Storage parameters for the in-cluster Docker registry volume. Built images are stored here, so that they are available to all the nodes in your cluster.
Only applies when `buildMode` is set to `cluster-docker` or `kaniko`, ignored otherwise.

| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{"size":20480,"storageClass":null}` | No |
providers > storage > registry > size
Volume size in megabytes.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `20480` | No |
providers > storage > registry > storageClass
Storage class to use for the volume.
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `null` | No |
providers > storage > sync

Storage parameters for the code sync volume, which build contexts are synced to ahead of running in-cluster builds.

Important: The storage class configured here has to support _ReadWriteMany_ access. If you don't specify a storage class, Garden creates an NFS provisioner and provisions an NFS volume for the sync data volume.

Only applies when `buildMode` is set to `cluster-docker` or `kaniko`, ignored otherwise.

| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{"size":10240,"storageClass":null}` | No |
providers > storage > sync > size
Volume size in megabytes.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `10240` | No |
providers > storage > sync > storageClass
Storage class to use for the volume.
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `null` | No |
providers > tlsCertificates
One or more certificates to use for ingress.
| Type | Default | Required |
| ---- | ------- | -------- |
| `array[object]` | `[]` | No |
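Putting the sub-fields below together, a single certificate entry might look like this sketch (the hostname and secret names are hypothetical, and the referenced secret must contain a TLS certificate and key):

```yaml
providers:
  - name: kubernetes
    forceSsl: true
    tlsCertificates:
      - name: www
        hostnames: [www.mydomain.com]
        secretRef:
          name: my-tls-secret
          namespace: default
```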
providers > tlsCertificates > name
A unique identifier for this certificate.
| Type | Required |
| ---- | -------- |
| `string` | Yes |

Example:

```yaml
providers:
  - tlsCertificates:
      - name: "www"
```
providers > tlsCertificates > hostnames
A list of hostnames that this certificate should be used for. If you don't specify these, they will be automatically read from the certificate.
| Type | Required |
| ---- | -------- |
| `array[string]` | No |

Example:

```yaml
providers:
  - tlsCertificates:
      - hostnames:
          - www.mydomain.com
```
providers > tlsCertificates > secretRef
A reference to the Kubernetes secret that contains the TLS certificate and key for the domain.
| Type | Required |
| ---- | -------- |
| `object` | No |

Example:

```yaml
providers:
  - tlsCertificates:
      - secretRef:
          name: my-tls-secret
          namespace: default
```
providers > tlsCertificates > secretRef > name
The name of the Kubernetes secret.
| Type | Required |
| ---- | -------- |
| `string` | Yes |

Example:

```yaml
providers:
  - tlsCertificates:
      - secretRef:
          name: my-tls-secret
          namespace: default
          ...
          name: "my-secret"
```
providers > tlsCertificates > secretRef > namespace
The namespace where the secret is stored. If necessary, the secret may be copied to the appropriate namespace before use.
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"default"` | No |
providers > tlsCertificates > managedBy
Set to `cert-manager` to configure [cert-manager](https://github.com/jetstack/cert-manager) to manage this certificate. See our [cert-manager integration guide](https://docs.garden.io/advanced/cert-manager-integration) for details.

| Type | Required |
| ---- | -------- |
| `string` | No |

Example:

```yaml
providers:
  - tlsCertificates:
      - managedBy: "cert-manager"
```
providers > certManager
cert-manager configuration, for creating and managing TLS certificates. See the [cert-manager guide](https://docs.garden.io/advanced/cert-manager-integration) for details.

| Type | Required |
| ---- | -------- |
| `object` | No |
providers > certManager > install
Automatically install `cert-manager` on initialization. See the [cert-manager integration guide](https://docs.garden.io/advanced/cert-manager-integration) for details.

| Type | Default | Required |
| ---- | ------- | -------- |
| `boolean` | `false` | No |
providers > certManager > email
The email to use when requesting Let's Encrypt certificates.
| Type | Required |
| ---- | -------- |
| `string` | Yes |

Example:

```yaml
providers:
  - certManager:
      ...
      email: "yourname@example.com"
```
providers > certManager > issuer
The type of issuer for the certificate (only ACME is supported for now).
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"acme"` | No |

Example:

```yaml
providers:
  - certManager:
      ...
      issuer: "acme"
```
providers > certManager > acmeServer
Specify which ACME server to request certificates from. Currently Let's Encrypt staging and prod servers are supported.
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"letsencrypt-staging"` | No |

Example:

```yaml
providers:
  - certManager:
      ...
      acmeServer: "letsencrypt-staging"
```
providers > certManager > acmeChallengeType
The type of ACME challenge used to validate hostnames and generate the certificates (only HTTP-01 is supported for now).
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"HTTP-01"` | No |

Example:

```yaml
providers:
  - certManager:
      ...
      acmeChallengeType: "HTTP-01"
```
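Combining the keys above, a cert-manager setup that manages a certificate for you might be sketched like this. The email, hostnames and secret name are placeholders; switch `acmeServer` to the production server once the staging setup works:

```yaml
providers:
  - name: kubernetes
    certManager:
      install: true
      email: yourname@example.com
      issuer: acme
      acmeServer: letsencrypt-staging
      acmeChallengeType: HTTP-01
    tlsCertificates:
      - name: www
        hostnames: [www.mydomain.com]
        # Let cert-manager create and renew this certificate.
        managedBy: cert-manager
        secretRef:
          name: www-tls        # secret the certificate is stored in (hypothetical name)
          namespace: default
```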
providers > systemNodeSelector
Exposes the `nodeSelector` field on the PodSpec of system services. This allows you to constrain the system services to only run on particular nodes. [See here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/) for the official Kubernetes guide to assigning Pods to nodes.

| Type | Default | Required |
| ---- | ------- | -------- |
| `object` | `{}` | No |

Example:

```yaml
providers:
  - systemNodeSelector:
      disktype: ssd
```
providers > registryProxyTolerations
For setting tolerations on the registry-proxy when using in-cluster building. The registry-proxy is a DaemonSet that proxies connections to the docker registry service on each node.
Use this only if you're doing in-cluster building and the nodes in your cluster have taints.
| Type | Default | Required |
| ---- | ------- | -------- |
| `array[object]` | `[]` | No |
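For example, if your build nodes carry a taint such as `dedicated=build:NoSchedule` (hypothetical), the registry-proxy DaemonSet can be allowed onto them like this:

```yaml
providers:
  - name: kubernetes
    registryProxyTolerations:
      - key: dedicated     # taint key (hypothetical)
        operator: Equal
        value: build
        effect: NoSchedule
```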
providers > registryProxyTolerations > effect
"Effect" indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are "NoSchedule", "PreferNoSchedule" and "NoExecute".
| Type | Required |
| ---- | -------- |
| `string` | No |
providers > registryProxyTolerations > key
"Key" is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be "Exists"; this combination means to match all values and all keys.
| Type | Required |
| ---- | -------- |
| `string` | No |
providers > registryProxyTolerations > operator
"Operator" represents a key's relationship to the value. Valid operators are "Exists" and "Equal". Defaults to "Equal". "Exists" is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"Equal"` | No |
providers > registryProxyTolerations > tolerationSeconds
"TolerationSeconds" represents the period of time the toleration (which must be of effect "NoExecute", otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
| Type | Required |
| ---- | -------- |
| | No |
providers > registryProxyTolerations > value
"Value" is the taint value the toleration matches to. If the operator is "Exists", the value should be empty, otherwise just a regular string.
| Type | Required |
| ---- | -------- |
| `string` | No |
providers > name
The name of the provider plugin to use.
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"kubernetes"` | Yes |

Example:

```yaml
providers:
  - name: "kubernetes"
```
providers > context
The kubectl context to use to connect to the Kubernetes cluster.
| Type | Required |
| ---- | -------- |
| `string` | Yes |

Example:

```yaml
providers:
  - context: "my-dev-context"
```
providers > deploymentRegistry
The registry where built containers should be pushed to, and then pulled to the cluster when deploying services.
Important: If you specify this in combination with `buildMode: cluster-docker` or `buildMode: kaniko`, you must make sure `imagePullSecrets` includes authentication with the specified deployment registry, that has the appropriate write privileges (usually full write access to the configured `deploymentRegistry.namespace`).

| Type | Required |
| ---- | -------- |
| `object` | No |
providers > deploymentRegistry > hostname
The hostname (and optionally port, if not the default port) of the registry.
| Type | Required |
| ---- | -------- |
| `string` | Yes |

Example:

```yaml
providers:
  - deploymentRegistry:
      ...
      hostname: "gcr.io"
```
providers > deploymentRegistry > port
The port the registry listens on, if not the default.

| Type | Required |
| ---- | -------- |
| `number` | No |
providers > deploymentRegistry > namespace
The namespace in the registry where images should be pushed.
| Type | Default | Required |
| ---- | ------- | -------- |
| `string` | `"_"` | No |

Example:

```yaml
providers:
  - deploymentRegistry:
      ...
      namespace: "my-project"
```
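A sketch of a remote cluster using an external registry for in-cluster builds; the registry host, project and secret name are placeholders, and the referenced secret must have push access:

```yaml
providers:
  - name: kubernetes
    buildMode: kaniko
    deploymentRegistry:
      hostname: gcr.io
      namespace: my-project        # registry namespace/project to push to
    imagePullSecrets:
      # Must authenticate against gcr.io with write access to the namespace above.
      - name: my-registry-secret   # hypothetical secret name
        namespace: default
```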
providers > ingressClass
The ingress class to use on configured Ingresses (via the `kubernetes.io/ingress.class` annotation) when deploying `container` services. Use this if you have multiple ingress controllers in your cluster.

| Type | Required |
| ---- | -------- |
| `string` | No |
providers > ingressHttpPort
The external HTTP port of the cluster's ingress controller.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `80` | No |
providers > ingressHttpsPort
The external HTTPS port of the cluster's ingress controller.
| Type | Default | Required |
| ---- | ------- | -------- |
| `number` | `443` | No |
providers > kubeconfig
Path to kubeconfig file to use instead of the system default. Must be a POSIX-style path.
| Type | Required |
| ---- | -------- |
| | No |
providers > namespace
Specify which namespace to deploy services to. Defaults to `<project name>-<environment namespace>`.

Note that the framework may generate other namespaces as well with this name as a prefix.

| Type | Required |
| ---- | -------- |
| `string` | No |
providers > setupIngressController
Set this to `nginx` to install/enable the NGINX ingress controller.

| Type | Default | Required |
| ---- | ------- | -------- |
| | `false` | No |
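Tying the most common keys together, a minimal remote-cluster provider entry might look like this sketch (all names are placeholders):

```yaml
kind: Project
name: my-project
environments:
  - name: dev
providers:
  - name: kubernetes
    environments: [dev]
    context: my-dev-context
    namespace: my-project-dev
    defaultHostname: dev.mydomain.com
    setupIngressController: nginx
```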
The following keys are available via the `${providers.<provider-name>}` template string key for `kubernetes` providers:

- The primary namespace used for resource deployments (type: `string`).
- The default hostname configured on the provider (type: `string`).
- The namespace used for Garden metadata (type: `string`).
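These outputs can be referenced from other configuration via template strings. The sketch below assumes the deployment namespace is exposed under an output key named `app-namespace`, which is an assumption here; check the output key names for your Garden version:

```yaml
kind: Module
type: container
name: api
services:
  - name: api
    env:
      # Hypothetical output key name; resolves to the provider's deployment namespace.
      K8S_NAMESPACE: ${providers.kubernetes.outputs.app-namespace}
```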