Garden includes a `container` module type, which provides a high-level abstraction around container-based services that's easy to understand and use.

`container` modules can be used to just build container images, or they can specify deployable services through the optional `services` key, as well as `tasks` and `tests`. So in one scenario you might use a `container` module to both build and deploy services, and in another you might only build the image using a `container` module, and then refer to that image in a `helm` or `kubernetes` module.

For a full reference of the `container` module type, please take a look at the reference.

A bare minimum `container` module just specifies the common required fields.
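For example, a minimal sketch (the module name is arbitrary):

```yaml
kind: Module
type: container
name: my-container
```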
If you have a `Dockerfile` next to this file, this is enough to tell Garden to build it. You can also specify `dockerfile: <path-to-Dockerfile>` if you need to override the Dockerfile name. You might also want to explicitly include or exclude files in the build context.

You can set build arguments using the `buildArgs` field. This can be quite handy, especially when referencing other modules such as build dependencies.
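For example, a sketch assuming a build dependency named `base-image` (the build-argument name is illustrative):

```yaml
kind: Module
type: container
name: my-container
build:
  dependencies:
    - name: base-image
buildArgs:
  BASE_IMAGE_VERSION: ${modules.base-image.version}
```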
Garden also automatically sets `GARDEN_MODULE_VERSION` as a build argument, so that you can reference the version of the module being built.

If you aren't building the container image yourself and just need to deploy an external image, you can simply specify the `image` field.
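For example (the image reference below is just an illustration):

```yaml
kind: Module
type: container
name: redis
image: redis:6-alpine # any public or private image reference
```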
Note that if there is a Dockerfile next to the module configuration that you don't want Garden to build, you also need to set `include: []` in your module configuration.

You can publish images that have been built in your cluster using the `garden publish` command.

Unless you're publishing to your configured deployment registry (when using the `kubernetes` provider), you need to specify the `image` field on the `container` module in question to indicate where the image should be published.
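For example, a sketch with an illustrative repository and tag:

```yaml
kind: Module
type: container
name: my-module
image: my-org/my-image:v1.2.3 # if you omit the tag, the Garden module version is used
```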
By default, we use the tag specified in the `container` module's `image` field, if any. If none is set there, we default to the Garden module version.

You can also set the `--tag` option on the `garden publish` command to override the tag used for images. You can either set a specific tag or use template strings for the tag. For example, you can:

- set a specific tag on all published images: `garden publish --tag "v1.2.3"`
- include the Garden version hash of each module in the tag: `garden publish --tag 'v0.1-${module.hash}'`
- include the current git branch in the tag: `garden publish --tag 'v0.1-${git.branch}'`
The following template keys are available for the `--tag` option:

- `${module.name}` — the name of the module being tagged
- `${module.version}` — the full Garden version of the module being tagged, e.g. `v-abcdef1234`
- `${module.hash}` — the Garden version hash of the module being tagged, e.g. `abcdef1234` (i.e. without the `v-` prefix)
`container` modules also have an optional `services` field, which you can use to deploy the container image using your configured providers (such as `kubernetes`/`local-kubernetes`).

In the case of Kubernetes, Garden takes the simplified `container` service specification and converts it to the corresponding Kubernetes manifests, i.e. Deployment, Service and (if applicable) Ingress resources.
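Here, for example, is a sketch of what a `frontend` service spec might look like (the port number, health-check path and ingress path are illustrative):

```yaml
kind: Module
type: container
name: frontend
services:
  - name: frontend
    ports:
      - name: http
        containerPort: 8080
    healthCheck:
      httpGet:
        path: /health
        port: http
    ingresses:
      - path: /
        port: http
    dependencies:
      - backend
```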
This tells Garden to deploy the built `frontend` container as a service with the same name. We also configure a health check, a couple of ingress endpoints, and specify that this service depends on the `backend` service. There are a number of other options, which you can find in the `container` module reference.

If you need advanced or otherwise very specific features of the underlying platform, you may need to use a more platform-specific module type (e.g. `kubernetes` or `helm`). The `container` module type is not intended to capture all those features.

Container services can specify environment variables using the `services[].env` field.
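For example, a minimal sketch; the service name and values are placeholders, and the second variable shows a template string:

```yaml
kind: Module
type: container
name: my-container
services:
  - name: my-container-service
    env:
      MY_ENV_VAR: some-value
      MY_TEMPLATED_ENV_VAR: ${var.some-project-variable}
```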
`env` is a simple mapping of "name: value". Above we see a simple example with a string value, but you'll also commonly use template strings to interpolate variables to be consumed by the container service.

You can also reference Kubernetes Secrets in environment variables. These translate to `valueFrom.secretKeyRef` fields in the Pod specs, which direct Kubernetes to mount values from `Secret` resources that you have created in the application namespace, as environment variables in the Pod.
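A sketch of what that looks like in a service's `env` block (the secret and key names match the explanation that follows):

```yaml
kind: Module
type: container
name: my-container
services:
  - name: my-container-service
    env:
      MY_SECRET_VAR:
        secretRef:
          name: my-secret
          key: some-key-in-secret
```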
This pulls the `some-key-in-secret` key from the `my-secret` Secret resource in the application namespace and makes it available as an environment variable.

You can create the Secret resource using `kubectl`.
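For example, to create a basic generic secret you might run something like this (the secret name and key match the example above; the value is a placeholder):

```sh
kubectl --namespace <my-app-namespace> create secret generic my-secret \
  --from-literal=some-key-in-secret=some-value
```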
Here `<my-app-namespace>` is your project namespace (which is either set with `namespace` in your provider config, or defaults to your project name). Note that there are other, more secure ways to create secrets via `kubectl`; please refer to the official Kubernetes Secrets docs for details.

You can define both tests and tasks as part of any `container` module. The two are configured in very similar ways, using the `tests` and `tasks` keys, respectively. Here, for example, is a configuration for two different test suites.
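A sketch of what that can look like; the `npm` scripts, and the `integ` command in particular, are illustrative:

```yaml
kind: Module
type: container
name: my-container
tests:
  - name: unit
    args: [npm, test]
  - name: integ
    args: [npm, run, integ]
    dependencies:
      - some-service
```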
Here we first define a `unit` test suite, which has no dependencies and simply runs `npm test` in the container. The `integ` suite is similar but adds a runtime dependency. This means that before the `integ` test is run, Garden makes sure that `some-service` is running and up-to-date.

When you run `garden test` or `garden dev`, Garden will run those tests. In both cases, the tests are executed by running the container with the specified command in your configured environment (as opposed to locally on the machine you're running the `garden` CLI from).
The same applies to tasks. Here, for example, is a `db-migrate` task that runs `rake db:migrate` (which is commonly used for database migrations, but you can run anything you like, of course). The task has a dependency on `my-database`, so that Garden will make sure the database is up and running before running the migration task.
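A sketch of such a task configuration (the surrounding module name is a placeholder):

```yaml
kind: Module
type: container
name: my-backend
tasks:
  - name: db-migrate
    args: [rake, "db:migrate"]
    dependencies:
      - my-database
```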
You might then define another task or a service with `db-migrate` as a dependency, so that it only runs after the migrations have been executed.

Modules can reference outputs from each other using template strings. `container` modules are, for instance, often referenced by other module types such as `helm`.
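For example, here is a sketch of a `helm` module that references a `container` module named `my-image`; the `values` keys depend entirely on your chart, and the image name comes from the `container` module's `deployment-image-name` output:

```yaml
kind: Module
type: helm
name: my-service
# (chart source and other fields omitted from this sketch)
build:
  dependencies:
    - name: my-image
values:
  image:
    repository: ${modules.my-image.outputs.deployment-image-name}
    tag: ${modules.my-image.version}
```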
Here we declare `my-image` as a dependency for the `my-service` Helm chart. In order for the Helm chart to be able to reference the built container image, we must provide the correct image name and version.

For a full list of the available outputs for the `container` module type, take a look at the outputs reference.

`container` services, tasks and tests can all mount volumes, using volume modules. One such module type is `persistentvolumeclaim`, supported by the `kubernetes` provider. To mount a volume, you need to define a volume module and reference it using the `volumes` key on your services, tasks and/or tests.
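For example, a sketch with a `my-volume` PVC mounted into a `my-service` service (the storage size is arbitrary):

```yaml
kind: Module
name: my-volume
type: persistentvolumeclaim
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi

---

kind: Module
name: my-module
type: container
services:
  - name: my-service
    volumes:
      - name: my-volume
        module: my-volume
        containerPath: /volume
```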
This mounts the `my-volume` PVC at `/volume` in the `my-service` service when it is run. The `my-volume` module creates a `PersistentVolumeClaim` resource in your project namespace, and the `spec` field is passed directly to the same field on the PVC resource.

Notice the `accessModes` field in the volume module above. The default storage classes in Kubernetes generally don't support being mounted by multiple Pods at the same time. If your volume module doesn't support the `ReadWriteMany` access mode, you must take care not to use the same volume in multiple services, tasks or tests, or in multiple replicas. See Shared volumes below for how to share a single volume with multiple Pods.

You can do the same for tests and tasks using the `tests.volumes` and `tasks.volumes` fields. `persistentvolumeclaim` volumes can of course also be referenced in `kubernetes` and `helm` modules, since they are deployed as standard PersistentVolumeClaim resources.
Very similarly, you can also mount Kubernetes ConfigMaps on `container` modules, using the `configmap` module type, supported by the `kubernetes` provider.
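Here's a simple example, sketched out; the file name matches the description that follows, and the file contents are illustrative:

```yaml
kind: Module
name: my-configmap
type: configmap
data:
  config.properties: |
    some: data
    or: something

---

kind: Module
name: my-module
type: container
services:
  - name: my-service
    volumes:
      - name: my-configmap
        module: my-configmap
        containerPath: /config
```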
This mounts all the keys in the `data` field on the `my-configmap` module under the `/config` directory in the container. In this case, you'll find the file `/config/config.properties` there, with the value above (`some: data ...`) as the file contents.

You can do the same for tests and tasks using the `tests.volumes` and `tasks.volumes` fields. `configmap` volumes can of course also be referenced in `kubernetes` and `helm` modules, since they are deployed as standard ConfigMap resources.
For a volume to be shared between multiple replicas, or between multiple services, tasks and/or tests, it needs to be configured with a storage class (using the `storageClassName` field) that supports the `ReadWriteMany` (RWX) access mode. The available storage classes that support RWX vary by cloud provider and cluster setup, and in many cases you need to define a `StorageClass` or deploy a storage class provisioner to your cluster.

Once that is in place, you can create a `persistentvolumeclaim` module that uses the configured storage class. Here, for example, is how you might use a shared volume with a configured `azurefile` storage class.
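A sketch of what that could look like (the storage size and module names are placeholders):

```yaml
kind: Module
name: shared-volume
type: persistentvolumeclaim
spec:
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 1Gi
  storageClassName: azurefile

---

kind: Module
name: my-module
type: container
services:
  - name: my-service
    volumes:
      - name: shared-volume
        module: shared-volume
        containerPath: /volume
tasks:
  - name: my-task
    args: [echo, ok]
    volumes:
      - name: shared-volume
        module: shared-volume
        containerPath: /volume
```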