Garden includes a `container` module type, which provides a high-level abstraction around container-based services that's easy to understand and use.

`container` modules can be used just to build container images, or they can specify deployable services through the optional `services` key, as well as `tasks` and `tests`. So you might in one scenario use a `container` module to both build and deploy services, and in another you might only build the image using a `container` module, and then refer to that image in a `helm` or `kubernetes` module.
Below we'll walk through some usage examples. For a full reference of the `container` module type, please take a look at the reference.
Note: Even though we've spent the most time on supporting Kubernetes, we've tried to design this module type in a way that makes it generically applicable to other container orchestrators as well, such as Docker Swarm, Docker Compose, AWS ECS, etc. This will come in handy as we add more providers that can then use the same module type.
A bare minimum `container` module just specifies common required fields:
```yaml
# garden.yml
kind: Module
type: container
name: my-container
```
If you have a `Dockerfile` next to this file, this is enough to tell Garden to build it. You can also specify `dockerfile: <path-to-Dockerfile>` if you need to override the Dockerfile name. You might also want to explicitly include or exclude files in the build context.
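For example, here's a rough sketch of a module that overrides the Dockerfile name and narrows the build context (the file names and patterns are hypothetical):

```yaml
# garden.yml
kind: Module
type: container
name: my-container
# Build from a Dockerfile with a non-default name (hypothetical path)
dockerfile: Dockerfile.prod
# Only include the files the image actually needs in the build context
include:
  - Dockerfile.prod
  - src/**/*
  - package.json
```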
You can specify build arguments using the `buildArgs` field. This can be quite handy, especially when referencing other modules such as build dependencies:
```yaml
# garden.yml
kind: Module
type: container
name: my-container
build:
  dependencies: [base-image]
buildArgs:
  baseImageVersion: ${modules.base-image.version}
```
Garden will also automatically set `GARDEN_MODULE_VERSION` as a build argument, so that you can reference the version of the module being built.
If you're not building the container image yourself and just need to deploy an external image, you can skip the Dockerfile and specify the `image` field:
```yaml
# garden.yml
kind: Module
type: container
name: redis
image: redis:5.0.5-alpine # <- replace with any docker image ID
services:
  ...
```
When you do have your own Dockerfile to build, and want to publish the resulting image, you also need to use the `image` field:
```yaml
# garden.yml
kind: Module
type: container
name: my-container
image: my-org/my-container # <- your image repo ID
```
This tells Garden which namespace, and optionally registry hostname (e.g. `gcr.io` or `quay.io`), to publish the image to when you run `garden publish`.
If you specify a tag as well, for example `image: my-org/my-container:v1.2.3`, that tag will also be used when publishing. If you omit it, Garden will automatically set a tag based on the source hash of the module, e.g. `v-0c61a773cb`.
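Putting the two together, here's a sketch that publishes to a specific registry hostname with an explicit tag (the repo and tag are placeholders):

```yaml
# garden.yml
kind: Module
type: container
name: my-container
image: gcr.io/my-org/my-container:v1.2.3 # <- registry hostname, namespace and tag used when publishing
```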
`container` modules also have an optional `services` field, which you can use to deploy the container image using your configured providers (such as `kubernetes`/`local-kubernetes`).
In the case of Kubernetes, Garden will take the simplified `container` service specification and convert it to the corresponding Kubernetes manifests, i.e. Deployment, Service and (if applicable) Ingress resources.
Here, for example, is the spec for the `frontend` service in our example demo project:
```yaml
kind: Module
name: frontend
description: Frontend service container
type: container
services:
  - name: frontend
    ports:
      - name: http
        containerPort: 8080
    healthCheck:
      httpGet:
        path: /hello-frontend
        port: http
    ingresses:
      - path: /hello-frontend
        port: http
      - path: /call-backend
        port: http
    dependencies:
      - backend
...
```
This, first of all, tells Garden that it should deploy the built `frontend` container as a service with the same name. We also configure a health check, a couple of ingress endpoints, and specify that this service depends on the `backend` service. There are a number of other options, which you can find in the `container` module reference.
If you need to use advanced (or otherwise very specific) features of the underlying platform, you may need to use more platform-specific module types (e.g. `kubernetes` or `helm`). The `container` module type is not intended to capture all those features.
Container services can specify environment variables, using the `services[].env` field:
```yaml
kind: Module
type: container
name: my-container
services:
  - name: my-container-service
    ...
    env:
      MY_ENV_VAR: foo
      MY_TEMPLATED_ENV_VAR: ${var.some-project-variable}
    ...
...
```
`env` is a simple mapping of `name: value` pairs. Above we see an example with a plain string value, but you'll also commonly use template strings to interpolate variables to be consumed by the container service.
As of Garden v0.10.1 you can reference secrets in environment variables. For Kubernetes, this translates to `valueFrom.secretKeyRef` fields in the Pod specs, which direct Kubernetes to mount values from `Secret` resources that you have created in the application namespace, as environment variables in the Pod.
For example:
```yaml
kind: Module
type: container
name: my-container
services:
  - name: my-container-service
    ...
    env:
      MY_SECRET_VAR:
        secretRef:
          name: my-secret
          key: some-key-in-secret
    ...
...
```
This will pull the `some-key-in-secret` key from the `my-secret` Secret resource in the application namespace, and make it available as an environment variable.
Note that you must create the Secret manually for the Pod to be able to reference it.
For Kubernetes, this is commonly done using `kubectl`. For example, to create a basic generic secret you could use:
```sh
kubectl --namespace <my-app-namespace> create secret generic my-secret --from-literal=some-key-in-secret=foo
```
Where `<my-app-namespace>` is your project namespace (which is either set with `namespace` in your provider config, or defaults to your project name). There are notably other, more secure ways to create secrets via `kubectl`. Please refer to the official Kubernetes Secrets docs for details.
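For reference, here's a minimal sketch of where that `namespace` setting lives in the provider configuration (the project and namespace names are placeholders, and other required provider fields are omitted):

```yaml
# garden.yml (project-level configuration)
kind: Project
name: my-project
providers:
  - name: kubernetes
    namespace: my-app-namespace # <- the namespace your services (and Secrets) live in
    ...
```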
Also check out the Kubernetes Secrets example project for a working example.
You can define both tests and tasks as part of any container module. The two are configured in very similar ways, using the `tests` and `tasks` keys, respectively. Here, for example, is a configuration for two different test suites:
```yaml
kind: Module
type: container
name: my-container
...
tests:
  - name: unit
    command: [npm, test]
  - name: integ
    command: [npm, run, integ]
    dependencies:
      - some-service
...
```
Here we first define a `unit` test suite, which has no dependencies, and simply runs `npm test` in the container. The `integ` suite is similar but adds a runtime dependency. This means that before the `integ` test is run, Garden makes sure that `some-service` is running and up-to-date.
When you run `garden test` or `garden dev`, we will run those tests. In both cases, the tests will be executed by running the container with the specified command in your configured environment (as opposed to locally on the machine you're running the `garden` CLI from).
The names and commands to run are of course completely up to you, but we suggest naming the test suites consistently across your different modules.
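As a sketch of what that consistency might look like, two hypothetical modules could both expose a suite named `unit` (the module names and commands are placeholders):

```yaml
kind: Module
type: container
name: web
...
tests:
  - name: unit
    command: [npm, test]
---
kind: Module
type: container
name: api
...
tests:
  - name: unit
    command: [go, test, ./...]
```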
See the reference for all the configurable parameters for container tests.
Tasks are defined very similarly to tests:
```yaml
kind: Module
type: container
name: my-container
...
tasks:
  - name: db-migrate
    command: [rake, db:migrate]
    dependencies:
      - my-database
...
```
In this example, we define a `db-migrate` task that runs `rake db:migrate` (which is commonly used for database migrations, but you can of course run anything you like). The task has a dependency on `my-database`, so that Garden will make sure the database is up and running before running the migration task.
Unlike tests, tasks can also be dependencies for services and other tasks. For example, you might define another task or a service with `db-migrate` as a dependency, so that it only runs after the migrations have been executed.
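For example, here's a rough sketch of a service in the same module that waits for the migration task (the service name and port are hypothetical):

```yaml
kind: Module
type: container
name: my-container
...
services:
  - name: api
    ports:
      - name: http
        containerPort: 8080
    dependencies:
      - db-migrate # <- the task defined above runs before this service is deployed
...
```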
One thing to note is that tasks should in most cases be idempotent, meaning that running the same task multiple times should be safe.
See the reference for all the configurable parameters for container tasks.
Modules can reference outputs from each other using template strings. `container` modules are, for instance, often referenced by other module types such as `helm` modules. For example:
```yaml
kind: Module
description: Helm chart for the worker container
type: helm
name: my-service
...
build:
  dependencies: [my-image]
values:
  image:
    name: ${modules.my-image.outputs.deployment-image-name}
    tag: ${modules.my-image.version}
```
Here, we declare `my-image` as a dependency for the `my-service` Helm chart. In order for the Helm chart to be able to reference the built container image, we must provide the correct image name and version.
For a full list of keys that are available for the `container` module type, take a look at the outputs reference.
`container` services, tasks and tests can all mount volumes, using volume modules. One such is the `persistentvolumeclaim` module type, supported by the `kubernetes` provider. To mount a volume, you need to define a volume module, and reference it using the `volumes` key on your services, tasks and/or tests.
Example:
```yaml
kind: Module
name: my-volume
type: persistentvolumeclaim
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
---
kind: Module
name: my-module
type: container
services:
  - name: my-service
    replicas: 1 # <- Important! Unless your volume supports ReadWriteMany, you can't run multiple replicas with it
    volumes:
      - name: my-volume
        module: my-volume
        containerPath: /volume
...
```
This will mount the `my-volume` PVC at `/volume` in the `my-service` service when it is run. The `my-volume` module creates a `PersistentVolumeClaim` resource in your project namespace, and the `spec` field is passed directly to the same field on the PVC resource.
Notice the `accessModes` field in the volume module above. The default storage classes in Kubernetes generally don't support being mounted by multiple Pods at the same time. If your volume module doesn't support the `ReadWriteMany` access mode, you must take care not to use the same volume in multiple services, tasks or tests, or with multiple replicas. See Shared volumes below for how to share a single volume with multiple Pods.
You can do the same for tests and tasks using the `tests[].volumes` and `tasks[].volumes` fields. `persistentvolumeclaim` volumes can of course also be referenced in `kubernetes` and `helm` modules, since they are deployed as standard PersistentVolumeClaim resources.
Take a look at the `persistentvolumeclaim` module type and `container` module docs for more details.
For a volume to be shared between multiple replicas, or multiple services, tasks and/or tests, it needs to be configured with a storage class (using the `storageClassName` field) that supports the `ReadWriteMany` (RWX) access mode. The available storage classes that support RWX vary by cloud provider and cluster setup, and in many cases you need to define a `StorageClass` or deploy a storage class provisioner to your cluster.
You can find a list of storage options and their supported access modes here, along with a few commonly used RWX provisioners and storage classes.
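As one illustration, a `StorageClass` backed by the in-tree Azure File provisioner might look roughly like this (your cluster setup may call for a different provisioner entirely):

```yaml
# A StorageClass that supports ReadWriteMany, using the Azure File provisioner as an example
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
```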
Once any of those is set up, you can create a `persistentvolumeclaim` module that uses the configured storage class. Here, for example, is how you might use a shared volume with a configured `azurefile` storage class:
```yaml
kind: Module
name: shared-volume
type: persistentvolumeclaim
spec:
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 1Gi
  storageClassName: azurefile
---
kind: Module
name: my-module
type: container
services:
  - name: my-service
    volumes:
      - &volume # <- using a YAML anchor to re-use the volume spec in tasks and tests
        name: shared-volume
        module: shared-volume
        containerPath: /volume
    ...
tasks:
  - name: my-task
    volumes:
      - *volume
    ...
tests:
  - name: my-test
    volumes:
      - *volume
    ...
```
Here the same volume is used across a service, a task and a test in the same module. You could similarly use the same volume across multiple container modules.