| stage | group | info | type |
|-------|-------|------|------|
| Verify | Pipeline Execution | To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments | concepts, howto |
# Use Docker to build Docker images **(FREE)**
You can use GitLab CI/CD with Docker to create Docker images. For example, you can create a Docker image of your application, test it, and push it to a container registry.
To run Docker commands in your CI/CD jobs, you must configure GitLab Runner to support `docker` commands. This method requires `privileged` mode.

If you want to build Docker images without enabling privileged mode on the runner, you can use a Docker alternative.
## Enable Docker commands in your CI/CD jobs

To enable Docker commands for your CI/CD jobs, you can use:

- The shell executor
- Docker-in-Docker
- Docker socket binding
## Use the shell executor

To include Docker commands in your CI/CD jobs, you can configure your runner to use the `shell` executor. In this configuration, the `gitlab-runner` user runs the Docker commands, but needs permission to do so.
1. Install GitLab Runner.
1. Register a runner. Select the `shell` executor. For example:

   ```shell
   sudo gitlab-runner register -n \
     --url https://gitlab.com/ \
     --registration-token REGISTRATION_TOKEN \
     --executor shell \
     --description "My Runner"
   ```

1. On the server where GitLab Runner is installed, install Docker Engine. View a list of supported platforms.
1. Add the `gitlab-runner` user to the `docker` group:

   ```shell
   sudo usermod -aG docker gitlab-runner
   ```

1. Verify that `gitlab-runner` has access to Docker:

   ```shell
   sudo -u gitlab-runner -H docker info
   ```

1. In GitLab, add `docker info` to `.gitlab-ci.yml` to verify that Docker is working:

   ```yaml
   before_script:
     - docker info

   build_image:
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```
You can now use `docker` commands (and install Docker Compose if needed).
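If you need Docker Compose on the runner host, one possible approach is shown below. This sketch assumes a Debian or Ubuntu host with Docker's apt repository already configured during the Docker Engine installation step:

```shell
# Install the Docker Compose plugin from Docker's apt repository
# (assumes the repository was added when installing Docker Engine).
sudo apt-get update
sudo apt-get install -y docker-compose-plugin

# Verify that the gitlab-runner user can invoke Compose.
sudo -u gitlab-runner -H docker compose version
```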
When you add `gitlab-runner` to the `docker` group, you effectively grant `gitlab-runner` full root permissions. For more information, see security of the `docker` group.
## Use Docker-in-Docker

"Docker-in-Docker" (`dind`) means:

- Your registered runner uses the Docker executor or the Kubernetes executor.
- The executor uses a container image of Docker, provided by Docker, to run your CI/CD jobs.

The Docker image includes all of the `docker` tools and can run the job script in context of the image in privileged mode.
You should use Docker-in-Docker with TLS enabled, which is supported by GitLab.com shared runners.

You should always pin a specific version of the image, like `docker:20.10.16`. If you use a tag like `docker:stable`, you have no control over which version is used. This can cause incompatibility problems when new versions are released.
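Applied to a job definition, the pinning advice means using the same explicit tag for both the job image and the `dind` service, so the Docker client and daemon versions always match:

```yaml
# Pin the client image and the dind service to the same version.
image: docker:20.10.16

services:
  - docker:20.10.16-dind
```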
### Use the Docker executor with Docker-in-Docker
You can use the Docker executor to run jobs in a Docker container.
#### Docker-in-Docker with TLS enabled in the Docker executor

> Introduced in GitLab Runner 11.11.

The Docker daemon supports connections over TLS. TLS is the default in Docker 19.03.12 and later.
WARNING:
This task enables `--docker-privileged`, which effectively disables the container's security mechanisms and exposes your host to privilege escalation. This action can cause container breakout. For more information, see runtime privilege and Linux capabilities.
To use Docker-in-Docker with TLS enabled:
1. Install GitLab Runner.
1. Register GitLab Runner from the command line. Use `docker` and `privileged` mode:

   ```shell
   sudo gitlab-runner register -n \
     --url https://gitlab.com/ \
     --registration-token REGISTRATION_TOKEN \
     --executor docker \
     --description "My Docker Runner" \
     --docker-image "docker:20.10.16" \
     --docker-privileged \
     --docker-volumes "/certs/client"
   ```

   - This command registers a new runner to use the `docker:20.10.16` image. To start the build and service containers, it uses the `privileged` mode. If you want to use Docker-in-Docker, you must always use `privileged = true` in your Docker containers.
   - This command mounts `/certs/client` for the service and build container, which is needed for the Docker client to use the certificates in that directory. For more information, see the Docker image documentation.

   The previous command creates a `config.toml` entry similar to the following example:

   ```toml
   [[runners]]
     url = "https://gitlab.com/"
     token = TOKEN
     executor = "docker"
     [runners.docker]
       tls_verify = false
       image = "docker:20.10.16"
       privileged = true
       disable_cache = false
       volumes = ["/certs/client", "/cache"]
     [runners.cache]
       [runners.cache.s3]
       [runners.cache.gcs]
   ```

1. You can now use `docker` in the job script. You should include the `docker:20.10.16-dind` service:

   ```yaml
   image: docker:20.10.16

   variables:
     # When you use the dind service, you must instruct Docker to talk with
     # the daemon started inside of the service. The daemon is available
     # with a network connection instead of the default
     # /var/run/docker.sock socket. Docker 19.03 does this automatically
     # by setting the DOCKER_HOST in
     # https://github.com/docker-library/docker/blob/d45051476babc297257df490d22cbd806f1b11e4/19.03/docker-entrypoint.sh#L23-L29
     #
     # The 'docker' hostname is the alias of the service container as described at
     # https://docs.gitlab.com/ee/ci/services/#accessing-the-services.
     #
     # Specify to Docker where to create the certificates. Docker
     # creates them automatically on boot, and creates
     # `/certs/client` to share between the service and job
     # container, thanks to volume mount from config.toml
     DOCKER_TLS_CERTDIR: "/certs"

   services:
     - docker:20.10.16-dind

   before_script:
     - docker info

   build:
     stage: build
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```
#### Docker-in-Docker with TLS disabled in the Docker executor
Sometimes there are legitimate reasons to disable TLS. For example, you have no control over the GitLab Runner configuration that you are using.
Assuming that the runner's `config.toml` is similar to:

```toml
[[runners]]
  url = "https://gitlab.com/"
  token = TOKEN
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.16"
    privileged = true
    disable_cache = false
    volumes = ["/cache"]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
```
You can now use `docker` in the job script. You should include the `docker:20.10.16-dind` service:

```yaml
image: docker:20.10.16

variables:
  # When using dind service, you must instruct docker to talk with the
  # daemon started inside of the service. The daemon is available with
  # a network connection instead of the default /var/run/docker.sock socket.
  #
  # The 'docker' hostname is the alias of the service container as described at
  # https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services
  #
  # If you're using GitLab Runner 12.7 or earlier with the Kubernetes executor and Kubernetes 1.6 or earlier,
  # the variable must be set to tcp://localhost:2375 because of how the
  # Kubernetes executor connects services to the job container
  # DOCKER_HOST: tcp://localhost:2375
  #
  DOCKER_HOST: tcp://docker:2375
  #
  # This instructs Docker not to start over TLS.
  DOCKER_TLS_CERTDIR: ""

services:
  - docker:20.10.16-dind

before_script:
  - docker info

build:
  stage: build
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```
### Use the Kubernetes executor with Docker-in-Docker
You can use the Kubernetes executor to run jobs in a Docker container.
#### Docker-in-Docker with TLS enabled in Kubernetes

> Introduced in GitLab Runner Helm Chart 0.23.0.
To use Docker-in-Docker with TLS enabled in Kubernetes:
1. Using the Helm chart, update the `values.yml` file to specify a volume mount:

   ```yaml
   runners:
     config: |
       [[runners]]
         [runners.kubernetes]
           image = "ubuntu:20.04"
           privileged = true
         [[runners.kubernetes.volumes.empty_dir]]
           name = "docker-certs"
           mount_path = "/certs/client"
           medium = "Memory"
   ```

1. You can now use `docker` in the job script. You should include the `docker:20.10.16-dind` service:

   ```yaml
   image: docker:20.10.16

   variables:
     # When using dind service, you must instruct Docker to talk with
     # the daemon started inside of the service. The daemon is available
     # with a network connection instead of the default
     # /var/run/docker.sock socket.
     DOCKER_HOST: tcp://docker:2376
     #
     # The 'docker' hostname is the alias of the service container as described at
     # https://docs.gitlab.com/ee/ci/services/#accessing-the-services.
     # If you're using GitLab Runner 12.7 or earlier with the Kubernetes executor and Kubernetes 1.6 or earlier,
     # the variable must be set to tcp://localhost:2376 because of how the
     # Kubernetes executor connects services to the job container
     # DOCKER_HOST: tcp://localhost:2376
     #
     # Specify to Docker where to create the certificates. Docker
     # creates them automatically on boot, and creates
     # `/certs/client` to share between the service and job
     # container, thanks to volume mount from config.toml
     DOCKER_TLS_CERTDIR: "/certs"
     # These are usually specified by the entrypoint, however the
     # Kubernetes executor doesn't run entrypoints
     # https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4125
     DOCKER_TLS_VERIFY: 1
     DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"

   services:
     - docker:20.10.16-dind

   before_script:
     - docker info

   build:
     stage: build
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```
### Known issues with Docker-in-Docker

Docker-in-Docker is the recommended configuration, but you should be aware of the following issues:

- **The `docker-compose` command**: This command is not available in this configuration by default. To use `docker-compose` in your job scripts, follow the Docker Compose installation instructions.
- **Cache**: Each job runs in a new environment. Because every build gets its own instance of the Docker engine, concurrent jobs do not cause conflicts. However, jobs can be slower because there's no caching of layers. See Docker layer caching.
- **Storage drivers**: By default, earlier versions of Docker use the `vfs` storage driver, which copies the file system for each job. Docker 17.09 and later use `--storage-driver overlay2`, which is the recommended storage driver. See Using the OverlayFS driver for details.
- **Root file system**: Because the `docker:20.10.16-dind` container and the runner container do not share their root file system, you can use the job's working directory as a mount point for child containers. For example, if you have files you want to share with a child container, you could create a subdirectory under `/builds/$CI_PROJECT_PATH` and use it as your mount point. For a more detailed explanation, see issue #41227.

  ```yaml
  variables:
    MOUNT_POINT: /builds/$CI_PROJECT_PATH/mnt

  script:
    - mkdir -p "$MOUNT_POINT"
    - docker run -v "$MOUNT_POINT:/mnt" my-docker-image
  ```
## Use Docker socket binding

To use Docker commands in your CI/CD jobs, you can bind-mount `/var/run/docker.sock` into the container. Docker is then available in the context of the image.

NOTE:
If you bind the Docker socket and you are using GitLab Runner 11.11 or later, you can no longer use `docker:20.10.16-dind` as a service. Volume bindings also affect services, making them incompatible.
### Use the Docker executor with Docker socket binding

To make Docker available in the context of the image, you need to mount `/var/run/docker.sock` into the launched containers. To do this with the Docker executor, add `"/var/run/docker.sock:/var/run/docker.sock"` to the volumes in the `[runners.docker]` section.
Your configuration should look similar to this example:

```toml
[[runners]]
  url = "https://gitlab.com/"
  token = RUNNER_TOKEN
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.16"
    privileged = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
  [runners.cache]
    Insecure = false
```
To mount `/var/run/docker.sock` while registering your runner, include the following options:

```shell
sudo gitlab-runner register -n \
  --url https://gitlab.com/ \
  --registration-token REGISTRATION_TOKEN \
  --executor docker \
  --description "My Docker Runner" \
  --docker-image "docker:20.10.16" \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock
```
### Enable registry mirror for `docker:dind` service
When the Docker daemon starts inside the service container, it uses the default configuration. You might want to configure a registry mirror for performance improvements and to ensure you do not exceed Docker Hub rate limits.
#### The service in the `.gitlab-ci.yml` file

You can append extra CLI flags to the `dind` service to set the registry mirror:

```yaml
services:
  - name: docker:20.10.16-dind
    command: ["--registry-mirror", "https://registry-mirror.example.com"]  # Specify the registry mirror to use
```
#### The service in the GitLab Runner configuration file

> Introduced in GitLab Runner 13.6.
If you are a GitLab Runner administrator, you can specify the `command` to configure the registry mirror for the Docker daemon. The `dind` service must be defined for the Docker or Kubernetes executor.
Docker:

```toml
[[runners]]
  ...
  executor = "docker"
  [runners.docker]
    ...
    privileged = true
    [[runners.docker.services]]
      name = "docker:20.10.16-dind"
      command = ["--registry-mirror", "https://registry-mirror.example.com"]
```
Kubernetes:

```toml
[[runners]]
  ...
  name = "kubernetes"
  [runners.kubernetes]
    ...
    privileged = true
    [[runners.kubernetes.services]]
      name = "docker:20.10.16-dind"
      command = ["--registry-mirror", "https://registry-mirror.example.com"]
```
#### The Docker executor in the GitLab Runner configuration file

If you are a GitLab Runner administrator, you can use the mirror for every `dind` service. Update the configuration to specify a volume mount.
For example, if you have a `/opt/docker/daemon.json` file with the following content:

```json
{
  "registry-mirrors": [
    "https://registry-mirror.example.com"
  ]
}
```
Update the `config.toml` file to mount the file to `/etc/docker/daemon.json`. This mounts the file for every container created by GitLab Runner. The configuration is detected by the `dind` service.

```toml
[[runners]]
  ...
  executor = "docker"
  [runners.docker]
    image = "alpine:3.12"
    privileged = true
    volumes = ["/opt/docker/daemon.json:/etc/docker/daemon.json:ro"]
```
#### The Kubernetes executor in the GitLab Runner configuration file

> Introduced in GitLab Runner 13.6.
If you are a GitLab Runner administrator, you can use the mirror for every `dind` service. Update the configuration to specify a ConfigMap volume mount.

For example, if you have a `/tmp/daemon.json` file with the following content:

```json
{
  "registry-mirrors": [
    "https://registry-mirror.example.com"
  ]
}
```
Create a ConfigMap with the content of this file. You can do this with a command like:

```shell
kubectl create configmap docker-daemon --namespace gitlab-runner --from-file /tmp/daemon.json
```

NOTE:
You must use the namespace that the Kubernetes executor for GitLab Runner uses to create job pods.
After the ConfigMap is created, you can update the `config.toml` file to mount the file to `/etc/docker/daemon.json`. This update mounts the file for every container created by GitLab Runner. The `dind` service detects this configuration.

```toml
[[runners]]
  ...
  executor = "kubernetes"
  [runners.kubernetes]
    image = "alpine:3.12"
    privileged = true
    [[runners.kubernetes.volumes.config_map]]
      name = "docker-daemon"
      mount_path = "/etc/docker/daemon.json"
      sub_path = "daemon.json"
```
### Known issues with Docker socket binding

When you use Docker socket binding, you avoid running Docker in privileged mode. However, the implications of this method are:

- By sharing the Docker daemon, you effectively disable all the container's security mechanisms and expose your host to privilege escalation. This can cause container breakout. For example, if a project ran `docker rm -f $(docker ps -a -q)`, it would remove the GitLab Runner containers.
- Concurrent jobs might not work. If your tests create containers with specific names, they might conflict with each other.
- Any containers created by Docker commands are siblings of the runner, rather than children of the runner. This might cause complications for your workflow.
- Sharing files and directories from the source repository into containers might not work as expected. Volume mounting is done in the context of the host machine, not the build container. For example:

  ```shell
  docker run --rm -t -i -v $(pwd)/src:/home/app/src test-image:latest run_app_tests
  ```
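For the concurrency issue, one common mitigation is to derive container names from a per-job value such as the predefined `$CI_JOB_ID` variable. The job below is a hypothetical sketch, not part of the configuration above:

```yaml
test:
  script:
    # Each job gets a unique $CI_JOB_ID, so concurrently running
    # jobs never collide on the container name.
    - docker run -d --name "test-db-$CI_JOB_ID" postgres:14
    - docker run --rm my-docker-image /script/to/run/tests
  after_script:
    # Remove only this job's container, leaving other jobs untouched.
    - docker rm -f "test-db-$CI_JOB_ID"
```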
You do not need to include the `docker:20.10.16-dind` service, like you do when you use the Docker-in-Docker executor:

```yaml
image: docker:20.10.16

before_script:
  - docker info

build:
  stage: build
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```
## Authenticate with registry in Docker-in-Docker
When you use Docker-in-Docker, the standard authentication methods do not work, because a fresh Docker daemon is started with the service.
### Option 1: Run `docker login`

In `before_script`, run `docker login`:
```yaml
image: docker:20.10.16

variables:
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:20.10.16-dind

build:
  stage: build
  before_script:
    - echo "$DOCKER_REGISTRY_PASS" | docker login $DOCKER_REGISTRY --username $DOCKER_REGISTRY_USER --password-stdin
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```
To sign in to Docker Hub, leave `$DOCKER_REGISTRY` empty or remove it.
### Option 2: Mount `~/.docker/config.json` on each job

If you are an administrator for GitLab Runner, you can mount a file with the authentication configuration to `~/.docker/config.json`. Then every job that the runner picks up is already authenticated. If you are using the official `docker:20.10.16` image, the home directory is under `/root`.
If you mount the configuration file, any `docker` command that modifies `~/.docker/config.json` fails. For example, `docker login` fails, because the file is mounted as read-only. Do not change it from read-only, because this causes problems.
Here is an example of `/opt/.docker/config.json` that follows the `DOCKER_AUTH_CONFIG` documentation:

```json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
    }
  }
}
```
#### Docker

Update the volume mounts to include the file:

```toml
[[runners]]
  ...
  executor = "docker"
  [runners.docker]
    ...
    privileged = true
    volumes = ["/opt/.docker/config.json:/root/.docker/config.json:ro"]
```
#### Kubernetes

Create a ConfigMap with the content of this file. You can do this with a command like:

```shell
kubectl create configmap docker-client-config --namespace gitlab-runner --from-file /opt/.docker/config.json
```

Update the volume mounts to include the file:

```toml
[[runners]]
  ...
  executor = "kubernetes"
  [runners.kubernetes]
    image = "alpine:3.12"
    privileged = true
    [[runners.kubernetes.volumes.config_map]]
      name = "docker-client-config"
      mount_path = "/root/.docker/config.json"
      # If you are running GitLab Runner 13.5
      # or lower you can remove this
      sub_path = "config.json"
```
### Option 3: Use `DOCKER_AUTH_CONFIG`

If you already have `DOCKER_AUTH_CONFIG` defined, you can use the variable and save it in `~/.docker/config.json`.

You can define this authentication in several ways:

- In `pre_build_script` in the runner configuration file.
- In `before_script`.
- In `script`.

The following example shows `before_script`. The same commands apply for any solution you implement.
```yaml
image: docker:20.10.16

variables:
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:20.10.16-dind

build:
  stage: build
  before_script:
    - mkdir -p $HOME/.docker
    - echo $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```
## Make Docker-in-Docker builds faster with Docker layer caching

When using Docker-in-Docker, Docker downloads all layers of your image every time you create a build. Recent versions of Docker (Docker 1.13 and later) can use a pre-existing image as a cache during the `docker build` step. This significantly accelerates the build process.
### How Docker caching works

When running `docker build`, each command in `Dockerfile` creates a layer. These layers are retained as a cache and can be reused if there have been no changes. A change in one layer causes the recreation of all subsequent layers.
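Because a change invalidates every later layer, ordering matters. The following hypothetical `Dockerfile` (the file names and base image are illustrative) puts rarely-changing steps first so later builds can reuse their cached layers:

```dockerfile
# Layers are cached top to bottom; put stable steps first.
FROM python:3.10-slim

# Dependencies change rarely: copy only the manifest and install.
# This layer is reused as long as requirements.txt is unchanged.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application code changes often; editing it only invalidates
# the layers from this COPY onward, not the dependency install.
COPY . .
CMD ["python", "app.py"]
```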
To specify a tagged image to be used as a cache source for the `docker build` command, use the `--cache-from` argument. Multiple images can be specified as a cache source by using multiple `--cache-from` arguments. Any image that's used with the `--cache-from` argument must be pulled (using `docker pull`) before it can be used as a cache source.
### Docker caching example

This example `.gitlab-ci.yml` file shows how to use Docker caching:
```yaml
image: docker:20.10.16

services:
  - docker:20.10.16-dind

variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

build:
  stage: build
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
```
In the `script` section for the `build` stage:

- The first command tries to pull the image from the registry so that it can be used as a cache for the `docker build` command.
- The second command builds a Docker image by using the pulled image as a cache (see the `--cache-from $CI_REGISTRY_IMAGE:latest` argument) if available, and tags it.
- The last two commands push the tagged Docker images to the container registry so that they can also be used as cache for subsequent builds.
## Use the OverlayFS driver

NOTE:
The shared runners on GitLab.com use the `overlay2` driver by default.

By default, when using `docker:dind`, Docker uses the `vfs` storage driver, which copies the file system on every run. You can avoid this disk-intensive operation by using a different driver, for example `overlay2`.
### Requirements

1. Ensure a recent kernel is used, preferably `>= 4.2`.
1. Check whether the `overlay` module is loaded:

   ```shell
   sudo lsmod | grep overlay
   ```

   If you see no result, then the module is not loaded. To load the module, use:

   ```shell
   sudo modprobe overlay
   ```

   If the module loaded, you must make sure the module loads on reboot. On Ubuntu systems, do this by adding the following line to `/etc/modules`:

   ```plaintext
   overlay
   ```
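The check-load-persist steps above can be scripted on an Ubuntu host. This is a sketch, not an official installer:

```shell
# Load the overlay module now if it is not already loaded.
lsmod | grep -q overlay || sudo modprobe overlay

# Persist the module across reboots by appending it to /etc/modules,
# unless an exact "overlay" line is already present.
grep -qx overlay /etc/modules || echo overlay | sudo tee -a /etc/modules
```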
### Use the OverlayFS driver per project

You can enable the driver for each project individually by using the `DOCKER_DRIVER` CI/CD variable in `.gitlab-ci.yml`:

```yaml
variables:
  DOCKER_DRIVER: overlay2
```
### Use the OverlayFS driver for every project

If you use your own runners, you can enable the driver for every project by setting the `DOCKER_DRIVER` environment variable in the `[[runners]]` section of the `config.toml` file:

```toml
environment = ["DOCKER_DRIVER=overlay2"]
```
If you're running multiple runners, you must modify all configuration files.
Read more about the runner configuration and using the OverlayFS storage driver.
## Docker alternatives

To build Docker images without enabling privileged mode on the runner, you can use one of these alternatives:

- `kaniko`
- `buildah`

For example, with `buildah`:
```yaml
# Some details from https://major.io/2019/05/24/build-containers-in-gitlab-ci-with-buildah/
build:
  stage: build
  image: quay.io/buildah/stable
  variables:
    # Use vfs with buildah. Docker offers overlayfs as a default, but buildah
    # cannot stack overlayfs on top of another overlayfs filesystem.
    STORAGE_DRIVER: vfs
    # Write all image metadata in the docker format, not the standard OCI format.
    # Newer versions of docker can handle the OCI format, but older versions, like
    # the one shipped with Fedora 30, cannot handle the format.
    BUILDAH_FORMAT: docker
    # You may need this workaround for some errors: https://stackoverflow.com/a/70438141/1233435
    BUILDAH_ISOLATION: chroot
    FQ_IMAGE_NAME: "$CI_REGISTRY_IMAGE/test"
  before_script:
    # Log in to the GitLab container registry
    - export REGISTRY_AUTH_FILE=$HOME/auth.json
    - echo "$CI_REGISTRY_PASSWORD" | buildah login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY
  script:
    - buildah images
    - buildah build -t $FQ_IMAGE_NAME
    - buildah images
    - buildah push $FQ_IMAGE_NAME
```
## Use the GitLab Container Registry
After you've built a Docker image, you can push it to the GitLab Container Registry.
## Troubleshooting

### `docker: Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running?`

This is a common error when you are using Docker-in-Docker v19.03 or later. This error occurs because Docker starts on TLS automatically.

- If this is your first time setting it up, see use the Docker executor with the Docker image.
- If you are upgrading from v18.09 or earlier, see the upgrade guide.

This error can also occur with the Kubernetes executor when attempts are made to access the Docker-in-Docker service before it has fully started up. For a more detailed explanation, see issue 27215.
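For the Kubernetes timing case, a common workaround (a sketch, not an official fix) is to wait in `before_script` until the `dind` daemon answers before running any other `docker` command:

```yaml
before_script:
  # Retry `docker info` for up to ~30 seconds until the dind service's
  # daemon is reachable, then run it once more so the job fails loudly
  # if the daemon never came up.
  - |
    for i in $(seq 1 30); do
      docker info >/dev/null 2>&1 && break
      sleep 1
    done
    docker info
```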
### Docker `no such host` error

You might get an error that says `docker: error during connect: Post https://docker:2376/v1.40/containers/create: dial tcp: lookup docker on x.x.x.x:53: no such host`.

This issue can occur when the service's image name includes a registry hostname. For example:

```yaml
image: docker:20.10.16

services:
  - registry.hub.docker.com/library/docker:20.10.16-dind
```
A service's hostname is derived from the full image name. However, the shorter service hostname `docker` is expected. To allow service resolution and access, add an explicit alias for the service name `docker`:

```yaml
image: docker:20.10.16

services:
  - name: registry.hub.docker.com/library/docker:20.10.16-dind
    alias: docker
```