---
stage: Verify
group: Continuous Integration
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
type: reference
---
<!-- markdownlint-disable MD044 -->
<!-- vale gitlab.Spelling = NO -->
# Keyword reference for the .gitlab-ci.yml file **(FREE)**
<!-- vale gitlab.Spelling = YES -->
<!-- markdownlint-enable MD044 -->
This document lists the configuration options for your GitLab `.gitlab-ci.yml` file.
- For a quick introduction to GitLab CI/CD, follow the [quick start guide](../quick_start/index.md).
- For a collection of examples, see [GitLab CI/CD Examples](../examples/README.md).
- To view a large `.gitlab-ci.yml` file used in an enterprise, see the [`.gitlab-ci.yml` file for `gitlab`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab-ci.yml).
When you are editing your `.gitlab-ci.yml` file, you can validate it with the
[CI Lint](../lint.md) tool.
## Job keywords
A job is defined as a list of keywords that define the job's behavior.
The keywords available for jobs are:
| Keyword | Description |
| :-----------------------------------|:------------|
| [`after_script`](#after_script) | Override a set of commands that are executed after the job. |
| [`allow_failure`](#allow_failure) | Allow job to fail. A failed job does not cause the pipeline to fail. |
| [`artifacts`](#artifacts) | List of files and directories to attach to a job on success. |
| [`before_script`](#before_script) | Override a set of commands that are executed before the job. |
| [`cache`](#cache) | List of files that should be cached between subsequent runs. |
| [`coverage`](#coverage) | Code coverage settings for a given job. |
| [`dependencies`](#dependencies) | Restrict which artifacts are passed to a specific job by providing a list of jobs to fetch artifacts from. |
| [`environment`](#environment) | Name of an environment to which the job deploys. |
| [`except`](#onlyexcept-basic) | Limit when jobs are not created. |
| [`extends`](#extends) | Configuration entries that this job inherits from. |
| [`image`](#image) | Use Docker images. |
| [`include`](#include) | Include external YAML files. |
| [`inherit`](#inherit) | Select which global defaults all jobs inherit. |
| [`interruptible`](#interruptible) | Defines if a job can be canceled when made redundant by a newer run. |
| [`needs`](#needs) | Execute jobs earlier than the stage ordering. |
| [`only`](#onlyexcept-basic) | Limit when jobs are created. |
| [`pages`](#pages) | Upload the result of a job to use with GitLab Pages. |
| [`parallel`](#parallel) | How many instances of a job should be run in parallel. |
| [`release`](#release) | Instructs the runner to generate a [release](../../user/project/releases/index.md) object. |
| [`resource_group`](#resource_group) | Limit job concurrency. |
| [`retry`](#retry) | When and how many times a job can be auto-retried in case of a failure. |
| [`rules`](#rules) | List of conditions to evaluate and determine selected attributes of a job, and whether or not it's created. |
| [`script`](#script) | Shell script that is executed by a runner. |
| [`secrets`](#secrets) | The CI/CD secrets the job needs. |
| [`services`](#services) | Use Docker services images. |
| [`stage`](#stage) | Defines a job stage. |
| [`tags`](#tags) | List of tags that are used to select a runner. |
| [`timeout`](#timeout) | Define a custom job-level timeout that takes precedence over the project-wide setting. |
| [`trigger`](#trigger) | Defines a downstream pipeline trigger. |
| [`variables`](#variables) | Define job variables on a job level. |
| [`when`](#when) | When to run the job. |
### Unavailable names for jobs
You can't use these keywords as job names:
- `image`
- `services`
- `stages`
- `types`
- `before_script`
- `after_script`
- `variables`
- `cache`
- `include`
### Custom default keyword values
You can set global defaults for some keywords. Jobs that do not define one or more
of the listed keywords use the value defined in the `default:` section.
These job keywords can be defined inside a `default:` section:
- [`after_script`](#after_script)
- [`artifacts`](#artifacts)
- [`before_script`](#before_script)
- [`cache`](#cache)
- [`image`](#image)
- [`interruptible`](#interruptible)
- [`retry`](#retry)
- [`services`](#services)
- [`tags`](#tags)
- [`timeout`](#timeout)
The following example sets the `ruby:2.5` image as the default for all jobs in the pipeline.
The `rspec 2.6` job does not use the default, because it overrides the default with
a job-specific `image:` section:
```yaml
default:
  image: ruby:2.5

rspec:
  script: bundle exec rspec

rspec 2.6:
  image: ruby:2.6
  script: bundle exec rspec
```
## Global keywords
Some keywords are not defined in a job. These keywords control pipeline behavior
or import additional pipeline configuration:
| Keyword | Description |
|-------------------------|:------------|
| [`stages`](#stages) | The names and order of the pipeline stages. |
| [`workflow`](#workflow) | Control what types of pipeline run. |
| [`include`](#include) | Import configuration from other YAML files. |
### `stages`
Use `stages` to define stages that contain groups of jobs. `stages` is defined globally
for the pipeline. Use [`stage`](#stage) in a job to define which stage the job is
part of.
The order of the `stages` items defines the execution order for jobs:
- Jobs in the same stage run in parallel.
- Jobs in the next stage run after the jobs from the previous stage complete successfully.
For example:
```yaml
stages:
  - build
  - test
  - deploy
```
1. All jobs in `build` execute in parallel.
1. If all jobs in `build` succeed, the `test` jobs execute in parallel.
1. If all jobs in `test` succeed, the `deploy` jobs execute in parallel.
1. If all jobs in `deploy` succeed, the pipeline is marked as `passed`.
If any job fails, the pipeline is marked as `failed` and jobs in later stages do not
start. Jobs in the current stage are not stopped and continue to run.
If no `stages` are defined in the `.gitlab-ci.yml` file, then `build`, `test` and `deploy`
are the default pipeline stages.
If a job does not specify a [`stage`](#stage), the job is assigned the `test` stage.
To make a job start earlier and ignore the stage order, use
the [`needs`](#needs) keyword.
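For example, a minimal sketch (the job names are only illustrative) where a `test` stage job starts as soon as the job it needs finishes, instead of waiting for the whole `build` stage:

```yaml
build-job:
  stage: build
  script: echo "Building..."

test-job:
  stage: test
  needs: ["build-job"]  # Start as soon as build-job finishes, ignoring stage order.
  script: echo "Testing..."
```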
### `workflow`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/29654) in GitLab 12.5.
Use `workflow:` to determine whether or not a pipeline is created.
Define this keyword at the top level, with a single `rules:` keyword that
is similar to [`rules:` defined in jobs](#rules).
You can use the [`workflow:rules` templates](#workflowrules-templates) to import
a preconfigured `workflow: rules` entry.
`workflow: rules` accepts these keywords:
- [`if`](#rulesif): Check this rule to determine when to run a pipeline.
- [`when`](#when): Specify what to do when the `if` rule evaluates to true.
- To run a pipeline, set to `always`.
- To prevent pipelines from running, set to `never`.
When no rules evaluate to true, the pipeline does not run.
Some example `if` clauses for `workflow: rules`:
| Example rules | Details |
|------------------------------------------------------|-----------------------------------------------------------|
| `if: '$CI_PIPELINE_SOURCE == "merge_request_event"'` | Control when merge request pipelines run. |
| `if: '$CI_PIPELINE_SOURCE == "push"'` | Control when both branch pipelines and tag pipelines run. |
| `if: $CI_COMMIT_TAG` | Control when tag pipelines run. |
| `if: $CI_COMMIT_BRANCH` | Control when branch pipelines run. |
See the [common `if` clauses for `rules`](#common-if-clauses-for-rules) for more examples.
In the following example, pipelines run for all `push` events (changes to
branches and new tags). Pipelines for push events with `-draft` in the commit message
don't run, because they are set to `when: never`. Pipelines for schedules or merge requests
don't run either, because no rules evaluate to true for them:
```yaml
workflow:
  rules:
    - if: $CI_COMMIT_MESSAGE =~ /-draft$/
      when: never
    - if: '$CI_PIPELINE_SOURCE == "push"'
```
This example has strict rules, and pipelines do **not** run in any other case.
Alternatively, all of the rules can be `when: never`, with a final
`when: always` rule. Pipelines that match the `when: never` rules do not run.
All other pipeline types run:
```yaml
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "push"'
      when: never
    - when: always
```
This example prevents pipelines for schedules or `push` (branches and tags) pipelines.
The final `when: always` rule runs all other pipeline types, **including** merge
request pipelines.
If your rules match both branch pipelines and merge request pipelines,
[duplicate pipelines](#avoid-duplicate-pipelines) can occur.
#### `workflow:rules` templates
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/217732) in GitLab 13.0.
GitLab provides templates that set up `workflow: rules`
for common scenarios. These templates help prevent duplicate pipelines.
The [`Branch-Pipelines` template](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Workflows/Branch-Pipelines.gitlab-ci.yml)
makes your pipelines run for branches and tags.
Branch pipeline status is displayed in merge requests that use the branch
as a source. However, this pipeline type does not support any features offered by
[merge request pipelines](../merge_request_pipelines/), like
[pipelines for merge results](../merge_request_pipelines/#pipelines-for-merged-results)
or [merge trains](../merge_request_pipelines/pipelines_for_merged_results/merge_trains/).
This template intentionally avoids those features.
To [include](#include) it:
```yaml
include:
  - template: 'Workflows/Branch-Pipelines.gitlab-ci.yml'
```
The [`MergeRequest-Pipelines` template](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Workflows/MergeRequest-Pipelines.gitlab-ci.yml)
makes your pipelines run for the default branch, tags, and
all types of merge request pipelines. Use this template if you use any of the
[pipelines for merge requests features](../merge_request_pipelines/).
To [include](#include) it:
```yaml
include:
  - template: 'Workflows/MergeRequest-Pipelines.gitlab-ci.yml'
```
#### Switch between branch pipelines and merge request pipelines
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/201845) in GitLab 13.8.
To make the pipeline switch from branch pipelines to merge request pipelines after
a merge request is created, add a `workflow: rules` section to your `.gitlab-ci.yml` file.
If you use both pipeline types at the same time, [duplicate pipelines](#avoid-duplicate-pipelines)
might run at the same time. To prevent duplicate pipelines, use the
[`CI_OPEN_MERGE_REQUESTS` variable](../variables/predefined_variables.md).
The following example is for a project that runs branch and merge request pipelines only,
but does not run pipelines for any other case. It runs:
- Branch pipelines when a merge request is not open for the branch.
- Merge request pipelines when a merge request is open for the branch.
```yaml
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'
```
If the pipeline is triggered by:
- A merge request, run a merge request pipeline. For example, a merge request pipeline
can be triggered by a push to a branch with an associated open merge request.
- A change to a branch, but a merge request is open for that branch, do not run a branch pipeline.
- A change to a branch, but without any open merge requests, run a branch pipeline.
You can also add a rule to an existing `workflow` section to switch from branch pipelines
to merge request pipelines when a merge request is created.
Add this rule to the top of the `workflow` section, followed by the other rules that
were already present:
```yaml
workflow:
  rules:
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS && $CI_PIPELINE_SOURCE == "push"
      when: never
    - ...  # Previously defined workflow rules here
```
[Triggered pipelines](../triggers/README.md) that run on a branch have a `$CI_COMMIT_BRANCH`
set and could be blocked by a similar rule. Triggered pipelines have a pipeline source
of `trigger` or `pipeline`, so `&& $CI_PIPELINE_SOURCE == "push"` ensures the rule
does not block triggered pipelines.
### `include`
> [Moved](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/42861) to GitLab Free in 11.4.
Use `include` to include external YAML files in your CI/CD configuration.
You can break down one long `.gitlab-ci.yml` file into multiple files to increase readability,
or reduce duplication of the same configuration in multiple places.
You can also store template files in a central repository and `include` them in projects.
`include` requires the external YAML file to have the extensions `.yml` or `.yaml`,
otherwise the external file is not included.
You can't use [YAML anchors](#anchors) across different YAML files sourced by `include`.
You can only refer to anchors in the same file. To reuse configuration from different
YAML files, use [`!reference` tags](#reference-tags) or the [`extends` keyword](#extends).
`include` supports the following inclusion methods:
| Keyword | Method |
|:--------------------------------|:------------------------------------------------------------------|
| [`local`](#includelocal) | Include a file from the local project repository. |
| [`file`](#includefile) | Include a file from a different project repository. |
| [`remote`](#includeremote) | Include a file from a remote URL. Must be publicly accessible. |
| [`template`](#includetemplate) | Include templates that are provided by GitLab. |
When the pipeline starts, the `.gitlab-ci.yml` file configuration included by all methods is evaluated.
The configuration is a snapshot in time and persists in the database. GitLab does not reflect any changes to
the referenced `.gitlab-ci.yml` file configuration until the next pipeline starts.
The `include` files are:
- Deep merged with those in the `.gitlab-ci.yml` file.
- Always evaluated first and merged with the content of the `.gitlab-ci.yml` file,
regardless of the position of the `include` keyword.
NOTE:
Use merging to customize and override included CI/CD configurations with local
configurations. Local configurations in the `.gitlab-ci.yml` file override included configurations.
#### Variables with `include` **(FREE SELF)**
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/284883) in GitLab 13.8.
> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/294294) in GitLab 13.9.
You can [use some predefined variables in `include` sections](../variables/where_variables_can_be_used.md#gitlab-ciyml-file)
in your `.gitlab-ci.yml` file:
```yaml
include:
  project: '$CI_PROJECT_PATH'
  file: '.compliance-gitlab-ci.yml'
```
For an example of how you can include these predefined variables, and the variables' impact on CI/CD jobs,
see this [CI/CD variable demo](https://youtu.be/4XR8gw3Pkos).
#### `include:local`
Use `include:local` to include a file that is in the same repository as the `.gitlab-ci.yml` file.
Use a full path relative to the root directory (`/`).
If you use `include:local`, make sure that both the `.gitlab-ci.yml` file and the local file
are on the same branch.
You can't include local files through Git submodules paths.
All [nested includes](#nested-includes) are executed in the scope of the same project,
so it's possible to use local, project, remote, or template includes.
Example:
```yaml
include:
  - local: '/templates/.gitlab-ci-template.yml'
```
You can also use shorter syntax to define the path:
```yaml
include: '.gitlab-ci-production.yml'
```
Use local includes instead of symbolic links.
#### `include:file`
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/53903) in GitLab 11.7.
To include files from another private project on the same GitLab instance,
use `include:file`. You can use `include:file` in combination with `include:project` only.
Use a full path, relative to the root directory (`/`).
For example:
```yaml
include:
  - project: 'my-group/my-project'
    file: '/templates/.gitlab-ci-template.yml'
```
You can also specify a `ref`. If you do not specify a value, the ref defaults to the `HEAD` of the project:
```yaml
include:
  - project: 'my-group/my-project'
    ref: master
    file: '/templates/.gitlab-ci-template.yml'
  - project: 'my-group/my-project'
    ref: v1.0.0
    file: '/templates/.gitlab-ci-template.yml'
  - project: 'my-group/my-project'
    ref: 787123b47f14b552955ca2786bc9542ae66fee5b  # Git SHA
    file: '/templates/.gitlab-ci-template.yml'
```
All [nested includes](#nested-includes) are executed in the scope of the target project.
You can use local (relative to target project), project, remote, or template includes.
##### Multiple files from a project
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/26793) in GitLab 13.6.
> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/271560) in GitLab 13.8.
You can include multiple files from the same project:
```yaml
include:
  - project: 'my-group/my-project'
    ref: master
    file:
      - '/templates/.builds.yml'
      - '/templates/.tests.yml'
```
#### `include:remote`
Use `include:remote` with a full URL to include a file from a different location.
The remote file must be publicly accessible by an HTTP/HTTPS `GET` request, because
authentication in the remote URL is not supported. For example:
```yaml
include:
  - remote: 'https://gitlab.com/example-project/-/raw/master/.gitlab-ci.yml'
```
All [nested includes](#nested-includes) execute without context as a public user,
so you can only `include` public projects or templates.
#### `include:template`
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/53445) in GitLab 11.7.
Use `include:template` to include `.gitlab-ci.yml` templates that are
[shipped with GitLab](https://gitlab.com/gitlab-org/gitlab/tree/master/lib/gitlab/ci/templates).
For example:
```yaml
# File sourced from the GitLab template collection
include:
  - template: Auto-DevOps.gitlab-ci.yml
```
Multiple `include:template` files:
```yaml
include:
  - template: Android-Fastlane.gitlab-ci.yml
  - template: Auto-DevOps.gitlab-ci.yml
```
All [nested includes](#nested-includes) are executed only with the permission of the user,
so it's possible to use project, remote or template includes.
#### Nested includes
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/56836) in GitLab 11.9.
Use nested includes to compose a set of includes.
You can have up to 100 includes, but you can't have duplicate includes.
In [GitLab 12.4](https://gitlab.com/gitlab-org/gitlab/-/issues/28212) and later, the time limit
to resolve all files is 30 seconds.
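For example, a minimal sketch of composition, assuming a hypothetical local file `/templates/base.gitlab-ci.yml` exists in the project:

```yaml
# .gitlab-ci.yml
include:
  - local: '/templates/base.gitlab-ci.yml'
```

```yaml
# /templates/base.gitlab-ci.yml (a nested include)
include:
  - template: Auto-DevOps.gitlab-ci.yml
```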
#### Additional `includes` examples
View [additional `includes` examples](includes.md).
## Keyword details
The following topics explain how to use keywords to configure CI/CD pipelines.
### `image`
Use `image` to specify [a Docker image](../docker/using_docker_images.md#what-is-an-image) to use for the job.
For:
- Usage examples, see [Define `image` and `services` from `.gitlab-ci.yml`](../docker/using_docker_images.md#define-image-and-services-from-gitlab-ciyml).
- Detailed usage information, refer to [Docker integration](../docker/index.md) documentation.
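For example, a minimal sketch (the image names are only illustrative) of a pipeline-wide default image and a job-level override:

```yaml
default:
  image: ruby:2.5        # Used by jobs that don't set their own image.

lint-job:                # Hypothetical job name.
  image: alpine:latest   # Overrides the default for this job only.
  script: cat /etc/os-release
```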
#### `image:name`
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `image`](../docker/using_docker_images.md#available-settings-for-image).
#### `image:entrypoint`
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `image`](../docker/using_docker_images.md#available-settings-for-image).
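As a sketch of the extended syntax (the image name is illustrative), `name:` and `entrypoint:` are nested under `image:`:

```yaml
test-job:
  image:
    name: registry.example.com/my-group/my-image:latest  # Illustrative image.
    entrypoint: [""]   # Override the image's default entrypoint.
  script: echo "Run tests"
```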
#### `services`
Use `services` to specify a [service Docker image](../docker/using_docker_images.md#what-is-a-service), linked to a base image specified in [`image`](#image).
For:
- Usage examples, see [Define `image` and `services` from `.gitlab-ci.yml`](../docker/using_docker_images.md#define-image-and-services-from-gitlab-ciyml).
- Detailed usage information, refer to [Docker integration](../docker/index.md) documentation.
- Example services, see [GitLab CI/CD Services](../services/index.md).
##### `services:name`
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `services`](../docker/using_docker_images.md#available-settings-for-services).
##### `services:alias`
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `services`](../docker/using_docker_images.md#available-settings-for-services).
##### `services:entrypoint`
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `services`](../docker/using_docker_images.md#available-settings-for-services).
##### `services:command`
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `services`](../docker/using_docker_images.md#available-settings-for-services).
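As a sketch of the extended syntax (the service image, alias, and commands are illustrative):

```yaml
test-job:
  services:
    - name: postgres:11.7                   # Illustrative service image.
      alias: db                             # The job can reach the service at the hostname `db`.
      entrypoint: ["docker-entrypoint.sh"]  # Override the image's entrypoint.
      command: ["postgres"]                 # Override the image's command.
  script: echo "Connect to the db service"
```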
### `script`
Use `script` to specify a shell script for the runner to execute.
All jobs except [trigger jobs](#trigger) require a `script` keyword.
For example:
```yaml
job:
  script: "bundle exec rspec"
```
You can use [YAML anchors with `script`](#yaml-anchors-for-scripts).
The `script` keyword can also contain several commands in an array:
```yaml
job:
  script:
    - uname -a
    - bundle exec rspec
```
Sometimes, `script` commands must be wrapped in single or double quotes.
For example, commands that contain a colon (`:`) must be wrapped in single quotes (`'`).
The YAML parser needs to interpret the text as a string rather than
a "key: value" pair.
For example, this script uses a colon:
```yaml
job:
  script:
    - curl --request POST --header 'Content-Type: application/json' "https://gitlab/api/v4/projects"
```
To be considered valid YAML, you must wrap the entire command in single quotes. If
the command already uses single quotes, you should change them to double quotes (`"`)
if possible:
```yaml
job:
  script:
    - 'curl --request POST --header "Content-Type: application/json" "https://gitlab/api/v4/projects"'
```
You can verify the syntax is valid with the [CI Lint](../lint.md) tool.
Be careful when using these characters as well:
- `{`, `}`, `[`, `]`, `,`, `&`, `*`, `#`, `?`, `|`, `-`, `<`, `>`, `=`, `!`, `%`, `@`, `` ` ``.
If any of the script commands return an exit code other than zero, the job
fails and further commands are not executed. Store the exit code in a variable to
avoid this behavior:
```yaml
job:
  script:
    - false || exit_code=$?
    - if [ $exit_code -ne 0 ]; then echo "Previous command failed"; fi;
```
#### `before_script`
Use `before_script` to define an array of commands that should run before each job,
but after [artifacts](#artifacts) are restored.
Scripts you specify in `before_script` are concatenated with any scripts you specify
in the main [`script`](#script). The combined scripts execute together in a single shell.
You can overwrite a globally-defined `before_script` if you define it in a job:
```yaml
default:
  before_script:
    - echo "Execute this script in all jobs that don't already have a before_script section."

job1:
  script:
    - echo "This script executes after the global before_script."

job:
  before_script:
    - echo "Execute this script instead of the global before_script."
  script:
    - echo "This script executes after the job's `before_script`"
```
You can use [YAML anchors with `before_script`](#yaml-anchors-for-scripts).
#### `after_script`
Use `after_script` to define an array of commands that run after each job,
including failed jobs.
If a job times out or is cancelled, the `after_script` commands do not execute.
An [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/15603) exists to support
executing `after_script` commands for timed-out or cancelled jobs.
Scripts you specify in `after_script` execute in a new shell, separate from any
`before_script` or `script` scripts. As a result, they:
- Have a current working directory set back to the default.
- Have no access to changes done by scripts defined in `before_script` or `script`, including:
- Command aliases and variables exported in `script` scripts.
- Changes outside of the working tree (depending on the runner executor), like
software installed by a `before_script` or `script` script.
- Have a separate timeout, which is hard coded to 5 minutes. See the
[related issue](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/2716) for details.
- Don't affect the job's exit code. If the `script` section succeeds and the
`after_script` times out or fails, the job exits with code `0` (`Job Succeeded`).
```yaml
default:
  after_script:
    - echo "Execute this script in all jobs that don't already have an after_script section."

job1:
  script:
    - echo "This script executes first. When it completes, the global after_script executes."

job:
  script:
    - echo "This script executes first. When it completes, the job's `after_script` executes."
  after_script:
    - echo "Execute this script instead of the global after_script."
```
You can use [YAML anchors with `after_script`](#yaml-anchors-for-scripts).
#### Script syntax
You can use syntax in [`script`](README.md#script) sections to:
- [Split long commands](script.md#split-long-commands) into multiline commands.
- [Use color codes](script.md#add-color-codes-to-script-output) to make job logs easier to review.
- [Create custom collapsible sections](../jobs/index.md#custom-collapsible-sections)
to simplify job log output.
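For example, a minimal sketch of a multiline command written with the YAML literal block style:

```yaml
job:
  script:
    - |
      echo "This multiline command is passed to the shell"
      echo "as a single script block."
```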
### `stage`
Use `stage` to define which stage a job runs in. Jobs in the same
`stage` can execute in parallel (subject to [certain conditions](#use-your-own-runners)).
Jobs without a `stage` entry use the `test` stage by default. If you do not define
[`stages`](#stages) in the pipeline, you can use the 5 default stages, which execute in
this order:
- [`.pre`](#pre-and-post)
- `build`
- `test`
- `deploy`
- [`.post`](#pre-and-post)
For example:
```yaml
stages:
  - build
  - test
  - deploy

job 0:
  stage: .pre
  script: make something useful before build stage

job 1:
  stage: build
  script: make build dependencies

job 2:
  stage: build
  script: make build artifacts

job 3:
  stage: test
  script: make test

job 4:
  stage: deploy
  script: make deploy

job 5:
  stage: .post
  script: make something useful at the end of pipeline
```
#### Use your own runners
When you use your own runners, each runner runs only one job at a time by default.
Jobs can run in parallel if they run on different runners.
If you have only one runner, jobs can run in parallel if the runner's
[`concurrent` setting](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section)
is greater than `1`.
#### `.pre` and `.post`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/31441) in GitLab 12.4.
Use `.pre` and `.post` for jobs that need to run first or last in a pipeline.
- `.pre` is guaranteed to always be the first stage in a pipeline.
- `.post` is guaranteed to always be the last stage in a pipeline.
User-defined stages are executed after `.pre` and before `.post`.
You must have a job in at least one stage other than `.pre` or `.post`.
You can't change the order of `.pre` and `.post`, even if you define them out of order in the `.gitlab-ci.yml` file.
For example, the following configurations are equivalent:
```yaml
stages:
  - .pre
  - a
  - b
  - .post
```
```yaml
stages:
  - a
  - .pre
  - b
  - .post
```
```yaml
stages:
  - a
  - b
```
### `extends`
> Introduced in GitLab 11.3.
Use `extends` to reuse configuration sections. It's an alternative to [YAML anchors](#anchors)
and is a little more flexible and readable. You can use `extends` to reuse configuration
from [included configuration files](#use-extends-and-include-together).
In the following example, the `rspec` job uses the configuration from the `.tests` template job.
GitLab:
- Performs a reverse deep merge based on the keys.
- Merges the `.tests` content with the `rspec` job.
- Doesn't merge the values of the keys.
```yaml
.tests:
  script: rake test
  stage: test
  only:
    refs:
      - branches

rspec:
  extends: .tests
  script: rake rspec
  only:
    variables:
      - $RSPEC
```
The result is this `rspec` job:
```yaml
rspec:
  script: rake rspec
  stage: test
  only:
    refs:
      - branches
    variables:
      - $RSPEC
```
`.tests` in this example is a [hidden job](#hide-jobs), but it's
possible to extend configuration from regular jobs as well.
`extends` supports multi-level inheritance. You should avoid using more than three levels,
but you can use as many as eleven. The following example has two levels of inheritance:
```yaml
.tests:
  only:
    - pushes

.rspec:
  extends: .tests
  script: rake rspec

rspec 1:
  variables:
    RSPEC_SUITE: '1'
  extends: .rspec

rspec 2:
  variables:
    RSPEC_SUITE: '2'
  extends: .rspec

spinach:
  extends: .tests
  script: rake spinach
```
In GitLab 12.0 and later, it's also possible to use multiple parents for
`extends`.
#### Merge details
You can use `extends` to merge hashes but not arrays.
The algorithm used for merge is "closest scope wins," so
keys from the last member always override anything defined on other
levels. For example:
```yaml
.only-important:
  variables:
    URL: "http://my-url.internal"
    IMPORTANT_VAR: "the details"
  only:
    - master
    - stable
  tags:
    - production
  script:
    - echo "Hello world!"

.in-docker:
  variables:
    URL: "http://docker-url.internal"
  tags:
    - docker
  image: alpine

rspec:
  variables:
    GITLAB: "is-awesome"
  extends:
    - .only-important
    - .in-docker
  script:
    - rake rspec
```
The result is this `rspec` job:
```yaml
rspec:
  variables:
    URL: "http://docker-url.internal"
    IMPORTANT_VAR: "the details"
    GITLAB: "is-awesome"
  only:
    - master
    - stable
  tags:
    - docker
  image: alpine
  script:
    - rake rspec
```
In this example:
- The `variables` sections merge, but `URL: "http://docker-url.internal"` overwrites `URL: "http://my-url.internal"`.
- `tags: ['docker']` overwrites `tags: ['production']`.
- `script` does not merge, but `script: ['rake rspec']` overwrites
`script: ['echo "Hello world!"']`. You can use [YAML anchors](#anchors) to merge arrays.
#### Use `extends` and `include` together
To reuse configuration from different configuration files,
combine `extends` and [`include`](#include).
In the following example, a `script` is defined in the `included.yml` file.
Then, in the `.gitlab-ci.yml` file, `extends` refers
to the contents of the `script`:
- `included.yml`:

  ```yaml
  .template:
    script:
      - echo Hello!
  ```

- `.gitlab-ci.yml`:

  ```yaml
  include: included.yml

  useTemplate:
    image: alpine
    extends: .template
  ```
### `rules`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/27863) in GitLab 12.3.
Use `rules` to include or exclude jobs in pipelines.
Rules are evaluated *in order* until the first match. When a match is found, the job
is either included or excluded from the pipeline, depending on the configuration.
The job can also have [certain attributes](#rules-attributes)
added to it.
`rules` replaces [`only/except`](#onlyexcept-basic) and they can't be used together
in the same job. If you configure one job to use both keywords, the linter returns a
`key may not be used with rules` error.
#### Rules attributes
The job attributes you can use with `rules` are:
- [`when`](#when): If not defined, defaults to `when: on_success`.
- If used as `when: delayed`, `start_in` is also required.
- [`allow_failure`](#allow_failure): If not defined, defaults to `allow_failure: false`.
- [`variables`](#rulesvariables): If not defined, uses the [variables defined elsewhere](#variables).
If a rule evaluates to true, and `when` has any value except `never`, the job is included in the pipeline.
For example:
```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: delayed
      start_in: '3 hours'
      allow_failure: true
```
#### Rules clauses
Available rule clauses are:
| Clause | Description |
|----------------------------|------------------------------------------------------------------------------------------------------------------------------------|
| [`if`](#rulesif) | Add or exclude jobs from a pipeline by evaluating an `if` statement. Similar to [`only:variables`](#onlyvariablesexceptvariables). |
| [`changes`](#ruleschanges) | Add or exclude jobs from a pipeline based on what files are changed. Same as [`only:changes`](#onlychangesexceptchanges). |
| [`exists`](#rulesexists) | Add or exclude jobs from a pipeline based on the presence of specific files. |
Rules are evaluated in order until a match is found. If a match is found, the attributes
are checked to see if the job should be added to the pipeline. If no attributes are defined,
the defaults are:
- `when: on_success`
- `allow_failure: false`
The job is added to the pipeline:
- If a rule matches and has `when: on_success`, `when: delayed` or `when: always`.
- If no rules match, but the last clause is `when: on_success`, `when: delayed`
or `when: always` (with no rule).
The job is not added to the pipeline:
- If no rules match, and there is no standalone `when: on_success`, `when: delayed` or
`when: always`.
- If a rule matches, and has `when: never` as the attribute.
The following example uses `if` to strictly limit when jobs run:
```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual
      allow_failure: true
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```
- If the pipeline is for a merge request, the first rule matches, and the job
is added to the [merge request pipeline](../merge_request_pipelines/index.md)
with attributes of:
- `when: manual` (manual job)
- `allow_failure: true` (the pipeline continues running even if the manual job is not run)
- If the pipeline is **not** for a merge request, the first rule doesn't match, and the
second rule is evaluated.
- If the pipeline is a scheduled pipeline, the second rule matches, and the job
is added to the scheduled pipeline. No attributes were defined, so it is added
with:
- `when: on_success` (default)
- `allow_failure: false` (default)
- In **all other cases**, no rules match, so the job is **not** added to any other pipeline.
Alternatively, you can define a set of rules to exclude jobs in a few cases, but
run them in all other cases:
```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never
    - when: on_success
```
- If the pipeline is for a merge request, the job is **not** added to the pipeline.
- If the pipeline is a scheduled pipeline, the job is **not** added to the pipeline.
- In **all other cases**, the job is added to the pipeline, with `when: on_success`.
WARNING:
If you use a `when:` clause as the final rule (not including `when: never`), two
simultaneous pipelines may start. Both push pipelines and merge request pipelines can
be triggered by the same event (a push to the source branch for an open merge request).
See how to [prevent duplicate pipelines](#avoid-duplicate-pipelines)
for more details.
#### Avoid duplicate pipelines
If a job uses `rules`, a single action, like pushing a commit to a branch, can trigger
multiple pipelines. You don't have to explicitly configure rules for multiple types
of pipeline to trigger them accidentally.
Some configurations that have the potential to cause duplicate pipelines cause a
[pipeline warning](../troubleshooting.md#pipeline-warnings) to be displayed.
[Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/219431) in GitLab 13.3.
For example:
```yaml
job:
  script: echo "This job creates double pipelines!"
  rules:
    - if: '$CUSTOM_VARIABLE == "false"'
      when: never
    - when: always
```
This job does not run when `$CUSTOM_VARIABLE` is false, but it *does* run in **all**
other pipelines, including **both** push (branch) and merge request pipelines. With
this configuration, every push to an open merge request's source branch
causes duplicated pipelines.
To avoid duplicate pipelines, you can:
- Use [`workflow`](#workflow) to specify which types of pipelines
can run.
- Rewrite the rules to run the job only in very specific cases,
and avoid a final `when:` rule:
```yaml
job:
  script: echo "This job does NOT create double pipelines!"
  rules:
    - if: '$CUSTOM_VARIABLE == "true" && $CI_PIPELINE_SOURCE == "merge_request_event"'
```
You can also avoid duplicate pipelines by changing the job rules to avoid either push (branch)
pipelines or merge request pipelines. However, if you use a `- when: always` rule without
`workflow: rules`, GitLab still displays a [pipeline warning](../troubleshooting.md#pipeline-warnings).
For example, the following does not trigger double pipelines, but is not recommended
without `workflow: rules`:
```yaml
job:
  script: echo "This job does NOT create double pipelines!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      when: never
    - when: always
```
You should not include both push and merge request pipelines in the same job without
[`workflow:rules` that prevent duplicate pipelines](#switch-between-branch-pipelines-and-merge-request-pipelines):
```yaml
job:
  script: echo "This job creates double pipelines!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```
Also, do not mix `only/except` jobs with `rules` jobs in the same pipeline.
It may not cause YAML errors, but the different default behaviors of `only/except`
and `rules` can cause issues that are difficult to troubleshoot:
```yaml
job-with-no-rules:
  script: echo "This job runs in branch pipelines."

job-with-rules:
  script: echo "This job runs in merge request pipelines."
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```
For every change pushed to the branch, duplicate pipelines run. One
branch pipeline runs a single job (`job-with-no-rules`), and one merge request pipeline
runs the other job (`job-with-rules`). Jobs with no rules default
to [`except: merge_requests`](#onlyexcept-basic), so `job-with-no-rules`
runs in all cases except merge requests.
#### `rules:if`
Use `rules:if` clauses to specify when to add a job to a pipeline:
- If an `if` statement is true, add the job to the pipeline.
- If an `if` statement is true, but it's combined with `when: never`, do not add the job to the pipeline.
- If no `if` statements are true, do not add the job to the pipeline.
`rules:if` differs slightly from `only:variables` by accepting only a single
expression string per rule, rather than an array of them. Any set of expressions to be
evaluated can be [conjoined into a single expression](../variables/README.md#conjunction--disjunction)
by using `&&` or `||`, and the [variable matching operators (`==`, `!=`, `=~` and `!~`)](../variables/README.md#syntax-of-cicd-variable-expressions).
Unlike variables in [`script`](../variables/README.md#syntax-of-cicd-variables-in-job-scripts)
sections, variables in rules expressions are always formatted as `$VARIABLE`.
`if:` clauses are evaluated based on the values of [predefined CI/CD variables](../variables/predefined_variables.md)
or [custom CI/CD variables](../variables/README.md#custom-cicd-variables).
For example:
```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/ && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
      when: always
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/'
      when: manual
      allow_failure: true
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME'  # Checking for the presence of a variable is possible
```
Some details regarding the logic that determines the `when` for the job:
- If none of the provided rules match, the job is set to `when: never` and is
not included in the pipeline.
- A rule without any conditional clause, such as a `when` or `allow_failure`
rule without `if` or `changes`, always matches, and is always used if reached.
- If a rule matches and has no `when` defined, the rule uses the `when`
defined for the job, which defaults to `on_success` if not defined.
- You can define `when` once per rule, or once at the job-level, which applies to
all rules. You can't mix `when` at the job-level with `when` in rules.
##### Common `if` clauses for `rules`
For behavior similar to the [`only`/`except` keywords](#onlyexcept-basic), you can
check the value of the `$CI_PIPELINE_SOURCE` variable:
| Value | Description |
|-------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `api` | For pipelines triggered by the [pipelines API](../../api/pipelines.md#create-a-new-pipeline). |
| `chat` | For pipelines created by using a [GitLab ChatOps](../chatops/index.md) command. |
| `external` | When you use CI services other than GitLab. |
| `external_pull_request_event` | When an external pull request on GitHub is created or updated. See [Pipelines for external pull requests](../ci_cd_for_external_repos/index.md#pipelines-for-external-pull-requests). |
| `merge_request_event` | For pipelines created when a merge request is created or updated. Required to enable [merge request pipelines](../merge_request_pipelines/index.md), [merged results pipelines](../merge_request_pipelines/pipelines_for_merged_results/index.md), and [merge trains](../merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md). |
| `parent_pipeline` | For pipelines triggered by a [parent/child pipeline](../parent_child_pipelines.md) with `rules`. Use this pipeline source in the child pipeline configuration so that it can be triggered by the parent pipeline. |
| `pipeline` | For [multi-project pipelines](../multi_project_pipelines.md) created by [using the API with `CI_JOB_TOKEN`](../multi_project_pipelines.md#triggering-multi-project-pipelines-through-api), or the [`trigger`](#trigger) keyword. |
| `push` | For pipelines triggered by a `git push` event, including for branches and tags. |
| `schedule` | For [scheduled pipelines](../pipelines/schedules.md). |
| `trigger` | For pipelines created by using a [trigger token](../triggers/README.md#trigger-token). |
| `web` | For pipelines created by using **Run pipeline** button in the GitLab UI, from the project's **CI/CD > Pipelines** section. |
| `webide` | For pipelines created by using the [WebIDE](../../user/project/web_ide/index.md). |
The following example runs the job as a manual job in scheduled pipelines or in push
pipelines (to branches or tags), with `when: on_success` (default). It does not
add the job to any other pipeline type.
```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: manual
      allow_failure: true
    - if: '$CI_PIPELINE_SOURCE == "push"'
```
The following example runs the job as a `when: on_success` job in [merge request pipelines](../merge_request_pipelines/index.md)
and scheduled pipelines. It does not run in any other pipeline type.
```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```
Other commonly used variables for `if` clauses:
- `if: $CI_COMMIT_TAG`: If changes are pushed for a tag.
- `if: $CI_COMMIT_BRANCH`: If changes are pushed to any branch.
- `if: '$CI_COMMIT_BRANCH == "main"'`: If changes are pushed to `main`.
- `if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'`: If changes are pushed to the default
branch. Use when you want to have the same configuration in multiple
projects with different default branches.
- `if: '$CI_COMMIT_BRANCH =~ /regex-expression/'`: If the commit branch matches a regular expression.
- `if: '$CUSTOM_VARIABLE !~ /regex-expression/'`: If the [custom variable](../variables/README.md#custom-cicd-variables)
`CUSTOM_VARIABLE` does **not** match a regular expression.
- `if: '$CUSTOM_VARIABLE == "value1"'`: If the custom variable `CUSTOM_VARIABLE` is
exactly `value1`.
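For example, a minimal sketch that combines two of these clauses, so the job runs only for default-branch pushes or tag pushes:

```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
    - if: '$CI_COMMIT_TAG'
```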
#### `rules:changes`
Use `rules:changes` to specify when to add a job to a pipeline by checking for
changes to specific files.
`rules: changes` works the same way as [`only: changes` and `except: changes`](#onlychangesexceptchanges).
It accepts an array of paths. You should use `rules: changes` only with branch
pipelines or merge request pipelines. For example, it's common to use `rules: changes`
with merge request pipelines:
```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      changes:
        - Dockerfile
      when: manual
      allow_failure: true
```
In this example:
- If the pipeline is a merge request pipeline, check `Dockerfile` for changes.
- If `Dockerfile` has changed, add the job to the pipeline as a manual job, and the pipeline
continues running even if the job is not triggered (`allow_failure: true`).
- If `Dockerfile` has not changed, do not add job to any pipeline (same as `when: never`).
To use `rules: changes` with branch pipelines instead of merge request pipelines,
change the `if:` clause in the previous example to:
```yaml
rules:
  - if: $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH
```
To implement a rule similar to [`except:changes`](#onlychangesexceptchanges),
use `when: never`.
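For example, a minimal sketch (the path is illustrative) that skips the job when the listed files change and runs it in all other cases:

```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - changes:
        - docs/**/*    # Illustrative path.
      when: never
    - when: on_success
```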
WARNING:
You can use `rules: changes` with other pipeline types, but it is not recommended
because `rules: changes` always evaluates to true when there is no Git `push` event.
Tag pipelines, scheduled pipelines, and so on do **not** have a Git `push` event
associated with them. A `rules: changes` job is **always** added to those pipelines
if there is no `if:` statement that limits the job to branch or merge request pipelines.
##### Variables in `rules:changes`
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/34272) in GitLab 13.6.
> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/267192) in GitLab 13.7.
You can use CI/CD variables in `rules:changes` expressions to determine when
to add jobs to a pipeline:
```yaml
docker build:
  variables:
    DOCKERFILES_DIR: 'path/to/files/'
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - changes:
        - $DOCKERFILES_DIR/*
```
You can use the `$` character for both variables and paths. For example, if the
`$DOCKERFILES_DIR` variable exists, its value is used. If it does not exist, the
`$` is interpreted as being part of a path.
#### `rules:exists`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/24021) in GitLab 12.4.
Use `exists` to run a job when certain files exist in the repository.
You can use an array of paths.
In the following example, `job` runs if a `Dockerfile` exists anywhere in the repository:
```yaml
job:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - exists:
        - Dockerfile
```
Paths are relative to the project directory (`$CI_PROJECT_DIR`) and can't directly link outside it.
You can use glob patterns to match multiple files in any directory in the repository:
```yaml
job:
  script: bundle exec rspec
  rules:
    - exists:
        - spec/**.rb
```
Glob patterns are interpreted with Ruby [`File.fnmatch`](https://docs.ruby-lang.org/en/2.7.0/File.html#method-c-fnmatch)
with the flags `File::FNM_PATHNAME | File::FNM_DOTMATCH | File::FNM_EXTGLOB`.
For performance reasons, GitLab matches a maximum of 10,000 `exists` patterns. After the 10,000th check, rules with patterned globs always match.
#### `rules:allow_failure`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/30235) in GitLab 12.8.
You can use [`allow_failure: true`](#allow_failure) in `rules:` to allow a job to fail, or a manual job to
wait for action, without stopping the pipeline itself. All jobs that use `rules:` default to `allow_failure: false`
if you do not define `allow_failure:`.
The rule-level `rules:allow_failure` option overrides the job-level
[`allow_failure`](#allow_failure) option, and is only applied when
the particular rule triggers the job.
```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
      when: manual
      allow_failure: true
```
In this example, if the first rule matches, then the job has `when: manual` and `allow_failure: true`.
#### `rules:variables`
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/209864) in GitLab 13.7.
> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/289803) in GitLab 13.10.
Use [`variables`](#variables) in `rules:` to define variables for specific conditions.
For example:
```yaml
job:
  variables:
    DEPLOY_VARIABLE: "default-deploy"
  rules:
    - if: $CI_COMMIT_REF_NAME =~ /master/
      variables:                              # Override DEPLOY_VARIABLE defined
        DEPLOY_VARIABLE: "deploy-production"  # at the job level.
    - if: $CI_COMMIT_REF_NAME =~ /feature/
      variables:
        IS_A_FEATURE: "true"                  # Define a new variable.
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"
```
#### Complex rule clauses
To conjoin `if`, `changes`, and `exists` clauses with an `AND`, use them in the
same rule.
In the following example:
- If the `Dockerfile` file or any file in `/docker/scripts` has changed, and `$VAR` == "string value",
then the job runs manually.
- Otherwise, the job isn't included in the pipeline.
```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: '$VAR == "string value"'
      changes:  # Include the job and set to when:manual if any of the follow paths match a modified file.
        - Dockerfile
        - docker/scripts/*
      when: manual
      # - "when: never" would be redundant here. It is implied any time rules are listed.
```
Keywords such as `branches` or `refs` that are available for
`only`/`except` are not available in `rules`. They are being individually
considered for their usage and behavior in this context. Future keyword improvements
are being discussed in our [epic for improving `rules`](https://gitlab.com/groups/gitlab-org/-/epics/2783),
where anyone can add suggestions or requests.
You can use [parentheses](../variables/README.md#parentheses) with `&&` and `||` to build more complicated variable expressions.
[Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/230938) in GitLab 13.3:
```yaml
job1:
  script:
    - echo This rule uses parentheses.
  rules:
    - if: ($CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "develop") && $MY_VARIABLE
```
WARNING:
[Before GitLab 13.3](https://gitlab.com/gitlab-org/gitlab/-/issues/230938),
rules that use both `||` and `&&` may evaluate with an unexpected order of operations.
### `only`/`except` (basic)
NOTE:
`only` and `except` are not being actively developed. To define when
to add jobs to pipelines, use [`rules`](#rules).
`only` and `except` are two keywords that determine when to add jobs to pipelines:
1. `only` defines the names of branches and tags the job runs for.
1. `except` defines the names of branches and tags the job does
**not** run for.
A few rules apply to the usage of job policy:
- `only` and `except` are inclusive. If both `only` and `except` are defined
in a job specification, the ref is filtered by `only` and `except`.
- `only` and `except` can use regular expressions ([supported regexp syntax](#supported-onlyexcept-regexp-syntax)).
- `only` and `except` can specify a repository path to filter jobs for forks.
In addition, `only` and `except` can use these keywords:
| **Value** | **Description** |
|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `api` | For pipelines triggered by the [pipelines API](../../api/pipelines.md#create-a-new-pipeline). |
| `branches` | When the Git reference for a pipeline is a branch. |
| `chat` | For pipelines created by using a [GitLab ChatOps](../chatops/index.md) command. |
| `external` | When you use CI services other than GitLab. |
| `external_pull_requests` | When an external pull request on GitHub is created or updated (See [Pipelines for external pull requests](../ci_cd_for_external_repos/index.md#pipelines-for-external-pull-requests)). |
| `merge_requests` | For pipelines created when a merge request is created or updated. Enables [merge request pipelines](../merge_request_pipelines/index.md), [merged results pipelines](../merge_request_pipelines/pipelines_for_merged_results/index.md), and [merge trains](../merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md). |
| `pipelines` | For [multi-project pipelines](../multi_project_pipelines.md) created by [using the API with `CI_JOB_TOKEN`](../multi_project_pipelines.md#triggering-multi-project-pipelines-through-api), or the [`trigger`](#trigger) keyword. |
| `pushes` | For pipelines triggered by a `git push` event, including for branches and tags. |
| `schedules` | For [scheduled pipelines](../pipelines/schedules.md). |
| `tags` | When the Git reference for a pipeline is a tag. |
| `triggers` | For pipelines created by using a [trigger token](../triggers/README.md#trigger-token). |
| `web` | For pipelines created by using **Run pipeline** button in the GitLab UI, from the project's **CI/CD > Pipelines** section. |
Scheduled pipelines run on specific branches, so jobs configured with `only: branches`
run on scheduled pipelines too. Add `except: schedules` to prevent jobs with `only: branches`
from running on scheduled pipelines.
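For example, a minimal sketch of a branch job that is excluded from scheduled pipelines:

```yaml
job:
  script: echo "This job is not run by the scheduler."
  only:
    - branches
  except:
    - schedules
```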
In the following example, `job` runs only for refs that start with `issue-`.
All branches are skipped:
```yaml
job:
  # use regexp
  only:
    - /^issue-.*$/
  # use special keyword
  except:
    - branches
```
Pattern matching is case-sensitive by default. Use the `i` flag modifier, like
`/pattern/i`, to make a pattern case-insensitive:
```yaml
job:
  # use regexp
  only:
    - /^issue-.*$/i
  # use special keyword
  except:
    - branches
```
In the following example, `job` runs only for:
- Git tags
- [Triggers](../triggers/README.md#trigger-token)
- [Scheduled pipelines](../pipelines/schedules.md)
```yaml
job:
  # use special keywords
  only:
    - tags
    - triggers
    - schedules
```
To execute jobs only for the parent repository and not forks:
```yaml
job:
  only:
    - branches@gitlab-org/gitlab
  except:
    - master@gitlab-org/gitlab
    - /^release/.*$/@gitlab-org/gitlab
```
This example runs `job` for all branches on `gitlab-org/gitlab`,
except `master` and branches that start with `release/`.
If a job does not have an `only` rule, `only: ['branches', 'tags']` is set by
default. If the job does not have an `except` rule, it's empty.
For example, `job1` and `job2` are essentially the same:
```yaml
job1:
  script: echo 'test'

job2:
  script: echo 'test'
  only: ['branches', 'tags']
```
#### Regular expressions
The `@` symbol denotes the beginning of a ref's repository path.
To match a ref name that contains the `@` character in a regular expression,
you must use the hex character code match `\x40`.
Only the tag or branch name can be matched by a regular expression.
The repository path, if given, is always matched literally.
To match the tag or branch name,
the entire ref name part of the pattern must be a regular expression surrounded by `/`.
For example, you can't use `issue-/.*/` to match all tag names or branch names
that begin with `issue-`, but you can use `/issue-.*/`.
Regular expression flags must be appended after the closing `/`.
NOTE:
Use anchors `^` and `$` to avoid the regular expression
matching only a substring of the tag name or branch name.
For example, `/^issue-.*$/` is equivalent to `/^issue-/`,
while just `/issue/` would also match a branch called `severe-issues`.
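For example, a sketch that combines these points (the group and project names are illustrative):

```yaml
job:
  only:
    # Anchored pattern plus a literal repository path.
    - /^issue-.*$/@my-group/my-project
    # Use \x40 to match a literal "@" in a ref name.
    - /^release\x40candidate$/
```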
#### Supported `only`/`except` regexp syntax
In GitLab 11.9.4, GitLab began internally converting the regexp used
in `only` and `except` keywords to [RE2](https://github.com/google/re2/wiki/Syntax).
[RE2](https://github.com/google/re2/wiki/Syntax) limits the set of available features
due to computational complexity, and some features, like negative lookaheads, became unavailable.
Only a subset of features provided by [Ruby Regexp](https://ruby-doc.org/core/Regexp.html)
are now supported.
From GitLab 11.9.7 to GitLab 12.0, GitLab provided a feature flag to
let you use unsafe regexp syntax. After migrating to safe syntax, you should disable
this feature flag again:
```ruby
Feature.enable(:allow_unsafe_ruby_regexp)
```
### `only`/`except` (advanced)
GitLab supports multiple strategies, and it's possible to use an
array or a hash configuration scheme.
Four keys are available:
- `refs`
- `variables`
- `changes`
- `kubernetes`
If you use multiple keys under `only` or `except`, the keys are evaluated as a
single conjoined expression. That is:
- `only:` includes the job if **all** of the keys have at least one condition that matches.
- `except:` excludes the job if **any** of the keys have at least one condition that matches.
With `only`, individual keys are logically joined by an `AND`. A job is added to
the pipeline if the following is true:
- `(any listed refs are true) AND (any listed variables are true) AND (any listed changes are true) AND (any chosen Kubernetes status matches)`
In the following example, the `test` job is `only` created when **all** of the following are true:
- The pipeline is [scheduled](../pipelines/schedules.md) **or** runs for `master`.
- The `variables` keyword matches.
- The `kubernetes` service is active on the project.
```yaml
test:
script: npm run test
only:
refs:
- master
- schedules
variables:
- $CI_COMMIT_MESSAGE =~ /run-end-to-end-tests/
kubernetes: active
```
With `except`, individual keys are logically joined by an `OR`. A job is **not**
added if the following is true:
- `(any listed refs are true) OR (any listed variables are true) OR (any listed changes are true) OR (a chosen Kubernetes status matches)`
In the following example, the `test` job is **not** created when **any** of the following are true:
- The pipeline runs for the `master` branch.
- There are changes to the `README.md` file in the root directory of the repository.
```yaml
test:
script: npm run test
except:
refs:
- master
changes:
- "README.md"
```
#### `only:refs`/`except:refs`
> `refs` policy introduced in GitLab 10.0.
The `refs` strategy can take the same values as the
[simplified only/except configuration](#onlyexcept-basic).
In the following example, the `deploy` job is created only when the
pipeline is [scheduled](../pipelines/schedules.md) or runs for the `master` branch:
```yaml
deploy:
only:
refs:
- master
- schedules
```
#### `only:kubernetes`/`except:kubernetes`
> `kubernetes` policy introduced in GitLab 10.0.
The `kubernetes` strategy accepts only the `active` keyword.
In the following example, the `deploy` job is created only when the
Kubernetes service is active in the project:
```yaml
deploy:
only:
kubernetes: active
```
#### `only:variables`/`except:variables`
> `variables` policy introduced in GitLab 10.7.
The `variables` keyword defines variable expressions.
These expressions determine whether or not a job should be created.
Examples of variable expressions:
```yaml
deploy:
script: cap staging deploy
only:
refs:
- branches
variables:
- $RELEASE == "staging"
- $STAGING
```
Another use case is excluding jobs depending on a commit message:
```yaml
end-to-end:
script: rake test:end-to-end
except:
variables:
- $CI_COMMIT_MESSAGE =~ /skip-end-to-end-tests/
```
You can use [parentheses](../variables/README.md#parentheses) with `&&` and `||` to build more complicated variable expressions
([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/230938) in GitLab 13.3):
```yaml
job1:
script:
- echo This rule uses parentheses.
only:
variables:
- ($CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "develop") && $MY_VARIABLE
```
#### `only:changes`/`except:changes`
> `changes` policy [introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/19232) in GitLab 11.4.
Use the `changes` keyword with `only` to run a job, or with `except` to skip a job,
when a Git push event modifies a file.
Use `only:changes` with pipelines triggered by the following refs only:
- `branches`
- `external_pull_requests`
- `merge_requests` (see additional details about [using `only:changes` with pipelines for merge requests](#use-onlychanges-with-pipelines-for-merge-requests))
WARNING:
In pipelines with [sources other than the three above](../variables/predefined_variables.md),
`changes` can't determine if a given file is new or old, and always returns `true`.
You can configure jobs to use `only: changes` with other `only: refs` keywords. However,
those jobs ignore the changes and always run.
In the following example, when you push commits to an existing branch, the `docker build` job
runs only if any of these files change:
- The `Dockerfile` file.
- Files in the `docker/scripts/` directory.
- Files and subdirectories in the `dockerfiles` directory.
- Files with `rb`, `py`, `sh` extensions in the `more_scripts` directory.
```yaml
docker build:
script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
only:
refs:
- branches
changes:
- Dockerfile
- docker/scripts/*
- dockerfiles/**/*
- more_scripts/*.{rb,py,sh}
```
WARNING:
If you use `only:changes` with [only allow merge requests to be merged if the pipeline succeeds](../../user/project/merge_requests/merge_when_pipeline_succeeds.md#only-allow-merge-requests-to-be-merged-if-the-pipeline-succeeds),
you should [also use `only:merge_requests`](#use-onlychanges-with-pipelines-for-merge-requests). Otherwise it may not work as expected.
You can also use glob patterns to match multiple files in either the root directory
of the repository, or in _any_ directory in the repository. However, they must be wrapped
in double quotes or GitLab can't parse them:
```yaml
test:
script: npm run test
only:
refs:
- branches
changes:
- "*.json"
- "**/*.sql"
```
You can skip a job if a change is detected in any file with a
`.md` extension in the root directory of the repository:
```yaml
build:
script: npm run build
except:
changes:
- "*.md"
```
If you change multiple files, but only one file ends in `.md`,
the `build` job is still skipped. The job does not run for any of the files.
Read more about how to use this feature with:
- [New branches or tags *without* pipelines for merge requests](#use-onlychanges-without-pipelines-for-merge-requests).
- [Scheduled pipelines](#use-onlychanges-with-scheduled-pipelines).
##### Use `only:changes` with pipelines for merge requests
With [pipelines for merge requests](../merge_request_pipelines/index.md),
it's possible to define a job to be created based on files modified
in a merge request.
Use this keyword with `only: [merge_requests]` so GitLab can find the correct base
SHA of the source branch. File differences are correctly calculated from any further
commits, and all changes in the merge requests are properly tested in pipelines.
For example:
```yaml
docker build service one:
script: docker build -t my-service-one-image:$CI_COMMIT_REF_SLUG .
only:
refs:
- merge_requests
changes:
- Dockerfile
- service-one/**/*
```
In this scenario, if a merge request changes
files in the `service-one` directory or the `Dockerfile`, GitLab creates
the `docker build service one` job.
For example:
```yaml
docker build service one:
script: docker build -t my-service-one-image:$CI_COMMIT_REF_SLUG .
only:
changes:
- Dockerfile
- service-one/**/*
```
In this example, the pipeline might fail because of changes to a file in `service-one/**/*`.
A later commit that doesn't have changes in `service-one/**/*`
but does have changes to the `Dockerfile` can pass. The job
only tests the changes to the `Dockerfile`.
GitLab checks the **most recent pipeline** that **passed**. If the merge request is mergeable,
it doesn't matter that an earlier pipeline failed because of a change that has not been corrected.
When you use this configuration, ensure that the most recent pipeline
properly corrects any failures from previous pipelines.
##### Use `only:changes` without pipelines for merge requests
Without [pipelines for merge requests](../merge_request_pipelines/index.md), pipelines
run on branches or tags that don't have an explicit association with a merge request.
In this case, a previous SHA is used to calculate the diff, which is equivalent to `git diff HEAD~`.
This can result in some unexpected behavior, including:
- When pushing a new branch or a new tag to GitLab, the policy always evaluates to true.
- When pushing a new commit, the changed files are calculated by using the previous commit
as the base SHA.
##### Use `only:changes` with scheduled pipelines
`only:changes` always evaluates as true in [Scheduled pipelines](../pipelines/schedules.md).
All files are considered to have changed when a scheduled pipeline runs.
### `needs`
> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/47063) in GitLab 12.2.
> - In GitLab 12.3, the maximum number of jobs in the `needs` array was raised from five to 50.
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/30631) in GitLab 12.8, `needs: []` lets jobs start immediately.
Use `needs:` to execute jobs out-of-order. Relationships between jobs
that use `needs` can be visualized as a [directed acyclic graph](../directed_acyclic_graph/index.md).
You can ignore stage ordering and run some jobs without waiting for others to complete.
Jobs in multiple stages can run concurrently.
The following example creates four paths of execution:
- Linter: the `lint` job runs immediately without waiting for the `build` stage
to complete because it has no needs (`needs: []`).
- Linux path: the `linux:rspec` and `linux:rubocop` jobs run as soon as the `linux:build`
  job finishes, without waiting for `mac:build` to finish.
- macOS path: the `mac:rspec` and `mac:rubocop` jobs run as soon as the `mac:build`
  job finishes, without waiting for `linux:build` to finish.
- The `production` job runs as soon as all previous jobs finish; in this case:
`linux:build`, `linux:rspec`, `linux:rubocop`, `mac:build`, `mac:rspec`, `mac:rubocop`.
```yaml
linux:build:
stage: build
mac:build:
stage: build
lint:
stage: test
needs: []
linux:rspec:
stage: test
needs: ["linux:build"]
linux:rubocop:
stage: test
needs: ["linux:build"]
mac:rspec:
stage: test
needs: ["mac:build"]
mac:rubocop:
stage: test
needs: ["mac:build"]
production:
stage: deploy
```
#### Requirements and limitations
- In GitLab 13.9 and older, if `needs:` refers to a job that might not be added to
a pipeline because of `only`, `except`, or `rules`, the pipeline might fail to create.
- The maximum number of jobs that a single job can need in the `needs:` array is limited:
- For GitLab.com, the limit is 50. For more information, see our
[infrastructure issue](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/7541).
  - For self-managed instances, the default limit is 50. This limit [can be changed](#changing-the-needs-job-limit).
- If `needs:` refers to a job that uses the [`parallel`](#parallel) keyword,
it depends on all jobs created in parallel, not just one job. It also downloads
artifacts from all the parallel jobs by default. If the artifacts have the same
name, they overwrite each other and only the last one downloaded is saved.
- `needs:` is similar to `dependencies:` in that it must use jobs from prior stages,
meaning it's impossible to create circular dependencies. Depending on jobs in the
current stage is not possible either, but support [is planned](https://gitlab.com/gitlab-org/gitlab/-/issues/30632).
- Stages must be explicitly defined for all jobs
that have the keyword `needs:` or are referred to by one.
##### Changing the `needs:` job limit **(FREE SELF)**
The maximum number of jobs that can be defined in `needs:` defaults to 50.
A GitLab administrator with [access to the GitLab Rails console](../../administration/feature_flags.md)
can choose a custom limit. For example, to set the limit to 100:
```ruby
Plan.default.actual_limits.update!(ci_needs_size_limit: 100)
```
To disable directed acyclic graphs (DAG), set the limit to `0`.
#### Artifact downloads with `needs`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/14311) in GitLab v12.6.
Use `artifacts: true` (default) or `artifacts: false` to control when artifacts are
downloaded in jobs that use `needs`.
In the following example, the `rspec` job downloads the `build_job` artifacts, but the
`rubocop` job does not:
```yaml
build_job:
stage: build
artifacts:
paths:
- binaries/
rspec:
stage: test
needs:
- job: build_job
artifacts: true
rubocop:
stage: test
needs:
- job: build_job
artifacts: false
```
In the following example, the `rspec` job downloads the artifacts from all three `build_jobs`.
`artifacts` is:
- Set to true for `build_job_1`.
- Defaults to true for both `build_job_2` and `build_job_3`.
```yaml
rspec:
needs:
- job: build_job_1
artifacts: true
- job: build_job_2
- build_job_3
```
In GitLab 12.6 and later, you can't combine the [`dependencies`](#dependencies) keyword
with `needs`.
#### Cross project artifact downloads with `needs` **(PREMIUM)**
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/14311) in GitLab v12.7.
Use `needs` to download artifacts from up to five jobs in pipelines:
- [On other refs in the same project](#artifact-downloads-between-pipelines-in-the-same-project).
- In different projects, groups and namespaces.
```yaml
build_job:
stage: build
script:
- ls -lhR
needs:
- project: namespace/group/project-name
job: build-1
ref: master
artifacts: true
```
`build_job` downloads the artifacts from the latest successful `build-1` job
on the `master` branch in the `group/project-name` project. If the project is in the
same group or namespace, you can omit them from the `project:` keyword. For example,
`project: group/project-name` or `project: project-name`.
The user running the pipeline must have at least `reporter` access to the group or project, or the group/project must have public visibility.
##### Artifact downloads between pipelines in the same project
Use `needs` to download artifacts from different pipelines in the current project.
Set the `project` keyword as the current project's name, and specify a ref.
In the following example, `build_job` downloads the artifacts for the latest successful
`build-1` job with the `other-ref` ref:
```yaml
build_job:
stage: build
script:
- ls -lhR
needs:
- project: group/same-project-name
job: build-1
ref: other-ref
artifacts: true
```
CI/CD variable support for `project:`, `job:`, and `ref` was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/202093)
in GitLab 13.3. [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/235761) in GitLab 13.4.
For example:
```yaml
build_job:
stage: build
script:
- ls -lhR
needs:
- project: $CI_PROJECT_PATH
job: $DEPENDENCY_JOB_NAME
ref: $ARTIFACTS_DOWNLOAD_REF
artifacts: true
```
You can't download artifacts from jobs that run in [`parallel:`](#parallel).
To download artifacts between [parent-child pipelines](../parent_child_pipelines.md),
use [`needs:pipeline`](#artifact-downloads-to-child-pipelines).
You should not download artifacts from the same ref as a running pipeline. Concurrent
pipelines running on the same ref could override the artifacts.
##### Artifact downloads to child pipelines
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/255983) in GitLab v13.7.
A [child pipeline](../parent_child_pipelines.md) can download artifacts from a job in
its parent pipeline or another child pipeline in the same parent-child pipeline hierarchy.
For example, with the following parent pipeline that has a job that creates some artifacts:
```yaml
create-artifact:
stage: build
script: echo 'sample artifact' > artifact.txt
artifacts:
paths: [artifact.txt]
child-pipeline:
stage: test
trigger:
include: child.yml
strategy: depend
variables:
PARENT_PIPELINE_ID: $CI_PIPELINE_ID
```
A job in the child pipeline can download artifacts from the `create-artifact` job in
the parent pipeline:
```yaml
use-artifact:
script: cat artifact.txt
needs:
- pipeline: $PARENT_PIPELINE_ID
job: create-artifact
```
The `pipeline` attribute accepts a pipeline ID and it must be a pipeline present
in the same parent-child pipeline hierarchy of the given pipeline.
The `pipeline` attribute does not accept the current pipeline ID (`$CI_PIPELINE_ID`).
To download artifacts from a job in the current pipeline, use the basic form of [`needs`](#artifact-downloads-with-needs).
#### Optional `needs`
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/30680) in GitLab 13.10.
> - It's [deployed behind a feature flag](../../user/feature_flags.md), disabled by default.
> - It's disabled on GitLab.com.
> - It's not recommended for production use.
> - To use it in GitLab self-managed instances, ask a GitLab administrator to [enable it](#enable-or-disable-optional-needs). **(FREE SELF)**
WARNING:
This feature might not be available to you. Check the **version history** note above for details.
To need a job that sometimes does not exist in the pipeline, add `optional: true`
to the `needs` configuration. If not defined, `optional: false` is the default.
Jobs that use [`rules`](#rules), [`only`, or `except`](#onlyexcept-basic), might
not always exist in a pipeline. When the pipeline starts, it checks the `needs`
relationships before running. Without `optional: true`, a needs relationship that
points to a job that does not exist stops the pipeline from starting and causes a pipeline
error similar to:
- `'job1' job needs 'job2' job, but it was not added to the pipeline`
In this example:
- When the branch is `master`, the `build` job exists in the pipeline, and the `rspec`
job waits for it to complete before starting.
- When the branch is not `master`, the `build` job does not exist in the pipeline.
The `rspec` job runs immediately (similar to `needs: []`) because its `needs`
relationship to the `build` job is optional.
```yaml
build:
stage: build
rules:
- if: $CI_COMMIT_REF_NAME == "master"
rspec:
stage: test
needs:
- job: build
optional: true
```
#### Enable or disable optional needs **(FREE SELF)**
Optional needs is under development and not ready for production use. It is
deployed behind a feature flag that is **disabled by default**.
[GitLab administrators with access to the GitLab Rails console](../../administration/feature_flags.md)
can enable it.
To enable it:
```ruby
Feature.enable(:ci_needs_optional)
```
To disable it:
```ruby
Feature.disable(:ci_needs_optional)
```
### `tags`
Use `tags` to select a specific runner from the list of all runners that are
available for the project.
When you register a runner, you can specify the runner's tags, for
example `ruby`, `postgres`, `development`.
In the following example, the job is run by a runner that
has both `ruby` and `postgres` tags defined.
```yaml
job:
tags:
- ruby
- postgres
```
You can use tags to run different jobs on different platforms. For
example, if you have an OS X runner with tag `osx` and a Windows runner with tag
`windows`, you can run a job on each platform:
```yaml
windows job:
  stage: build
tags:
- windows
script:
- echo Hello, %USERNAME%!
osx job:
  stage: build
tags:
- osx
script:
- echo "Hello, $USER!"
```
### `allow_failure`
Use `allow_failure` when you want to let a job fail without impacting the rest of the CI
suite. The default value is `false`, except for [manual](#whenmanual) jobs that use
the `when: manual` syntax.
In jobs that use [`rules:`](#rules), all jobs default to `allow_failure: false`,
*including* `when: manual` jobs.
When `allow_failure` is set to `true` and the job fails, the job shows an orange warning in the UI.
However, the logical flow of the pipeline considers the job a
success/passed, and is not blocked.
Assuming all other jobs are successful, the job's stage and its pipeline
show the same orange warning. However, the associated commit is marked as
"passed", without warnings.
In the following example, `job1` and `job2` run in parallel. If `job1`
fails, it doesn't stop the next stage from running, because it's marked with
`allow_failure: true`:
```yaml
job1:
stage: test
script:
- execute_script_that_will_fail
allow_failure: true
job2:
stage: test
script:
- execute_script_that_will_succeed
job3:
stage: deploy
script:
- deploy_to_staging
```
#### `allow_failure:exit_codes`
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/273157) in GitLab 13.8.
> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/292024) in GitLab 13.9.
Use `allow_failure:exit_codes` to dynamically control if a job should be allowed
to fail. You can list which exit codes are not considered failures. The job fails
for any other exit code:
```yaml
test_job_1:
script:
- echo "Run a script that results in exit code 1. This job fails."
- exit 1
allow_failure:
exit_codes: 137
test_job_2:
script:
- echo "Run a script that results in exit code 137. This job is allowed to fail."
- exit 137
allow_failure:
exit_codes:
- 137
- 255
```
### `when`
Use `when` to implement jobs that run in case of failure or despite the
failure.
The valid values of `when` are:
1. `on_success` (default) - Execute job only when all jobs in earlier stages succeed,
or are considered successful because they have `allow_failure: true`.
1. `on_failure` - Execute job only when at least one job in an earlier stage fails.
1. `always` - Execute job regardless of the status of jobs in earlier stages.
1. `manual` - Execute job [manually](#whenmanual).
1. `delayed` - [Delay the execution of a job](#whendelayed) for a specified duration.
   Added in GitLab 11.4.
1. `never`:
- With [`rules`](#rules), don't execute job.
- With [`workflow`](#workflow), don't run pipeline.
In the following example, the script:
1. Executes `cleanup_build_job` only when `build_job` fails.
1. Always executes `cleanup_job` as the last step in pipeline regardless of
success or failure.
1. Executes `deploy_job` when you run it manually in the GitLab UI.
```yaml
stages:
- build
- cleanup_build
- test
- deploy
- cleanup
build_job:
stage: build
script:
- make build
cleanup_build_job:
stage: cleanup_build
script:
- cleanup build when failed
when: on_failure
test_job:
stage: test
script:
- make test
deploy_job:
stage: deploy
script:
- make deploy
when: manual
cleanup_job:
stage: cleanup
script:
- cleanup after jobs
when: always
```
#### `when:manual`
A manual job is a type of job that is not executed automatically and must be explicitly
started by a user. You might want to use manual jobs for things like deploying to production.
To make a job manual, add `when: manual` to its configuration.
When the pipeline starts, manual jobs display as skipped and do not run automatically.
They can be started from the pipeline, job, [environment](../environments/index.md#configure-manual-deployments),
and deployment views.
Manual jobs can be either optional or blocking:
- **Optional**: Manual jobs have [`allow_failure: true`](#allow_failure) set by default
  and are considered optional. The status of an optional manual job does not contribute
  to the overall pipeline status. A pipeline can succeed even if all its manual jobs fail.
- **Blocking**: To make a blocking manual job, add `allow_failure: false` to its configuration,
  as shown in the example after this list. Blocking manual jobs stop further execution of the
  pipeline at the stage where the job is defined. To let the pipeline continue running, click
  **{play}** (play) on the blocking manual job.
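
For example, a minimal sketch of a blocking manual job (the job name and script are illustrative):

```yaml
deploy_job:
  stage: deploy
  script:
    - make deploy
  when: manual
  allow_failure: false
```
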
Merge requests in projects with [merge when pipeline succeeds](../../user/project/merge_requests/merge_when_pipeline_succeeds.md)
enabled can't be merged with a blocked pipeline. Blocked pipelines show a status
of **blocked**.
When you use [`rules:`](#rules), `allow_failure` defaults to `false`, including for manual jobs.
To trigger a manual job, a user must have permission to merge to the assigned branch.
You can use [protected branches](../../user/project/protected_branches.md) to more strictly
[protect manual deployments](#protecting-manual-jobs) from being run by unauthorized users.
In [GitLab 13.5](https://gitlab.com/gitlab-org/gitlab/-/issues/201938) and later, you
can use `when:manual` in the same job as [`trigger`](#trigger). In GitLab 13.4 and
earlier, using them together causes the error `jobs:#{job-name} when should be on_success, on_failure or always`.
##### Protecting manual jobs **(PREMIUM)**
Use [protected environments](../environments/protected_environments.md)
to define a list of users authorized to run a manual job. You can authorize only
the users associated with a protected environment to trigger manual jobs, which can:
- More precisely limit who can deploy to an environment.
- Block a pipeline until an approved user "approves" it.
To protect a manual job:
1. Add an `environment` to the job. For example:
```yaml
deploy_prod:
stage: deploy
script:
- echo "Deploy to production server"
environment:
name: production
url: https://example.com
when: manual
only:
- master
```
1. In the [protected environments settings](../environments/protected_environments.md#protecting-environments),
select the environment (`production` in this example) and add the users, roles or groups
   that are authorized to trigger the manual job to the **Allowed to Deploy** list. Only the users
   in this list, and GitLab administrators (who can always use protected environments), can
   trigger this manual job.
You can use protected environments with blocking manual jobs to have a list of users
allowed to approve later pipeline stages. Add `allow_failure: false` to the protected
manual job and the pipeline's next stages only run after the manual job is triggered
by authorized users.
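
For example, a sketch that builds on the `deploy_prod` job above, with `allow_failure: false` added
so that the pipeline blocks until an authorized user triggers the job:

```yaml
deploy_prod:
  stage: deploy
  script:
    - echo "Deploy to production server"
  when: manual
  allow_failure: false
  environment:
    name: production
    url: https://example.com
  only:
    - master
```
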
#### `when:delayed`
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/51352) in GitLab 11.4.
Use `when: delayed` to execute scripts after a waiting period, or if you want to avoid
jobs immediately entering the `pending` state.
You can set the period with the `start_in` keyword. The value of `start_in` is an elapsed time in seconds, unless a unit is
provided. `start_in` must be less than or equal to one week. Examples of valid values include:
- `'5'`
- `5 seconds`
- `30 minutes`
- `1 day`
- `1 week`
When a stage includes a delayed job, the pipeline doesn't progress until the delayed job finishes.
You can use this keyword to insert delays between different stages.
The timer of a delayed job starts immediately after the previous stage completes.
Similar to other types of jobs, a delayed job's timer doesn't start unless the previous stage passes.
The following example creates a job named `timed rollout 10%` that is executed 30 minutes after the previous stage completes:
```yaml
timed rollout 10%:
stage: deploy
script: echo 'Rolling out 10% ...'
when: delayed
start_in: 30 minutes
```
To stop the active timer of a delayed job, click the **{time-out}** (**Unschedule**) button.
This job can no longer be scheduled to run automatically. You can, however, execute the job manually.
To start a delayed job immediately, click the **Play** button.
GitLab Runner then picks up and starts the job soon after.
### `environment`
Use `environment` to define the [environment](../environments/index.md) that a job deploys to.
For example:
```yaml
deploy to production:
stage: deploy
script: git push production HEAD:master
environment: production
```
You can assign a value to the `environment` keyword by using:
- Plain text, like `production`.
- Variables, including CI/CD variables, predefined, secure, or variables
defined in the `.gitlab-ci.yml` file.
You can't use variables defined in a `script` section.
If you specify an `environment` and no environment with that name exists,
an environment is created.
#### `environment:name`
Set a name for an [environment](../environments/index.md). For example:
```yaml
deploy to production:
stage: deploy
script: git push production HEAD:master
environment:
name: production
```
Common environment names are `qa`, `staging`, and `production`, but you can use any
name you want.
You can assign a value to the `name` keyword by using:
- Plain text, like `staging`.
- Variables, including CI/CD variables, predefined, secure, or variables
defined in the `.gitlab-ci.yml` file.
You can't use variables defined in a `script` section.
The environment `name` can contain:
- Letters
- Digits
- Spaces
- `-`
- `_`
- `/`
- `$`
- `{`
- `}`
#### `environment:url`
Set a URL for an [environment](../environments/index.md). For example:
```yaml
deploy to production:
stage: deploy
script: git push production HEAD:master
environment:
name: production
url: https://prod.example.com
```
After the job completes, you can access the URL by using a button in the merge request,
environment, or deployment pages.
You can assign a value to the `url` keyword by using:
- Plain text, like `https://prod.example.com`.
- Variables, including CI/CD variables, predefined, secure, or variables
defined in the `.gitlab-ci.yml` file.
You can't use variables defined in a `script` section.
#### `environment:on_stop`
Use the `on_stop` keyword, defined under `environment`, to close (stop) environments.
It declares a different job that runs to close the environment.
Read the `environment:action` section for an example.
#### `environment:action`
Use the `action` keyword to specify jobs that prepare, start, or stop environments.
| **Value** | **Description** |
|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `start` | Default value. Indicates that the job starts the environment. The deployment is created after the job starts. |
| `prepare` | Indicates that the job only prepares the environment. Does not affect deployments. [Read more about environments](../environments/index.md#prepare-an-environment). |
| `stop` | Indicates that the job stops a deployment. See the example below. |
Take for instance:
```yaml
review_app:
stage: deploy
script: make deploy-app
environment:
name: review/$CI_COMMIT_REF_NAME
url: https://$CI_ENVIRONMENT_SLUG.example.com
on_stop: stop_review_app
stop_review_app:
stage: deploy
variables:
GIT_STRATEGY: none
script: make delete-app
when: manual
environment:
name: review/$CI_COMMIT_REF_NAME
action: stop
```
In the above example, the `review_app` job deploys to the `review`
environment. A new `stop_review_app` job is listed under `on_stop`.
After the `review_app` job is finished, it triggers the
`stop_review_app` job based on what is defined under `when`. In this case,
it is set to `manual`, so it needs a [manual action](#whenmanual) from
the GitLab UI to run.
Also in the example, `GIT_STRATEGY` is set to `none`. If the
`stop_review_app` job is [automatically triggered](../environments/index.md#stopping-an-environment),
the runner won't try to check out the code after the branch is deleted.
The example also overwrites global variables. If your `stop` `environment` job depends
on global variables, use [anchor variables](#yaml-anchors-for-variables) when you set the `GIT_STRATEGY`
to change the job without overriding the global variables.
The `stop_review_app` job is **required** to have the following keywords defined:
- `when`, defined at either:
- [The job level](#when).
- [In a rules clause](#rules). If you use `rules:` and `when: manual`, you should
also set [`allow_failure: true`](#allow_failure) so the pipeline can complete
even if the job doesn't run.
- `environment:name`
- `environment:action`
Additionally, both jobs should have matching [`rules`](../yaml/README.md#onlyexcept-basic)
or [`only/except`](../yaml/README.md#onlyexcept-basic) configuration.
In the examples above, if the configuration is not identical:
- The `stop_review_app` job might not be included in all pipelines that include the `review_app` job.
- It is not possible to trigger the `action: stop` to stop the environment automatically.
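
For example, a minimal sketch of both jobs with matching `rules`, assuming branch pipelines. The
stop job uses `when: manual` and `allow_failure: true` so the pipeline can complete even if the
job is never triggered:

```yaml
review_app:
  stage: deploy
  script: make deploy-app
  rules:
    - if: $CI_COMMIT_BRANCH
  environment:
    name: review/$CI_COMMIT_REF_NAME
    on_stop: stop_review_app

stop_review_app:
  stage: deploy
  script: make delete-app
  rules:
    - if: $CI_COMMIT_BRANCH
      when: manual
      allow_failure: true
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
```
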
#### `environment:auto_stop_in`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/20956) in GitLab 12.8.
The `auto_stop_in` keyword specifies the lifetime of the environment. When the
environment expires, GitLab automatically stops it.
For example:
```yaml
review_app:
script: deploy-review-app
environment:
name: review/$CI_COMMIT_REF_NAME
auto_stop_in: 1 day
```
When the environment for `review_app` is created, the environment's lifetime is set to `1 day`.
Every time the review app is deployed, that lifetime is also reset to `1 day`.
For more information, see
[the environments auto-stop documentation](../environments/index.md#stop-an-environment-after-a-certain-time-period).
#### `environment:kubernetes`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/27630) in GitLab 12.6.
Use the `kubernetes` keyword to configure deployments to a
[Kubernetes cluster](../../user/project/clusters/index.md) that is associated with your project.
For example:
```yaml
deploy:
stage: deploy
script: make deploy-app
environment:
name: production
kubernetes:
namespace: production
```
This configuration sets up the `deploy` job to deploy to the `production`
environment, using the `production`
[Kubernetes namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
For more information, see
[Available settings for `kubernetes`](../environments/index.md#configure-kubernetes-deployments).
NOTE:
Kubernetes configuration is not supported for Kubernetes clusters
that are [managed by GitLab](../../user/project/clusters/index.md#gitlab-managed-clusters).
To follow progress on support for GitLab-managed clusters, see the
[relevant issue](https://gitlab.com/gitlab-org/gitlab/-/issues/38054).
#### `environment:deployment_tier`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/27630) in GitLab 13.10.
Use the `deployment_tier` keyword to specify the tier of the deployment environment:
```yaml
deploy:
script: echo
environment:
name: customer-portal
deployment_tier: production
```
For more information,
see [Deployment tier of environments](../environments/index.md#deployment-tier-of-environments).
#### Dynamic environments
Use CI/CD [variables](../variables/README.md) to dynamically name environments.
For example:
```yaml
deploy as review app:
stage: deploy
script: make deploy
environment:
name: review/$CI_COMMIT_REF_NAME
url: https://$CI_ENVIRONMENT_SLUG.example.com/
```
The `deploy as review app` job is marked as a deployment to dynamically
create the `review/$CI_COMMIT_REF_NAME` environment. `$CI_COMMIT_REF_NAME`
is a [CI/CD variable](../variables/README.md) set by the runner. The
`$CI_ENVIRONMENT_SLUG` variable is based on the environment name, but suitable
for inclusion in URLs. If the `deploy as review app` job runs in a branch named
`pow`, this environment would be accessible with a URL like `https://review-pow.example.com/`.
The common use case is to create dynamic environments for branches and use them
as Review Apps. You can see an example that uses Review Apps at
<https://gitlab.com/gitlab-examples/review-apps-nginx/>.
### `cache`
Use `cache` to specify a list of files and directories to
cache between jobs. You can only use paths that are in the local working copy.
If `cache` is defined outside the scope of jobs, it's set
globally and all jobs use that configuration.
Caching is shared between pipelines and jobs. Caches are restored before [artifacts](#artifacts).
Read how caching works and find out some good practices in the
[caching dependencies documentation](../caching/index.md).
#### `cache:paths`
Use the `paths` directive to choose which files or directories to cache. Paths
are relative to the project directory (`$CI_PROJECT_DIR`) and can't directly link outside it.
You can use wildcards that use [glob](https://en.wikipedia.org/wiki/Glob_(programming))
patterns and:
- In [GitLab Runner 13.0](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/2620) and later,
[`doublestar.Glob`](https://pkg.go.dev/github.com/bmatcuk/doublestar@v1.2.2?tab=doc#Match).
- In GitLab Runner 12.10 and earlier,
[`filepath.Match`](https://pkg.go.dev/path/filepath/#Match).
Cache all files in `binaries` that end in `.apk` and the `.config` file:
```yaml
rspec:
script: test
cache:
paths:
- binaries/*.apk
- .config
```
Locally defined cache overrides globally defined options. The following `rspec`
job caches only `binaries/`:
```yaml
cache:
paths:
- my/files
rspec:
script: test
cache:
key: rspec
paths:
- binaries/
```
The cache is shared between jobs, so if you're using different
paths for different jobs, you should also set a different `cache:key`.
Otherwise cache content can be overwritten.
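
For example, a sketch of two jobs that cache different paths under different keys (the job names,
keys, and paths are illustrative):

```yaml
rspec:
  script: bundle exec rspec
  cache:
    key: gems
    paths:
      - vendor/ruby

karma:
  script: yarn test
  cache:
    key: node-modules
    paths:
      - node_modules/
```
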
#### `cache:key`
The `key` keyword defines the affinity of caching between jobs.
You can have a single cache for all jobs, cache per-job, cache per-branch,
or any other way that fits your workflow. You can fine tune caching,
including caching data between different jobs or even different branches.
The `cache:key` variable can use any of the
[predefined variables](../variables/README.md). The default key, if not
set, is just literal `default`, which means everything is shared between
pipelines and jobs by default.
For example, to enable per-branch caching:
```yaml
cache:
key: "$CI_COMMIT_REF_SLUG"
paths:
- binaries/
```
If you use **Windows Batch** to run your shell scripts, you need to replace
`$` with `%`:
```yaml
cache:
key: "%CI_COMMIT_REF_SLUG%"
paths:
- binaries/
```
The `cache:key` variable can't contain the `/` character, or the equivalent
URI-encoded `%2F`. A value made only of dots (`.`, `%2E`) is also forbidden.
You can specify a [fallback cache key](#fallback-cache-key) to use if the specified `cache:key` is not found.
##### Multiple caches
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/32814) in GitLab 13.10.
> - It's [deployed behind a feature flag](../../user/feature_flags.md), disabled by default.
> - It's disabled on GitLab.com.
> - It's not recommended for production use.
> - To use it in GitLab self-managed instances, ask a GitLab administrator to [enable it](#enable-or-disable-multiple-caches). **(FREE SELF)**
WARNING:
This feature might not be available to you. Check the **version history** note above for details.
You can have a maximum of four caches:
```yaml
test-job:
stage: build
cache:
- key:
files:
- Gemfile.lock
paths:
- vendor/ruby
- key:
files:
- yarn.lock
paths:
- .yarn-cache/
script:
- bundle install --path=vendor
- yarn install --cache-folder .yarn-cache
- echo Run tests...
```
If multiple caches are combined with a [Fallback cache key](#fallback-cache-key),
the fallback is fetched multiple times if multiple caches are not found.
##### Enable or disable multiple caches **(FREE SELF)**
The multiple caches feature is under development and not ready for production use.
It is deployed behind a feature flag that is **disabled by default**.
[GitLab administrators with access to the GitLab Rails console](../../administration/feature_flags.md)
can enable it.
To enable it:
```ruby
Feature.enable(:multiple_cache_per_job)
```
To disable it:
```ruby
Feature.disable(:multiple_cache_per_job)
```
#### Fallback cache key
> [Introduced](https://gitlab.com/gitlab-org/gitlab-runner/-/merge_requests/1534) in GitLab Runner 13.4.
You can use the `$CI_COMMIT_REF_SLUG` [variable](#variables) to specify your [`cache:key`](#cachekey).
For example, if your `$CI_COMMIT_REF_SLUG` is `test`, you can set a job
to download the cache that's tagged with `test`.
If a cache with this tag is not found, you can use `CACHE_FALLBACK_KEY` to
specify a cache to use when none exists.
In the following example, if the `$CI_COMMIT_REF_SLUG` is not found, the job uses the key defined
by the `CACHE_FALLBACK_KEY` variable:
```yaml
variables:
CACHE_FALLBACK_KEY: fallback-key
cache:
key: "$CI_COMMIT_REF_SLUG"
paths:
- binaries/
```
##### `cache:key:files`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/18986) in GitLab v12.5.
The `cache:key:files` keyword extends the `cache:key` functionality by making it easier
to reuse some caches, and rebuild them less often, which speeds up subsequent pipeline
runs.
When you include `cache:key:files`, you must also list the project files that are used to generate the key, up to a maximum of two files.
The cache `key` is a SHA checksum computed from the most recent commits (up to two, if two files are listed)
that changed the given files. If neither file is changed in any commits,
the fallback key is `default`.
```yaml
cache:
key:
files:
- Gemfile.lock
- package.json
paths:
- vendor/ruby
- node_modules
```
This example creates a cache for Ruby and Node.js dependencies that
is tied to current versions of the `Gemfile.lock` and `package.json` files. Whenever one of
these files changes, a new cache key is computed and a new cache is created. Any future
job runs that use the same `Gemfile.lock` and `package.json` with `cache:key:files`
use the new cache, instead of rebuilding the dependencies.
##### `cache:key:prefix`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/18986) in GitLab v12.5.
When you want to combine a prefix with the SHA computed for `cache:key:files`,
use the `prefix` keyword with `key:files`.
For example, if you add a `prefix` of `test`, the resulting key is: `test-feef9576d21ee9b6a32e30c5c79d0a0ceb68d1e5`.
If neither file is changed in any commits, the prefix is added to `default`, so the
key in the example would be `test-default`.
Like `cache:key`, `prefix` can use any of the [predefined variables](../variables/README.md),
but cannot include:
- the `/` character (or the equivalent URI-encoded `%2F`)
- a value made only of `.` (or the equivalent URI-encoded `%2E`)
```yaml
cache:
key:
files:
- Gemfile.lock
prefix: ${CI_JOB_NAME}
paths:
- vendor/ruby
rspec:
script:
- bundle exec rspec
```
For example, adding a `prefix` of `$CI_JOB_NAME`
causes the key to look like: `rspec-feef9576d21ee9b6a32e30c5c79d0a0ceb68d1e5` and
the job cache is shared across different branches. If a branch changes
`Gemfile.lock`, that branch has a new SHA checksum for `cache:key:files`. A new cache key
is generated, and a new cache is created for that key.
If `Gemfile.lock` is not found, the prefix is added to
`default`, so the key in the example would be `rspec-default`.
#### `cache:untracked`
Set `untracked: true` to cache all files that are untracked in your Git
repository:
```yaml
rspec:
script: test
cache:
untracked: true
```
Cache all Git untracked files and files in `binaries`:
```yaml
rspec:
script: test
cache:
untracked: true
paths:
- binaries/
```
#### `cache:when`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/18969) in GitLab 13.5 and GitLab Runner v13.5.0.
`cache:when` defines when to save the cache, based on the status of the job. You can
set `cache:when` to:
- `on_success` (default): Save the cache only when the job succeeds.
- `on_failure`: Save the cache only when the job fails.
- `always`: Always save the cache.
For example, to store the cache whether the job fails or succeeds:
```yaml
rspec:
script: rspec
cache:
paths:
- rspec/
when: 'always'
```
#### `cache:policy`
The default behavior of a caching job is to download the files at the start of
execution, and to re-upload them at the end. Any changes made by the
job are persisted for future runs. This behavior is known as the `pull-push` cache
policy.
If you know the job does not alter the cached files, you can skip the upload step
by setting `policy: pull` in the job specification. You can add an ordinary cache
job at an earlier stage to ensure the cache is updated from time to time:
```yaml
stages:
- setup
- test
prepare:
stage: setup
cache:
key: gems
paths:
- vendor/bundle
script:
- bundle install --deployment
rspec:
stage: test
cache:
key: gems
paths:
- vendor/bundle
policy: pull
script:
- bundle exec rspec ...
```
Use the `pull` policy when you have many jobs executing in parallel that use caches. This
policy speeds up job execution and reduces load on the cache server.
If you have a job that unconditionally recreates the cache without
referring to its previous contents, you can skip the download step.
To do so, add `policy: push` to the job.
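
For example, a sketch that builds on the `prepare` job above, with `policy: push` added because the
job recreates the cache and doesn't need to download it first:

```yaml
prepare:
  stage: setup
  cache:
    key: gems
    paths:
      - vendor/bundle
    policy: push
  script:
    - bundle install --deployment
```
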
### `artifacts`
Use `artifacts` to specify a list of files and directories that are
attached to the job when it [succeeds, fails, or always](#artifactswhen).
The artifacts are sent to GitLab after the job finishes. They are
available for download in the GitLab UI if the size is not
larger than the [maximum artifact size](../../user/gitlab_com/index.md#gitlab-cicd).
Job artifacts are only collected for successful jobs by default, and
artifacts are restored after [caches](#cache).
[Read more about artifacts](../pipelines/job_artifacts.md).
#### `artifacts:paths`
Paths are relative to the project directory (`$CI_PROJECT_DIR`) and can't directly
link outside it. You can use wildcards that use [glob](https://en.wikipedia.org/wiki/Glob_(programming))
patterns and:
- In [GitLab Runner 13.0](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/2620) and later,
[`doublestar.Glob`](https://pkg.go.dev/github.com/bmatcuk/doublestar@v1.2.2?tab=doc#Match).
- In GitLab Runner 12.10 and earlier,
[`filepath.Match`](https://pkg.go.dev/path/filepath/#Match).
To restrict which jobs a specific job fetches artifacts from, see [dependencies](#dependencies).
Send all files in `binaries` and `.config`:
```yaml
artifacts:
paths:
- binaries/
- .config
```
To disable artifact passing, define the job with empty [dependencies](#dependencies):
```yaml
job:
stage: build
script: make build
dependencies: []
```
You may want to create artifacts only for tagged releases to avoid filling the
build server storage with temporary build artifacts.
Create artifacts only for tags (`default-job` doesn't create artifacts):
```yaml
default-job:
script:
- mvn test -U
except:
- tags
release-job:
script:
- mvn package -U
artifacts:
paths:
- target/*.war
only:
- tags
```
You can use wildcards for directories too. For example, if you want to get all the files inside the directories that end with `xyz`:
```yaml
job:
artifacts:
paths:
- path/*xyz/*
```
#### `artifacts:public`
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/49775) in GitLab 13.8
> - It's [deployed behind a feature flag](../../user/feature_flags.md), disabled by default.
> - It's enabled on GitLab.com.
> - It's recommended for production use.
Use `artifacts:public` to determine whether the job artifacts should be
publicly available.
The default for `artifacts:public` is `true`, which means that the artifacts in
public pipelines are available for download by anonymous and guest users:
```yaml
artifacts:
public: true
```
To deny read access for anonymous and guest users to artifacts in public
pipelines, set `artifacts:public` to `false`:
```yaml
artifacts:
public: false
```
#### `artifacts:exclude`
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/15122) in GitLab 13.1
> - Requires GitLab Runner 13.1
`exclude` makes it possible to prevent files from being added to an artifacts
archive.
Similar to [`artifacts:paths`](#artifactspaths), `exclude` paths are relative
to the project directory. You can use wildcards that use
[glob](https://en.wikipedia.org/wiki/Glob_(programming)) or
[`filepath.Match`](https://golang.org/pkg/path/filepath/#Match) patterns.
For example, to store all files in `binaries/`, but not `*.o` files located in
subdirectories of `binaries/`:
```yaml
artifacts:
paths:
- binaries/
exclude:
- binaries/**/*.o
```
Files matched by [`artifacts:untracked`](#artifactsuntracked) can be excluded using
`artifacts:exclude` too.
#### `artifacts:expose_as`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/15018) in GitLab 12.5.
Use the `expose_as` keyword to expose [job artifacts](../pipelines/job_artifacts.md)
in the [merge request](../../user/project/merge_requests/index.md) UI.
For example, to match a single file:
```yaml
test:
script: ["echo 'test' > file.txt"]
artifacts:
expose_as: 'artifact 1'
paths: ['file.txt']
```
With this configuration, GitLab adds a link **artifact 1** to the relevant merge request
that points to `file.txt`.
An example that matches an entire directory:
```yaml
test:
script: ["mkdir test && echo 'test' > test/file.txt"]
artifacts:
expose_as: 'artifact 1'
paths: ['test/']
```
Note the following:
- Artifacts do not display in the merge request UI when using variables to define the `artifacts:paths`.
- A maximum of 10 job artifacts per merge request can be exposed.
- Glob patterns are unsupported.
- If a directory is specified, the link is to the job [artifacts browser](../pipelines/job_artifacts.md#browsing-artifacts) if there is more than
one file in the directory.
- For exposed single file artifacts with `.html`, `.htm`, `.txt`, `.json`, `.xml`,
and `.log` extensions, if [GitLab Pages](../../administration/pages/index.md) is:
- Enabled, GitLab automatically renders the artifact.
- Not enabled, the file is displayed in the artifacts browser.
#### `artifacts:name`
Use the `name` directive to define the name of the created artifacts
archive. You can specify a unique name for every archive. The `artifacts:name`
variable can make use of any of the [predefined variables](../variables/README.md).
The default name is `artifacts`, which becomes `artifacts.zip` when you download it.
To create an archive with a name of the current job:
```yaml
job:
artifacts:
name: "$CI_JOB_NAME"
paths:
- binaries/
```
To create an archive with a name of the current branch or tag including only
the binaries directory:
```yaml
job:
artifacts:
name: "$CI_COMMIT_REF_NAME"
paths:
- binaries/
```
If your branch name contains forward slashes
(for example `feature/my-feature`), use `$CI_COMMIT_REF_SLUG`
instead of `$CI_COMMIT_REF_NAME` so the artifact is named properly.
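
For example, a sketch that names the archive after the branch or tag slug, including only the
binaries directory:

```yaml
job:
  artifacts:
    name: "$CI_COMMIT_REF_SLUG"
    paths:
      - binaries/
```
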
To create an archive with a name of the current job and the current branch or
tag including only the binaries directory:
```yaml
job:
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME"
paths:
- binaries/
```
To create an archive with a name of the current [stage](#stages) and branch name:
```yaml
job:
artifacts:
name: "$CI_JOB_STAGE-$CI_COMMIT_REF_NAME"
paths:
- binaries/
```
---
If you use **Windows Batch** to run your shell scripts, you need to replace
`$` with `%`:
```yaml
job:
artifacts:
name: "%CI_JOB_STAGE%-%CI_COMMIT_REF_NAME%"
paths:
- binaries/
```
If you use **Windows PowerShell** to run your shell scripts, you need to replace
`$` with `$env:`:
```yaml
job:
artifacts:
name: "$env:CI_JOB_STAGE-$env:CI_COMMIT_REF_NAME"
paths:
- binaries/
```
#### `artifacts:untracked`
Use `artifacts:untracked` to add all Git untracked files as artifacts (along
with the paths defined in `artifacts:paths`). `artifacts:untracked` ignores configuration
in the repository's `.gitignore` file.
Send all Git untracked files:
```yaml
artifacts:
untracked: true
```
Send all Git untracked files and files in `binaries`:
```yaml
artifacts:
untracked: true
paths:
- binaries/
```
Send all untracked files but [exclude](#artifactsexclude) `*.txt`:
```yaml
artifacts:
untracked: true
exclude:
- "*.txt"
```
#### `artifacts:when`
Use `artifacts:when` to upload artifacts on job failure or despite the
failure.
`artifacts:when` can be set to one of the following values:
1. `on_success` (default): Upload artifacts only when the job succeeds.
1. `on_failure`: Upload artifacts only when the job fails.
1. `always`: Always upload artifacts.
For example, to upload artifacts only when a job fails:
```yaml
job:
artifacts:
when: on_failure
```
#### `artifacts:expire_in`
Use `expire_in` to specify how long artifacts are active before they
expire and are deleted.
The expiration time period begins when the artifact is uploaded and
stored on GitLab. If the expiry time is not defined, it defaults to the
[instance wide setting](../../user/admin_area/settings/continuous_integration.md#default-artifacts-expiration)
(30 days by default).
To override the expiration date and protect artifacts from being automatically deleted:
- Use the **Keep** button on the job page.
- Set the value of `expire_in` to `never`. [Available](https://gitlab.com/gitlab-org/gitlab/-/issues/22761)
in GitLab 13.3 and later.
After their expiry, artifacts are deleted hourly by default (via a cron job),
and are not accessible anymore.
The value of `expire_in` is an elapsed time in seconds, unless a unit is
provided. Examples of valid values:
- `'42'`
- `42 seconds`
- `3 mins 4 sec`
- `2 hrs 20 min`
- `2h20min`
- `6 mos 1 day`
- `47 yrs 6 mos and 4d`
- `3 weeks and 2 days`
- `never`
To expire artifacts 1 week after being uploaded:
```yaml
job:
artifacts:
expire_in: 1 week
```
The latest artifacts for refs are locked against deletion, and kept regardless of
the expiry time. [Introduced in](https://gitlab.com/gitlab-org/gitlab/-/issues/16267)
GitLab 13.0 behind a disabled feature flag, and [made the default behavior](https://gitlab.com/gitlab-org/gitlab/-/issues/229936)
in GitLab 13.4.
In [GitLab 13.8 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/241026), you can [disable this behavior at the project level in the CI/CD settings](../pipelines/job_artifacts.md#keep-artifacts-from-most-recent-successful-jobs). In [GitLab 13.9 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/276583), you can [disable this behavior instance-wide](../../user/admin_area/settings/continuous_integration.md#keep-the-latest-artifacts-for-all-jobs-in-the-latest-successful-pipelines).
#### `artifacts:reports`
Use [`artifacts:reports`](../pipelines/job_artifacts.md#artifactsreports)
to collect test reports, code quality reports, and security reports from jobs.
It also exposes these reports in the GitLab UI (merge requests, pipeline views, and security dashboards).
These are the available report types:
| Keyword | Description |
|-----------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------|
| [`artifacts:reports:cobertura`](../pipelines/job_artifacts.md#artifactsreportscobertura) | The `cobertura` report collects Cobertura coverage XML files. |
| [`artifacts:reports:codequality`](../pipelines/job_artifacts.md#artifactsreportscodequality) | The `codequality` report collects Code Quality issues. |
| [`artifacts:reports:container_scanning`](../pipelines/job_artifacts.md#artifactsreportscontainer_scanning) **(ULTIMATE)** | The `container_scanning` report collects Container Scanning vulnerabilities. |
| [`artifacts:reports:dast`](../pipelines/job_artifacts.md#artifactsreportsdast) **(ULTIMATE)** | The `dast` report collects Dynamic Application Security Testing vulnerabilities. |
| [`artifacts:reports:dependency_scanning`](../pipelines/job_artifacts.md#artifactsreportsdependency_scanning) **(ULTIMATE)** | The `dependency_scanning` report collects Dependency Scanning vulnerabilities. |
| [`artifacts:reports:dotenv`](../pipelines/job_artifacts.md#artifactsreportsdotenv) | The `dotenv` report collects a set of environment variables. |
| [`artifacts:reports:junit`](../pipelines/job_artifacts.md#artifactsreportsjunit) | The `junit` report collects JUnit XML files. |
| [`artifacts:reports:license_management`](../pipelines/job_artifacts.md#artifactsreportslicense_management) **(ULTIMATE)** | The `license_management` report collects Licenses (*removed from GitLab 13.0*). |
| [`artifacts:reports:license_scanning`](../pipelines/job_artifacts.md#artifactsreportslicense_scanning) **(ULTIMATE)** | The `license_scanning` report collects Licenses. |
| [`artifacts:reports:load_performance`](../pipelines/job_artifacts.md#artifactsreportsload_performance) **(PREMIUM)** | The `load_performance` report collects load performance metrics. |
| [`artifacts:reports:metrics`](../pipelines/job_artifacts.md#artifactsreportsmetrics) **(PREMIUM)** | The `metrics` report collects Metrics. |
| [`artifacts:reports:performance`](../pipelines/job_artifacts.md#artifactsreportsperformance) **(PREMIUM)** | The `performance` report collects Browser Performance metrics. |
| [`artifacts:reports:sast`](../pipelines/job_artifacts.md#artifactsreportssast) | The `sast` report collects Static Application Security Testing vulnerabilities. |
| [`artifacts:reports:terraform`](../pipelines/job_artifacts.md#artifactsreportsterraform) | The `terraform` report collects Terraform `tfplan.json` files. |
#### `dependencies`
By default, all [`artifacts`](#artifacts) from previous [stages](#stages)
are passed to each job. However, you can use the `dependencies` keyword to
define a limited list of jobs to fetch artifacts from. You can also set a job to download no artifacts at all.
To use this feature, define `dependencies` in the context of the job, and pass
a list of all previous jobs the artifacts should be downloaded from.
You can define jobs from stages that were executed before the current one.
An error occurs if you define jobs from the current or an upcoming stage.
To prevent a job from downloading artifacts, define an empty array.
When you use `dependencies`, the status of the previous job is not considered.
If a job fails or it's a manual job that isn't triggered, no error occurs.
The following example defines two jobs with artifacts: `build:osx` and
`build:linux`. When the `test:osx` job is executed, the artifacts from `build:osx`
are downloaded and extracted in the context of the build. The same happens
for `test:linux` and artifacts from `build:linux`.
The job `deploy` downloads artifacts from all previous jobs because of
the [stage](#stages) precedence:
```yaml
build:osx:
stage: build
script: make build:osx
artifacts:
paths:
- binaries/
build:linux:
stage: build
script: make build:linux
artifacts:
paths:
- binaries/
test:osx:
stage: test
script: make test:osx
dependencies:
- build:osx
test:linux:
stage: test
script: make test:linux
dependencies:
- build:linux
deploy:
stage: deploy
script: make deploy
```
##### When a dependent job fails
> Introduced in GitLab 10.3.
If the artifacts of the job that is set as a dependency are
[expired](#artifactsexpire_in) or
[erased](../pipelines/job_artifacts.md#erasing-artifacts), then
the dependent job fails.
You can ask your administrator to
[flip this switch](../../administration/job_artifacts.md#validation-for-dependencies)
and bring back the old behavior.
### `coverage`
Use `coverage` to configure how code coverage is extracted from the
job output.
The value must be a regular expression. You must surround it with `/` characters to
explicitly mark it as a regular expression string, and you must escape special
characters if you want to match them literally.
For example:
```yaml
job1:
script: rspec
coverage: '/Code coverage: \d+\.\d+/'
```
The coverage is shown in the UI if at least one line in the job output matches the regular expression.
If there is more than one matched line in the job output, the last line is used.
For the matched line, the first occurrence of `\d+(\.\d+)?` is the code coverage.
Leading zeros are removed.
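For example, a sketch of a regular expression that escapes parentheses and the dot so they match literally (the pattern is illustrative and targets SimpleCov-style output such as `(98.39%) covered`):
```yaml
job1:
  script: rspec
  coverage: '/\(\d+\.\d+%\) covered/'
```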
Coverage output from [child pipelines](../parent_child_pipelines.md) is not recorded
or displayed. Check [the related issue](https://gitlab.com/gitlab-org/gitlab/-/issues/280818)
for more details.
### `retry`
> [Introduced](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/3515) in GitLab 11.5, you can control which failures to retry on.
Use `retry` to configure how many times a job is retried in
case of a failure.
When a job fails, the job is processed again,
until the limit specified by the `retry` keyword is reached.
If `retry` is set to `2`, and a job succeeds in a second run (first retry), it is not retried.
The `retry` value must be a positive integer, from `0` to `2`
(two retries maximum, three runs in total).
The following example retries all failure cases:
```yaml
test:
script: rspec
retry: 2
```
By default, a job is retried on all failure cases. To have better control
over which failures to retry, `retry` can be a hash with the following keys:
- `max`: The maximum number of retries.
- `when`: The failure cases to retry.
To retry only runner system failures at maximum two times:
```yaml
test:
script: rspec
retry:
max: 2
when: runner_system_failure
```
If there is another failure, other than a runner system failure, the job
is not retried.
To retry on multiple failure cases, `when` can also be an array of failures:
```yaml
test:
script: rspec
retry:
max: 2
when:
- runner_system_failure
- stuck_or_timeout_failure
```
Possible values for `when` are:
<!--
If you change any of the values below, make sure to update the `RETRY_WHEN_IN_DOCUMENTATION`
array in `spec/lib/gitlab/ci/config/entry/retry_spec.rb`.
The test there makes sure that all documented
values are valid as a configuration option and therefore should always
stay in sync with this documentation.
-->
- `always`: Retry on any failure (default).
- `unknown_failure`: Retry when the failure reason is unknown.
- `script_failure`: Retry when the script failed.
- `api_failure`: Retry on API failure.
- `stuck_or_timeout_failure`: Retry when the job got stuck or timed out.
- `runner_system_failure`: Retry if there is a runner system failure (for example, job setup failed).
- `missing_dependency_failure`: Retry if a dependency is missing.
- `runner_unsupported`: Retry if the runner is unsupported.
- `stale_schedule`: Retry if a delayed job could not be executed.
- `job_execution_timeout`: Retry if the script exceeded the maximum execution time set for the job.
- `archived_failure`: Retry if the job is archived and can't be run.
- `unmet_prerequisites`: Retry if the job failed to complete prerequisite tasks.
- `scheduler_failure`: Retry if the scheduler failed to assign the job to a runner.
- `data_integrity_failure`: Retry if there is a structural integrity problem detected.
You can specify the number of [retry attempts for certain stages of job execution](../runners/README.md#job-stages-attempts) using variables.
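For example, a sketch that raises the number of attempts for fetching sources, using the `GET_SOURCES_ATTEMPTS` variable from the runner documentation linked above (the value is illustrative):
```yaml
test:
  variables:
    GET_SOURCES_ATTEMPTS: 3
  script: rspec
  retry: 2
```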
### `timeout`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/14887) in GitLab 12.3.
Use `timeout` to configure a timeout for a specific job. For example:
```yaml
build:
script: build.sh
timeout: 3 hours 30 minutes
test:
script: rspec
timeout: 3h 30m
```
The job-level timeout can exceed the
[project-level timeout](../pipelines/settings.md#timeout) but can't
exceed the runner-specific timeout.
### `parallel`
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/21480) in GitLab 11.5.
Use `parallel` to configure how many instances of a job to run in parallel.
The value can be from 2 to 50.
The `parallel` keyword creates N instances of the same job that run in parallel.
They are named sequentially from `job_name 1/N` to `job_name N/N`:
```yaml
test:
script: rspec
parallel: 5
```
Every parallel job has a `CI_NODE_INDEX` and `CI_NODE_TOTAL`
[predefined CI/CD variable](../variables/README.md#predefined-cicd-variables) set.
Different languages and test suites have different methods to enable parallelization.
For example, use [Semaphore Test Boosters](https://github.com/renderedtext/test-boosters)
and RSpec to run Ruby tests in parallel:
```ruby
# Gemfile
source 'https://rubygems.org'
gem 'rspec'
gem 'semaphore_test_boosters'
```
```yaml
test:
parallel: 3
script:
- bundle
- bundle exec rspec_booster --job $CI_NODE_INDEX/$CI_NODE_TOTAL
```
WARNING:
Test Boosters reports usage statistics to the author.
You can then navigate to the **Jobs** tab of a new pipeline build and see your RSpec
job split into three separate jobs.
#### Parallel `matrix` jobs
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/15356) in GitLab 13.3.
Use `matrix:` to run a job multiple times in parallel in a single pipeline,
but with different variable values for each instance of the job.
There can be from 2 to 50 jobs.
Jobs can only run in parallel if there are multiple runners, or a single runner is
[configured to run multiple jobs concurrently](#use-your-own-runners).
Every job gets the same `CI_NODE_TOTAL` [CI/CD variable](../variables/README.md#predefined-cicd-variables) value, and a unique `CI_NODE_INDEX` value.
```yaml
deploystacks:
stage: deploy
script:
- bin/deploy
parallel:
matrix:
- PROVIDER: aws
STACK:
- monitoring
- app1
- app2
- PROVIDER: ovh
STACK: [monitoring, backup, app]
- PROVIDER: [gcp, vultr]
STACK: [data, processing]
```
This example generates 10 parallel `deploystacks` jobs, each with different values
for `PROVIDER` and `STACK`:
```plaintext
deploystacks: [aws, monitoring]
deploystacks: [aws, app1]
deploystacks: [aws, app2]
deploystacks: [ovh, monitoring]
deploystacks: [ovh, backup]
deploystacks: [ovh, app]
deploystacks: [gcp, data]
deploystacks: [gcp, processing]
deploystacks: [vultr, data]
deploystacks: [vultr, processing]
```
The job naming style was [improved in GitLab 13.4](https://gitlab.com/gitlab-org/gitlab/-/issues/230452).
##### One-dimensional `matrix` jobs
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/26362) in GitLab 13.5.
You can also have one-dimensional matrices with a single job:
```yaml
deploystacks:
stage: deploy
script:
- bin/deploy
parallel:
matrix:
- PROVIDER: [aws, ovh, gcp, vultr]
```
##### Parallel `matrix` trigger jobs
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/270957) in GitLab 13.10.
Use `matrix:` to run a [trigger](#trigger) job multiple times in parallel in a single pipeline,
but with different variable values for each instance of the job.
```yaml
deploystacks:
stage: deploy
trigger:
include: path/to/child-pipeline.yml
parallel:
matrix:
- PROVIDER: aws
STACK: [monitoring, app1]
- PROVIDER: ovh
STACK: [monitoring, backup]
- PROVIDER: [gcp, vultr]
STACK: [data]
```
This example generates 6 parallel `deploystacks` trigger jobs, each with different values
for `PROVIDER` and `STACK`, and they create 6 different child pipelines with those variables.
```plaintext
deploystacks: [aws, monitoring]
deploystacks: [aws, app1]
deploystacks: [ovh, monitoring]
deploystacks: [ovh, backup]
deploystacks: [gcp, data]
deploystacks: [vultr, data]
```
### `trigger`
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/8997) in [GitLab Premium](https://about.gitlab.com/pricing/) 11.8.
> - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/199224) to GitLab Free in 12.8.
Use `trigger` to define a downstream pipeline trigger. When GitLab starts a `trigger` job,
a downstream pipeline is created.
Jobs with `trigger` can only use a [limited set of keywords](../multi_project_pipelines.md#limitations).
For example, you can't run commands with [`script`](#script), [`before_script`](#before_script),
or [`after_script`](#after_script).
You can use this keyword to create two different types of downstream pipelines:
- [Multi-project pipelines](../multi_project_pipelines.md#creating-multi-project-pipelines-from-gitlab-ciyml)
- [Child pipelines](../parent_child_pipelines.md)
[In GitLab 13.2](https://gitlab.com/gitlab-org/gitlab/-/issues/197140/) and later, you can
view which job triggered a downstream pipeline. In the [pipeline graph](../pipelines/index.md#visualize-pipelines),
hover over the downstream pipeline job.
In [GitLab 13.5](https://gitlab.com/gitlab-org/gitlab/-/issues/201938) and later, you
can use [`when:manual`](#whenmanual) in the same job as `trigger`. In GitLab 13.4 and
earlier, using them together causes the error `jobs:#{job-name} when should be on_success, on_failure or always`.
You [cannot start `manual` trigger jobs with the API](https://gitlab.com/gitlab-org/gitlab/-/issues/284086).
#### Basic `trigger` syntax for multi-project pipelines
You can configure a downstream trigger by using the `trigger` keyword
with a full path to a downstream project:
```yaml
rspec:
stage: test
script: bundle exec rspec
staging:
stage: deploy
trigger: my/deployment
```
#### Complex `trigger` syntax for multi-project pipelines
You can configure a branch name that GitLab uses to create
a downstream pipeline with:
```yaml
rspec:
stage: test
script: bundle exec rspec
staging:
stage: deploy
trigger:
project: my/deployment
branch: stable
```
To mirror the status from a triggered pipeline:
```yaml
trigger_job:
trigger:
project: my/project
strategy: depend
```
To mirror the status from an upstream pipeline:
```yaml
upstream_bridge:
stage: test
needs:
pipeline: other/project
```
#### `trigger` syntax for child pipeline
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/16094) in GitLab 12.7.
To create a [child pipeline](../parent_child_pipelines.md), specify the path to the
YAML file that contains the configuration of the child pipeline:
```yaml
trigger_job:
trigger:
include: path/to/child-pipeline.yml
```
Similar to [multi-project pipelines](../multi_project_pipelines.md#mirroring-status-from-triggered-pipeline),
it's possible to mirror the status from a triggered pipeline:
```yaml
trigger_job:
trigger:
include:
- local: path/to/child-pipeline.yml
strategy: depend
```
##### Trigger child pipeline with generated configuration file
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/35632) in GitLab 12.9.
You can also trigger a child pipeline from a [dynamically generated configuration file](../parent_child_pipelines.md#dynamic-child-pipelines):
```yaml
generate-config:
stage: build
script: generate-ci-config > generated-config.yml
artifacts:
paths:
- generated-config.yml
child-pipeline:
stage: test
trigger:
include:
- artifact: generated-config.yml
job: generate-config
```
The `generated-config.yml` is extracted from the artifacts and used as the configuration
for triggering the child pipeline.
##### Trigger child pipeline with files from another project
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/205157) in GitLab 13.5.
To trigger child pipelines with files from another private project under the same
GitLab instance, use [`include:file`](#includefile):
```yaml
child-pipeline:
trigger:
include:
- project: 'my-group/my-pipeline-library'
ref: 'master'
file: '/path/to/child-pipeline.yml'
```
#### Linking pipelines with `trigger:strategy`
By default, the `trigger` job completes with the `success` status
as soon as the downstream pipeline is created.
To force the `trigger` job to wait for the downstream (multi-project or child) pipeline to complete, use
`strategy: depend`. This setting makes the trigger job wait with a "running" status until the triggered
pipeline completes. At that point, the `trigger` job completes and displays the same status as
the downstream job.
This setting can help keep your pipeline execution linear. In the following example, jobs from
subsequent stages wait for the triggered pipeline to successfully complete before
starting, which reduces parallelization.
```yaml
trigger_job:
trigger:
include: path/to/child-pipeline.yml
strategy: depend
```
#### Trigger a pipeline by API call
To force a rebuild of a specific branch, tag, or commit, you can use an API call
with a trigger token.
The trigger token is different from the [`trigger`](#trigger) keyword.
[Read more in the triggers documentation.](../triggers/README.md)
### `interruptible`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/32022) in GitLab 12.3.
Use `interruptible` to indicate that a running job should be canceled if made redundant by a newer pipeline run.
Defaults to `false` (uninterruptible). Jobs that have not started yet (pending) are considered interruptible
and safe to cancel.
This value is used only if the [automatic cancellation of redundant pipelines feature](../pipelines/settings.md#auto-cancel-redundant-pipelines)
is enabled.
When enabled, a pipeline is immediately canceled when a new pipeline starts on the same branch if either of the following is true:
- All jobs in the pipeline are set as interruptible.
- Any uninterruptible jobs have not started yet.
Set jobs that can be safely canceled after they start (for instance, a build job) as interruptible.
In the following example, a new pipeline run causes an existing running pipeline to be:
- Canceled, if only `step-1` is running or pending.
- Not canceled, once `step-2` starts running.
After an uninterruptible job starts running, the pipeline cannot be canceled.
```yaml
stages:
- stage1
- stage2
- stage3
step-1:
stage: stage1
script:
- echo "Can be canceled."
interruptible: true
step-2:
stage: stage2
script:
- echo "Can not be canceled."
step-3:
stage: stage3
script:
- echo "Because step-2 can not be canceled, this step can never be canceled, even though it's set as interruptible."
interruptible: true
```
### `resource_group`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/15536) in GitLab 12.7.
Sometimes running multiple jobs or pipelines at the same time in an environment
can lead to errors during the deployment.
To avoid these errors, use the `resource_group` attribute to make sure that
the runner doesn't run certain jobs simultaneously. Resource groups behave similarly
to semaphores in other programming languages.
When the `resource_group` keyword is defined for a job in the `.gitlab-ci.yml` file,
job executions are mutually exclusive across different pipelines for the same project.
If multiple jobs belonging to the same resource group are enqueued simultaneously,
only one of the jobs is picked by the runner. The other jobs wait until the
`resource_group` is free.
For example:
```yaml
deploy-to-production:
script: deploy
resource_group: production
```
In this case, two `deploy-to-production` jobs in two separate pipelines can never run at the same time. As a result,
you can ensure that concurrent deployments never happen to the production environment.
You can define multiple resource groups per environment. For example,
when deploying to physical devices, you may have multiple physical devices. Each device
can be deployed to, but there can be only one deployment per device at any given time.
The `resource_group` value can only contain letters, digits, `-`, `_`, `/`, `$`, `{`, `}`, `.`, and spaces.
It can't start or end with `/`.
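For example, a minimal sketch with one resource group per physical device (the job names, scripts, and group names are illustrative):
```yaml
deploy-to-device-1:
  script: deploy device1
  resource_group: device-1

deploy-to-device-2:
  script: deploy device2
  resource_group: device-2
```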
For more information, see [Deployments Safety](../environments/deployment_safety.md).
#### Pipeline-level concurrency control with Cross-Project/Parent-Child pipelines
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/39057) in GitLab 13.9.
You can define `resource_group` for downstream pipelines that are sensitive to concurrent
executions. The [`trigger` keyword](#trigger) can trigger downstream pipelines. The
[`resource_group` keyword](#resource_group) can co-exist with it. This is useful to control the
concurrency for deployment pipelines, while running non-sensitive jobs concurrently.
The following example has two pipeline configurations in a project. When a pipeline starts running,
non-sensitive jobs are executed first and aren't affected by concurrent executions in other
pipelines. However, GitLab ensures that there are no other deployment pipelines running before
triggering a deployment (child) pipeline. If other deployment pipelines are running, GitLab waits
until those pipelines finish before running another one.
```yaml
# .gitlab-ci.yml (parent pipeline)
build:
stage: build
script: echo "Building..."
test:
stage: test
script: echo "Testing..."
deploy:
stage: deploy
trigger:
include: deploy.gitlab-ci.yml
strategy: depend
resource_group: AWS-production
```
```yaml
# deploy.gitlab-ci.yml (child pipeline)
stages:
- provision
- deploy
provision:
stage: provision
script: echo "Provisioning..."
deployment:
stage: deploy
script: echo "Deploying..."
```
You must define [`strategy: depend`](#linking-pipelines-with-triggerstrategy)
with the `trigger` keyword. This ensures that the lock isn't released until the downstream pipeline
finishes.
### `release`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/merge_requests/19298) in GitLab 13.2.
Use `release` to create a [release](../../user/project/releases/index.md).
These keywords are supported:
- [`tag_name`](#releasetag_name)
- [`description`](#releasedescription)
- [`name`](#releasename) (optional)
- [`ref`](#releaseref) (optional)
- [`milestones`](#releasemilestones) (optional)
- [`released_at`](#releasereleased_at) (optional)
The release is created only if the job processes without error. If the Rails API
returns an error during release creation, the `release` job fails.
#### `release-cli` Docker image
You must specify the Docker image to use for the `release-cli`:
```yaml
image: registry.gitlab.com/gitlab-org/release-cli:latest
```
#### `script`
All jobs except [trigger](#trigger) jobs must have the `script` keyword. A `release`
job can use the output from script commands, but you can use a placeholder script if
the script is not needed:
```yaml
script:
- echo 'release job'
```
An [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/223856) exists to remove this requirement in an upcoming version of GitLab.
A pipeline can have multiple `release` jobs, for example:
```yaml
ios-release:
script:
- echo 'iOS release job'
release:
tag_name: v1.0.0-ios
description: 'iOS release v1.0.0'
android-release:
script:
- echo 'Android release job'
release:
tag_name: v1.0.0-android
description: 'Android release v1.0.0'
```
#### `release:tag_name`
You must specify a `tag_name` for the release. The tag can refer to an existing Git tag or
you can specify a new tag.
When the specified tag doesn't exist in the repository, a new tag is created from the associated SHA of the pipeline.
For example, when creating a release from a Git tag:
```yaml
job:
release:
tag_name: $CI_COMMIT_TAG
description: 'Release description'
```
It is also possible to create any unique tag, in which case `only: tags` is not mandatory.
A semantic versioning example:
```yaml
job:
release:
tag_name: ${MAJOR}_${MINOR}_${REVISION}
description: 'Release description'
```
- The release is created only if the job's main script succeeds.
- If the release already exists, it is not updated and the job with the `release` keyword fails.
- The `release` section executes after the `script` keyword and before `after_script`.
#### `release:name`
The release name. If omitted, it is populated with the value of `release: tag_name`.
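For example, a minimal sketch that sets an explicit name (the job name is illustrative):
```yaml
release_job:
  script:
    - echo 'Release job'
  release:
    name: 'Release $CI_COMMIT_TAG'
    tag_name: $CI_COMMIT_TAG
    description: 'Release description'
```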
#### `release:description`
Specifies the long description of the release. You can also specify a file that contains the
description.
##### Read description from a file
> [Introduced](https://gitlab.com/gitlab-org/release-cli/-/merge_requests/67) in GitLab 13.7.
You can specify a file in `$CI_PROJECT_DIR` that contains the description. The file must be relative
to the project directory (`$CI_PROJECT_DIR`), and if the file is a symbolic link it can't reside
outside of `$CI_PROJECT_DIR`. The `./path/to/file` and filename can't contain spaces.
```yaml
job:
release:
tag_name: ${MAJOR}_${MINOR}_${REVISION}
description: './path/to/CHANGELOG.md'
```
#### `release:ref`
If the `release: tag_name` doesn't exist yet, the release is created from `ref`.
`ref` can be a commit SHA, another tag name, or a branch name.
#### `release:milestones`
The title of each milestone the release is associated with.
#### `release:released_at`
The date and time when the release is ready. Defaults to the current date and time if not
defined. Should be enclosed in quotes and expressed in ISO 8601 format.
```yaml
released_at: '2021-03-15T08:00:00Z'
```
#### Complete example for `release`
If you combine the previous examples for `release`, you get two options, depending on how you generate the
tags. You can't use these options together, so choose one:
- To create a release when you push a Git tag, or when you add a Git tag
in the UI by going to **Repository > Tags**:
```yaml
release_job:
stage: release
image: registry.gitlab.com/gitlab-org/release-cli:latest
rules:
- if: $CI_COMMIT_TAG # Run this job when a tag is created manually
script:
- echo 'running release_job'
release:
name: 'Release $CI_COMMIT_TAG'
description: 'Created using the release-cli $EXTRA_DESCRIPTION' # $EXTRA_DESCRIPTION must be defined
tag_name: '$CI_COMMIT_TAG' # elsewhere in the pipeline.
ref: '$CI_COMMIT_TAG'
milestones:
- 'm1'
- 'm2'
- 'm3'
released_at: '2020-07-15T08:00:00Z' # Optional, is auto generated if not defined, or can use a variable.
```
- To create a release automatically when commits are pushed or merged to the default branch,
using a new Git tag that is defined with variables:
NOTE:
Environment variables set in `before_script` or `script` are not available for expanding
in the same job. Read more about
[potentially making variables available for expanding](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/6400).
```yaml
prepare_job:
stage: prepare # This stage must run before the release stage
rules:
- if: $CI_COMMIT_TAG
when: never # Do not run this job when a tag is created manually
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch
script:
- echo "EXTRA_DESCRIPTION=some message" >> variables.env # Generate the EXTRA_DESCRIPTION and TAG environment variables
- echo "TAG=v$(cat VERSION)" >> variables.env # and append to the variables.env file
artifacts:
reports:
dotenv: variables.env # Use artifacts:reports:dotenv to expose the variables to other jobs
release_job:
stage: release
image: registry.gitlab.com/gitlab-org/release-cli:latest
needs:
- job: prepare_job
artifacts: true
rules:
- if: $CI_COMMIT_TAG
when: never # Do not run this job when a tag is created manually
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch
script:
- echo 'running release_job for $TAG'
release:
name: 'Release $TAG'
description: 'Created using the release-cli $EXTRA_DESCRIPTION' # $EXTRA_DESCRIPTION and the $TAG
tag_name: '$TAG' # variables must be defined elsewhere
ref: '$CI_COMMIT_SHA' # in the pipeline. For example, in the
milestones: # prepare_job
- 'm1'
- 'm2'
- 'm3'
released_at: '2020-07-15T08:00:00Z' # Optional, is auto generated if not defined, or can use a variable.
```
#### Release assets as Generic packages
You can use [Generic packages](../../user/packages/generic_packages/) to host your release assets.
For a complete example, see the [Release assets as Generic packages](https://gitlab.com/gitlab-org/release-cli/-/tree/master/docs/examples/release-assets-as-generic-package/)
project.
#### `release-cli` command line
The entries under the `release` node are transformed into a `bash` command line and sent
to the Docker container, which contains the [release-cli](https://gitlab.com/gitlab-org/release-cli).
You can also call the `release-cli` directly from a `script` entry.
For example, if you use the YAML described previously:
```shell
release-cli create --name "Release $CI_COMMIT_SHA" --description "Created using the release-cli $EXTRA_DESCRIPTION" --tag-name "v${MAJOR}.${MINOR}.${REVISION}" --ref "$CI_COMMIT_SHA" --released-at "2020-07-15T08:00:00Z" --milestone "m1" --milestone "m2" --milestone "m3"
```
### `secrets`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/33014) in GitLab 13.4.
Use `secrets` to specify the [CI/CD Secrets](../secrets/index.md) the job needs. It should be a hash,
and the keys should be the names of the variables that are made available to the job.
The value of each secret is saved in a temporary file. This file's path is stored in these
variables.
#### `secrets:vault` **(PREMIUM)**
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/28321) in GitLab 13.4.
Use `vault` to specify secrets provided by [Hashicorp's Vault](https://www.vaultproject.io/).
This syntax has multiple forms. The shortest form assumes the use of the
[KV-V2](https://www.vaultproject.io/docs/secrets/kv/kv-v2) secrets engine,
mounted at the default path `kv-v2`. The last part of the secret's path is the
field to fetch the value for:
```yaml
job:
secrets:
DATABASE_PASSWORD:
vault: production/db/password # translates to secret `kv-v2/data/production/db`, field `password`
```
You can specify a custom secrets engine path by adding a suffix starting with `@`:
```yaml
job:
secrets:
DATABASE_PASSWORD:
vault: production/db/password@ops # translates to secret `ops/data/production/db`, field `password`
```
In the detailed form of the syntax, you can specify all details explicitly:
```yaml
job:
secrets:
DATABASE_PASSWORD: # translates to secret `ops/data/production/db`, field `password`
vault:
engine:
name: kv-v2
path: ops
path: production/db
field: password
```
### `pages`
Use `pages` to upload static content to GitLab. The content
is then published as a website. You must:
- Place any static content in a `public/` directory.
- Define [`artifacts`](#artifacts) with a path to the `public/` directory.
The following example moves all files from the root of the project to the
`public/` directory. The `.public` workaround is so `cp` does not also copy
`public/` to itself in an infinite loop:
```yaml
pages:
stage: deploy
script:
- mkdir .public
- cp -r * .public
- mv .public public
artifacts:
paths:
- public
only:
- master
```
View the [GitLab Pages user documentation](../../user/project/pages/index.md).
### `inherit`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/207484) in GitLab 12.9.
Use `inherit:` to control inheritance of globally-defined defaults
and variables.
To enable or disable the inheritance of all `default:` or `variables:` keywords, use:
- `default: true` or `default: false`
- `variables: true` or `variables: false`
To inherit only a subset of `default:` keywords or `variables:`, specify what
you wish to inherit. Anything not listed is **not** inherited. Use
one of the following formats:
```yaml
inherit:
default: [keyword1, keyword2]
variables: [VARIABLE1, VARIABLE2]
```
Or:
```yaml
inherit:
default:
- keyword1
- keyword2
variables:
- VARIABLE1
- VARIABLE2
```
In the following example:
- `rubocop`:
- inherits: Nothing.
- `rspec`:
- inherits: the default `image` and the `WEBHOOK_URL` variable.
- does **not** inherit: the default `before_script` and the `DOMAIN` variable.
- `capybara`:
- inherits: the default `before_script` and `image`.
- does **not** inherit: the `DOMAIN` and `WEBHOOK_URL` variables.
- `karma`:
- inherits: the default `image` and `before_script`, and the `DOMAIN` variable.
- does **not** inherit: `WEBHOOK_URL` variable.
```yaml
default:
image: 'ruby:2.4'
before_script:
- echo Hello World
variables:
DOMAIN: example.com
WEBHOOK_URL: https://my-webhook.example.com
rubocop:
inherit:
default: false
variables: false
script: bundle exec rubocop
rspec:
inherit:
default: [image]
variables: [WEBHOOK_URL]
script: bundle exec rspec
capybara:
inherit:
variables: false
script: bundle exec capybara
karma:
inherit:
default: true
variables: [DOMAIN]
script: karma
```
## `variables`
> Introduced in GitLab Runner v0.5.0.
[CI/CD variables](../variables/README.md) are configurable values that are passed to jobs.
They can be set globally and per-job.
There are two types of variables.
- [Custom variables](../variables/README.md#custom-cicd-variables):
You can define their values in the `.gitlab-ci.yml` file, in the GitLab UI,
or by using the API. You can also input variables in the GitLab UI when
[running a pipeline manually](../pipelines/index.md#run-a-pipeline-manually).
- [Predefined variables](../variables/predefined_variables.md):
These values are set by the runner itself.
One example is `CI_COMMIT_REF_NAME`, which is the branch or tag the project is built for.
After you define a variable, you can use it in all executed commands and scripts.
Variables are meant for non-sensitive project configuration, for example:
```yaml
variables:
DEPLOY_SITE: "https://example.com/"
deploy_job:
stage: deploy
script:
- deploy-script --url $DEPLOY_SITE --path "/"
deploy_review_job:
stage: deploy
variables:
REVIEW_PATH: "/review"
script:
- deploy-review-script --url $DEPLOY_SITE --path $REVIEW_PATH
```
You can use only integers and strings for the variable's name and value.
If you define a variable at the top level of the `.gitlab-ci.yml` file, it is global,
meaning it applies to all jobs. If you define a variable in a job, it's available
to that job only.
If a variable of the same name is defined globally and for a specific job, the
[job-specific variable overrides the global variable](../variables/README.md#priority-of-cicd-variables).
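For example, a minimal sketch where the job-level value takes precedence (the variable names and values are illustrative):
```yaml
variables:
  TARGET: "staging"

deploy_job:
  stage: deploy
  variables:
    TARGET: "production"  # Overrides the global value for this job only
  script:
    - echo "Deploying to $TARGET"  # Prints "Deploying to production"
```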
All YAML-defined variables are also set to any linked
[Docker service containers](../docker/using_docker_images.md#what-is-a-service).
You can use [YAML anchors for variables](#yaml-anchors-for-variables).
### Prefill variables in manual pipelines
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/30101) in GitLab 13.7.
Use the `value` and `description` keywords to define [variables that are prefilled](../pipelines/index.md#prefill-variables-in-manual-pipelines)
when [running a pipeline manually](../pipelines/index.md#run-a-pipeline-manually):
```yaml
variables:
DEPLOY_ENVIRONMENT:
value: "staging" # Deploy to staging by default
description: "The deployment target. Change this variable to 'canary' or 'production' if needed."
```
### Configure runner behavior with variables
You can use [CI/CD variables](../variables/README.md) to configure how the runner processes Git requests:
- [`GIT_STRATEGY`](../runners/README.md#git-strategy)
- [`GIT_SUBMODULE_STRATEGY`](../runners/README.md#git-submodule-strategy)
- [`GIT_CHECKOUT`](../runners/README.md#git-checkout)
- [`GIT_CLEAN_FLAGS`](../runners/README.md#git-clean-flags)
- [`GIT_FETCH_EXTRA_FLAGS`](../runners/README.md#git-fetch-extra-flags)
- [`GIT_DEPTH`](../runners/README.md#shallow-cloning) (shallow cloning)
- [`GIT_CLONE_PATH`](../runners/README.md#custom-build-directories) (custom build directories)
- [`TRANSFER_METER_FREQUENCY`](../runners/README.md#artifact-and-cache-settings) (artifact/cache meter update frequency)
- [`ARTIFACT_COMPRESSION_LEVEL`](../runners/README.md#artifact-and-cache-settings) (artifact archiver compression level)
- [`CACHE_COMPRESSION_LEVEL`](../runners/README.md#artifact-and-cache-settings) (cache archiver compression level)
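For example, a sketch that sets some of these Git-handling variables for a single job (the values are illustrative):
```yaml
build:
  variables:
    GIT_STRATEGY: clone
    GIT_DEPTH: "10"
    GIT_CLEAN_FLAGS: -ffdx
  script:
    - make build
```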
You can also use variables to configure how many times a runner
[attempts certain stages of job execution](../runners/README.md#job-stages-attempts).
## YAML-specific features
In your `.gitlab-ci.yml` file, you can use YAML-specific features like anchors (`&`), aliases (`*`),
and map merging (`<<`). Use these features to reduce the complexity
of the code in the `.gitlab-ci.yml` file.
Read more about the various [YAML features](https://learnxinyminutes.com/docs/yaml/).
In most cases, the [`extends` keyword](#extends) is more user friendly and you should
use it when possible.
You can use YAML anchors to merge YAML arrays.
### Anchors
YAML has a feature called 'anchors' that you can use to duplicate
content across your document.
Use anchors to duplicate or inherit properties. Use anchors with [hidden jobs](#hide-jobs)
to provide templates for your jobs. When there are duplicate keys, GitLab
performs a reverse deep merge based on the keys.
You can't use YAML anchors across multiple files when using the [`include`](#include)
keyword. Anchors are only valid in the file they were defined in. To reuse configuration
from different YAML files, use [`!reference` tags](#reference-tags) or the
[`extends` keyword](#extends).
The following example uses anchors and map merging. It creates two jobs,
`test1` and `test2`, that inherit the `.job_template` configuration, each
with their own custom `script` defined:
```yaml
.job_template: &job_configuration # Hidden yaml configuration that defines an anchor named 'job_configuration'
image: ruby:2.6
services:
- postgres
- redis
test1:
<<: *job_configuration # Merge the contents of the 'job_configuration' alias
script:
- test1 project
test2:
<<: *job_configuration # Merge the contents of the 'job_configuration' alias
script:
- test2 project
```
`&` sets up the name of the anchor (`job_configuration`), `<<` means "merge the
given hash into the current one," and `*` includes the named anchor
(`job_configuration` again). The expanded version of this example is:
```yaml
.job_template:
image: ruby:2.6
services:
- postgres
- redis
test1:
image: ruby:2.6
services:
- postgres
- redis
script:
- test1 project
test2:
image: ruby:2.6
services:
- postgres
- redis
script:
- test2 project
```
You can use anchors to define two sets of services. For example, `test:postgres`
and `test:mysql` share the `script` defined in `.job_template`, but use different
`services`, defined in `.postgres_services` and `.mysql_services`:
```yaml
.job_template: &job_configuration
script:
- test project
tags:
- dev
.postgres_services:
services: &postgres_configuration
- postgres
- ruby
.mysql_services:
services: &mysql_configuration
- mysql
- ruby
test:postgres:
<<: *job_configuration
services: *postgres_configuration
tags:
- postgres
test:mysql:
<<: *job_configuration
services: *mysql_configuration
```
The expanded version is:
```yaml
.job_template:
script:
- test project
tags:
- dev
.postgres_services:
services:
- postgres
- ruby
.mysql_services:
services:
- mysql
- ruby
test:postgres:
script:
- test project
services:
- postgres
- ruby
tags:
- postgres
test:mysql:
script:
- test project
services:
- mysql
- ruby
tags:
- dev
```
You can see that the hidden jobs are conveniently used as templates, and
`tags: [postgres]` overwrites `tags: [dev]`.
#### YAML anchors for scripts
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/23005) in GitLab 12.5.
You can use [YAML anchors](#anchors) with [script](#script), [`before_script`](#before_script),
and [`after_script`](#after_script) to use predefined commands in multiple jobs:
```yaml
.some-script-before: &some-script-before
- echo "Execute this script first"
.some-script: &some-script
- echo "Execute this script second"
- echo "Execute this script too"
.some-script-after: &some-script-after
- echo "Execute this script last"
job1:
before_script:
- *some-script-before
script:
- *some-script
- echo "Execute something, for this job only"
after_script:
- *some-script-after
job2:
script:
- *some-script-before
- *some-script
- echo "Execute something else, for this job only"
- *some-script-after
```
#### YAML anchors for variables
Use [YAML anchors](#anchors) with `variables` to repeat assignment
of variables across multiple jobs. You can also use YAML anchors when a job
requires a specific `variables` block that would otherwise override the global variables.
The following example shows how to override the `GIT_STRATEGY` variable without affecting
the use of the `SAMPLE_VARIABLE` variable:
```yaml
# global variables
variables: &global-variables
SAMPLE_VARIABLE: sample_variable_value
ANOTHER_SAMPLE_VARIABLE: another_sample_variable_value
# a job that must set the GIT_STRATEGY variable, yet depend on global variables
job_no_git_strategy:
stage: cleanup
variables:
<<: *global-variables
GIT_STRATEGY: none
script: echo $SAMPLE_VARIABLE
```
### Hide jobs
If you want to temporarily disable a job, rather than commenting out all the
lines where the job is defined:
```yaml
# hidden_job:
# script:
# - run test
```
Instead, you can start its name with a dot (`.`) and it is not processed by
GitLab CI/CD. In the following example, `.hidden_job` is ignored:
```yaml
.hidden_job:
script:
- run test
```
Use this feature to ignore jobs, or use the
[YAML-specific features](#yaml-specific-features) and transform the hidden jobs
into templates.
### `!reference` tags
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/266173) in GitLab 13.9.
Use the `!reference` custom YAML tag to select keyword configuration from other job
sections and reuse it in the current section. Unlike [YAML anchors](#anchors), you can
use `!reference` tags to reuse configuration from [included](#include) configuration
files as well.
In the following example, a `script` and an `after_script` from two different locations are
reused in the `test` job:
- `setup.yml`:
```yaml
.setup:
script:
- echo creating environment
```
- `.gitlab-ci.yml`:
```yaml
include:
- local: setup.yml
.teardown:
after_script:
- echo deleting environment
test:
script:
- !reference [.setup, script]
- echo running my own command
after_script:
- !reference [.teardown, after_script]
```
In the following example, `test-vars-1` reuses all of the variables in `.vars`, while `test-vars-2`
selects a specific variable and reuses it as a new `MY_VAR` variable.
```yaml
.vars:
variables:
URL: "http://my-url.internal"
IMPORTANT_VAR: "the details"
test-vars-1:
variables: !reference [.vars, variables]
script:
- printenv
test-vars-2:
variables:
MY_VAR: !reference [.vars, variables, IMPORTANT_VAR]
script:
- printenv
```
You can't reuse a section that already includes a `!reference` tag. Only one level
of nesting is supported.
## Skip Pipeline
To push a commit without triggering a pipeline, add `[ci skip]` or `[skip ci]`, using any
capitalization, to your commit message.
Alternatively, if you are using Git 2.10 or later, use the `ci.skip` [Git push option](../../user/project/push_options.md#push-options-for-gitlab-cicd).
The `ci.skip` push option does not skip merge request
pipelines.
## Processing Git pushes
GitLab creates at most four branch and tag pipelines when
pushing multiple changes in a single `git push` invocation.
This limitation does not affect any of the updated merge request pipelines.
All updated merge requests have a pipeline created when using
[pipelines for merge requests](../merge_request_pipelines/index.md).
## Deprecated keywords
The following keywords are deprecated.
### Globally-defined `types`
WARNING:
`types` is deprecated, and could be removed in a future release.
Use [`stages`](#stages) instead.
### Job-defined `type`
WARNING:
`type` is deprecated, and could be removed in a future release.
Use [`stage`](#stage) instead.
### Globally-defined `image`, `services`, `cache`, `before_script`, `after_script`
Defining `image`, `services`, `cache`, `before_script`, and
`after_script` globally is deprecated. Support could be removed
from a future release.
Use [`default:`](#custom-default-keyword-values) instead. For example:
```yaml
default:
image: ruby:2.5
services:
- docker:dind
cache:
paths: [vendor/]
before_script:
- bundle install --path vendor/
after_script:
- rm -rf tmp/
```
<!-- ## Troubleshooting
Include any troubleshooting steps that you can foresee. If you know beforehand what issues
one might have when setting this up, or when something is changed, or on upgrading, it's
important to describe those, too. Think of things that may go wrong and include them here.
This is important to minimize requests for support, and to avoid doc comments with
questions that you know someone might ask.
Each scenario can be a third-level heading, e.g. `### Getting error message X`.
If you have none to add when creating a doc, leave this section in place
but commented out to help encourage others to add to it in the future. -->