| [Shell](https://docs.gitlab.com/runner/executors/shell.html) | Locally, stored under the `gitlab-runner` user's home directory: `/home/gitlab-runner/cache/<user>/<project>/<cache-key>/cache.zip`. |
| [Docker](https://docs.gitlab.com/runner/executors/docker.html) | Locally, stored under [Docker volumes](https://docs.gitlab.com/runner/executors/docker.html#the-builds-and-cache-storage): `/var/lib/docker/volumes/<volume-id>/_data/<user>/<project>/<cache-key>/cache.zip`. |
| [Docker machine](https://docs.gitlab.com/runner/executors/docker_machine.html) (autoscale Runners) | Behaves the same as the Docker executor. |
### How archiving and extracting works
In the simplest scenario, consider that you use only one machine where the
Runner is installed, and all jobs of your project run on the same host.
Take the following example of two jobs that belong to two consecutive
stages:
```yaml
stages:
- build
- test

before_script:
- echo "Hello"

job A:
  stage: build
  script:
  - mkdir vendor/
  - echo "build" > vendor/hello.txt
  cache:
    key: build-cache
    paths:
    - vendor/
  after_script:
  - echo "World"

job B:
  stage: test
  script:
  - cat vendor/hello.txt
  cache:
    key: build-cache
```
Here's what happens behind the scenes:
1. Pipeline starts
1. `job A` runs
1. `before_script` is executed
1. `script` is executed
1. `after_script` is executed
1. `cache` runs and the `vendor/` directory is zipped into `cache.zip`.
   This file is then saved in the directory based on the
   [Runner's setting](#where-the-caches-are-stored) and the `cache: key`.
1. `job B` runs
1. The cache is extracted (if found)
1. `before_script` is executed
1. `script` is executed
1. Pipeline finishes
By using a single Runner on a single machine, you don't have the problem where
`job B` might execute on a Runner different from `job A`, so the cache between
stages is guaranteed. This only works if the build goes from stage `build` to
`test` on the same Runner/machine; otherwise, you [might not have the cache
available](#cache-mismatch).
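One way to guarantee that consecutive jobs land on the same machine is to pin
them to a specific Runner with `tags`. A minimal sketch, assuming you have
registered a Runner with a (hypothetical) `local-cache` tag:

```yaml
# Both jobs request the Runner registered with the `local-cache`
# tag, so they run on the same machine and share its local cache.
job A:
  stage: build
  tags:
  - local-cache
  script:
  - mkdir vendor/
  - echo "build" > vendor/hello.txt
  cache:
    key: build-cache
    paths:
    - vendor/

job B:
  stage: test
  tags:
  - local-cache
  script:
  - cat vendor/hello.txt
  cache:
    key: build-cache
```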
During the caching process, there are also a couple of things to consider:
- If some other job, with another cache configuration, had saved its
cache in the same zip file, it is overwritten. If the S3-based shared cache is
used, the file is additionally uploaded to S3, to an object based on the cache
key. So, two jobs with different paths but the same cache key will overwrite
their cache.
- When extracting the cache from `cache.zip`, everything in the zip file is
extracted into the job's working directory (usually the repository which is
pulled down), and the Runner doesn't mind if the archive of `job A` overwrites
things in the archive of `job B`.
It works this way because the cache created for one Runner often isn't valid
when used by a different one, which may run on a **different architecture**
(e.g., when the cache includes binary files). And since the different steps
might be executed by Runners running on different machines, it is a safe
default.
### Cache mismatch
The following table lists some of the reasons you might hit a cache mismatch
and a few ideas how to fix it.
| Reason of a cache mismatch | How to fix it |
| -------------------------- | ------------- |
| You use multiple standalone Runners (not in autoscale mode) attached to one project without a shared cache | Use only one Runner for your project or use multiple Runners with distributed cache enabled |
| You use Runners in autoscale mode without a distributed cache enabled | Configure the autoscale Runner to use a distributed cache |
| The machine the Runner is installed on is low on disk space or, if you've set up distributed cache, the S3 bucket where the cache is stored doesn't have enough space | Make sure you clear some space to allow new caches to be stored. Currently, there's no automatic way to do this. |
| You use the same `key` for jobs that cache different paths. | Use different cache keys so that each cache archive is stored in a different location and doesn't overwrite the wrong caches. |
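For the last case in particular, one way to avoid hardcoding unique keys is to
build `cache: key` from predefined CI variables, so each job automatically gets
its own cache location. A sketch, assuming `$CI_JOB_NAME` and
`$CI_COMMIT_REF_SLUG` are available to your jobs:

```yaml
job A:
  stage: build
  script: make build
  cache:
    # Expands to something like "job A-master":
    # unique per job and per branch
    key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
    paths:
    - public/

job B:
  stage: test
  script: make test
  cache:
    key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
    paths:
    - vendor/
```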
Let's explore some examples.
---
Let's assume you have only one Runner assigned to your project, so the cache
will be stored on the Runner's machine by default. If two jobs, A and B,
have the same cache key but they cache different paths, cache B will overwrite
cache A, even if their `paths` don't match. We want `job A` and `job B` to
re-use their cache when the pipeline is run for a second time:
```yaml
stages:
- build
- test

job A:
  stage: build
  script: make build
  cache:
    key: same-key
    paths:
    - public/

job B:
  stage: test
  script: make test
  cache:
    key: same-key
    paths:
    - vendor/
```
1. `job A` runs
1. `public/` is cached as `cache.zip`
1. `job B` runs
1. The previous cache, if any, is unzipped
1. `vendor/` is cached as `cache.zip` and overwrites the previous one
1. The next time `job A` runs it will use the cache of `job B` which is different
   and thus will be ineffective
To fix that, use a different `key` for each job.
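A minimal fixed version of the example above (the key names are just
illustrative), giving each job its own key so the archives no longer collide:

```yaml
job A:
  stage: build
  script: make build
  cache:
    key: key-A
    paths:
    - public/

job B:
  stage: test
  script: make test
  cache:
    key: key-B
    paths:
    - vendor/
```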
---
In another case, let's assume you have more than one Runner assigned to your
project, but the distributed cache is not enabled. We want `job A` and `job B`
to re-use their cache (which in this case will be different) the second time
the pipeline is run:
```yaml
stages:
- build
- test

job A:
  stage: build
  script: build
  cache:
    key: keyA
    paths:
    - vendor/

job B:
  stage: test
  script: test
  cache:
    key: keyB
    paths:
    - vendor/
```
In that case, even though the `key` is different (so there's no fear of
overwriting), you might experience the cached files getting "cleaned" before
each stage if the jobs run on different Runners in subsequent pipelines.
## Clearing the cache
GitLab Runners use [cache](../yaml/README.md#cache) to speed up the execution
of your jobs by reusing existing data. This, however, can sometimes lead to
inconsistent behavior.
There are two ways to start with a fresh copy of the cache.
### Clearing the cache by changing `cache:key`
All you have to do is set a new `cache: key` in your `.gitlab-ci.yml`. In the
next run of the pipeline, the cache will be stored in a different location.
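For example (the version suffix below is just an illustrative convention),
bumping a suffix in the key forces the next pipeline to look up, and save, the
cache in a new location:

```yaml
cache:
  # Changing "build-cache" to "build-cache-v2" points the Runner
  # at a fresh, empty cache location. The old archive is not
  # deleted automatically.
  key: build-cache-v2
  paths:
  - vendor/
```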
### Clearing the cache manually
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/issues/41249) in GitLab 10.4.
If you want to avoid editing `.gitlab-ci.yml`, you can easily clear the cache