---
stage: Data Stores
group: Database
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---

# PostgreSQL **(FREE SELF)**

This page contains information about PostgreSQL the GitLab Support team uses
when troubleshooting. GitLab makes this information public, so that anyone can
make use of the Support team's collected knowledge.

WARNING:
Some procedures documented here may break your GitLab instance. Use at your
own risk.

If you're on a [paid tier](https://about.gitlab.com/pricing/) and aren't sure
how to use these commands, [contact Support](https://about.gitlab.com/support/)
for assistance with any issues you're having.

## Other GitLab PostgreSQL documentation

This section is for links to information elsewhere in the GitLab documentation.

### Procedures

- [Connect to the PostgreSQL console](https://docs.gitlab.com/omnibus/settings/database.html#connecting-to-the-bundled-postgresql-database).

- [Omnibus database procedures](https://docs.gitlab.com/omnibus/settings/database.html) including:
  - SSL: enabling, disabling, and verifying.
  - Enabling Write Ahead Log (WAL) archiving.
  - Using an external (non-Omnibus) PostgreSQL installation, and backing it up.
  - Listening on TCP/IP as well as, or instead of, sockets.
  - Storing data in another location.
  - Destructively reseeding the GitLab database.
  - Guidance around updating packaged PostgreSQL, including how to stop it
    from happening automatically.

- [Information about external PostgreSQL](../postgresql/external.md).

- [Running Geo with external PostgreSQL](../geo/setup/external_database.md).

- [Upgrades when running PostgreSQL configured for HA](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-gitlab-ha-cluster).

- Consuming PostgreSQL from [within CI runners](../../ci/services/postgres.md).

- [Using Slony to update PostgreSQL](../../update/upgrading_postgresql_using_slony.md).
  - Uses replication to handle PostgreSQL upgrades if the schemas are the same.
  - Reduces downtime to a short window for switching to the newer version.

- Managing Omnibus PostgreSQL versions [from the development docs](https://docs.gitlab.com/omnibus/development/managing-postgresql-versions.html).

- [PostgreSQL scaling](../postgresql/replication_and_failover.md)
  - Including [troubleshooting](../postgresql/replication_and_failover.md#troubleshooting)
    `gitlab-ctl patroni check-leader` and PgBouncer errors.

- [Developer database documentation](../../development/feature_development.md#database-guides),
  some of which is absolutely not for production use. Including:
  - Understanding EXPLAIN plans.

## Support topics

### Database deadlocks

References:

- [Issue #1 Deadlocks with GitLab 12.1, PostgreSQL 10.7](https://gitlab.com/gitlab-org/gitlab/-/issues/30528).
- [Customer ticket (internal) GitLab 12.1.6](https://gitlab.zendesk.com/agent/tickets/134307)
and [Google doc (internal)](https://docs.google.com/document/d/19xw2d_D1ChLiU-MO1QzWab-4-QXgsIUcN5e_04WTKy4).
- [Issue #2 deadlocks can occur if an instance is flooded with pushes](https://gitlab.com/gitlab-org/gitlab/-/issues/33650).
  Provided for context about how GitLab code can have this sort of
  unanticipated effect in unusual situations.

```plaintext
ERROR: deadlock detected
```
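
While investigating deadlocks, it can help to see which sessions are currently waiting on locks. The following is a standard PostgreSQL query (not GitLab-specific) that can be run from a database console such as `sudo gitlab-psql`:

```sql
-- Standard PostgreSQL query: list sessions currently waiting on locks.
-- Sessions that show up here frequently are candidates for the
-- deadlocks reported above.
SELECT pid,
       state,
       wait_event,
       now() - query_start AS waiting_for,
       left(query, 60)     AS query_text
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```

An empty result means no session is lock-waiting at that instant, so it may take a few runs during a busy period to catch the contention.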

Three applicable timeouts are identified in the issue [#30528](https://gitlab.com/gitlab-org/gitlab/-/issues/30528); our recommended settings are as follows:

```ini
deadlock_timeout = 5s
statement_timeout = 15s
idle_in_transaction_session_timeout = 60s
```

Quoting from issue [#30528](https://gitlab.com/gitlab-org/gitlab/-/issues/30528):

<!-- vale gitlab.FutureTense = NO -->

> "If a deadlock is hit, and we resolve it through aborting the transaction after a short period, then the retry mechanisms we already have will make the deadlocked piece of work try again, and it's unlikely we'll deadlock multiple times in a row."

<!-- vale gitlab.FutureTense = YES -->

NOTE:
In Support, our general approach to reconfiguring timeouts (applies also to the
HTTP stack) is that it's acceptable to do it temporarily as a workaround. If it
makes GitLab usable for the customer, then it buys time to understand the
problem more completely, implement a hot fix, or make some other change that
addresses the root cause. Generally, the timeouts should be put back to
reasonable defaults after the root cause is resolved.

In this case, the guidance we had from development was to drop `deadlock_timeout`
or `statement_timeout`, but to leave the third setting at 60 seconds. Setting
`idle_in_transaction_session_timeout` protects the database from sessions potentially
hanging for days. There's more discussion in
[the issue relating to introducing this timeout on GitLab.com](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1053).

PostgreSQL defaults:

- `statement_timeout = 0` (never)
- `idle_in_transaction_session_timeout = 0` (never)

Comments in issue [#30528](https://gitlab.com/gitlab-org/gitlab/-/issues/30528)
indicate that these should both be set to at least a number of minutes for all
Omnibus GitLab installations (so they don't hang indefinitely). However, 15s
for `statement_timeout` is very short, and is only effective if the
underlying infrastructure is very performant.

See current settings with:

```shell
sudo gitlab-rails runner "c = ApplicationRecord.connection ; puts c.execute('SHOW statement_timeout').to_a ;
puts c.execute('SHOW deadlock_timeout').to_a ;
puts c.execute('SHOW idle_in_transaction_session_timeout').to_a ;"
```

It may take a little while to respond.

```ruby
{"statement_timeout"=>"1min"}
{"deadlock_timeout"=>"0"}
{"idle_in_transaction_session_timeout"=>"1min"}
```
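
The same settings can also be read directly from a database console (for example `sudo gitlab-psql`), which avoids loading Rails:

```sql
-- Equivalent checks from a PostgreSQL console such as `sudo gitlab-psql`.
SHOW statement_timeout;
SHOW deadlock_timeout;
SHOW idle_in_transaction_session_timeout;
```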

These settings can be updated in `/etc/gitlab/gitlab.rb` with:

```ruby
postgresql['deadlock_timeout'] = '5s'
postgresql['statement_timeout'] = '15s'
postgresql['idle_in_transaction_session_timeout'] = '60s'
```

Once saved, [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.

NOTE:
These are Omnibus GitLab settings. If an external database, such as a customer's PostgreSQL installation or Amazon RDS, is being used, these values don't get set and would have to be set externally.
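
For an external database, where `/etc/gitlab/gitlab.rb` has no effect, one option is to apply the timeouts at the database level with standard PostgreSQL statements. This is a sketch that assumes the database is named `gitlabhq_production`; managed services may restrict some parameters (`deadlock_timeout`, for example, normally requires superuser privileges, so it is omitted here):

```sql
-- Sketch for an external database: apply the recommended timeouts at the
-- database level. Assumes the GitLab database is gitlabhq_production.
ALTER DATABASE gitlabhq_production SET statement_timeout = '15s';
ALTER DATABASE gitlabhq_production SET idle_in_transaction_session_timeout = '60s';
-- Only new connections pick these up; existing sessions must reconnect.
```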

### Temporarily changing the statement timeout

WARNING:
The following advice does not apply if
[PgBouncer](../postgresql/pgbouncer.md) is enabled,
because the changed timeout might affect more transactions than intended.

In some situations, it may be desirable to set a different statement timeout
without having to [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure),
which in this case would restart Puma and Sidekiq.

For example, a backup may fail with the following errors in the output of the
[backup command](../../raketasks/backup_restore.md#back-up-gitlab)
because the statement timeout was too short:

```plaintext
pg_dump: error: Error message from server: server closed the connection unexpectedly
```

You may also see errors in the [PostgreSQL logs](../logs/index.md#postgresql-logs):

```plaintext
canceling statement due to statement timeout
```

To temporarily change the statement timeout:

1. Open `/var/opt/gitlab/gitlab-rails/etc/database.yml` in an editor.
1. Set the value of `statement_timeout` to `0`, which sets an unlimited statement timeout.
1. [Confirm in a new Rails console session](../operations/rails_console.md#using-the-rails-runner)
   that this value is used:

   ```shell
   sudo gitlab-rails runner "ActiveRecord::Base.connection_config[:variables]"
   ```

1. Perform the action for which you need a different timeout
   (for example the backup or the Rails command).
1. Revert the edit in `/var/opt/gitlab/gitlab-rails/etc/database.yml`.
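
The relevant fragment of `database.yml` looks roughly like the sketch below. The exact keys around it vary by GitLab version, and the values other than `statement_timeout: 0` are illustrative; the point is that `statement_timeout` sits under a `variables:` key:

```yaml
# Illustrative sketch of the relevant part of database.yml; surrounding
# keys vary by GitLab version. Only statement_timeout is changed here.
production:
  main:
    adapter: postgresql
    database: gitlabhq_production
    variables:
      statement_timeout: 0  # 0 means unlimited; revert when finished
```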

## Troubleshooting

### Database is not accepting commands to avoid wraparound data loss

This error likely means that AUTOVACUUM is failing to complete its run:

```plaintext
ERROR: database is not accepting commands to avoid wraparound data loss in database "gitlabhq_production"
```

To resolve the error, run `VACUUM` manually:

1. Stop GitLab with the command `gitlab-ctl stop`.
1. Place the database in single-user mode with the command:

   ```shell
   /opt/gitlab/embedded/bin/postgres --single -D /var/opt/gitlab/postgresql/data gitlabhq_production
   ```

1. In the `backend>` prompt, run `VACUUM;`. This command can take several minutes to complete.
1. Wait for the command to complete, then press <kbd>Control</kbd> + <kbd>D</kbd> to exit.
1. Start GitLab with the command `gitlab-ctl start`.
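
To gauge how close each database is to wraparound before and after the manual `VACUUM`, this standard PostgreSQL query (not GitLab-specific) reports the age of the oldest unfrozen transaction ID per database:

```sql
-- Standard PostgreSQL check: transactions since the oldest unfrozen XID.
-- PostgreSQL stops accepting commands as this approaches ~2.1 billion;
-- a successful VACUUM should bring the age back down.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```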

### GitLab database requirements

The [database requirements](../../install/requirements.md#database) for GitLab include:

- Support for MySQL was removed in GitLab 12.1; [migrate to PostgreSQL](../../update/mysql_to_postgresql.md).
- Review and install the [required extension list](../../install/postgresql_extensions.md).

### Serialization errors in the `production/sidekiq` log

If you receive errors like this example in your `production/sidekiq` log, read
about [setting `default_transaction_isolation` into read committed](https://docs.gitlab.com/omnibus/settings/database.html#set-default_transaction_isolation-into-read-committed) to fix the problem:

```plaintext
ActiveRecord::StatementInvalid PG::TRSerializationFailure: ERROR: could not serialize access due to concurrent update
```

### PostgreSQL replication slot errors

If you receive errors like this example, read about how to resolve PostgreSQL HA
[replication slot errors](https://docs.gitlab.com/omnibus/settings/database.html#troubleshooting-upgrades-in-an-ha-cluster):

```plaintext
pg_basebackup: could not create temporary replication slot "pg_basebackup_12345": ERROR: all replication slots are in use
HINT: Free one or increase max_replication_slots.
```
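
To see how many slots are configured and which ones are inactive, these standard PostgreSQL queries (not GitLab-specific) can be run from a database console on the primary:

```sql
-- The configured ceiling on replication slots.
SHOW max_replication_slots;

-- Inactive slots still occupy a slot (and retain WAL); once confirmed
-- unused, a slot can be removed with pg_drop_replication_slot('<name>').
SELECT slot_name, slot_type, active
FROM pg_replication_slots
ORDER BY active, slot_name;
```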

### Geo replication errors

If you receive errors like this example, read about how to resolve
[Geo replication errors](../geo/replication/troubleshooting.md#fixing-postgresql-database-replication-errors):

```plaintext
ERROR: replication slots can only be used if max_replication_slots > 0

FATAL: could not start WAL streaming: ERROR: replication slot "geo_secondary_my_domain_com" does not exist

Command exceeded allowed execution time

PANIC: could not write to file 'pg_xlog/xlogtemp.123': No space left on device
```

### Review Geo configuration and common errors

When troubleshooting problems with Geo, you should:

- Review [common Geo errors](../geo/replication/troubleshooting.md#fixing-common-errors).
- [Review your Geo configuration](../geo/replication/troubleshooting.md), including:
  - Reconfiguring hosts and ports.
  - Reviewing and fixing the user and password mappings.

### Mismatch in `pg_dump` and `psql` versions

If you receive errors like this example, read about how to
[back up and restore a non-packaged PostgreSQL database](https://docs.gitlab.com/omnibus/settings/database.html#backup-and-restore-a-non-packaged-postgresql-database):

```plaintext
Dumping PostgreSQL database gitlabhq_production ... pg_dump: error: server version: 13.3; pg_dump version: 14.2
pg_dump: error: aborting because of server version mismatch
```

### Extension `btree_gist` is not allow-listed

Deploying PostgreSQL on an Azure Database for PostgreSQL - Flexible Server may result in this error:

```plaintext
extension "btree_gist" is not allow-listed for "azure_pg_admin" users in Azure Database for PostgreSQL
```

To resolve this error, [allow-list the extension](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions#how-to-use-postgresql-extensions) prior to install.
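
After the extension has been added to the server's allow-list (through the Azure portal or CLI, per the linked guide), it may still need to be created in the GitLab database if the install has not already done so. A sketch:

```sql
-- Run against the GitLab database once btree_gist is allow-listed on the
-- Azure server. IF NOT EXISTS makes this safe to re-run.
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- Verify the extension is installed.
SELECT extname, extversion FROM pg_extension WHERE extname = 'btree_gist';
```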