---
stage: Enablement
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Debugging tips **(FREE SELF)**
Sometimes things don't work the way they should. Here are some tips on debugging issues out
in production.
## Starting a Rails console session
Troubleshooting and debugging your GitLab instance often requires a Rails console.
Your type of GitLab installation determines how
[to start a Rails console](../operations/rails_console.md).
See also:

- [GitLab Rails Console Cheat Sheet](gitlab_rails_cheat_sheet.md).
- [Navigating GitLab via Rails console](navigating_gitlab_via_rails_console.md).
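For quick reference, on Omnibus installations and installations from source the console is typically started as follows (other installation types are covered on the linked page):

```shell
# Omnibus installations
sudo gitlab-rails console

# Installations from source
sudo -u git -H bundle exec rails console -e production
```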
### Enabling Active Record logging
You can enable output of Active Record debug logging in the Rails console
session by running:
```ruby
ActiveRecord::Base.logger = Logger.new($stdout)
```
This shows information about database queries triggered by any Ruby code
you run in the console. To turn off logging again, run:
```ruby
ActiveRecord::Base.logger = nil
```
### Disabling database statement timeout
You can disable the PostgreSQL statement timeout for the current Rails console
session by running:
```ruby
ActiveRecord::Base.connection.execute('SET statement_timeout TO 0')
```
This change only affects the current Rails console session and is
not persisted in the GitLab production environment or in the next Rails
console session.
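To confirm the change took effect in the current session, you can query the value back; this is a small sketch that relies on standard PostgreSQL and Active Record behavior:

```ruby
# Shows the session's current statement timeout, for example {"statement_timeout"=>"0"}
ActiveRecord::Base.connection.execute('SHOW statement_timeout').first
```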
### Output Rails console session history
If you'd like to output your Rails console command history in a format that's
easy to copy and save for future reference, you can run:
```ruby
puts Readline::HISTORY.to_a
```
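If you would rather write the history to a file than copy it from the terminal, a minimal variation (the output path below is only an example) is:

```ruby
# Write the session's command history to a file, one command per line
File.write('/tmp/rails_console_history.txt', Readline::HISTORY.to_a.join("\n"))
```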
## Using the Rails runner
If you need to run some Ruby code in the context of your GitLab production
environment, you can do so using the [Rails runner](https://guides.rubyonrails.org/command_line.html#rails-runner). When executing a script file, the script must be accessible by the `git` user.
**For Omnibus installations**
```shell
sudo gitlab-rails runner "RAILS_COMMAND"
# Example with a two-line Ruby script
sudo gitlab-rails runner "user = User.first; puts user.username"
# Example with a Ruby script file (make sure to use the full path)
sudo gitlab-rails runner /path/to/script.rb
```
**For installations from source**
```shell
sudo -u git -H bundle exec rails runner -e production "RAILS_COMMAND"
# Example with a two-line Ruby script
sudo -u git -H bundle exec rails runner -e production "user = User.first; puts user.username"
# Example with a Ruby script file (make sure to use the full path)
sudo -u git -H bundle exec rails runner -e production /path/to/script.rb
```
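As a hypothetical illustration of what such a script file might contain (the statistics below are examples, not a GitLab-provided script):

```ruby
# /path/to/script.rb
# Runs inside the full GitLab Rails environment, so application models are available.
puts "Users:    #{User.count}"
puts "Projects: #{Project.count}"
```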
## Mail not working
A common problem is that mail is not being sent. Suppose you configured
an SMTP server, but you're not seeing mail delivered. Here's how to check the settings:
1. Run a [Rails console](../operations/rails_console.md#starting-a-rails-console-session).
1. Look at the ActionMailer `delivery_method` to make sure it matches what you
intended. If you configured SMTP, it should say `:smtp`. If you're using
Sendmail, it should say `:sendmail`:
```ruby
irb(main):001:0> ActionMailer::Base.delivery_method
=> :smtp
```
1. If you're using SMTP, check the mail settings:
```ruby
irb(main):002:0> ActionMailer::Base.smtp_settings
=> {:address=>"localhost", :port=>25, :domain=>"localhost.localdomain", :user_name=>nil, :password=>nil, :authentication=>nil, :enable_starttls_auto=>true}
```
In the example above, the SMTP server is configured for the local machine. If this is intended, you may need to check your local mail
logs (for example, `/var/log/mail.log`) for more details.
1. Send a test message via the console.
```ruby
irb(main):003:0> Notify.test_email('youremail@email.com', 'Hello World', 'This is a test message').deliver_now
```
If you do not receive an email, or you see an error message, check
your mail server settings. See also the additional check after this list.
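If the test message seems to be silently dropped, you can also make delivery failures raise an error in the console. This uses standard Action Mailer settings rather than anything GitLab-specific, so treat it as a sketch:

```ruby
# Raise an exception instead of failing silently when delivery fails
ActionMailer::Base.raise_delivery_errors = true
Notify.test_email('youremail@email.com', 'Hello World', 'This is a test message').deliver_now
```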
## Advanced Issues
For more advanced issues, `gdb` is a must-have debugging tool.
### The GNU Project Debugger (GDB)
To install on Ubuntu/Debian:
```shell
sudo apt-get install gdb
```
On CentOS:
```shell
sudo yum install gdb
```
<!-- vale gitlab.Spelling = NO -->
### rbtrace
<!-- vale gitlab.Spelling = YES -->
GitLab 11.2 ships with [`rbtrace`](https://github.com/tmm1/rbtrace), which
allows you to trace Ruby code, view all running threads, take memory dumps,
and more. However, this is not enabled by default. To enable it, define the
`ENABLE_RBTRACE` variable in the environment. For example, in Omnibus:
```ruby
gitlab_rails['env'] = {"ENABLE_RBTRACE" => "1"}
```
Then reconfigure the system and restart Puma and Sidekiq. To run this
in Omnibus, run as root:
```shell
/opt/gitlab/embedded/bin/ruby /opt/gitlab/embedded/bin/rbtrace
```
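As a hypothetical next step, you can attach to a Puma worker that was started with `ENABLE_RBTRACE=1` and stream method calls. The `--firehose` flag comes from the upstream `rbtrace` documentation; verify it against your installed version:

```shell
# Attach to the worker and trace every method call (very verbose)
/opt/gitlab/embedded/bin/ruby /opt/gitlab/embedded/bin/rbtrace -p <PID> --firehose
```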
## Common Problems
Many of the diagnostic tips below apply to a wide range of situations. We use one
concrete example to illustrate how you can find out what is going wrong.
### 502 Gateway Timeout after Puma spins at 100% CPU
This error occurs when the Web server times out (default: 60 seconds) after not
hearing back from the Puma worker. If the CPU spins to 100% while this is in
progress, there may be something taking longer than it should.
To fix this issue, we first need to figure out what is happening. The
following tips are only recommended if you do NOT mind users being affected by
downtime. Otherwise skip to the next section.
1. Load the problematic URL
1. Run `sudo gdb -p <PID>` to attach to the Puma process.
1. In the GDB window, type:
```plaintext
call (void) rb_backtrace()
```
1. This forces the process to generate a Ruby backtrace. Check
`/var/log/gitlab/puma/puma_stderr.log` for the backtrace. For example, you may see:
```plaintext
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `block in start'
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `loop'
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:36:in `block (2 levels) in start'
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:44:in `sample'
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `sample_objects'
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `each_with_object'
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `each'
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:69:in `block in sample_objects'
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:69:in `name'
```
1. To see the current threads, run:
```plaintext
thread apply all bt
```
1. Once you're done debugging with `gdb`, be sure to detach from the process and exit:
```plaintext
detach
exit
```
If the Puma process terminates before you are able to run these
commands, GDB reports an error. To buy more time, you can always raise the
Puma worker timeout. For Omnibus users, you can edit `/etc/gitlab/gitlab.rb` and
increase it from 60 seconds to 600:
```ruby
gitlab_rails['env'] = {
'GITLAB_RAILS_RACK_TIMEOUT' => 600
}
```
For source installations, set the `GITLAB_RAILS_RACK_TIMEOUT` environment variable instead.
Refer to [Puma Worker timeout](../operations/puma.md#worker-timeout).
[Reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure) GitLab for the changes to take effect.
#### Troubleshooting without affecting other users
The previous section attached to a running Puma process, which may have
undesirable effects on users trying to access GitLab during this time. If you
are concerned about affecting others on a production system, you can run a
separate Rails process to debug the issue:
1. Log in to your GitLab account.
1. Copy the URL that is causing problems (for example, `https://gitlab.com/ABC`).
1. Create a Personal Access Token for your user (User Settings -> Access Tokens).
1. Bring up the [GitLab Rails console](../operations/rails_console.md#starting-a-rails-console-session).
1. At the Rails console, run:
```ruby
app.get '<URL FROM STEP 2>/?private_token=<TOKEN FROM STEP 3>'
```
For example:
```ruby
app.get 'https://gitlab.com/gitlab-org/gitlab-foss/-/issues/1?private_token=123456'
```
1. In a new window, run `top`. It should show this Ruby process using 100% CPU. Write down the PID.
1. Follow step 2 from the previous section on using GDB.
### GitLab: API is not accessible
This often occurs when GitLab Shell attempts to request authorization via the
[internal API](../../development/internal_api/index.md) (for example, `http://localhost:8080/api/v4/internal/allowed`), and
something in the check fails. There are many reasons why this may happen:
1. Timeout connecting to a database (for example, PostgreSQL or Redis)
1. Error in Git hooks or push rules
1. Error accessing the repository (for example, stale NFS handles)
To diagnose this problem, try to reproduce the problem and then see if there
is a Puma worker that is spinning via `top`. Try to use the `gdb`
techniques above. In addition, using `strace` may help isolate issues:
```shell
strace -ttTfyyy -s 1024 -p <PID of puma worker> -o /tmp/puma.txt
```
If you cannot isolate which Puma worker is the issue, try to run `strace`
on all the Puma workers to see where the
[`/internal/allowed`](../../development/internal_api/index.md) endpoint gets stuck:
```shell
ps auwx | grep puma | awk '{ print " -p " $2}' | xargs strace -ttTfyyy -s 1024 -o /tmp/puma.txt
```
The output in `/tmp/puma.txt` may help diagnose the root cause.
## More information
- [Debugging Stuck Ruby Processes](https://newrelic.com/blog/best-practices/debugging-stuck-ruby-processes-what-to-do-before-you-kill-9)
- [Cheat sheet of using GDB and Ruby processes](gdb-stuck-ruby.txt)