# Performance Guidelines
This document describes various guidelines to follow to ensure good and
consistent performance of GitLab.
## Workflow
The process of solving performance problems is roughly as follows:
1. Make sure there's an issue open somewhere (for example, on the GitLab CE issue
tracker), and create one if there is not. See [#15607](https://gitlab.com/gitlab-org/gitlab-foss/issues/15607) for an example.
1. Measure the performance of the code in a production environment such as
GitLab.com (see the [Tooling](#tooling) section below). Performance should be
measured over a period of _at least_ 24 hours.
1. Add your findings based on the measurement period (screenshots of graphs,
timings, and so on) to the issue mentioned in step 1.
1. Solve the problem.
1. Create a merge request, assign the "Performance" label and follow the [performance review process](merge_request_performance_guidelines.md).
1. Once a change has been deployed, make sure to _again_ measure for at least 24
hours to see if your changes have any impact on the production environment.
1. Repeat until you're done.
When providing timings make sure to provide:
- The 95th percentile
- The 99th percentile
- The mean
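The percentiles above can be computed with a quick nearest-rank sketch; the timing values here are made up purely for illustration:

```ruby
# Nearest-rank percentile over a sorted list of request timings (ms).
# The timing data below is made up for illustration only.
def percentile(sorted, pct)
  rank = (pct / 100.0 * (sorted.size - 1)).round
  sorted[rank]
end

timings = [12, 15, 11, 210, 14, 13, 350, 16, 12, 18].sort

puts "mean: #{timings.sum / timings.size.to_f}" # 67.1
puts "p95:  #{percentile(timings, 95)}"         # 350
puts "p99:  #{percentile(timings, 99)}"         # 350
```

Note how the two outliers dominate the percentiles while barely moving the mean, which is why all three numbers are asked for.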
When providing screenshots of graphs, make sure that both the X and Y axes and
the legend are clearly visible. If you happen to have access to GitLab.com's own
monitoring tools you should also provide a link to any relevant
graphs/dashboards.
## Tooling
GitLab provides built-in tools to help improve performance and availability:
- [Profiling](profiling.md).
- [Sherlock](profiling.md#sherlock).
- [Distributed Tracing](distributed_tracing.md)
- [GitLab Performance Monitoring](../administration/monitoring/performance/index.md).
- [Request Profiling](../administration/monitoring/performance/request_profiling.md).
- [QueryRecorder](query_recorder.md) for preventing `N+1` query regressions.
- [Chaos endpoints](chaos_endpoints.md) for testing failure scenarios. Intended mainly for testing availability.
GitLab team members can use [GitLab.com's performance monitoring systems](https://about.gitlab.com/handbook/engineering/monitoring/) located at
<https://dashboards.gitlab.net>. This requires you to log in using your
`@gitlab.com` email address. Non-GitLab team-members are advised to set up their
own InfluxDB and Grafana stack.
## Benchmarks
Benchmarks are almost always useless. Benchmarks usually only test small bits of
code in isolation and often only measure the best case scenario. On top of that,
benchmarks for libraries (such as a Gem) tend to be biased in favour of the
library. After all, there's little benefit to an author publishing a benchmark
that shows they perform worse than their competitors.
Benchmarks are only really useful when you need a rough (emphasis on "rough")
understanding of the impact of your changes. For example, if a certain method is
slow a benchmark can be used to see if the changes you're making have any impact
on the method's performance. However, even when a benchmark shows your changes
improve performance there's no guarantee the performance also improves in a
production environment.
When writing benchmarks you should almost always use
[benchmark-ips](https://github.com/evanphx/benchmark-ips). Ruby's `Benchmark`
module that comes with the standard library is rarely useful as it runs either a
single iteration (when using `Benchmark.bm`) or two iterations (when using
`Benchmark.bmbm`). Running so few iterations means external factors, such as a
video streaming in the background, can very easily skew the benchmark
statistics.
Another problem with the `Benchmark` module is that it displays timings, not
iterations. This means that if a piece of code completes in a very short period
of time it can be very difficult to compare the timings before and after a
certain change. This in turn leads to patterns such as the following:
```ruby
Benchmark.bmbm(10) do |bench|
  bench.report 'do something' do
    100.times do
      # ... work here ...
    end
  end
end
```
This however leads to the question: how many iterations should we run to get
meaningful statistics?
The benchmark-ips Gem takes care of all of this and much more, and should
therefore be used instead of the `Benchmark` module.
In short:
- Don't trust benchmarks you find on the internet.
- Never make claims based on just benchmarks, always measure in production to
confirm your findings.
- X being N times faster than Y is meaningless if you don't know what impact it
will actually have on your production environment.
- A production environment is the _only_ benchmark that always tells the truth
(unless your performance monitoring systems are not set up correctly).
- If you must write a benchmark use the benchmark-ips Gem instead of Ruby's
`Benchmark` module.
## Profiling
By collecting snapshots of process state at regular intervals, profiling allows
you to see where time is spent in a process. The [StackProf](https://github.com/tmm1/stackprof)
gem is included in GitLab's development environment, allowing you to investigate
the behaviour of suspect code in detail.
It's important to note that profiling an application *alters its performance*,
and will generally be done *in an unrepresentative environment*. In particular,
a method is not necessarily troublesome just because it's executed many times,
or takes a long time to execute. Profiles are tools you can use to better
understand what is happening in an application. Using that information wisely
is up to you!
Keeping that in mind, to create a profile, identify (or create) a spec that
exercises the troublesome code path, then run it using the `bin/rspec-stackprof`
helper, for example:
```shell
$ LIMIT=10 bin/rspec-stackprof spec/policies/project_policy_spec.rb
8/8 |====== 100 ======>| Time: 00:00:18
Finished in 18.19 seconds (files took 4.8 seconds to load)
8 examples, 0 failures
==================================
Mode: wall(1000)
Samples: 17033 (5.59% miss rate)
GC: 1901 (11.16%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
6000 (35.2%) 2566 (15.1%) Sprockets::Cache::FileStore#get
2018 (11.8%) 888 (5.2%) ActiveRecord::ConnectionAdapters::PostgreSQLAdapter#exec_no_cache
1338 (7.9%) 640 (3.8%) ActiveRecord::ConnectionAdapters::PostgreSQL::DatabaseStatements#execute
3125 (18.3%) 394 (2.3%) Sprockets::Cache::FileStore#safe_open
913 (5.4%) 301 (1.8%) ActiveRecord::ConnectionAdapters::PostgreSQLAdapter#exec_cache
288 (1.7%) 288 (1.7%) ActiveRecord::Attribute#initialize
246 (1.4%) 246 (1.4%) Sprockets::Cache::FileStore#safe_stat
295 (1.7%) 193 (1.1%) block (2 levels) in class_attribute
187 (1.1%) 187 (1.1%) block (4 levels) in class_attribute
```
You can limit the specs that are run by passing any arguments `rspec` would
normally take.
The output is sorted by the `Samples` column by default. This is the number of
samples taken where the method is the one currently being executed. The `Total`
column shows the number of samples taken where the method, or any of the methods
it calls, were being executed.
To create a graphical view of the call stack:
```shell
stackprof tmp/project_policy_spec.rb.dump --graphviz > project_policy_spec.dot
dot -Tsvg project_policy_spec.dot > project_policy_spec.svg
```
To load the profile in [kcachegrind](https://kcachegrind.github.io/):
```shell
stackprof tmp/project_policy_spec.dump --callgrind > project_policy_spec.callgrind
kcachegrind project_policy_spec.callgrind # Linux
qcachegrind project_policy_spec.callgrind # Mac
```
It may be useful to zoom in on a specific method, for example:
```shell
$ stackprof tmp/project_policy_spec.rb.dump --method warm_asset_cache
TestEnv#warm_asset_cache (/Users/lupine/dev/gitlab.com/gitlab-org/gitlab-development-kit/gitlab/spec/support/test_env.rb:164)
samples: 0 self (0.0%) / 6288 total (36.9%)
callers:
6288 ( 100.0%) block (2 levels) in <top (required)>
callees (6288 total):
6288 ( 100.0%) Capybara::RackTest::Driver#visit
code:
| 164 | def warm_asset_cache
| 165 | return if warm_asset_cache?
| 166 | return unless defined?(Capybara)
| 167 |
6288 (36.9%) | 168 | Capybara.current_session.driver.visit '/'
| 169 | end
$ stackprof tmp/project_policy_spec.rb.dump --method BasePolicy#abilities
BasePolicy#abilities (/Users/lupine/dev/gitlab.com/gitlab-org/gitlab-development-kit/gitlab/app/policies/base_policy.rb:79)
samples: 0 self (0.0%) / 50 total (0.3%)
callers:
25 ( 50.0%) BasePolicy.abilities
25 ( 50.0%) BasePolicy#collect_rules
callees (50 total):
25 ( 50.0%) ProjectPolicy#rules
25 ( 50.0%) BasePolicy#collect_rules
code:
| 79 | def abilities
| 80 | return RuleSet.empty if @user && @user.blocked?
| 81 | return anonymous_abilities if @user.nil?
50 (0.3%) | 82 | collect_rules { rules }
| 83 | end
```
Since the profile includes the work done by the test suite as well as the
application code, these profiles can be used to investigate slow tests as well.
However, for smaller runs (like this example), this means that the cost of
setting up the test suite will tend to dominate.
It's also possible to modify the application code in-place to output profiles
whenever a particular code path is triggered without going through the test
suite first. See the
[StackProf documentation](https://github.com/tmm1/stackprof/blob/master/README.md)
for details.
## RSpec profiling
GitLab's development environment also includes the
[rspec_profiling](https://github.com/foraker/rspec_profiling) gem, which is used
to collect data on spec execution times. This is useful for analyzing the
performance of the test suite itself, or seeing how the performance of a spec
may have changed over time.
To activate profiling in your local environment, run the following:
```shell
export RSPEC_PROFILING=yes
rake rspec_profiling:install
```
This creates an SQLite3 database in `tmp/rspec_profiling`, into which statistics
are saved every time you run specs with the `RSPEC_PROFILING` environment
variable set.
Ad-hoc investigation of the collected results can be performed in an interactive
shell:
```shell
$ rake rspec_profiling:console
irb(main):001:0> results.count
=> 231
irb(main):002:0> results.last.attributes.keys
=> ["id", "commit", "date", "file", "line_number", "description", "time", "status", "exception", "query_count", "query_time", "request_count", "request_time", "created_at", "updated_at"]
irb(main):003:0> results.where(status: "passed").average(:time).to_s
=> "0.211340155844156"
```
These results can also be placed into a PostgreSQL database by setting the
`RSPEC_PROFILING_POSTGRES_URL` variable. This is used to profile the test suite
when running in the CI environment.
## Memory profiling
One possible reason for an increased memory footprint is Ruby memory fragmentation.
To diagnose it, you can visualize Ruby heap as described in [this post by Aaron Patterson](https://tenderlovemaking.com/2017/09/27/visualizing-your-ruby-heap.html).
To start, you want to dump the heap of the process you're investigating to a JSON file.
You need to run the command inside the process you're exploring; you can do that
with `rbtrace`. `rbtrace` is already present in the GitLab `Gemfile`; you just need to require it.
You can do this by running the web server or Sidekiq with the environment variable `ENABLE_RBTRACE=1` set.
To get the heap dump:
```ruby
bundle exec rbtrace -p <PID> -e 'File.open("heap.json", "wb") { |t| ObjectSpace.dump_all(output: t) }'
```
With the JSON dump, you can then render a picture using the script [provided by Aaron](https://gist.github.com/tenderlove/f28373d56fdd03d8b514af7191611b88) or a similar one:
```shell
ruby heapviz.rb heap.json
```
A fragmented Ruby heap snapshot could look like this:
![Ruby heap fragmentation](img/memory_ruby_heap_fragmentation.png)
Memory fragmentation can be reduced by tuning GC parameters as described in [this post by Nate Berkopec](https://www.speedshop.co/2017/12/04/malloc-doubles-ruby-memory.html). This should be considered a tradeoff, as it may affect the overall performance of memory allocation and GC cycles.
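When experimenting with GC tuning, it can help to compare a few `GC.stat` counters before and after a change; these key names come from MRI's `GC.stat`:

```ruby
# Print a few GC.stat counters that are useful when comparing runs
# before and after tuning.
stats = GC.stat
puts "live slots:      #{stats[:heap_live_slots]}"
puts "free slots:      #{stats[:heap_free_slots]}"
puts "eden pages:      #{stats[:heap_eden_pages]}"
puts "malloc increase: #{stats[:malloc_increase_bytes]}"
```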
## Importance of Changes
When working on performance improvements, it's important to always ask yourself
the question "How important is it to improve the performance of this piece of
code?". Not every piece of code is equally important and it would be a waste to
spend a week trying to improve something that only impacts a tiny fraction of
our users. For example, spending a week trying to squeeze 10 milliseconds out of
a method is a waste of time when you could have spent a week squeezing out 10
seconds elsewhere.
There is no clear set of steps that you can follow to determine if a certain
piece of code is worth optimizing. The only two things you can do are:
1. Think about what the code does, how it's used, how many times it's called and
how much time is spent in it relative to the total execution time (for example, the
total time spent in a web request).
1. Ask others (preferably in the form of an issue).
Some examples of changes that are not really important/worth the effort:
- Replacing double quotes with single quotes.
- Replacing usage of Array with Set when the list of values is very small.
- Replacing library A with library B when both only take up 0.1% of the total
execution time.
- Calling `freeze` on every string (see [String Freezing](#string-freezing)).
## Slow Operations & Sidekiq
Slow operations, like merging branches, or operations that are prone to errors
(such as calling external APIs) should be performed in a Sidekiq worker instead of
directly in a web request as much as possible. This has numerous benefits such
as:
1. An error won't prevent the request from completing.
1. The process being slow won't affect the loading time of a page.
1. In case of a failure it's easy to re-try the process (Sidekiq takes care of
this automatically).
1. By isolating the code from a web request it will hopefully be easier to test
and maintain.
It's especially important to use Sidekiq as much as possible when dealing with
Git operations as these operations can take quite some time to complete
depending on the performance of the underlying storage system.
## Git Operations
Care should be taken to not run unnecessary Git operations. For example,
retrieving the list of branch names using `Repository#branch_names` can be done
without an explicit check if a repository exists or not. In other words, instead
of this:
```ruby
if repository.exists?
  repository.branch_names.each do |name|
    # ...
  end
end
```
You can just write:
```ruby
repository.branch_names.each do |name|
  # ...
end
```
## Caching
Operations that will often return the same result should be cached using Redis,
in particular Git operations. When caching data in Redis, make sure the cache is
flushed whenever needed. For example, a cache for the list of tags should be
flushed whenever a new tag is pushed or a tag is removed.
When adding cache expiration code for repositories, this code should be placed
in one of the before/after hooks residing in the Repository class. For example,
if a cache should be flushed after importing a repository this code should be
added to `Repository#after_import`. This ensures the cache logic stays within
the Repository class instead of leaking into other classes.
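A simplified sketch of that pattern, with a plain `Hash` standing in for the real Redis-backed cache and hypothetical method names:

```ruby
# Simplified sketch: the expiration logic lives inside the Repository
# class and runs from its own lifecycle hook.
class Repository
  def initialize(cache)
    @cache = cache
  end

  # Called once a repository import finishes.
  def after_import
    expire_tags_cache
  end

  def expire_tags_cache
    @cache.delete(:tag_names)
  end
end

cache = { tag_names: ['v1.0', 'v1.1'] }
Repository.new(cache).after_import
cache # => {}
```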
When caching data, make sure to also memoize the result in an instance variable.
While retrieving data from Redis is much faster than raw Git operations, it still
has overhead. By caching the result in an instance variable, repeated calls to
the same method won't end up retrieving data from Redis upon every call. When
memoizing cached data in an instance variable, make sure to also reset the
instance variable when flushing the cache. An example:
```ruby
def first_branch
  @first_branch ||= cache.fetch(:first_branch) { branches.first }
end

def expire_first_branch_cache
  cache.expire(:first_branch)
  @first_branch = nil
end
```
## String Freezing
In recent Ruby versions calling `freeze` on a String leads to it being allocated
only once and re-used. For example, on Ruby 2.3 or later this will only allocate the
"foo" String once:
```ruby
10.times do
  'foo'.freeze
end
```
Depending on the size of the String and how frequently it would be allocated
(before the `.freeze` call was added), this _may_ make things faster, but
there's no guarantee it will.
String literals are expected to become frozen by default in a future Ruby
version. To prepare our code base for
this eventuality, we will be adding the following header to all Ruby files:
```ruby
# frozen_string_literal: true
```
This may cause test failures in code that expects to be able to manipulate
strings. Instead of using `dup`, use the unary plus to get an unfrozen string:
```ruby
test = +"hello"
test += " world"
```
When adding new Ruby files, please check that you can add the above header,
as omitting it may lead to style check failures.
## Reading from files and other data sources
Ruby offers several convenience functions that deal with file contents specifically
or I/O streams in general. Functions such as `IO.read` and `IO.readlines` make
it easy to read data into memory, but they can be inefficient when the
data grows large. Because these functions read the entire contents of a data
source into memory, memory use will grow by _at least_ the size of the data source.
In the case of `readlines`, it will grow even further, due to extra bookkeeping
the Ruby VM has to perform to represent each line.
Consider the following program, which reads a text file that is 750MB on disk:
```ruby
File.readlines('large_file.txt').each do |line|
  puts line
end
```
Here is a process memory reading taken while the program was running, showing
that we indeed kept the entire file in memory (RSS reported in kilobytes):
```shell
$ ps -o rss -p <pid>
RSS
783436
```
And here is an excerpt of what the garbage collector was doing:
```ruby
pp GC.stat
{
  :heap_live_slots=>2346848,
  :malloc_increase_bytes=>30895288,
  ...
}
```
We can see that `heap_live_slots` (the number of reachable objects) jumped to ~2.3M,
which is roughly two orders of magnitude more compared to reading the file line by
line instead. It was not just the raw memory usage that increased, but also how the garbage collector (GC)
responded to this change in anticipation of future memory use. We can see that `malloc_increase_bytes` jumped
to ~30MB, which compares to just ~4kB for a "fresh" Ruby program. This figure specifies how
much additional heap space the Ruby GC will claim from the operating system next time it runs out of memory.
Not only did we occupy more memory, we also changed the behavior of the application
to increase memory use at a faster rate.
The `IO.read` function exhibits similar behavior, with the difference that no extra memory will
be allocated for each line object.
### Recommendations
Instead of reading data sources into memory in full, it is better to read them
line by line. This is not always an option, for instance, when you need to convert a YAML file
into a Ruby `Hash`, but whenever you have data where each row represents some entity that
can be processed and then discarded, you can use the following approaches.
First, replace calls to `readlines.each` with either `each` or `each_line`.
The `each_line` and `each` functions read the data source line by line without keeping
already visited lines in memory:
```ruby
File.new('file').each { |line| puts line }
```
Alternatively, you can read individual lines explicitly using the `IO#gets` or `IO#readline` methods (note that `gets` returns `nil` at end of file, while `readline` raises `EOFError`):
```ruby
while line = file.gets
  # process line
end
```
This might be preferable if there is a condition that allows exiting the loop early, saving not
just memory but also unnecessary time spent in CPU and I/O for processing lines you're not interested in.
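For example, searching for the first matching line stops all further reading; the sample data and the `ERROR` prefix are made up for the sketch:

```ruby
require 'tempfile'

# Build a small sample file for the sketch.
file = Tempfile.new('lines')
file.puts('INFO ok')
file.puts('ERROR disk full')
file.puts('INFO never read')
file.rewind

# Stop as soon as the interesting line is found; the remaining lines
# are never read, saving memory, CPU, and I/O.
found = nil
while line = file.gets
  if line.start_with?('ERROR')
    found = line.chomp
    break
  end
end

puts found # => "ERROR disk full"
```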
## Anti-Patterns
This is a collection of [anti-patterns](https://en.wikipedia.org/wiki/Anti-pattern) that should be avoided
unless these changes have a measurable, significant, and positive impact on
production environments.
### Moving Allocations to Constants
Storing an object as a constant so you only allocate it once _may_ improve
performance, but there's no guarantee this will. Looking up constants has an
impact on runtime performance, and as such, using a constant instead of
referencing an object directly may even slow code down. For example:
```ruby
SOME_CONSTANT = 'foo'.freeze
9000.times do
  SOME_CONSTANT
end
```
The only reason you should be doing this is to prevent somebody from mutating
the global String. However, since you can just re-assign constants in Ruby
there's nothing stopping somebody from doing this elsewhere in the code:
```ruby
SOME_CONSTANT = 'bar'
```
## How to seed a database with millions of rows
You might want millions of project rows in your local database, for example,
in order to compare relative query performance, or to reproduce a bug. You could
do this by hand with SQL commands or using [Mass Inserting Rails
Models](mass_insert.md) functionality.
Assuming you are working with ActiveRecord models, you might also find these links helpful:
- [Insert records in batches](insert_into_tables_in_batches.md)
- [BulkInsert gem](https://github.com/jamis/bulk_insert)
- [ActiveRecord::PgGenerateSeries gem](https://github.com/ryu39/active_record-pg_generate_series)
### Examples
You may find some useful examples in this snippet:
<https://gitlab.com/gitlab-org/gitlab-foss/snippets/33946>