Most of the time, you should use [RSpec](https://github.com/rspec/rspec-rails#feature-specs) for your feature tests. For more information on how to get started with feature tests, see [get started with feature tests](#get-started-with-feature-tests).
Running `yarn jest-debug` runs Jest in debug mode, allowing you to debug/inspect as described in the [Jest docs](https://jestjs.io/docs/troubleshooting#tests-are-failing-and-you-don-t-know-why).
To facilitate RSpec integration tests, we have two test-specific stylesheets. These can be used to disable animations to improve test speed, or to make elements visible when they need to be targeted by Capybara click events:
- `app/assets/stylesheets/disable_animations.scss`
- `app/assets/stylesheets/test_environment.scss`
Because the test environment should match the production environment as much as possible, use these minimally and only add to them when necessary.
Libraries are an integral part of any JavaScript developer's life. The general advice is to not test library internals; expect that the library knows what it's supposed to do and has its own test coverage.
It does not make sense to test our `getFahrenheit` function, because underneath it does nothing but invoke the library function, and we can expect that the library works as intended.
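For illustration, such a wrapper might look like the following sketch. The library name and conversion function are made up for this example:

```javascript
// Hypothetical wrapper: it only delegates to the library function.
import { celsiusToFahrenheit } from 'temperature-lib';

export const getFahrenheit = (celsius) => celsiusToFahrenheit(celsius);
```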
Let's take a short look into Vue land. Vue is a critical part of the GitLab JavaScript codebase. When writing specs for Vue components, a common gotcha is to end up testing functionality provided by Vue itself, because it appears to be the easiest thing to test. Here's a simplified example of the kind of component involved.
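The snippet below is a sketch, not the exact code: only `metricTypes` and `hasMetricTypes` come from the discussion that follows, and everything else is illustrative.

```javascript
// Simplified component options; the prop shape is an assumption.
export default {
  props: {
    metricTypes: {
      type: Array,
      required: true,
    },
  },
  computed: {
    hasMetricTypes() {
      return this.metricTypes.length;
    },
  },
};
```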
Testing the `hasMetricTypes` computed prop would seem like a given here. But testing whether the computed property returns the length of `metricTypes` is testing the Vue library itself. There is no value in this, besides adding to the test suite. It's better to test the component in the way the user interacts with it: by checking the rendered template.
Keep an eye out for these kinds of tests, as they just make updating logic more fragile and tedious than it needs to be. This is also true for other libraries. A suggestion here is: if you are checking a `wrapper.vm` property, you should probably stop and rethink the test to check the rendered template instead.
Another common gotcha is that the specs end up verifying the mock is working. If you are using mocks, the mock should support the test, but not be the target of the test.
For example, it's better to use the generated markup to trigger a button click and validate the markup changed accordingly than to call a method manually and verify data structures or computed properties. There's always the chance of accidentally breaking the user flow, while the tests pass and provide a false sense of security.
These are some common practices that are part of our test suite. Should you stumble over something not following this guide, ideally fix it right away. 🎉
Preferably, this is done by targeting what the user actually sees, using [DOM Testing Library](https://testing-library.com/docs/dom-testing-library/intro/).
When selecting by text, it is best to use the [`byRole`](https://testing-library.com/docs/queries/byrole/) query
as it helps enforce accessibility best practices. `findByRole` and the other [DOM Testing Library queries](https://testing-library.com/docs/queries/about/) are available when using [`shallowMountExtended` or `mountExtended`](#shallowmountextended-and-mountextended).
- A `data-testid` attribute ([recommended by maintainers of `@vue/test-utils`](https://github.com/vuejs/vue-test-utils/issues/1498#issuecomment-610133465))
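For instance, here is a hedged sketch combining these approaches, assuming the extended mount helpers described above. The component, button text, and test ID are illustrative:

```javascript
import { shallowMountExtended } from 'helpers/vue_test_utils_helper';
import MyComponent from '~/my_feature/components/my_component.vue'; // hypothetical component

it('renders the submit button and the empty state', () => {
  const wrapper = shallowMountExtended(MyComponent);

  // Preferred: query by role, as the user perceives the element.
  expect(wrapper.findByRole('button', { name: 'Submit' }).exists()).toBe(true);

  // Fallback: query by `data-testid` when no accessible role or text applies.
  expect(wrapper.findByTestId('empty-state').exists()).toBe(true);
});
```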
When testing Promises, you should always make sure that the test is asynchronous and rejections are handled. It's now possible to use the `async/await` syntax in the test suite.
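For example, a minimal sketch with hypothetical async functions standing in for real application code:

```javascript
// Hypothetical async functions; in a real spec these would come from the application code.
const fetchUsers = () => Promise.resolve([{ name: 'Alice' }, { name: 'Bob' }]);
const fetchUser = (id) => Promise.reject(new Error(`User ${id} not found`));

it('resolves with the expected users', async () => {
  const users = await fetchUsers();

  expect(users).toHaveLength(2);
});

it('rejects when the user does not exist', async () => {
  // Await the assertion so the rejection is handled inside the test.
  await expect(fetchUser(9999)).rejects.toThrow('not found');
});
```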
Sometimes we have to test time-sensitive code, for example recurring events that run every X seconds. Here are some strategies to deal with that. In particular, avoid:
- [`setTimeout`](https://developer.mozilla.org/en-US/docs/Web/API/setTimeout) because it makes the reason for waiting unclear. Additionally, it is faked in our tests so its usage is tricky.
- [`setImmediate`](https://developer.mozilla.org/en-US/docs/Web/API/Window/setImmediate) because it is no longer supported in Jest 27 and later. See [this epic](https://gitlab.com/groups/gitlab-org/-/epics/7002) for details.
If you cannot register handlers to the `Promise`, for example because it is executed in a synchronous Vue lifecycle hook, take a look at the [`waitFor`](#wait-until-axios-requests-finish) helpers, or flush all pending `Promise`s as shown below.
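A minimal sketch, assuming the `waitForPromises` helper that lives under `spec/frontend/__helpers__`:

```javascript
import waitForPromises from 'helpers/wait_for_promises';

it('flushes all pending promises before asserting', async () => {
  let resolved = false;

  Promise.resolve().then(() => {
    resolved = true;
  });

  await waitForPromises();

  expect(resolved).toBe(true);
});
```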
Tests are normally structured in a pattern that requires recurring setup of the component under test. This is often achieved by using the `beforeEach` hook.
With [enableAutoDestroy](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/100389), it is no longer necessary to manually call `wrapper.destroy()`.
However, some mocks, spies, and fixtures do need to be torn down, and we can leverage the `afterEach` hook.
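A sketch of this pattern, assuming a hypothetical `MyComponent` and the usual `axios-mock-adapter` setup:

```javascript
import MockAdapter from 'axios-mock-adapter';
import { shallowMount } from '@vue/test-utils';
import axios from '~/lib/utils/axios_utils';
import MyComponent from '~/my_feature/components/my_component.vue'; // hypothetical component

let wrapper;
let mockAxios;

beforeEach(() => {
  mockAxios = new MockAdapter(axios);
  wrapper = shallowMount(MyComponent);
});

afterEach(() => {
  // `enableAutoDestroy` already destroys `wrapper`; mocks still need explicit teardown.
  mockAxios.restore();
});
```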
While the latter eventually falls back to leveraging [`Object.is`](https://github.com/facebook/jest/blob/master/packages/expect/src/jasmineUtils.ts#L91),
Non-determinism is the breeding ground for flaky and brittle specs. Such specs end up breaking the CI pipeline, interrupting the workflow of other contributors.
1. Make sure your test subject's collaborators (for example, Axios, Apollo, Lodash helpers) and test environment (for example, Date) behave consistently across systems and over time.
1. Make sure tests are focused and not doing "extra work" (for example, needlessly creating the test subject more than once in an individual test).
(for example, `app/assets/javascripts/ide/__mocks__`). **Don't do this.** We want to keep all of our test-related code in one place (the `spec/` folder).
a [Jest mock for the package `monaco-editor`](https://gitlab.com/gitlab-org/gitlab/-/blob/b7f914cddec9fc5971238cdf12766e79fa1629d7/spec/frontend/__mocks__/monaco-editor/index.js#L1).
This mock is helpful because we don't want any unmocked requests to pass any tests. Also, we are able to inject some test helpers such as `axios.waitForAll`.
This mock is helpful because the module itself uses the AMD format, which webpack understands but which is incompatible with the Jest environment. This mock doesn't remove any behavior; it only provides a compatible wrapper.
This mock is helpful because the Monaco package is completely incompatible with a Jest environment. In fact, webpack requires a special loader to make it work. This mock makes the package usable in Jest.
For example, when testing the difference between an empty search string and a non-empty search string, the use of the array block syntax with the pretty print option would be preferred. That way the differences between an empty string (`''`) and a non-empty string (`'search string'`) would be visible in the spec output. Whereas with a template literal block, the empty string would be shown as a space, which could lead to a confusing developer experience.
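For example, here is a self-contained sketch of the array block syntax; the `search` function is a stand-in for real application code:

```javascript
// Stand-in function under test: returns one result per word in the term.
const search = (term) => term.split(' ').filter(Boolean);

// `%p` pretty-prints the value, so '' and 'search string' stay distinguishable in the output.
describe.each([
  ['', 0],
  ['search string', 2],
])('when the search term is %p', (searchTerm, expectedCount) => {
  it(`returns ${expectedCount} results`, () => {
    expect(search(searchTerm)).toHaveLength(expectedCount);
  });
});
```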
RSpec runs complete [feature tests](testing_levels.md#frontend-feature-tests), while the Jest directories contain [frontend unit tests](testing_levels.md#frontend-unit-tests), [frontend component tests](testing_levels.md#frontend-component-tests), and [frontend integration tests](testing_levels.md#frontend-integration-tests).
Before May 2018, `features/` also contained feature tests run by Spinach. These tests were removed from the codebase in May 2018 ([#23036](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/23036)).
The Axios Utils mock module located in `spec/frontend/__helpers__/mocks/axios_utils.js` contains two helper methods for Jest tests that spawn HTTP requests.
- `waitFor(url, callback)`: Runs `callback` after a request to `url` finishes (either successfully or unsuccessfully).
- `waitForAll(callback)`: Runs `callback` once all pending requests have finished. If no requests are pending, runs `callback` on the next tick.
Both functions run `callback` on the next tick after the requests finish (using `setImmediate()`), to allow any `.then()` or `.catch()` handlers to run.
Check an example in [`spec/frontend/alert_management/components/alert_details_spec.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/ac1c9fa4c5b3b45f9566147b1c88fd1339cd7c25/spec/frontend/alert_management/components/alert_details_spec.js#L32).
Some regressions only affect a specific browser version. We can install and test in particular browsers with either Firefox or BrowserStack using the following steps:
You can use it directly through the [live app](https://www.browserstack.com/live) or you can install the [Chrome extension](https://chrome.google.com/webstore/detail/browserstack/nkihdmlheodkdfojglpcjjmioefjahjb) for easy access.
1. Drag and drop the application to any folder other than the `Applications` folder.
1. Rename the application to something like `Firefox_Old`.
1. Move the application to the `Applications` folder.
1. Open up a terminal and run `/Applications/Firefox_Old.app/Contents/MacOS/firefox-bin -profilemanager` to create a new profile specific to that Firefox version.
1. Once the profile has been created, quit the app, and run it again like normal. You now have a working older Firefox version.
[Jest snapshot tests](https://jestjs.io/docs/snapshot-testing) are a useful way to prevent unexpected changes to the HTML output of a given component. They should **only** be used when other testing methods (such as asserting elements with `vue-test-utils`) do not cover the required use case. To use them within GitLab, there are a few guidelines that should be highlighted:
- Pay attention to the output of the snapshot; otherwise, it doesn't provide any real value. This usually involves reading the generated snapshot file as you would read any other piece of code.
Think of a snapshot test as a simple way to store a raw `String` representation of what you've put into the item being tested. This can be used to evaluate changes in a component, a store, a complex piece of generated output, etc. You can see more in the list below for some recommended `Do's and Don'ts`.
Jest provides a great set of docs on [best practices](https://jestjs.io/docs/snapshot-testing#best-practices) that we should keep in mind when creating snapshots.
A snapshot is purely a stringified version of what you ask to be tested on the left-hand side of the function call. This means any change you make to the formatting of the string has an impact on the outcome. This process is done by leveraging serializers for an automatic transform step. For Vue, this is already taken care of by the `vue-jest` package, which offers the proper serializer.
Should the outcome of your spec be different from what is in the generated snapshot file, you'll be notified about it by a failing test in your test suite.
**Pros**

- Provides a good warning against accidental changes of important HTML structures
- Ease of setup
**Cons**
- Lacks the clarity or guard rails that `vue-test-utils` provides by finding elements and asserting their presence directly
- Creates unnecessary noise when updating components purposefully
- High risk of taking a snapshot of bugs, which then turns the tests against us since the test will now fail when fixing the issue
- No meaningful assertions or expectations within snapshots makes them harder to reason about or replace
- When used with dependencies like [GitLab UI](https://gitlab.com/gitlab-org/gitlab-ui), it creates fragility in tests when the underlying library changes the HTML of a component we are testing
### When to use
**Use snapshots when**
- Protecting critical HTML structures so they don't change by accident
- Asserting JS object or JSON outputs of complex utility functions
### When not to use
**Don't use snapshots when**
- Tests could be written using `vue-test-utils` instead
- Asserting the logic of a component
- Predicting data structure(s) outputs
- There are UI elements outside of the repository (think of GitLab UI version updates)
### Examples
As you can see, the cons of snapshot tests generally far outweigh the pros. To illustrate this better, this section shows a few examples of when you might be tempted to
use snapshot testing and why they are not good patterns.
#### Example #1 - Element visibility
When testing element visibility, favor using `vue-test-utils` (VTU) to find a given component and then a basic `.exists()` method call on the VTU wrapper. This provides better readability and more resilient testing. If you look at the examples below, notice how the assertions on the snapshots do not tell you what you are expecting to see. We are relying entirely on the `it` description to give us context, and on the assumption that the snapshot has captured the desired behavior.
```vue
<template>
  <my-component v-if="isVisible" />
</template>
```
Bad:
```javascript
it('hides the component', () => {
  createComponent({ props: { isVisible: false } });

  expect(wrapper.element).toMatchSnapshot();
});

it('shows the component', () => {
  createComponent({ props: { isVisible: true } });

  expect(wrapper.element).toMatchSnapshot();
});
```
Good:
```javascript
it('hides the component', () => {
  createComponent({ props: { isVisible: false } });

  expect(findMyComponent().exists()).toBe(false);
});

it('shows the component', () => {
  createComponent({ props: { isVisible: true } });

  expect(findMyComponent().exists()).toBe(true);
});
```
Not only that, but imagine having passed the wrong prop to your component so that the visibility is wrong: the snapshot test would still pass, because you would have captured the HTML **with the issue**. Unless you double-checked the output of the snapshot, you would never know that your test is broken.
#### Example #2 - Presence of text
Finding text within a component is easy using the `vue-test-utils` method `wrapper.text()`. However, there are some cases where it might be tempting to use snapshots instead, when the value returned has a lot of inconsistent spacing due to formatting or HTML nesting.
In these instances, it is better to assert each string individually and make multiple assertions than to use a snapshot to ignore the spaces. This is because any change to the DOM layout will fail the snapshot test even if the text is still perfectly formatted.
```javascript
it('renders the expected messages', () => {
  expect(findText().text()).toBe("My first message")
  expect(findOtherText().text()).toBe("My second message")
})
```
#### Example #3 - Complex HTML
When we have very complex HTML, we should focus on asserting specific, sensitive, and meaningful points rather than capturing it as a whole. The value of a snapshot test is to **warn developers** that they might have accidentally changed an HTML structure that they did not intend to change. If the output of the change is hard to read, which is often the case with complex HTML output, then is the signal that something changed, by itself, sufficient? And if it is, can it be accomplished without snapshots?
A good example of a complex HTML output is `GlTable`. Snapshot testing might feel like a good option since you can capture rows and columns structure, but we should instead try to assert text we expect or count the number of rows and columns manually.
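For example, here is a sketch of such manual assertions; the finders and cell content are illustrative:

```javascript
it('renders a row for each pipeline with its status', () => {
  const rows = wrapper.findAll('tbody tr');

  expect(rows).toHaveLength(2);
  expect(rows.at(0).text()).toContain('passed');
  expect(rows.at(1).text()).toContain('failed');
});
```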
Although more verbose, this now means that our tests are not going to break if `GlTable` changes its internal implementation, and we communicate to other developers (or ourselves in six months) what is important to preserve when refactoring or adding to our table.
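To make the next part concrete, consider a minimal snapshot test like the one below. The `prettifyName` helper is hypothetical; only the test name and its output (shown in the snapshot file that follows) come from this document.

```javascript
// Hypothetical helper under test.
const prettifyName = (name) => `Sir ${name} the Third`;

it('makes the name look pretty', () => {
  expect(prettifyName('Homer Simpson')).toMatchSnapshot();
});
```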
When this test runs for the first time, a fresh `.snap` file is created. It looks something like this:
```txt
// Jest Snapshot v1, https://goo.gl/fbAQLP
exports[`makes the name look pretty`] = `
Sir Homer Simpson the Third
`
```
Now, every time you run this test, the new output is evaluated against the previously created snapshot. This should highlight the fact that it's important to understand the content of your snapshot file and treat it with care. Snapshots lose their value if their output is too big or complex to read, so keep snapshots isolated to human-readable items that can either be evaluated in a merge request review or are guaranteed to never change.
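For instance, consider a test that takes two snapshots in a single `it` block (the component setup and the second assertion are illustrative):

```javascript
it('renders the empty state', () => {
  createComponent(); // hypothetical setup helper that assigns `wrapper`

  expect(wrapper.element).toMatchSnapshot();
  expect(wrapper.find('img').attributes('src')).toMatchSnapshot();
});
```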
The above test creates two snapshots. It's important to decide which of the snapshots provides more value for codebase safety. That is, if one of these snapshots changes, does that highlight a possible break in the codebase? This can help catch unexpected changes if something in an underlying dependency changes without our knowledge.
A [feature test](testing_levels.md#white-box-tests-at-the-system-level-formerly-known-as-system--feature-tests), also known as `white-box testing`, is a test that spawns a browser and has Capybara helpers. This means the test can:
- Locate an element in the browser.
- Click that element.
- Call the API.
Feature tests are expensive to run. You should make sure that you **really want** this type of test before running one.
All of our feature tests are written in Ruby, but they often end up being written by JavaScript engineers, as they implement the user-facing feature. So, the following section assumes no prior knowledge of Ruby or Capybara and provides clear guidelines on when and how to use these tests.
### When to use feature tests
You should use a feature test when the test:
- Is across multiple components.
- Requires that a user navigate across pages.
- Is submitting a form and observing results elsewhere.
- Would require a huge amount of mocking and stubbing with fake data and components if done as a unit test.
Feature tests are especially useful when you want to test:
- That multiple components are working together successfully.
- Complex API interactions. Feature tests interact with the API, so they are slower but do not need any level of mocking or fixtures.
### When not to use feature tests
You should use `jest` and `vue-test-utils` unit tests instead of a feature test if you can get the same test results from these methods. Feature tests are quite expensive to run.
You should use a unit test if:
- The behavior you are implementing is all in one component.
- You can simulate other components' behavior to trigger the desired effect.
- You can already select UI elements in the virtual DOM to trigger the desired effects.
Also, if a behavior in your new code needs multiple components to work together, you should consider testing your behavior higher in the component tree. For example, let's say that we have a component called `ParentComponent`, sketched below (only `internalData` and the child component names come from this section; the event and prop names are illustrative):
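```vue
<template>
  <div>
    <!-- Sketch only: the event name and prop binding are assumptions. -->
    <child-component-1 @child-event="updateInternalData" />
    <child-component-2 :data="internalData" />
  </div>
</template>
```

In this sketch, when `ChildComponent1` emits its event: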
- `ParentComponent` changes its `internalData` value.
- `ParentComponent` passes the props down to `ChildComponent2`.
You can use a unit test instead by:
- From inside the `ParentComponent` unit test file, emitting the expected event from `childComponent1`.
- Making sure the prop is passed down to `childComponent2`.
Then each child component's unit tests cover what happens when the event is emitted and when the prop changes.
This example also applies at a larger scale and with deeper component trees. It is definitely worth using unit tests and avoiding the extra cost of feature tests if you can:
- Confidently mount child components.
- Emit events or select elements in the virtual DOM.
- Get the test behavior that you want.
### Where to create your test
Feature tests live in the `spec/features` folder. You should look for existing files that can test the page you are adding a feature to. Within that folder, you can locate your section. For example, if you wanted to add a new feature test for the pipeline page, you would look in `spec/features/projects/pipelines` and see if the test you want to write exists there.
### How to run a feature test
1. Make sure that you have a working GDK environment.
1. Start your `gdk` environment with `gdk start` command.
1. In your terminal, run:
```shell
bundle exec rspec path/to/file:line_of_my_test
```
You can also prefix this command with `WEBDRIVER_HEADLESS=0`, which runs the test in an actual browser on your computer that you can watch. This is very useful for debugging.
### How to write a test
#### Basic file structure
1. Make all string literals unchangeable
In all feature tests, the very first line should be:
```ruby
# frozen_string_literal: true
```
This line appears in every Ruby file and makes all string literals immutable. There are also some performance benefits, but those are beyond the scope of this section.
1. Import dependencies.
You should import the modules you need. You will most likely always need to require `spec_helper`:
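```ruby
require 'spec_helper'
```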
1. Create a global scope for RSpec to define our tests, just like what we do in Jest with the initial `describe` block.
Then, you need to create the very first `RSpec` scope.
```ruby
RSpec.describe 'Pipeline', :js do
...
end
```
What is different, though, is that just like everything in Ruby, this is actually a class, which means that right at the top you can `include` modules that you need for your test. For example, you could include `RoutesHelpers` to navigate more easily.
```ruby
RSpec.describe 'Pipeline', :js do
include RoutesHelpers
...
end
```
After all of this implementation, we have a file that looks something like this:
```ruby
# frozen_string_literal: true
require 'spec_helper'
RSpec.describe 'Pipeline', :js do
include RoutesHelpers
end
```
#### Seeding data
Each test is in its own environment and so you must use a factory to seed the required data. To continue on with the pipeline example, let's say that you want a test that takes you to the main pipeline page, which is at the route `/namespace/project/-/pipelines/:id/`.
Most feature tests at least require you to create a user, because you want to be signed in. You can skip this step if you don't have to be signed in, but as a general rule, you should **always create a user unless you are specifically testing a feature as seen by an anonymous user**. This makes sure that you explicitly set a permission level, which you can adjust in the test as needed to exercise different permission levels as the section changes. To create a user:
```ruby
let(:user) { create(:user) }
```
This creates a variable that holds the newly created user, and we can use `create` because we required `spec_helper`.
However, we have not done anything with this user yet because it's just a variable. So, in the `before do` block of the spec, we could sign in with the user so that every spec starts with an authenticated user.
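A minimal sketch of that setup, using the `sign_in` helper that existing feature specs rely on:

```ruby
let(:user) { create(:user) }

before do
  sign_in(user)
end
```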
Now that we have a user, we should look at what else we'd need before asserting anything on a pipeline page. If you look at the route `/namespace/project/-/pipelines/:id/` we can determine we need a project and a pipeline.
So we'd create a project and pipeline, and link them together. Usually in factories, the child element requires its parent as an argument. In this case, a pipeline is a child of a project. So we can create the project first, and then when we create the pipeline, we are pass the project as an argument which "binds" the pipeline to the project. A pipeline is also owned by a user, so we need the user as well. For example, this creates a project and a pipeline:
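```ruby
# A sketch using common factories; check the existing factories for the exact
# names and traits you need.
let(:project)  { create(:project, :repository) }
let(:pipeline) { create(:ci_pipeline, project: project, user: user) }
```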
There are many factories that already exist, so make sure to look at other existing files to see if what you need is available.
#### Navigation
You can navigate to a page by using the `visit` method and passing the path as an argument. Rails automatically generates helper paths, so make sure to use these instead of a hardcoded string. They are generated using the route model, so if we want to go to a pipeline, we'd use:
```ruby
visit project_pipeline_path(project, pipeline)
```
When navigating or making asynchronous calls through the UI, make sure to use `wait_for_requests` before proceeding with further instructions.
If you want to follow a link, then there is `click_link`:
```ruby
click_link 'Text inside the link tag'
```
You can use `fill_in` to fill input and form elements. The first argument is the selector; the second is `with:`, which is the value to pass in.
```ruby
fill_in 'current_password', with: '123devops'
```
Alternatively, you can use the `find` selector paired with `send_keys` to add keys in a field without removing previous text, or `set` which completely replaces the value of the input element.
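For example (the selector and text are illustrative):

```ruby
# Appends to the current value of the field.
find('[data-testid="pipeline-search"]').send_keys(' and more text')

# Replaces the value of the field entirely.
find('[data-testid="pipeline-search"]').set('a new search')
```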
To assert anything in a page, you can always access the `page` variable, which is automatically defined and refers to the page document. This means you can expect the `page` to have certain components like selectors or content. Here are a few examples:
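```ruby
# The content and selectors are illustrative.
expect(page).to have_content('Pipeline')
expect(page).to have_selector('[data-testid="pipeline-header"]')
expect(page).not_to have_content('This pipeline was deleted')
```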
By default, every feature flag is enabled **regardless of the YAML definition or the flags you've set manually in your GDK**. To test when a feature flag is disabled, you must manually stub the flag, ideally in a `before do` block.
```ruby
stub_feature_flags(my_feature_flag: false)
```
If you are stubbing an `ee` feature flag, then use:
```ruby
stub_licensed_features(my_feature_flag: false)
```
### Debugging
You can run your spec with the prefix `WEBDRIVER_HEADLESS=0` to open an actual browser. However, the spec goes through the commands quickly and leaves you no time to look around.
To avoid this problem, you can write `binding.pry` on the line where you want Capybara to stop execution. Execution pauses there, and you can use the browser normally. To understand why you cannot find certain elements, you can:
- Select elements.
- Use the console and network tab.
- Execute selectors inside the browser console.
Inside the terminal where Capybara is running, you can also execute `next`, which goes line by line through the test. This way you can check every single interaction one by one to see what might be causing an issue.