Best Practices for Writing Effective and Maintainable QA Tests
This article provides an advanced look at best practices for QA testing, building upon our foundational testing guide.
Table of Contents
- Structure and Naming Conventions
- Test Implementation Best Practices
- Prioritize API for Test Setup
- Avoid Redundant Expectations
- Use aggregate_failures for Multiple Assertions
- Isolate Actions in raise_error Blocks
- Optimize for Parallelization
- Use let vs. Instance Variables Strategically
- Restrict UI Operations in Hooks
- Ensure a Clean Login State
- Tag Tests Requiring Admin Privileges
- Prefer the Commit Resource over ProjectPush
- Blur Elements Correctly
- Write Efficient expect Statements
- Use the Logger, Not puts
Structure and Naming Conventions
Class and Module Naming
Our QA framework utilizes Zeitwerk for automatic loading of classes and modules. By default, Zeitwerk's inflector converts filenames written in snake_case to PascalCase for class or module names. It is highly recommended to adhere to this convention to prevent the need for manual inflection overrides.
If a unique inflection rule is necessary, it can be defined in the qa.rb file within the loader.inflector.inflect method.
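For example, a custom inflection can be registered in qa.rb like this (the override shown is hypothetical):

```ruby
# Keys are file basenames; values are the constant names Zeitwerk
# should expect. Without this override, oauth.rb would be expected
# to define Oauth rather than OAuth.
loader.inflector.inflect(
  'oauth' => 'OAuth'
)
```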
Linking Tests to Test Cases
Every automated test should be linked to a formal test case. This is achieved by adding an RSpec metadata tag, testcase, to your test definition.
When a test is associated with its test case URL, our reporting scripts can automatically update the test case status and corresponding results issue when run in a pipeline with reporting enabled. If a results issue doesn't exist, the script creates one.
Here is how you can link a test to its test case:
```ruby
RSpec.describe 'Stage' do
  describe 'General description of the feature under test' do
    it 'test name', testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/:test_case_id' do
      # ... test implementation ...
    end

    it 'another test', testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/:another_test_case_id' do
      # ... test implementation ...
    end
  end
end
```
Handling Shared or Dynamic Tests
For tests that do not map one-to-one to a single test case, such as parallelized, templated, or shared examples, you must pass the test case URL as an argument.
Consider these shared examples:
```ruby
RSpec.shared_examples 'unselected maintainer' do |testcase|
  it 'user fails to push', testcase: testcase do
    # ...
  end
end

RSpec.shared_examples 'selected developer' do |testcase|
  it 'user pushes and merges', testcase: testcase do
    # ...
  end
end
```
When these examples are used, the specific test case URL is passed in, ensuring each variation is properly tracked.
```ruby
RSpec.describe 'Create' do
  describe 'Restricted protected branch push and merge' do
    context 'when only one user is allowed to merge and push' do
      # ...
      it_behaves_like 'unselected maintainer', 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347775'
      it_behaves_like 'selected developer', 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347774'
    end
  end
end
```
Guidelines for Test Naming
Test names should be structured to form a clear, readable sentence that describes the test's purpose.
Recommended Structure:
```ruby
# The `RSpec.describe` block should name the DevOps Stage
RSpec.describe 'Plan', product_group: :knowledge do
  # The `describe` block specifies the feature under test
  describe 'wiki content creation' do
    # The `context` block outlines the specific condition
    context 'in a project' do
      # The `it` block states the expected outcome
      it 'adds a home page' do
        # ...
      end
    end
  end
end
```
This structure generates a test name like: Plan wiki content creation in a project adds a home page.
- Keep descriptions concise.
- The top-level RSpec.describe should be the DevOps stage.
- Use context blocks starting with words like when, with, for, etc.
- The it block defines the specific pass/fail criterion.
Test Implementation Best Practices
Prioritize API for Test Setup
Whenever possible, use API calls to create and configure the resources needed for your tests. Fabricating resources via the API is significantly faster and more reliable than using the UI, saving both time and execution costs.
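For example, in our framework a project can be fabricated through the API in a few lines (a minimal sketch; the attribute values are illustrative):

```ruby
# One API call replaces navigating the UI through the
# project creation form.
project = Resource::Project.fabricate_via_api! do |project|
  project.name = 'qa-test-project'
  project.description = 'Created via the API for test setup'
end
```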
Avoid Redundant Expectations
Keep tests focused by only including expect() statements that are directly related to the behavior being tested. Extraneous assertions make tests slower and harder to maintain.
Good Example:
```ruby
# Good: Focuses on the core expectation
Flow::Login.sign_in

Page::Main::Menu.perform do |menu|
  expect(menu).to be_signed_in
end
```
Bad Example:
```ruby
# Bad: Contains unnecessary and redundant checks
Flow::Login.sign_in(as: user)

Page::Main::Menu.perform do |menu|
  expect(menu).to be_signed_in
  expect(page).to have_content(user.name) # Redundant, already covered by be_signed_in
  expect(menu).to have_element(:nav_bar)  # Unrelated to the login test's purpose
end
```
Use aggregate_failures for Multiple Assertions
When a single test legitimately requires multiple expectations, use aggregate_failures. This ensures that all assertions are run and all failures are reported, rather than the test stopping at the first failure.
Good Example (consecutive expectations):
```ruby
aggregate_failures 'testing search results' do
  expect(search).to have_file_in_project(template[:file_name], project.name)
  expect(search).to have_file_with_content(template[:file_name], content[0..33])
end
```
If the expectations are separated by other actions, attach the :aggregate_failures metadata to the test definition.
Good Example (separated expectations):
```ruby
it 'searches', :aggregate_failures do
  # ...
  expect(search).to have_file_in_project(...)
  search.switch_to_code
  expect(search).to have_file_with_content(...)
end
```
Isolate Actions in raise_error Blocks
Avoid wrapping multiple actions inside a single expect { ... }.to raise_error block. Doing so can obscure the true source of a failure and make debugging difficult. Instead, isolate the specific action that is expected to raise an error.
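For example (a sketch using hypothetical resource and error names):

```ruby
# Bad: if fabricate_via_api! raises, the failure is wrongly
# attributed to the expected error
expect do
  user = Resource::User.fabricate_via_api!
  user.remove_via_api!
end.to raise_error(ExpectedApiError)

# Good: only the action under test sits inside the block
user = Resource::User.fabricate_via_api!
expect { user.remove_via_api! }.to raise_error(ExpectedApiError)
```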
Optimize for Parallelization
Our test framework parallelizes execution at the file level. To maximize this benefit, split tests into separate files whenever possible. Only group tests in a single file if they share a computationally expensive setup that would be inefficient to run multiple times.
Use let vs. Instance Variables Strategically
While let is generally preferred for defining variables, it re-creates the resource for each test example. For expensive resources that can be shared across multiple tests within a file, use an instance variable defined in a before(:all) block to improve performance.
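For example, an expensive resource can be fabricated once and reused across the file (a sketch; with let, the project would be re-created for every example):

```ruby
# Fabricated a single time for all examples in this file,
# instead of once per example as `let(:project)` would do.
before(:all) do
  @project = Resource::Project.fabricate_via_api! do |project|
    project.name = 'shared-expensive-project'
  end
end
```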
Restrict UI Operations in Hooks
Limit before(:context) and after hooks to API calls and non-UI operations. UI actions in these hooks can interfere with automatic screenshot capture on failure, as the screenshot is taken in a separate after hook. Performing UI actions can change the state of the page, leading to a screenshot that does not reflect the actual point of failure.
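For example, cleanup in a hook should go through the API (a sketch; remove_via_api! stands in for whatever API-based removal the resource supports):

```ruby
after(:all) do
  # API call only: no page interaction, so a failure screenshot
  # still reflects the page state at the actual point of failure.
  @project.remove_via_api!
end
```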
Ensure a Clean Login State
Tests should always start from a signed-out state. If UI actions are necessary within an after(:all) block, ensure the session is properly logged out at the end to prevent subsequent tests from failing.
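For example (a sketch; the exact sign-out method on the page object may differ):

```ruby
after(:all) do
  # ... UI steps required for cleanup ...
  # Leave the browser signed out so the next test starts clean.
  Page::Main::Menu.perform(&:sign_out)
end
```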
Tag Tests Requiring Admin Privileges
Tests that need administrator access should be tagged with the :requires_admin RSpec metadata. This allows us to exclude these tests from test suites run against production or other sensitive environments.
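For example (the test name is hypothetical):

```ruby
# Excluded from suites run against environments where admin
# access is unavailable, such as production.
it 'updates an instance-level setting', :requires_admin do
  # ...
end
```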
Prefer the Commit Resource over ProjectPush
For creating commits, use the Resource::Repository::Commit object, which interacts with the API. The ProjectPush object uses shell commands and should be reserved for tests specifically targeting SSH or Git CLI functionality.
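For example, a commit adding a file can be created via the API (a sketch; the attribute values are illustrative):

```ruby
Resource::Repository::Commit.fabricate_via_api! do |commit|
  commit.project = project
  commit.commit_message = 'Add README.md'
  commit.add_files([{ file_path: 'README.md', content: 'Hello!' }])
end
```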
Blur Elements Correctly
To blur a focused element (like an input field), click another non-interactive element on the page. Avoid clicking the body element, as this can trigger unintended actions. If the UI is obscured by an overlay, use the click_element_coordinates method to simulate a click at a specific point.
Write Efficient expect Statements
For asynchronous checks, use eventually_ matchers to wait for a condition to be met within a defined timeout.
```ruby
expect { async_value }.to eventually_eq(value).within(max_duration: 120)
```
For negative checks, create custom negatable matchers. A standard not_to have_... check can introduce unnecessary delays because it waits for the full timeout period before confirming an element's absence. A custom matcher with a match_when_negated block allows for an immediate pass as soon as the element is confirmed to be absent, making the test much faster.
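For example, a negatable matcher might look like this (a sketch assuming the page object exposes has_job? and has_no_job? predicates):

```ruby
RSpec::Matchers.define :have_job do |job_name|
  match do |pipeline|
    pipeline.has_job?(job_name)
  end

  # Passes as soon as the job is confirmed absent, rather than
  # waiting out the full timeout as `not_to have_job` would.
  match_when_negated do |pipeline|
    pipeline.has_no_job?(job_name)
  end
end
```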
Use the Logger, Not puts
Use the built-in logger for outputting information during test execution. It provides essential features like log levels, tagging, and automatic formatting that are not available with a simple puts statement.
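For example (a sketch; QA::Runtime::Logger is the framework's logger):

```ruby
# Output includes a log level and timestamp, unlike a bare puts.
QA::Runtime::Logger.info('Fabricating the project via the API')
QA::Runtime::Logger.debug("Using project path: #{project.full_path}")
```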