Parallel Execution
Parallel test execution in ContextQA lets multiple test cases run simultaneously, cutting total plan duration and surfacing failures faster.
Who is this for? SDETs, QA managers, and engineering managers who want to reduce test plan run time by distributing test cases across multiple concurrent browser sessions.
Parallel execution: A test plan configuration mode where ContextQA distributes independent test cases across multiple concurrent execution slots, reducing total wall-clock time proportionally to the degree of parallelism.
Waiting for hundreds of sequential test cases to finish defeats the purpose of continuous testing. ContextQA solves this through parallel execution slots controlled by the Parallel Nodes setting on every test plan. This page explains how parallelism works, how to configure it, and how to reason about dependency chains that must stay sequential.
What is parallel test execution?
Parallel test execution means ContextQA launches more than one test case at the same time inside a single test plan run. ContextQA orchestrates execution through its gcpn1.contextqa.com service, which manages a pool of Playwright-backed browser sessions. When Parallel Nodes is set to a value greater than one, ContextQA assigns independent test cases to separate sessions and runs them concurrently. The final plan result is computed only after every session completes, then aggregated into a single pass/fail report.
ContextQA does not share browser state between parallel slots. Each slot receives its own browser context, cookies, and local storage — isolation is total. This is intentional: a flaky session in slot 2 cannot corrupt slot 1.
How to configure parallel nodes in a test plan
Open Test Plans in the left navigation.
Select the test plan you want to configure.
Click the Settings tab inside the plan detail view.
Locate the Parallel Nodes field.
Enter a numeric value between 1 and the maximum allowed by your subscription tier.
Click Save to apply the change.
The new parallelism value takes effect on the next execution. In-flight runs are not affected.
Parallel nodes reference
Nodes | Behavior | Recommended for
1 | Fully sequential; one test case runs at a time | Smoke suites where strict ordering is required, or plans with many prerequisite chains
2 | Two test cases run simultaneously | Small teams, shared staging environment with limited capacity
4 | Four concurrent sessions | Standard CI pipelines with moderate test suites (50–200 cases)
8 | Eight concurrent sessions | Large regression suites (200+ cases) where execution time is the primary constraint
16+ | Maximum throughput | Elite tier; nightly full-regression runs; dedicated execution environments
Higher values reduce wall-clock time but increase concurrent load on the application under test (AUT). If your AUT's staging environment has rate limits, connection pool limits, or shared database locks, aggressive parallelism can cause failures that would not appear in production. Start at 4, measure, and scale up only when the AUT can sustain the load.
Trade-offs: speed versus resource pressure
Every parallel slot opens a full Playwright browser session with its own network connection. The practical implications are:
Faster results. A 200-case suite that takes 40 minutes sequentially may complete in 10 minutes at parallelNode 4 — assuming no dependency constraints. The theoretical speedup is sequential_time / parallelNode, though real-world gains are slightly lower due to scheduling overhead and test cases with different durations.
Increased AUT load. Eight parallel sessions create eight simultaneous users. If your staging environment cannot handle that concurrency, some requests will time out or return errors that are infrastructure failures, not application bugs. ContextQA's AI root cause analysis will often classify these as environment issues, but the most reliable fix is to right-size parallelism to the AUT's capacity.
Increased execution resource consumption. Each slot consumes CPU and memory on the ContextQA execution infrastructure. Premium and Elite tiers offer higher slot ceilings because they have proportionally more allocated execution capacity. Attempting to set parallelNode above your tier's ceiling will be capped silently at the maximum allowed value — ContextQA will not error, it will run at the cap.
Network artifact volume. Every parallel session produces its own screenshot set, WebM video, HAR file, and console log. At high parallelism, storage and download times for evidence packages grow linearly. Plan accordingly if you are programmatically downloading artifacts after every run.
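As a rough model of the speed trade-off described above, the sketch below simulates slot filling with a greedy policy. The durations, case counts, and scheduling strategy are illustrative assumptions, not ContextQA internals:

```python
# Illustrative sketch: estimate wall-clock time at a given parallelism.
# Greedy assignment: each case goes to the slot that frees up first.

def estimated_runtime(case_durations, parallel_nodes):
    """Return simulated wall-clock time for the whole plan."""
    slots = [0.0] * parallel_nodes
    for d in sorted(case_durations, reverse=True):
        i = slots.index(min(slots))  # slot that becomes free earliest
        slots[i] += d
    return max(slots)

# 200 uniform 12-second cases: ~4x speedup at 4 nodes.
sequential = estimated_runtime([12.0] * 200, 1)   # 2400.0 seconds
parallel = estimated_runtime([12.0] * 200, 4)     # 600.0 seconds
print(sequential / parallel)                      # 4.0
```

With non-uniform durations the measured speedup falls below the theoretical `sequential_time / parallelNode` ratio, because the longest cases dominate the final slot to finish.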
Dependency chains and sequential execution within a parallel plan
ContextQA respects prerequisite relationships between test cases. When test case B declares test case A as a prerequisite, ContextQA will not start B until A has completed and passed — regardless of the parallelNode setting.
This means: in a plan with parallelNode 8 and a chain A → B → C → D, those four cases always run sequentially in that order. The eight slots are used for other independent test cases concurrently. Dependency chains are the exception, not the rule; most test cases in a well-designed suite are independent.
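The interaction between slots and prerequisite chains can be sketched as a round-based dispatcher. The case names, the round model, and the function itself are illustrative assumptions; ContextQA's actual orchestrator is not public:

```python
# Illustrative sketch: dispatch cases in rounds, starting only cases
# whose prerequisites have already completed and passed.

def dispatch_order(cases, prereqs, parallel_nodes):
    """Return the batches of cases started together, in order."""
    done, rounds = set(), []
    remaining = list(cases)
    while remaining:
        ready = [c for c in remaining
                 if all(p in done for p in prereqs.get(c, []))]
        batch = ready[:parallel_nodes]  # fill up to parallel_nodes slots
        if not batch:
            raise ValueError("cyclic or unsatisfiable prerequisites")
        rounds.append(batch)
        done.update(batch)
        remaining = [c for c in remaining if c not in batch]
    return rounds

# Chain A -> B -> C -> D plus four independent cases, 8 slots:
rounds = dispatch_order(
    ["A", "B", "C", "D", "E", "F", "G", "H"],
    {"B": ["A"], "C": ["B"], "D": ["C"]},
    parallel_nodes=8,
)
print(rounds)  # [['A', 'E', 'F', 'G', 'H'], ['B'], ['C'], ['D']]
```

Note how the independent cases E–H run alongside A in the first round, while B, C, and D each wait for their prerequisite, exactly the sequential-within-parallel behavior described above.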
Designing for maximum parallelism:
Keep test cases self-contained. Each case should set up its own preconditions (log in, navigate to the start URL) rather than relying on state left by a previous case.
Use prerequisite chains only when the dependency is semantically required (for example, a test that verifies an edit can only run after a test that verifies creation).
Group cases with many prerequisites into dedicated sequential suites, and run independent cases in a high-parallelism plan.
How results are aggregated
After all parallel slots complete, ContextQA computes the test plan result as follows:
PASSED: Every test case in the plan passed.
FAILED: One or more test cases failed (regardless of pass count).
PARTIAL: Execution was interrupted before all cases completed.
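The three aggregation rules can be expressed as a small function. The status strings mirror the rules above; the function signature and the `interrupted` flag are illustrative, not a ContextQA API:

```python
# Illustrative sketch of the plan-level aggregation rules.

def aggregate(case_results, interrupted=False):
    """case_results: list of 'passed'/'failed', one entry per test case."""
    if interrupted:
        return "PARTIAL"   # run stopped before all cases completed
    if "failed" in case_results:
        return "FAILED"    # any failure fails the plan
    return "PASSED"        # every case passed

print(aggregate(["passed"] * 3))                 # PASSED
print(aggregate(["passed", "failed"]))           # FAILED
print(aggregate(["passed"], interrupted=True))   # PARTIAL
```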
The aggregate result is what ContextQA sends to CI/CD integrations (GitHub Actions, Jenkins, GitLab CI) and to Slack or Jira notifications. The individual case results are always visible in the Execution Report regardless of the aggregate.
The Execution Dashboard in the Analytics section shows per-case results, timing, and failure classification for every case in the run. Parallel cases appear with overlapping timestamps in the timeline view, making it straightforward to verify that parallelism is actually occurring.
Frequently Asked Questions
Does parallelNode affect mobile test execution?
Yes, the same Parallel Nodes setting controls both web and mobile slots within a mixed plan. Mobile slots consume real or simulated device capacity. Check your mobile concurrency limit using the get_mobile_concurrency MCP tool before setting high parallelNode values on plans that include mobile test cases.
Can I set different parallelism for different suites within one plan?
No. The Parallel Nodes setting applies to the entire test plan. If you need different parallelism for different groups of test cases, place them in separate test plans and trigger both from your CI pipeline.
What happens if a test case in a parallel slot crashes the browser session?
ContextQA marks that test case as failed and releases the slot for the next queued case. The session crash does not block other parallel slots. The crashed session's HAR, console log, and partial video are still captured and available in the execution report.
Why are some test cases still running sequentially even though parallelNode is set to 8?
The most common cause is prerequisite chains. Check whether the cases running sequentially have Prerequisite relationships configured. Removing unnecessary prerequisites, or restructuring tests to be self-contained, will allow ContextQA to dispatch them to parallel slots.