Running Tests
How to execute individual test cases, test suites, and test plans in ContextQA, including live execution monitoring, parallel execution, and programmatic triggering via the MCP server.
Who is this for? SDETs, QA managers, and engineering managers who need to run test cases, suites, and plans — manually, on a schedule, or triggered from a CI/CD pipeline.
ContextQA supports three granularities of test execution: a single test case, a test suite, and a full test plan. Each produces an execution record with the same evidence set — screenshots, video, network logs, console logs, and root cause analysis. Executions can be triggered manually from the portal, from CI/CD pipelines via the API, or from AI coding assistants via the MCP server.
Prerequisites
You have at least one test case created and saved.
You have an environment configured with a valid base URL.
For test plan execution: you have a test plan configured with at least one suite and an environment selected.
Running a Single Test Case
Running a single test case is the primary feedback loop during test authoring. Use it to verify a new test case immediately after creation, or to investigate a failing step in isolation.
Steps
Open the test case from Test Development → Test Cases.
In the test case editor, click the Run button (▶) in the top toolbar.
A dialog appears asking you to confirm the execution environment. Select the environment from the dropdown and click Run.
The test is queued immediately. A banner appears at the top of the editor showing the execution status.
Click View Live Execution (or the execution ID link in the banner) to open the live execution viewer.
The test case executes step by step. Each step shows a real-time pass/fail indicator as it completes.
Running a Test Suite
Running a suite executes all test cases within it as a batch. This is the standard way to run a group of related tests during development or before a deployment.
Steps
Navigate to Test Development → Test Suites.
Click on the suite name to open it.
Click the Run Suite button in the suite header.
Select the execution environment from the dialog.
Click Run.
The suite execution begins. All test cases in the suite are queued. Depending on the suite's parallel/sequential setting:
Sequential: test cases execute one at a time in listed order.
Parallel: test cases execute simultaneously in separate browser instances.
A suite execution record appears in Execution History with an aggregate pass/fail status and a breakdown by test case.
Running a Test Plan
A test plan execution is the most complete form of test execution. It runs multiple suites against configured browsers and an environment, and is the entry point for CI/CD automation and scheduled runs.
Steps
Navigate to Test Development → Test Plans.
Click on the test plan name to open it.
Click the Execute button.
A confirmation dialog shows the plan configuration:
Suites included
Target browsers or devices
Selected environment
Click Confirm Execution.
The plan begins executing. All configured suites start based on the plan's parallelism settings. Results are grouped by suite and test case in the execution report.
Live Execution View
When any test executes, clicking the execution link opens the live execution viewer. The viewer updates in real time via a WebSocket connection.
Layout:
Left panel — the full step list with real-time status icons:
✓ Green — step passed
✗ Red — step failed
▶ Blue — step currently executing
○ Grey — step not yet reached
Right panel — a live screenshot of the browser at the current step. The screenshot refreshes after each step completes.
Bottom panel — the network request log, showing each HTTP request as it is made during execution.
The live view remains open after execution completes, allowing you to scroll through screenshots and network entries without leaving the view.
Viewing Results After Execution
When execution completes, click View Detailed Report from the execution summary banner or from the Execution History list.
The detailed report contains:
Execution Summary
Overall status: PASSED / FAILED / PARTIALLY FAILED
Duration: total execution time
Browser and version
Environment used
Number of steps: total, passed, failed, skipped
Step-by-Step Breakdown
Each step is listed with:
Pass/fail status
Step description
Screenshot captured at that step (click to view full size)
Duration for that step
Self-healing indicator (if the step was auto-healed)
For failed steps: the AI-generated root cause analysis
Video Recording
A full MP4 recording of the browser session is embedded in the report. Use the video to see exactly what happened in context — especially useful for failures that are hard to diagnose from static screenshots alone.
Network Log
The complete HAR log is accessible from the Network tab in the report. Each request shows method, URL, status code, response time, and the full request/response body. Use the network log to:
Identify 4xx or 5xx responses that caused UI failures
Verify that the correct API endpoints were called with the correct payloads
Diagnose authentication token expiry
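The triage steps above can be scripted against an exported HAR file. The `log.entries[].response.status` structure used below is part of the standard HAR format; the inline sample data is illustrative only.

```python
import json

def error_requests(har: dict, threshold: int = 400) -> list:
    """Return (status, method, url) for every HAR entry with an error status."""
    return [
        (e["response"]["status"], e["request"]["method"], e["request"]["url"])
        for e in har["log"]["entries"]
        if e["response"]["status"] >= threshold
    ]

# Inline HAR fragment for demonstration; in practice, load the exported file:
# har = json.load(open("execution.har"))
har = {"log": {"entries": [
    {"request": {"method": "GET", "url": "https://example.test/api/items"},
     "response": {"status": 200}},
    {"request": {"method": "POST", "url": "https://example.test/api/login"},
     "response": {"status": 503}},
]}}
print(error_requests(har))  # [(503, 'POST', 'https://example.test/api/login')]
```

Filtering at `threshold=400` surfaces both client (4xx) and server (5xx) errors in one pass; raise it to 500 to isolate server-side failures.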
Console Log
The Console tab shows all browser console entries captured during execution, timestamped and correlated with the active step. Look for console.error entries to find JavaScript exceptions that may explain unexpected UI behavior.
Root Cause Analysis
For any failed step, the Root Cause tab shows the AI-generated analysis. The root cause analysis:
Identifies the most likely reason for the failure (element not found, assertion mismatch, network error, JavaScript exception)
Distinguishes between test-level failures (the step was wrong) and application-level failures (the app has a bug)
Suggests specific corrective actions (update the step description, fix the element reference, investigate the API error)
Parallel Execution
Parallel execution runs multiple test cases or test suites simultaneously across multiple browser instances, reducing total execution time.
Enabling Parallel Execution for a Suite
Open the test suite.
Click the Settings (gear icon).
Set Execution Mode to Parallel.
Save.
Enabling Parallel Execution at the Test Plan Level
Open the test plan.
In the plan settings, set Suite Execution Mode to Parallel to run all suites simultaneously, or Sequential to run suites one at a time.
Save the plan.
Concurrent browser limits are enforced per workspace based on your subscription tier. If all concurrent slots are occupied, new test cases queue and start as slots become available.
Execution from MCP / API
ContextQA exposes execution capabilities through the MCP server, allowing AI coding assistants and CI/CD scripts to trigger and monitor tests programmatically.
Execute a Single Test Case
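A minimal sketch of triggering a single test-case execution over HTTP. The base URL, endpoint path, payload field names, and response shape here are assumptions for illustration, not the documented ContextQA API; consult your workspace's API reference for the real names.

```python
import json
import urllib.request

API_BASE = "https://app.contextqa.com/api/v1"  # hypothetical base URL

def build_execution_request(case_id: str, environment_id: str) -> dict:
    """Assemble the (assumed) payload for a single test-case execution."""
    return {"testCaseId": case_id, "environmentId": environment_id}

def run_test_case(token: str, case_id: str, environment_id: str) -> dict:
    payload = json.dumps(build_execution_request(case_id, environment_id)).encode()
    req = urllib.request.Request(
        f"{API_BASE}/executions",          # hypothetical endpoint
        data=payload,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Assumed response shape: {"executionId": "...", "status": "QUEUED"}
        return json.load(resp)
```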
Execute a Test Suite
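Suite execution follows the same pattern. Again, the endpoint path and payload are illustrative assumptions, not the documented API:

```python
import json
import urllib.request

def suite_execution_url(api_base: str, suite_id: str) -> str:
    """Assemble the (assumed) suite-execution endpoint URL."""
    return f"{api_base}/suites/{suite_id}/execute"

def execute_suite(api_base: str, token: str, suite_id: str,
                  environment_id: str) -> dict:
    body = json.dumps({"environmentId": environment_id}).encode()
    req = urllib.request.Request(
        suite_execution_url(api_base, suite_id),
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # assumed: one execution record for the whole suite
```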
Execute a Test Plan
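Because a test plan already carries its suites, browsers, and environment, triggering it needs nothing beyond the plan identifier. As above, the endpoint path is an illustrative assumption:

```python
import json
import urllib.request

def plan_execution_url(api_base: str, plan_id: str) -> str:
    """Assemble the (assumed) plan-execution endpoint URL."""
    return f"{api_base}/test-plans/{plan_id}/execute"

def execute_plan(api_base: str, token: str, plan_id: str) -> dict:
    # The plan's configuration (suites, browsers, environment) is stored
    # server-side, so the assumed endpoint takes an empty body.
    req = urllib.request.Request(
        plan_execution_url(api_base, plan_id),
        data=b"{}",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```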
Polling for Execution Completion
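A CI job typically polls the execution record until it reaches a terminal state. This sketch assumes a `GET /executions/{id}` endpoint returning `{"status": ...}`, with terminal statuses matching the report summary values above; the real endpoint and enum spellings may differ:

```python
import json
import time
import urllib.request

# Terminal statuses, taken from the report summary; the API's enum may differ.
TERMINAL_STATUSES = {"PASSED", "FAILED", "PARTIALLY FAILED"}

def is_terminal(status: str) -> bool:
    return status in TERMINAL_STATUSES

def wait_for_execution(api_base: str, token: str, execution_id: str,
                       interval: float = 10.0, timeout: float = 1800.0) -> dict:
    """Poll the (assumed) execution endpoint until it finishes or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{api_base}/executions/{execution_id}",
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            record = json.load(resp)
        if is_terminal(record["status"]):
            return record
        time.sleep(interval)
    raise TimeoutError(f"execution {execution_id} did not finish in {timeout}s")
```

A 10-second interval balances feedback latency against API load; long-running plans may warrant a longer interval.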
Retrieving Step Results
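Once an execution finishes, its step-level results can be summarised for a CI log. The report shape below (a `steps` list with `description`, `status`, and `rootCause` fields) is an assumed structure for illustration:

```python
def failed_steps(report: dict) -> list:
    """Pick out failed steps from an (assumed) execution-report payload."""
    return [s for s in report.get("steps", []) if s.get("status") == "FAILED"]

# Illustrative report payload:
report = {"steps": [
    {"description": "Open login page", "status": "PASSED"},
    {"description": "Assert dashboard header", "status": "FAILED",
     "rootCause": "assertion mismatch"},
]}

for step in failed_steps(report):
    print(f"{step['description']}: {step['rootCause']}")
```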
Re-running Failed Tests
After reviewing a failed execution, you can re-run either the full test case or only the failed steps.
Re-run full test case:
Open the execution report.
Click Re-run Test Case in the report header.
Re-run from a specific step:
Open the execution report.
Locate the first failed step.
Click the three-dot menu on that step → Re-run from This Step.
Re-running from a step is useful when the first few steps are setup steps that are known to be correct — skipping them saves time when iterating on a failing assertion.
Tips & Best Practices
Always verify a new test case with a manual run before adding it to a test plan. Running the test case once from the editor catches obvious issues before they pollute automated suite results.
Use parallel execution for independent tests. If your test cases do not share state (each one starts from a fresh browser session), parallel execution provides the fastest feedback. Sequential execution should be reserved for cases with explicit state dependencies.
Monitor network logs for intermittent failures. If a test fails sometimes and passes other times (flaky behavior), the network log often reveals an intermittent API timeout or a race condition in the application's data loading.
Set up Slack notifications for critical plan executions. In the test plan settings, configure a Slack channel to receive execution results. This ensures failures are surfaced immediately rather than discovered during the next manual review.
Use the video recording for stakeholder communication. When reporting a genuine application bug found by ContextQA, include the video recording in the bug report. Stakeholders who are not familiar with the test tool can immediately see what went wrong from the video.
Troubleshooting
Execution is stuck in QUEUED status for more than 5 minutes
Your workspace may have exhausted its concurrent execution slots. Check the Execution History list to see how many executions are currently RUNNING. If multiple long-running test plans are occupying all slots, wait for them to complete or contact support to increase your concurrency limit.
Steps are failing due to "element not found" but the element is visible in the screenshot
The screenshot captures the browser state after the step attempted to act, not before. The element may have been present before the step but changed state (disappeared, became disabled, or was replaced by a loading spinner) during the step's execution. Check the network log for a slow API call that might have caused a loading state at the critical moment.
The video recording is not playing in the report
Video recordings are processed asynchronously after execution completes. If you access the report within 30–60 seconds of execution finishing, the video may not yet be ready. Refresh the report page after a minute. If the video is still unavailable after several minutes, contact support.
Parallel execution is producing intermittent failures that do not reproduce in sequential mode
This typically indicates that the test cases are sharing state they should not be sharing — for example, using the same user account in multiple parallel tests. Each parallel test should use a distinct user account or dataset to avoid conflicts.
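One simple way to guarantee that state isolation is to derive a distinct account per test run. The naming scheme here is illustrative:

```python
import uuid

def unique_test_user(prefix: str = "qa") -> str:
    """A throwaway, collision-free account name for one parallel test run."""
    return f"{prefix}+{uuid.uuid4().hex[:8]}@example.test"

# Each parallel test case provisions or signs in with its own account:
user_a = unique_test_user()
user_b = unique_test_user()
```

The plus-addressing pattern (`qa+<id>@...`) lets all generated accounts route to one real mailbox while remaining distinct to the application under test.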