Test Results


Who is this for? QA managers, engineering managers, and developers who need to inspect step-by-step execution results, video recordings, network logs, and AI root cause analysis.

Every ContextQA test execution produces a comprehensive report with per-step screenshots, a full session video, network and console logs, and AI-generated insights. This page explains how to access and interpret all of it.


Accessing Test Results

For Individual Test Cases

  1. Navigate to Test Development in the left sidebar

  2. Click on the test case you want to review

  3. Click the Results tab (or History tab — both lead to the execution list)

  4. Each row in the list represents one execution: date, duration, status (PASSED / FAILED / RUNNING), and the browser/environment used

  5. Click any execution row to open the detailed report for that run

For Test Plan Runs

  1. Navigate to Test Execution → Test Plans in the left sidebar

  2. Click on the test plan

  3. Click the Executions tab to see all runs of that plan

  4. Click a run to open the plan-level execution dashboard

Via MCP
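
Execution results can also be retrieved programmatically through ContextQA's MCP server. The tool name `get_test_executions` and its arguments below are illustrative assumptions, not confirmed ContextQA tool names; what is standard is the JSON-RPC `tools/call` envelope that every MCP client sends, sketched here in Python:

```python
import json

# Hypothetical sketch: "get_test_executions" and its arguments are assumed
# names for illustration; substitute the tools your ContextQA MCP server
# actually exposes.
def build_mcp_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 "tools/call" request body in the shape MCP uses."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# List recent executions for one test case (IDs are placeholders).
request = build_mcp_call("get_test_executions", {
    "test_case_id": "TC-123",  # placeholder test case ID
    "limit": 10,               # ten most recent runs
})
print(json.dumps(request, indent=2))
```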


Execution Dashboard (Test Plans)

When you run a test plan — a configuration that executes multiple suites across one or more browsers and environments — the execution dashboard gives you an at-a-glance view of the entire run.

Dashboard Metrics

Pass Rate: Percentage of test cases that passed in this run

Total Tests: Total number of test cases included in the plan

Passed: Number of test cases that passed

Failed: Number of test cases that failed

Skipped: Number of test cases skipped due to unmet pre-requisites or configuration

Duration: Total wall-clock time for the plan execution

Start / End Time: Execution window timestamps

Drill-Down Navigation

The dashboard supports three levels of drill-down:

Level 1 — Plan overview: Shows all suites included in the run, with pass/fail counts per suite.

Level 2 — Suite detail: Shows all test cases in a suite, with individual pass/fail status, duration, and the browser/device used.

Level 3 — Test case detail: Opens the full step-by-step report for an individual test case execution.

Use the breadcrumb navigation at the top of the report to move between levels.


Detailed Report View

The detailed report for a single test case execution shows every step in sequence. Each step row contains:

Step Information

Status indicator:

  • Green checkmark — step passed

  • Red X — step failed

  • Grey dash — step was skipped

  • Orange wrench icon — step was auto-healed before passing

Step description: The natural language step text as written in the test case.

Duration: How long this step took to execute, in milliseconds or seconds.

Screenshot: A thumbnail of the page state after this step. Click to open the full-size screenshot in a lightbox. The screenshot URL is publicly accessible and can be shared.

Failure reason: Visible only on failed steps. Shows the error message, which element was targeted, and what was expected vs. what was found.

Auto-heal indicator: If the step was auto-healed (the original locator failed but the Self-Healing Agent found an alternative), an indicator shows the original locator and the replacement locator used.

Example Step Report (Failed)


Video Recording

Every execution includes a complete .webm video of the browser session from navigation to final step.

Accessing the Video

In the detailed report view, click the Video icon or the Watch Recording button at the top of the report. The video plays in an embedded player within ContextQA.

The video player is synchronized with the step list. Clicking on any step in the step list jumps the video to the moment that step began execution. Conversely, as the video plays, the step list highlights the currently executing step.

What the Video Shows

  • The full browser viewport throughout the session (no crop or zoom)

  • Mouse cursor movements and clicks (visible as a highlighted dot)

  • Keyboard input as it is typed

  • Page transitions and animations in real time

  • Any browser dialogs (alerts, confirms, file pickers)

  • The complete end state of the session (useful when the failure is in the final step)

Video Retention

Videos are retained for 30 days after the execution date. After 30 days, the video is deleted but the step screenshots, HAR log, console log, and AI reasoning data are retained for 90 days. The trace file follows the same 30-day retention policy as videos.


Playwright Trace

Each execution generates a Playwright trace file — a binary archive that captures the complete execution state at a level of detail that screenshots and video alone cannot provide.

Accessing the Trace

  1. In the detailed report view, click View Playwright Trace

  2. ContextQA opens the trace viewer at trace.playwright.dev in a new tab with the trace file pre-loaded

  3. No installation, no account — the trace viewer is a public web tool

Alternatively, retrieve the trace URL via MCP:
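
A minimal sketch of that call, assuming a hypothetical `get_trace_url` tool (substitute whatever tool name your ContextQA MCP server actually exposes):

```python
# Hypothetical tool name; the execution ID is a placeholder.
trace_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_trace_url",
        "arguments": {"execution_id": "exec-456"},
    },
}
```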

What the Trace Shows

The Playwright trace viewer has three panels:

Timeline panel (top): A horizontal timeline of every action taken during the execution. Hover over any action to see a before/after screenshot comparison.

Action detail panel (left): For the selected action: the action type, the target locator used, duration, and any error message.

DOM/Network/Console panel (right): For the selected action: a snapshot of the exact DOM state (not just a screenshot — the actual HTML structure), all network requests made during or before this action, and console output at this point in time.

The trace is the definitive debugging tool when you need to understand why an element could not be found — the DOM snapshot shows exactly what was on the page when the locator was attempted.


AI Root Cause Analysis

For failed executions, ContextQA's AI can analyze all evidence artifacts and explain the failure in plain English.

Accessing Root Cause Analysis

In the detailed report view, click the AI Insights button (or Root Cause Analysis button, depending on your UI version). The AI runs an analysis in 5-15 seconds.

Via MCP:
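
A hedged sketch, assuming a hypothetical `analyze_failure` tool and a response shaped after the fields this page documents; routing on the classification field is one way automation can react to the result:

```python
# Hypothetical tool name and placeholder execution ID.
rca_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_failure",
        "arguments": {"execution_id": "exec-456"},
    },
}

# Fabricated response, modeled on the structured report documented here.
rca_response = {
    "summary": "Submit was disabled because a required checkbox was unchecked.",
    "affected_step": 4,
    "classification": "test_bug",  # test_bug | application_bug | flaky | environment
    "suggested_fix": "Check the Terms and Conditions checkbox before submitting.",
}

# Route the result: application bugs become defect tickets, test bugs go
# back to the test author, everything else is triaged manually.
if rca_response["classification"] == "application_bug":
    action = "file a defect ticket"
elif rca_response["classification"] == "test_bug":
    action = "update the test steps"
else:
    action = "triage manually"
print(action)
```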

Analysis Output

The root cause analysis returns a structured report:

Summary: A 1-3 sentence plain English explanation of what failed and why. For example:

"The test failed at Step 4 when attempting to click the Submit button. The button was in a disabled state because the 'Terms and Conditions' checkbox was not checked. This checkbox, added in the most recent release, is required before the form can be submitted."

Affected step: The step number and description where the failure occurred.

Evidence used: Which artifacts the AI analyzed (screenshots, network log, console log, DOM state from trace).

Suggested fix: A specific, actionable recommendation. For example:

"Add a step between Step 3 and Step 4: 'Check the Terms and Conditions checkbox'. The checkbox has id='terms-checkbox'."

Classification: Which category the failure falls into:

  • A test bug (the test steps need updating — the application is correct)

  • An application bug (the test is correctly testing behavior that is broken)

  • A flaky failure (the test sometimes passes — likely a timing or infrastructure issue)

  • An environment issue (the test environment is misconfigured or unavailable)

AI Insights

In addition to root cause analysis on failed tests, the AI provides insights on completed test plan runs:

  • Reliability score: Percentage of test cases with no history of flakiness

  • Slowest tests: Top N tests by execution time, with recommendations

  • Coverage recommendations: Application areas detected during test runs that have no dedicated test cases

  • Regression patterns: Test cases that have started failing after a period of consistent passing, suggesting a recent code change broke them

Access via MCP:
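
Again hedged: the tool name `get_plan_insights` and its `plan_id` argument are assumed names for illustration only:

```python
# Hypothetical tool name; the plan ID is a placeholder.
insights_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_plan_insights",
        "arguments": {"plan_id": "plan-789"},
    },
}
```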


Network Log

The network log captures every HTTP/HTTPS request made by the browser during the test session in HAR format.

Accessing the Network Log

In the detailed report view, click the Network tab. The log displays as a table with columns for method, URL, status code, duration, and size. Click any row to expand it and see full request/response headers and body.

Via MCP:
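
A sketch with an assumed `get_network_log` tool name and a placeholder execution ID:

```python
# Hypothetical tool name; the execution ID is a placeholder.
network_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_network_log",
        "arguments": {"execution_id": "exec-456"},
    },
}
```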

Using the Network Log for Debugging

Finding failed API calls: Filter the log by status code to show only 4xx and 5xx responses. A 422 Unprocessable Entity on a form submission, for example, indicates a server-side validation failure that may not be visible in the UI.
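
The same filtering can be done offline against the exported HAR file. A minimal sketch in Python (the inline HAR fragment is fabricated for illustration; a real HAR file has the same "log" → "entries" structure with many more fields per entry):

```python
import json

# Fabricated HAR fragment for illustration.
har = json.loads("""
{
  "log": {
    "entries": [
      {"request": {"method": "GET",  "url": "https://app.example.com/api/items"},
       "response": {"status": 200}, "time": 120.5},
      {"request": {"method": "POST", "url": "https://app.example.com/api/orders"},
       "response": {"status": 422}, "time": 310.0},
      {"request": {"method": "GET",  "url": "https://cdn.example.com/app.js"},
       "response": {"status": 503}, "time": 2450.0}
    ]
  }
}
""")

# Keep only client (4xx) and server (5xx) errors.
failed = [
    e for e in har["log"]["entries"]
    if e["response"]["status"] >= 400
]

for e in failed:
    print(e["request"]["method"], e["request"]["url"], e["response"]["status"])
```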

Timing analysis: The duration column shows how long each request took. Requests taking more than 2-3 seconds are candidates for performance investigation.

Request payload verification: For tests involving form submissions or API interactions, verify that the request body contains the expected values. This is useful for confirming that form fields were filled correctly even when the UI does not show clear confirmation.

Missing requests: If you expect a specific API call to have been made (e.g., a POST to create an order) but do not see it in the log, this confirms that the UI interaction did not trigger the expected backend call.


Console Log

The console log captures all browser console output during the test session.

Accessing the Console Log

In the detailed report view, click the Console tab. Entries are color-coded by level: red for errors, yellow for warnings, white for info/log.

Via MCP:
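
A hedged sketch, assuming a hypothetical `get_console_log` tool; the entry shape below is fabricated to resemble typical browser console output, and pulling out the "error" level first is the usual starting point for triage:

```python
# Hypothetical tool name; the execution ID is a placeholder.
console_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_console_log",
        "arguments": {"execution_id": "exec-456"},
    },
}

# Fabricated entries shaped like browser console output.
entries = [
    {"level": "warning", "text": "React: missing key prop in list"},
    {"level": "error", "text": "Uncaught TypeError: Cannot read property 'id' of undefined"},
    {"level": "log", "text": "checkout: step 2 rendered"},
]
errors = [e for e in entries if e["level"] == "error"]
print(len(errors))
```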

Common Console Findings

JavaScript errors: Uncaught TypeError: Cannot read property 'id' of undefined — a common error that breaks UI logic silently. The console log captures the file and line number.

Framework warnings: React and Angular produce console warnings for deprecated API usage, missing keys in lists, and other common issues. While these do not necessarily cause test failures, they indicate code quality concerns.

Application errors: Well-instrumented applications log error events to the console: ERROR: Payment processing failed - reason: card_declined. These provide more detail than what is shown in the UI.

Performance warnings: Some browsers log long task warnings ([Violation] 'click' handler took 3,412ms) which indicate UI responsiveness issues.


Data-Driven Test Results

When a test case uses a test data profile to run with multiple datasets, the result page shows a separate result row for each dataset iteration. Each iteration has its own pass/fail status, screenshots, and step log, so you can see exactly which data combination caused a failure without re-running manually.

To review data-driven results:

  1. Open the execution report for a data-driven test run

  2. Click View Detailed Report

  3. Browse the dataset rows — each is labeled with its dataset name (e.g., "Data Set 1", "Data Set 2") and its individual status


Exporting and Sharing Results

PDF Export

To export a test run report as a PDF:

  1. Open the detailed report or plan execution dashboard

  2. Click Export → Download PDF

  3. The PDF includes all step statuses, screenshots, and the AI root cause summary (for failed runs)

Link Sharing

Every test execution report has a permanent URL. Copy the URL from your browser's address bar to share it with a team member. Report URLs require a ContextQA login to view.

Slack Integration

Configure Slack notifications in Settings → Integrations → Notifications → Slack. You can send:

  • Pass/fail summary to a channel after every test plan run

  • Failure alerts with a direct link to the report when any test fails

  • Daily/weekly summary digests

Jira Integration

When creating a defect ticket from a test failure (via the Create Bug button or the create_defect_ticket MCP tool), the Jira issue is automatically populated with:

  • A link back to the ContextQA execution report

  • The failure screenshot as an attachment

  • The failing step description and error message


Result Retention Policy

Step screenshots: 90 days

HAR network log: 90 days

Console log: 90 days

AI reasoning log: 90 days

Video recording: 30 days

Playwright trace: 30 days

Test case result records: Indefinite (no automatic deletion)

Test plan execution records: Indefinite (no automatic deletion)

After the retention period, binary artifacts (screenshots, video, trace) are deleted from storage, but the result record (pass/fail status, step statuses, timestamps) is retained indefinitely for historical trend analysis.


Get release readiness reports your stakeholders understand. Book a Demo to see the analytics dashboard, failure analysis, and flaky test detection for your test suite.
