Analytics Dashboard

A guide to the dashboards and analytics surfaces in ContextQA, including execution trends, coverage, AI insights, and risk-based testing.


Who is this for? QA managers, engineering managers, and VPs who need trend data, coverage visibility, and risk-based testing insights to make informed release decisions.

ContextQA provides several analytics dashboards that together give a complete picture of your QA program's health. All dashboards are accessible from the left sidebar and update as executions complete.


Main Dashboard

Navigate to Dashboard in the left sidebar to see the top-level platform summary.

Test case volume summary

At the top of the Dashboard, a compact summary shows the total number of active test cases split by type:

  • Web test cases

  • Mobile test cases

  • API test cases

This gives QA managers and engineering leads an instant read on the breadth of automated coverage.

AI vs. human activity trend

The activity trend widget compares two lines over a configurable date range:

  • AI Actions (green) — steps executed, healed, or root-cause-analyzed automatically by the ContextQA AI engine with no human input

  • Human Interventions (purple) — manual reviews, step edits, or manually triggered executions performed by team members

A healthy test suite shows a high AI-to-human ratio. Spikes in human activity often indicate a debugging session following a failed release or a batch of new test authoring work.
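The AI-to-human ratio described above can be computed from raw activity counts. A minimal sketch, assuming a hypothetical activity record with an `actor` field; this is illustrative only, not the ContextQA data model:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    actor: str  # hypothetical field: "ai" or "human"

def ai_to_human_ratio(activities):
    """AI actions per human intervention; None when there is no human activity."""
    ai = sum(1 for a in activities if a.actor == "ai")
    human = sum(1 for a in activities if a.actor == "human")
    return ai / human if human else None

log = [Activity("ai")] * 9 + [Activity("human")]
print(ai_to_human_ratio(log))  # → 9.0
```

A rising ratio over successive sprints suggests the suite is maintaining itself; a falling one flags authoring or debugging load.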

Toggle between line and bar chart modes and adjust the date range picker to scope the chart to a specific sprint or release window.

Daily test case activity bar chart

To the right of the trend widget, a bar chart breaks each day into color-coded segments:

| Segment | Meaning |
| --- | --- |
| Created | New test cases authored that day |
| Reviewed | Cases reviewed by a team member |
| Executed | Cases run (manually or automatically) |
| Root cause identified | Cases where AI produced a root cause explanation |
| Auto healed | Cases where the AI self-healing engine updated a step |

Use this chart to identify whether a day with many failures was also a day with a high auto-heal rate — a sign the application UI changed but the tests adapted automatically.
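The daily segmentation amounts to counting events per (day, segment) pair. A minimal sketch with hypothetical event tuples; the segment names come from the table above, everything else is illustrative:

```python
from collections import Counter

SEGMENTS = ["Created", "Reviewed", "Executed", "Root cause identified", "Auto healed"]

# Hypothetical (day, segment) activity events.
events = [
    ("2024-05-01", "Created"), ("2024-05-01", "Executed"),
    ("2024-05-01", "Executed"), ("2024-05-01", "Auto healed"),
    ("2024-05-02", "Executed"),
]

daily = Counter(events)
for day in sorted({d for d, _ in events}):
    # One dict per day, keyed by segment — the per-day bar in the chart.
    bar = {seg: daily[(day, seg)] for seg in SEGMENTS if daily[(day, seg)]}
    print(day, bar)
```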


Execution Dashboard

Navigate to Execution Dashboard in the left sidebar for run-level analytics.

Run history and execution trend graph

The top of the Execution Dashboard shows:

  • Total executed tests for the selected period

  • Count of passed, failed, and aborted tests

  • Success rate versus the previous period

Below this summary, the Execution Trend Graph plots daily passed, failed, and aborted counts. Use it to spot failure spikes that correlate with deployments or to confirm that a fix reduced the failure rate.
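The summary's success rate and period-over-period delta follow directly from the pass/fail/abort counts. A minimal sketch with made-up counts; the numbers are illustrative, not real data:

```python
def success_rate(passed, failed, aborted):
    """Share of executed tests that passed (0.0 when nothing ran)."""
    total = passed + failed + aborted
    return passed / total if total else 0.0

# Hypothetical counts for the selected and previous periods.
current = success_rate(passed=188, failed=10, aborted=2)   # 0.94
previous = success_rate(passed=180, failed=18, aborted=2)  # 0.90
delta = current - previous

print(f"{current:.0%} ({delta:+.0%} vs previous period)")  # → 94% (+4% vs previous period)
```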

Test distribution widget

The distribution widget shows which environments and platforms your tests cover: web, mobile, and API. Use it to identify platform gaps — for example, if mobile coverage is low relative to web coverage in a period when a mobile feature shipped.

Consistently failing tests

A ranked list shows the test cases that have failed most frequently in the selected period. Each card shows:

  • Test case ID and name

  • Root cause (if AI analysis is available)

  • Failure count

Click any card to navigate directly to the failure analysis detail for that test case.
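Ranking by failure count is a frequency count over failure events, most frequent first. A minimal sketch with hypothetical test IDs and names:

```python
from collections import Counter

# Hypothetical failure events: (test case ID, name); illustrative only.
failures = [
    ("TC-101", "Checkout with saved card"),
    ("TC-204", "Login via SSO"),
    ("TC-101", "Checkout with saved card"),
    ("TC-101", "Checkout with saved card"),
    ("TC-204", "Login via SSO"),
]

# most_common() sorts descending by failure count.
ranked = Counter(failures).most_common()
for (case_id, name), count in ranked:
    print(f"{case_id}  {name}: {count} failures")
```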


Coverage Dashboard

Select the Coverage tab inside the Execution Dashboard.

Each application module appears as a card summarizing:

  • Positive, negative, and ad-hoc scenarios covered

  • Whether coverage was authored by AI or manually

  • A red badge on modules with unresolved issues, enabling priority-based triage

The Coverage Dashboard helps QA managers answer "which features are we not testing?" before a release rather than after.


Insight Tab and AI observations

Select the Insight tab inside the Execution Dashboard.

Test health and readiness

This panel surfaces critical blockers preventing tests from running successfully — for example, missing test data or broken prerequisite steps. Each blocker is listed with:

  • What is missing or broken

  • Priority level

  • Source (AI-detected or manually flagged)

  • Current status

Resolve the listed blockers before re-running the affected tests to avoid misleading failure data.

Source distribution

The Source tab shows how test cases were created: through interactive AI-assisted recording or bulk AI uploads. Visualizations help you understand team adoption of AI-assisted authoring over time.


Risk-Based Testing (RBT) heatmap

Select the RBT tab inside the Execution Dashboard.

The heatmap matrix maps test cases and defects by two dimensions:

  • Business priority (columns): Critical, High, Medium, Minor

  • Usage frequency (rows): how often each feature area is exercised by end users

Each cell shows how many test cases and defects fall in that priority-frequency zone. Use the heatmap to direct testing effort toward high-priority, high-frequency areas first, and to identify areas that have many defects but low test coverage.
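Conceptually, each heatmap cell is a count over (priority, frequency) pairs. A minimal sketch assuming hypothetical test-case tuples; the priority columns come from the list above, the frequency bands are illustrative:

```python
from collections import Counter

PRIORITIES = ["Critical", "High", "Medium", "Minor"]  # columns
FREQUENCIES = ["High", "Medium", "Low"]               # rows: usage frequency

# Hypothetical (business priority, usage frequency) pairs per test case.
cases = [
    ("Critical", "High"), ("Critical", "High"), ("Critical", "High"),
    ("High", "Medium"), ("Medium", "Low"),
]

cells = Counter(cases)

# Render one row per usage-frequency band, columns in priority order.
for freq in FREQUENCIES:
    print(freq, [cells[(prio, freq)] for prio in PRIORITIES])
```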


Activity Log

Select the Activity Log tab inside the Execution Dashboard to browse a timestamped audit trail of all system and user actions. Each entry shows:

  • Action performed

  • Who performed it (user or AI)

  • Affected test case or item

  • Exact timestamp

Filter by action type, user, or date range to zero in on relevant changes. The activity log is useful for compliance audits and for understanding what changed between a passing run and a failing run.


Filtering dashboards

All dashboard panels respect a common set of filters available in the toolbar:

  • Date range — scope all widgets to a sprint, release window, or custom range using the date range picker

  • Test plan — filter execution data to a specific test plan

  • Environment — isolate results from staging, production, or a custom environment profile

  • Labels — filter by the labels applied to test cases (e.g., by JIRA ticket number or feature area)

Applying these filters simultaneously is the fastest way to answer "how did the checkout feature perform in staging during the last sprint?" without leaving the dashboard.
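The toolbar filters combine with AND semantics: a run appears only if it matches every filter you set. A minimal sketch over hypothetical execution records; the keys and values are illustrative, not the ContextQA schema:

```python
from datetime import date

# Hypothetical execution records.
runs = [
    {"date": date(2024, 5, 2), "plan": "Smoke", "env": "staging", "labels": {"checkout"}},
    {"date": date(2024, 5, 9), "plan": "Regression", "env": "production", "labels": {"login"}},
    {"date": date(2024, 5, 10), "plan": "Smoke", "env": "staging", "labels": {"checkout", "JIRA-42"}},
]

def apply_filters(runs, start, end, plan=None, env=None, labels=None):
    """Keep runs matching every supplied filter (AND semantics)."""
    out = []
    for r in runs:
        if not (start <= r["date"] <= end):
            continue
        if plan and r["plan"] != plan:
            continue
        if env and r["env"] != env:
            continue
        if labels and not labels <= r["labels"]:  # all requested labels present
            continue
        out.append(r)
    return out

scoped = apply_filters(runs, date(2024, 5, 1), date(2024, 5, 14),
                       plan="Smoke", env="staging", labels={"checkout"})
print(len(scoped))  # → 2
```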

Get release readiness reports your stakeholders understand. Book a Demo → See the analytics dashboard, failure analysis, and flaky test detection for your test suite.
