Interactive Demo

Explore the ContextQA Reporting & Analytics dashboard — track pass rates, detect flaky tests, classify failures by root cause, and share results with stakeholders.

The ContextQA analytics dashboard gives teams a real-time view of test health across every execution. This demo walks through the key panels: KPI summary, daily pass rate trends, AI failure classification, and flaky test detection.

1. KPI Summary

The top row shows the four most important health metrics for the selected time period at a glance.

Metric            Value   Trend
Pass Rate         94.2%   ↑ 2.1% vs last week
Total Executions  847     ↑ 18% vs last week
Failures          49      ↓ 12 fewer than last week
Flaky Tests       7       ↑ 2 new this week


AI Agent: KPIs update in real time after every execution. Pass rate, failure count, and flaky test count are all computed automatically — no manual tagging required.
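As a rough sketch of how a pass-rate KPI like the one above could be computed from raw execution records (the `Execution` record shape and the sample run counts below are illustrative, not ContextQA's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Execution:
    # Hypothetical record shape, for illustration only.
    test_name: str
    passed: bool

def kpi_summary(executions):
    """Compute the top-row KPIs from raw execution records."""
    total = len(executions)
    failures = sum(1 for e in executions if not e.passed)
    pass_rate = round(100.0 * (total - failures) / total, 1) if total else 0.0
    return {"total": total, "failures": failures, "pass_rate": pass_rate}

# 798 passing + 49 failing runs reproduces the dashboard numbers.
runs = [Execution("login", True)] * 798 + [Execution("checkout", False)] * 49
print(kpi_summary(runs))  # {'total': 847, 'failures': 49, 'pass_rate': 94.2}
```

Because every execution carries its own pass/fail outcome, the KPIs fall out of a single aggregation pass, which is why no manual tagging is needed.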

2. Daily Pass Rate Chart

The bar chart shows passed, failed, and self-healed test counts for each day in the period.

7-day breakdown (passed / failed / healed):

  • Monday: 98 / 5 / 2
  • Tuesday: 112 / 8 / 3
  • Wednesday: 89 / 12 / 1
  • Thursday: 134 / 6 / 4
  • Friday: 156 / 9 / 2
  • Saturday: 78 / 3 / 1
  • Sunday: 180 / 6 / 2


AI Agent: "Healed" counts are tests that failed on first attempt but passed after self-healing — these are tracked separately from clean passes so teams can monitor UI churn over time.
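The per-day counts above can be turned into effective daily pass rates. Counting self-healed runs as passes (since they ultimately passed) is an assumption made here for illustration:

```python
# (passed, failed, healed) per day, taken from the 7-day breakdown above.
week = {
    "Monday": (98, 5, 2),
    "Tuesday": (112, 8, 3),
    "Wednesday": (89, 12, 1),
    "Thursday": (134, 6, 4),
    "Friday": (156, 9, 2),
    "Saturday": (78, 3, 1),
    "Sunday": (180, 6, 2),
}

def daily_rates(day_counts):
    """Effective pass rate per day. Healed runs passed in the end,
    so they count toward the rate but are tracked separately."""
    rates = {}
    for day, (passed, failed, healed) in day_counts.items():
        total = passed + failed + healed
        rates[day] = round(100.0 * (passed + healed) / total, 1)
    return rates

print(daily_rates(week)["Wednesday"])  # 88.2 -- the week's weakest day
```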

3. AI Failure Classification

Every failure is automatically classified by the AI into one of four categories, so triage requires no engineer investigation.

Classification     Count  Share
Application Bug    25     51%
Flaky Failure      14     29%
Test Bug           7      14%
Environment Issue  3      6%


AI Agent: AI classifies failures based on error type, console logs, and network traces. Application bugs are real regressions that need developer attention; flaky failures are retry-passed runs that need stabilization; test bugs indicate the test steps themselves need updating.
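A toy rule-based sketch of that triage step, to make the four categories concrete. The error-type names and rules here are hypothetical; the actual classifier also weighs console logs and network traces:

```python
def classify_failure(error_type: str, retry_passed: bool) -> str:
    """Map a failure to one of the four dashboard categories.
    Toy heuristic for illustration, not ContextQA's real model."""
    if retry_passed:
        # Passed on retry: flaky, needs stabilization rather than a fix.
        return "Flaky Failure"
    if error_type in {"ConnectionError", "DNSError", "ServiceUnavailable"}:
        return "Environment Issue"
    if error_type in {"ElementNotFound", "SelectorError"}:
        # The test's own steps or locators are stale.
        return "Test Bug"
    # Deterministic functional failures point at a real regression.
    return "Application Bug"

print(classify_failure("AssertionError", retry_passed=False))  # Application Bug
```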

4. Flaky Test Detection

ContextQA identifies tests that pass inconsistently across runs and surfaces them in a dedicated list with their flakiness rate and likely cause.

Top flaky tests this week:

Test                            Flakiness  Likely Cause
Checkout — credit card payment  30%        Timing issue
Search — autocomplete results   20%        Race condition
Upload — large file (>10MB)     10%        Timeout
Email notification delivery     10%        Async timing
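One simple way to express a flakiness rate like the 30% above is the share of failing runs for a test whose history contains both passes and failures. This is an assumed definition for illustration; ContextQA's exact variance metric may differ:

```python
def flakiness_rate(outcomes):
    """Share of failing runs for a test with mixed outcomes.
    Tests that always pass or always fail score 0: they are
    stable (or stably broken), not flaky."""
    if not outcomes or all(outcomes) or not any(outcomes):
        return 0.0
    return outcomes.count(False) / len(outcomes)

# A checkout test failing 3 of its last 10 runs at scattered points.
history = [True, False, True, True, False, True, True, False, True, True]
print(f"{flakiness_rate(history):.0%}")  # 30%
```

Scoring consistently failing tests as 0 keeps genuine regressions out of the flaky list, where they would otherwise be mistaken for stabilization work.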

Capability              Detail
Failure Classification  Automatic — no manual tagging
Flaky Test Detection    Based on run variance across history
Shareable Reports       One-click export or shareable link
Stakeholder View        Summary dashboard with no login required

