# Interactive Demo

The ContextQA analytics dashboard gives teams a real-time view of test health across every execution. This demo walks through the key panels: KPI summary, daily pass rate trends, AI failure classification, and flaky test detection.

{% tabs %}
{% tab title="Last 7 Days" %}
{% stepper %}
{% step %}

#### KPI Summary

The top row shows the four most important health metrics for the selected time period at a glance.

| Metric           | Value     | Trend                     |
| ---------------- | --------- | ------------------------- |
| Pass Rate        | **94.2%** | ↑ 2.1% vs last week       |
| Total Executions | **847**   | ↑ 18% vs last week        |
| Failures         | **49**    | ↓ 12 fewer than last week |
| Flaky Tests      | **7**     | ↑ 2 new this week         |

{% hint style="info" %}
**AI Agent:** KPIs update in real time after every execution. Pass rate, failure count, and flaky test count are all computed automatically — no manual tagging required.
{% endhint %}
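The KPI arithmetic is simple to reproduce. A minimal sketch of how pass rate and failure count fall out of raw execution records (the `Execution` record and field names here are illustrative, not the ContextQA API):

```python
from dataclasses import dataclass

@dataclass
class Execution:
    test_id: str
    passed: bool

def kpi_summary(executions):
    """Compute total executions, failure count, and pass rate."""
    total = len(executions)
    failures = sum(1 for e in executions if not e.passed)
    pass_rate = round(100 * (total - failures) / total, 1) if total else 0.0
    return {"total": total, "failures": failures, "pass_rate": pass_rate}
```

With 847 executions and 49 failures, this yields the 94.2% pass rate shown in the table above.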
{% endstep %}

{% step %}

#### Daily Pass Rate Chart

The bar chart shows passed, failed, and self-healed test counts for each day in the period.

**7-day breakdown (passed / failed / healed):**

* Monday: 98 / 5 / 2
* Tuesday: 112 / 8 / 3
* Wednesday: 89 / 12 / 1
* Thursday: 134 / 6 / 4
* Friday: 156 / 9 / 2
* Saturday: 78 / 3 / 1
* Sunday: 180 / 6 / 2

{% hint style="info" %}
**AI Agent:** "Healed" counts are tests that failed on first attempt but passed after self-healing — these are tracked separately from clean passes so teams can monitor UI churn over time.
{% endhint %}
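Because healed runs ultimately passed, they count toward the daily pass rate while still being reported on their own. A sketch of that calculation (a plausible reading of the chart, not ContextQA's internal formula):

```python
def daily_pass_rate(passed, failed, healed):
    """Healed runs failed once, then passed after self-healing, so they
    count toward the pass rate but are tracked separately for UI churn."""
    total = passed + failed + healed
    return round(100 * (passed + healed) / total, 1)
```

Monday's 98 / 5 / 2 breakdown, for example, works out to a 95.2% daily pass rate.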
{% endstep %}

{% step %}

#### AI Failure Classification

Every failure is automatically classified by the AI into one of four categories, so engineers don't have to investigate each failure just to triage it.

| Classification    | Count | Share |
| ----------------- | ----- | ----- |
| Application Bug   | 25    | 51%   |
| Flaky Failure     | 14    | 29%   |
| Test Bug          | 7     | 14%   |
| Environment Issue | 3     | 6%    |

{% hint style="info" %}
**AI Agent:** AI classifies failures based on error type, console logs, and network traces. Application bugs are real regressions that need developer attention; flaky failures are retry-passed runs that need stabilization; test bugs indicate the test steps themselves need updating.
{% endhint %}
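The share column is a straightforward tally over the AI-assigned labels. A minimal sketch (label strings match the table above; the tallying itself is illustrative):

```python
from collections import Counter

def classification_shares(labels):
    """Count each failure category and its percentage share,
    ordered from most to least common."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: (n, round(100 * n / total))
            for label, n in counts.most_common()}
```

Feeding in the 49 failures from the table reproduces the 51 / 29 / 14 / 6 percent split.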
{% endstep %}

{% step %}

#### Flaky Test Detection ✓

ContextQA identifies tests that pass inconsistently across runs and surfaces them in a dedicated list with their flakiness rate and likely cause.

**Top flaky tests this week:**

| Test                           | Flakiness | Likely Cause   |
| ------------------------------ | --------- | -------------- |
| Checkout — credit card payment | 30%       | Timing issue   |
| Search — autocomplete results  | 20%       | Race condition |
| Upload — large file (>10MB)    | 10%       | Timeout        |
| Email notification delivery    | 10%       | Async timing   |

{% hint style="success" %}
**AI Agent:** Flaky tests are flagged automatically based on pass/fail variance across recent runs. Teams can prioritize stabilization work based on flakiness rate without manually comparing run histories.

**Filter options available:** Last 7 days · Last 30 days · Last 90 days · Current Sprint
{% endhint %}
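One way to flag flaky tests from run history is to look at the failure share within a recent window, while excluding tests that fail every time (those are regressions, not flakiness). This sketch is an assumption about the heuristic; the window size and the exact rule ContextQA uses are not documented here:

```python
def flakiness_rate(results, window=10):
    """Flakiness over the most recent `window` runs, as a percentage.
    `results` is a chronological list of booleans (True = passed).
    Consistently passing or consistently failing tests score 0.0."""
    recent = results[-window:]
    if all(recent) or not any(recent):
        return 0.0
    return round(100 * recent.count(False) / len(recent), 1)
```

A test that failed 3 of its last 10 runs scores 30%, matching the checkout test in the list above.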

| Capability             | Detail                                   |
| ---------------------- | ---------------------------------------- |
| Failure Classification | Automatic — no manual tagging            |
| Flaky Test Detection   | Based on run variance across history     |
| Shareable Reports      | One-click export or shareable link       |
| Stakeholder View       | Summary dashboard with no login required |
{% endstep %}
{% endstepper %}
{% endtab %}

{% tab title="30 Days" %}
{% stepper %}
{% step %}

#### Extended Trend View

Switch to the 30-day view for a broader picture of test health trends, regressions introduced by specific releases, and flaky test evolution over time.

{% hint style="info" %}
**AI Agent:** The 30-day view aggregates the same metrics but highlights week-over-week patterns — useful for identifying which sprint introduced a regression or which area of the application has the most test churn.
{% endhint %}
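Spotting which sprint introduced a regression amounts to rolling daily results up into weekly pass rates and comparing adjacent weeks. A sketch of that aggregation (the tuple layout is illustrative):

```python
from collections import defaultdict
from datetime import date

def weekly_pass_rates(daily):
    """Roll up (date, passed, failed) tuples into ISO weeks and
    compute one pass rate per week, for week-over-week comparison."""
    weeks = defaultdict(lambda: [0, 0])  # ISO week -> [passed, failed]
    for day, passed, failed in daily:
        week = day.isocalendar()[1]
        weeks[week][0] += passed
        weeks[week][1] += failed
    return {w: round(100 * p / (p + f), 1)
            for w, (p, f) in sorted(weeks.items())}
```

A sharp drop between two consecutive weeks points at the release (or sprint) that landed in between.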
{% endstep %}

{% step %}

#### Suite-Level Breakdown

Drill into pass rates by test suite to identify which feature areas are most stable and which need attention.

{% hint style="info" %}
**AI Agent:** Suite-level filtering is available for all chart views — select a suite from the filter row to scope all KPIs and charts to that subset of tests.
{% endhint %}
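Conceptually, the suite filter just scopes the same execution records before the KPIs are recomputed. A minimal sketch, assuming each record carries a suite name (field names are illustrative, not the ContextQA data model):

```python
from collections import defaultdict

def suite_pass_rates(executions):
    """Pass rate per suite; `executions` is a list of dicts with
    'suite' and 'passed' keys."""
    totals = defaultdict(lambda: [0, 0])  # suite -> [passed, total]
    for e in executions:
        totals[e["suite"]][0] += e["passed"]
        totals[e["suite"]][1] += 1
    return {s: round(100 * p / t, 1) for s, (p, t) in totals.items()}
```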
{% endstep %}

{% step %}

#### Export & Share ✓

Generate a shareable report for stakeholders with a single click. Reports can be exported as PDF or shared via a public link that doesn't require a ContextQA login.

{% hint style="success" %}
**AI Agent:** Shared reports are read-only snapshots of the dashboard at the time of export. They include all KPIs, charts, and failure summaries in a format suitable for sending to engineering managers, product teams, or customers.
{% endhint %}
{% endstep %}
{% endstepper %}
{% endtab %}
{% endtabs %}

***

{% hint style="success" %}
**Try it yourself** — [🚀 Start Free Trial →](https://app.contextqa.com/signup) · [Book a Demo](https://contextqa.com/book-a-demo/)
{% endhint %}
