# For QA Managers

{% hint style="info" %}
**Who is this for?** QA Managers, Test Leads, and QA Directors responsible for test strategy, team productivity, and release quality across one or more products.
{% endhint %}

Your job is to answer two questions before every release: *Is this ready to ship?* and *How confident are we?* ContextQA gives you the dashboards, analytics, and reporting to answer both — without spending three hours aggregating spreadsheets before a go/no-go meeting.

***

## Management Overview

| What You Need                 | Where to Find It                                                          |
| ----------------------------- | ------------------------------------------------------------------------- |
| Release readiness at a glance | Test Plan execution summary: pass rate, failed count, duration            |
| Coverage gaps                 | Analytics Dashboard → untested flows by feature area                      |
| Flaky test trends             | Flaky Test Detection report — recurring failures vs true regressions      |
| Team velocity                 | Test cases created per sprint, executions per week                        |
| Failure root causes           | AI failure classification: application bug / test bug / environment issue |
| Exportable reports            | Share URLs or PDF export for stakeholder reviews                          |
| CI/CD quality gates           | Automated pass/fail status integrated into your deployment pipeline       |

***

## Test Plans: Your Release Gates

A Test Plan is a named execution configuration that runs specific test suites against specific environments and browsers. Think of it as your release checklist, automated.

**Typical setup:**

* **Smoke Plan** — 15 critical path tests, runs on every commit (< 3 minutes)
* **Regression Plan** — Full suite, runs nightly or before every release (30–60 minutes parallel)
* **Release Gate Plan** — Smoke + Regression + API tests on Production-equivalent environment

Each plan returns a single **pass/fail/partial** result you can wire into your deployment pipeline.
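
If you want the gate enforced in CI rather than eyeballed, a small script can poll the latest plan run and fail the build on anything but a pass. The endpoint shape, field names, and status values below are assumptions for illustration, not the documented ContextQA API; check your workspace's API reference before wiring this in.

```typescript
// ci-gate.ts: minimal sketch of a CI quality gate (hypothetical API).
// Assumed endpoint: GET /plans/:id/runs/latest -> { status: "pass" | "fail" | "partial" }
// Node 18+: fetch is global.

const API_BASE = "https://app.contextqa.com/api"; // assumed base URL

async function main() {
  const res = await fetch(
    `${API_BASE}/plans/${process.env.CONTEXTQA_PLAN_ID}/runs/latest`,
    { headers: { Authorization: `Bearer ${process.env.CONTEXTQA_API_TOKEN}` } },
  );
  if (!res.ok) throw new Error(`ContextQA API returned HTTP ${res.status}`);

  const run = (await res.json()) as { status: "pass" | "fail" | "partial" };
  console.log(`Release Gate Plan result: ${run.status}`);

  // The exit code is what the pipeline actually reads: 0 ships, non-zero blocks the deploy.
  process.exit(run.status === "pass" ? 0 : 1);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```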

→ [Running Tests](https://learning.contextqa.com/execution/running-tests) | [Parallel Execution](https://learning.contextqa.com/execution/parallel-execution)

***

## Analytics Dashboard

The Analytics Dashboard gives you a time-series view of your test suite health:

{% tabs %}
{% tab title="Pass Rate Trends" %}
Track pass rate over time per suite, per environment, and per browser. Spot when a release introduced new failures. Drill into any data point to see the individual test results.
{% endtab %}

{% tab title="Failure Analysis" %}
View failures grouped by:

* **Root cause type** — application bug, test bug, flaky, environment
* **Feature area** — based on suite organization
* **Browser/device** — identify browser-specific regressions

AI-generated summaries explain the most impactful failures in plain English.
{% endtab %}

{% tab title="Coverage Gaps" %}
Use the `analyze_coverage_gaps` MCP tool (a call sketch follows these tabs) or the Coverage view in the portal to identify:

* User flows with no test coverage
* High-risk code paths with low test density
* New features added this sprint with no associated tests
{% endtab %}

{% tab title="Flaky Tests" %}
ContextQA tracks test stability over time. The Flaky Test report shows:

* Tests that failed then passed on immediate re-run (likely flaky)
* Tests failing consistently (likely application regression)
* Failure frequency and affected environments
{% endtab %}
{% endtabs %}
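
The Coverage Gaps tab above mentions the `analyze_coverage_gaps` MCP tool. As a minimal sketch, assuming the server is launched over stdio (the package name, launch command, and argument below are placeholders, not documented values), a call via the MCP TypeScript SDK might look like this:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch command and package name are placeholders; use whatever your
  // ContextQA MCP setup documents.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "contextqa-mcp"],
  });

  const client = new Client({ name: "coverage-report", version: "1.0.0" });
  await client.connect(transport);

  // The tool name comes from the docs above; the argument is an assumption.
  // Inspect the real input schema first via client.listTools().
  const result = await client.callTool({
    name: "analyze_coverage_gaps",
    arguments: { featureArea: "checkout" },
  });

  console.log(JSON.stringify(result.content, null, 2));
  await client.close();
}

main().catch(console.error);
```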

→ [Analytics Dashboard](https://learning.contextqa.com/reporting/analytics-dashboard) | [Flaky Test Detection](https://learning.contextqa.com/reporting/flaky-test-detection)

***

## Reporting for Stakeholders

### Shareable Report Links

Every test plan execution generates a shareable summary URL. Send it to engineering leads, product managers, or executive stakeholders — no login required to view.

### Failure Analysis Reports

After a release candidate fails, generate a failure analysis report that includes:

* Total failures with severity breakdown
* AI classification (is this a test problem or an application problem?)
* Screenshot evidence for each failure
* Suggested remediation steps

### Export Options

* **PDF/HTML report** — for release documentation
* **Playwright code export** — for engineering teams who want to reproduce failures locally
* **CSV data export** — for custom dashboards or BI tools
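
If you pipe the CSV export into your own dashboards, the transformation is usually a few lines. A sketch, assuming a header row with `suite` and `status` columns and a `passed` status value (verify both against your actual export), and no commas inside fields:

```typescript
import { readFileSync } from "node:fs";

// Assumed columns: suite,test,status,duration_ms. Check your export's header row.
const rows = readFileSync("contextqa-export.csv", "utf8").trim().split("\n");
const header = rows[0].split(",");
const suiteCol = header.indexOf("suite");
const statusCol = header.indexOf("status");

const totals = new Map<string, { passed: number; total: number }>();
for (const line of rows.slice(1)) {
  const cells = line.split(","); // naive split: fine while no field contains a comma
  const suite = cells[suiteCol];
  const entry = totals.get(suite) ?? { passed: 0, total: 0 };
  entry.total += 1;
  if (cells[statusCol] === "passed") entry.passed += 1; // assumed status value
  totals.set(suite, entry);
}

for (const [suite, { passed, total }] of totals) {
  console.log(`${suite}: ${((passed / total) * 100).toFixed(1)}% pass rate`);
}
```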

→ [Exporting Reports](https://learning.contextqa.com/reporting/exporting-reports)

***

## Team & Access Management

### Roles and Permissions

Control what each team member can do:

| Role        | Capabilities                                                    |
| ----------- | --------------------------------------------------------------- |
| **Admin**   | Full access including workspace settings, integrations, billing |
| **Manager** | Create/manage test plans, view all results, manage team members |
| **Tester**  | Create and run test cases, view results                         |
| **Viewer**  | Read-only access to results and reports                         |

→ [Roles & Permissions](https://learning.contextqa.com/administration/roles-and-permissions)

### Team Organization

Organize testers by product area, feature team, or testing type (web, mobile, API). Everyone works in the same shared workspace with full visibility into one another's work.

→ [Team Management](https://learning.contextqa.com/administration/team-management)

***

## Scheduling and Continuous Testing

Set test plans to run automatically:

* **On commit** — trigger via GitHub Actions, Jenkins, or GitLab CI webhook (see the sketch below)
* **On schedule** — nightly regression, Monday morning smoke test
* **On demand** — one-click execution from the portal

Slack notifications alert the right people when a plan fails — with a direct link to the failure report.
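
For the on-commit case, the trigger is typically a single authenticated POST from your pipeline. A sketch, assuming a hypothetical trigger endpoint and payload (your workspace's API docs are authoritative):

```typescript
// trigger-smoke.ts: kick off the Smoke Plan after a commit (hypothetical endpoint).
// Node 18+: fetch is global. Works the same from GitHub Actions, Jenkins, or GitLab CI.

async function main() {
  const res = await fetch("https://app.contextqa.com/api/plans/smoke/trigger", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CONTEXTQA_API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      // Commit SHA env vars differ per CI system (GitHub vs GitLab shown here).
      commitSha: process.env.GITHUB_SHA ?? process.env.CI_COMMIT_SHA,
      environment: "staging",
    }),
  });
  if (!res.ok) throw new Error(`Trigger failed: HTTP ${res.status}`);
  console.log("Smoke Plan queued:", await res.json());
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```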

→ [Scheduling](https://learning.contextqa.com/execution/scheduling) | [Slack Integration](https://learning.contextqa.com/integrations/slack)

***

## Communicating QA Value to Leadership

Use these metrics in your sprint reviews and executive reports:

| Metric                            | How to Measure in ContextQA                                 |
| --------------------------------- | ----------------------------------------------------------- |
| **Test coverage %**               | Analytics Dashboard → Coverage view                         |
| **Defects caught pre-production** | Failure Analysis → classified as "Application Bug"          |
| **Mean time to detect (MTTD)**    | Time from commit to first test failure notification (see the sketch below) |
| **Test maintenance effort**       | Track self-healing events; each heal is a manual fix avoided |
| **Release confidence score**      | Test Plan pass rate on release candidate build              |
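
MTTD in particular is simple arithmetic once you have the timestamps: average the gap between each commit and its first failure notification. A quick sketch, with an assumed data shape (pull the real timestamps from your CSV export or API):

```typescript
// Each record pairs a commit timestamp with its first failing-test notification.
// The shape is an assumption; adapt it to whatever your export provides.
type DetectionEvent = { committedAt: Date; firstFailureAt: Date };

function meanTimeToDetectMinutes(events: DetectionEvent[]): number {
  const totalMs = events.reduce(
    (sum, e) => sum + (e.firstFailureAt.getTime() - e.committedAt.getTime()),
    0,
  );
  return totalMs / events.length / 60_000; // average gap, in minutes
}

const sample: DetectionEvent[] = [
  { committedAt: new Date("2024-05-01T10:00Z"), firstFailureAt: new Date("2024-05-01T10:04Z") },
  { committedAt: new Date("2024-05-01T14:30Z"), firstFailureAt: new Date("2024-05-01T14:41Z") },
];
console.log(`MTTD: ${meanTimeToDetectMinutes(sample).toFixed(1)} minutes`);
// MTTD: 7.5 minutes (4 min and 11 min, averaged)
```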

***

{% hint style="success" %}
**Recommended path for QA Managers:**

1. [Analytics Dashboard](https://learning.contextqa.com/reporting/analytics-dashboard) — understand what's available
2. [Test Results](https://learning.contextqa.com/reporting/test-results) — navigate the results interface
3. [Flaky Test Detection](https://learning.contextqa.com/reporting/flaky-test-detection) — clean up your test suite
4. [AI Self-Healing](https://learning.contextqa.com/web-testing/self-healing) — understand how tests maintain themselves
5. [Jira Integration](https://learning.contextqa.com/integrations/jira) — connect Jira for defect tracking
6. [Roles & Permissions](https://learning.contextqa.com/administration/roles-and-permissions) — set up your team
7. [Scheduling](https://learning.contextqa.com/execution/scheduling) — automate your regression runs
{% endhint %}

***

{% hint style="info" %}
**Want to see your team's test coverage gaps?** [**Book a QA Strategy Demo →**](https://contextqa.com/book-a-demo/) — We'll analyze your current test suite and show you exactly where ContextQA closes the gaps.

*QA managers using ContextQA report 3× more releases per quarter with the same team size.*
{% endhint %}
