# For Product Managers

{% hint style="info" %}
**Who is this for?** Product Managers and Product Owners who define features, write acceptance criteria, and need confidence that what ships matches what was specified.
{% endhint %}

You write the requirements. You define done. But by the time a feature reaches QA, your acceptance criteria have often been interpreted, compressed, or only partially tested. ContextQA closes that loop: your Jira ticket becomes automated test cases, so every feature is tested against your original specification — not an engineer's interpretation of it.

***

## From Ticket to Test in One Step

Paste your Jira ticket URL into ContextQA (or ask your AI assistant with ContextQA MCP connected) and get automated test cases generated from your acceptance criteria:

**Your Jira ticket:**

```
PROJ-456: Add discount code field to checkout
Acceptance Criteria:
- Valid discount codes reduce the order total by the configured percentage
- Invalid codes show an error message "Invalid discount code"
- Expired codes show "This discount code has expired"
- The discount amount is visible in the order summary
```

**ContextQA generates:**

* Test Case 1: Apply valid 10% discount code → verify order total reduced
* Test Case 2: Enter invalid code → verify error message displayed
* Test Case 3: Enter expired code → verify expiry message displayed
* Test Case 4: Verify discount line item appears in order summary

Your acceptance criteria, automated. No manual interpretation.
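If your AI assistant is connected via ContextQA MCP, the request can be phrased as a tool call. The sketch below is illustrative only — the tool name and parameters are assumptions, so check your assistant's MCP tool listing for the exact signature:

```
generate_tests_from_jira(
    jira_url="https://yourcompany.atlassian.net/browse/PROJ-456",
    workspace_version_id=YOUR_WV_ID
)
```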

→ [AI Test Generation from Jira](https://learning.contextqa.com/ai-features/ai-test-generation)

***

## Release Readiness at a Glance

Before every release, you need one number: *what percentage of the acceptance criteria are passing?* The Test Plan summary gives you exactly that:

| Metric                | What It Tells You                         |
| --------------------- | ----------------------------------------- |
| **Pass rate**         | % of tests passing right now              |
| **Failed count**      | Number of failing acceptance criteria     |
| **Test coverage**     | Features with tests vs features without   |
| **Blocking failures** | Critical path tests that are failing      |
| **AI root cause**     | Plain-English explanation of each failure |

You can share this directly with engineering, stakeholders, or in your release review meeting — no technical interpretation required.
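To make the metrics concrete, here is a minimal sketch (not ContextQA's API — the field names and sample data are invented) of how pass rate, failed count, and blocking failures are derived from a set of test results:

```python
# Illustrative only: each result records its status and whether the
# test covers a critical path. Field names are assumptions.
results = [
    {"name": "apply_valid_discount", "status": "pass", "critical": True},
    {"name": "invalid_code_error",   "status": "pass", "critical": False},
    {"name": "expired_code_message", "status": "fail", "critical": False},
    {"name": "checkout_submit",      "status": "fail", "critical": True},
]

# Pass rate: share of tests passing right now.
pass_rate = 100 * sum(r["status"] == "pass" for r in results) / len(results)
# Failed count: acceptance criteria currently failing.
failed = [r["name"] for r in results if r["status"] == "fail"]
# Blocking failures: failing tests on the critical path.
blocking = [r["name"] for r in results if r["status"] == "fail" and r["critical"]]

print(f"Pass rate: {pass_rate:.0f}%")   # Pass rate: 50%
print(f"Failed: {len(failed)}")         # Failed: 2
print(f"Blocking: {blocking}")          # Blocking: ['checkout_submit']
```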

→ [Test Results](https://learning.contextqa.com/reporting/test-results) | [Analytics Dashboard](https://learning.contextqa.com/reporting/analytics-dashboard)

***

## Feature Coverage Tracking

Track which features have test coverage and which don't:

{% tabs %}
{% tab title="Coverage by Feature" %}
Organize test suites to mirror your feature areas:

* `checkout-flow/` suite → tests for all checkout AC
* `user-account/` suite → profile, settings, preferences
* `notifications/` suite → email, in-app, push

The Analytics Dashboard shows pass rate and coverage percentage per suite — giving you a feature-level view of quality.
{% endtab %}

{% tab title="Sprint Coverage" %}
After each sprint, check: do all stories from the sprint have test coverage?

Use the `analyze_coverage_gaps` tool (via your team's AI assistant) to identify stories from the last sprint that have no associated test cases — and flag them before they ship.
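A hypothetical invocation of that tool (the parameter names here are illustrative, not an exact signature — consult your MCP tool listing):

```
analyze_coverage_gaps(
    sprint="Sprint 42",
    workspace_version_id=YOUR_WV_ID
)
```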
{% endtab %}

{% tab title="Release Signoff" %}
Create a **Release Gate Test Plan** that runs every test covering the features in the current release. Before shipping:

1. Run the Release Gate plan
2. Review the summary
3. Sign off on green or escalate failures

This creates an auditable quality record tied to each release.
{% endtab %}
{% endtabs %}

***

## Communicating Quality to Stakeholders

### Release Quality Reports

ContextQA generates shareable reports that non-technical stakeholders can understand:

* **Pass rate trend** — is quality improving or declining?
* **Feature risk map** — which features have low test coverage?
* **Failure breakdown** — how many failures are application bugs vs test configuration?
* **Regression detection** — how many regressions were caught before reaching production?

Export as PDF or share via link — no login required for stakeholders.

→ [Exporting Reports](https://learning.contextqa.com/reporting/exporting-reports)

### Bug Prevention Metrics

Use these figures to quantify QA's value to leadership:

| Metric                            | Source                                                   |
| --------------------------------- | -------------------------------------------------------- |
| Regressions caught pre-production | Failures classified as "Application Bug" in ContextQA    |
| Test coverage %                   | Analytics Dashboard coverage view                        |
| Release speed                     | Compare release cadence before/after ContextQA adoption  |
| Manual testing time saved         | Self-healing events = manual investigation hours avoided |
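The "time saved" line reduces to simple arithmetic. The sketch below shows one way to compute it for a leadership report — every number is a placeholder assumption, not a ContextQA default:

```python
# Illustrative ROI arithmetic for a quarterly report. All inputs are
# placeholders: substitute your own counts from the Analytics Dashboard.
self_healing_events = 120      # self-healing events this quarter
avg_investigation_hours = 0.5  # assumed manual triage time per event
regressions_caught = 14        # failures classified "Application Bug"

hours_saved = self_healing_events * avg_investigation_hours

print(f"Regressions caught pre-production: {regressions_caught}")
print(f"Manual investigation hours avoided: {hours_saved:.0f}")  # 60
```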

***

## Working with Your Engineering Team

### Connecting Jira for Defect Tracking

When ContextQA finds a failure, it can automatically create a Jira issue:

1. The test fails
2. AI generates root cause analysis
3. A Jira ticket is created with: failure summary, affected step, screenshot evidence, and suggested fix
4. The ticket is assigned to the relevant engineer

No manual bug filing. No screenshot copy-paste. The defect goes directly into your workflow.
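An auto-filed ticket might look like the following. This is an illustrative sketch of the four elements listed above, not ContextQA's actual ticket template:

```
PROJ-512: [ContextQA] Checkout regression — "Apply discount code" step failed
Failure summary: discount input not found on the checkout page
Affected step:   Step 3 — enter code into the discount field
Evidence:        screenshot attached at point of failure
Suggested fix:   field selector changed in latest deploy; review the step locator
```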

→ [Jira Integration](https://learning.contextqa.com/integrations/jira)

### Slack Notifications

Get notified in the right Slack channel when:

* A nightly regression run fails
* A critical path test fails after a deployment
* A new test plan completes with a summary

→ [Slack Integration](https://learning.contextqa.com/integrations/slack)

***

## Generating Tests from Figma Designs

If your team uses Figma for design specs, ContextQA can generate tests directly from design files — before the feature is even built:

```
generate_tests_from_figma(
    figma_url="https://figma.com/file/ABC123/Checkout-Redesign",
    workspace_version_id=YOUR_WV_ID
)
```

Tests are created based on the UI elements and interaction patterns defined in the design. When engineering ships the feature, the tests are already waiting.

→ [AI Test Generation](https://learning.contextqa.com/ai-features/ai-test-generation)

***

{% hint style="success" %}
**Recommended path for Product Managers:**

1. [AI Test Generation](https://learning.contextqa.com/ai-features/ai-test-generation) — understand how tickets become tests
2. [Analytics Dashboard](https://learning.contextqa.com/reporting/analytics-dashboard) — your release readiness view
3. [Jira Integration](https://learning.contextqa.com/integrations/jira) — connect defect tracking
4. [Test Results](https://learning.contextqa.com/reporting/test-results) — read and share execution reports
{% endhint %}

***

{% hint style="info" %}
**Ship features that actually match your acceptance criteria.** [**Book a Product Demo →**](https://contextqa.com/book-a-demo/) — See how your Jira tickets become automated test cases in minutes.

*Teams using ContextQA catch acceptance criteria failures before sprint review — not after production release.*
{% endhint %}
