# Test Generation

{% hint style="info" %}
**Who is this for?** SDETs, developers, and DevOps engineers integrating ContextQA with AI coding assistants (Claude, Cursor) or CI/CD pipelines.
{% endhint %}

These 11 tools are the fastest path from any source artifact to a runnable ContextQA test case. Each accepts a different input format and returns fully structured test cases ready to execute.

***

## `generate_tests_from_code_change`

Generates targeted test cases by analyzing a git diff or pull request description. The tool identifies which user-facing flows are affected by the code change and creates regression tests for them.

**Category:** Test Generation **Authentication required:** Yes

### Parameters

| Name          | Required | Type   | Description                                                   |
| ------------- | -------- | ------ | ------------------------------------------------------------- |
| `diff_text`   | ✅        | string | The raw `git diff` output or a PR description                 |
| `app_url`     | ✅        | string | Base URL of the application being changed                     |
| `name_prefix` | ❌        | string | Prefix to add to generated test case names (e.g., `PR-1234_`) |

### Returns

JSON object with:

* `test_cases_created` — number of test cases generated
* `changed_files` — list of files identified in the diff
* `test_cases` — array of created test case details (IDs, names, step counts)
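
A successful response resembles the following sketch. The top-level keys come from the list above; the values and the exact field names inside each `test_cases` entry are illustrative, not guaranteed:

```json
{
  "test_cases_created": 2,
  "changed_files": ["src/checkout/payment.ts", "src/checkout/summary.ts"],
  "test_cases": [
    { "id": "tc_101", "name": "PR-456_Checkout payment flow", "step_count": 8 },
    { "id": "tc_102", "name": "PR-456_Order summary totals", "step_count": 5 }
  ]
}
```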

### Workflow

```bash
# Get the diff from your PR
git diff main...feature/my-branch > diff.txt
```

Then pass the diff contents to the tool:

```
generate_tests_from_code_change(
  diff_text=<contents of diff.txt>,
  app_url="https://staging.myapp.com",
  name_prefix="PR-456_"
)
```

### Tips

* Works best with focused diffs (one feature area per run).
* For large PRs with 50+ changed files, split into smaller diff segments for more targeted tests.

### Related Tools

`analyze_test_impact` • `create_test_case` • `generate_edge_cases`

***

## `generate_tests_from_jira_ticket`

Reads a Jira or Azure DevOps ticket — including its description, acceptance criteria, and comments — and generates corresponding test cases.

**Category:** Test Generation **Authentication required:** Yes

### Parameters

| Name                          | Required | Type    | Description                                                                         |
| ----------------------------- | -------- | ------- | ----------------------------------------------------------------------------------- |
| `ticket_id`                   | ✅        | string  | Ticket identifier (e.g., `APP-1234`, `CQA-567`)                                     |
| `include_acceptance_criteria` | ❌        | boolean | Whether to parse acceptance criteria into separate test scenarios (default: `true`) |

### Returns

JSON object with generated test scenarios including IDs and step previews.

### Notes

* The integration must be configured in ContextQA Settings → Integrations → Product Management before this tool can read ticket content.
* Each acceptance criterion becomes its own test case.
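
### Example

A minimal call (the ticket ID is illustrative):

```json
{
  "ticket_id": "APP-1234",
  "include_acceptance_criteria": true
}
```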

### Related Tools

`generate_tests_from_linear_ticket` • `create_defect_ticket`

***

## `generate_tests_from_linear_ticket`

Creates test cases from a Linear issue. Accepts ticket fields directly — fetch the issue from the Linear MCP first, then pass the data here.

**Category:** Test Generation **Authentication required:** Yes

### Parameters

| Name                 | Required | Type   | Description                               |
| -------------------- | -------- | ------ | ----------------------------------------- |
| `ticket_id`          | ✅        | string | Linear issue identifier (e.g., `ENG-789`) |
| `title`              | ✅        | string | Issue title                               |
| `description`        | ✅        | string | Full issue description                    |
| `app_url`            | ✅        | string | URL of the application to test            |
| `steps_to_reproduce` | ❌        | string | Steps to reproduce (for bug tickets)      |
| `expected_behavior`  | ❌        | string | Expected outcome                          |
| `actual_behavior`    | ❌        | string | Actual outcome (what's wrong)             |

### Returns

JSON object with created test case details.
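
### Example

A call for a bug ticket might look like this. The title, description, URL, and behavior fields are illustrative values, not real ticket content:

```json
{
  "ticket_id": "ENG-789",
  "title": "Checkout button unresponsive on Safari",
  "description": "Clicking 'Place Order' does nothing on Safari 17.",
  "app_url": "https://staging.myapp.com",
  "steps_to_reproduce": "1. Add an item to the cart. 2. Open checkout. 3. Click 'Place Order'.",
  "expected_behavior": "Order is submitted and the confirmation page loads.",
  "actual_behavior": "Nothing happens; no network request is sent."
}
```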

### Workflow with Linear MCP

```
1. list_issues (Linear MCP) → get ENG-789 fields
2. generate_tests_from_linear_ticket(ticket_id="ENG-789", title=..., description=..., app_url=...)
3. execute_test_case(test_case_id=<returned id>)
```

### Related Tools

`generate_tests_from_jira_ticket` • `reproduce_from_ticket`

***

## `generate_tests_from_figma`

Analyzes a Figma design file to extract UI flows and generate corresponding test cases. The AI examines screen designs, interactive components, and flow connections.

**Category:** Test Generation **Authentication required:** Yes

### Parameters

| Name        | Required | Type   | Description                                                              |
| ----------- | -------- | ------ | ------------------------------------------------------------------------ |
| `figma_url` | ✅        | string | Figma file or frame URL (must be publicly accessible or shared via link) |

### Returns

JSON object with generated test scenarios derived from the design.
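
### Example

An illustrative call; the URL follows Figma's standard link format, but the file key, name, and frame ID are made up:

```json
{
  "figma_url": "https://www.figma.com/file/AbC123xyz/Checkout-Redesign?node-id=12-345"
}
```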

### Tips

* Share the specific frame or flow you want tested, not the entire file, to get the most focused results.
* Works best with annotated designs that include interaction notes.
* Generated tests reflect the *intended* design — run them against staging to verify the implementation matches the design.

### Related Tools

`generate_tests_from_requirements` • `create_test_case`

***

## `generate_tests_from_requirements`

Converts a block of plain-text requirements into automated test scenarios. Suitable for PRDs, feature specs, or user story documents.

**Category:** Test Generation **Authentication required:** Yes

### Parameters

| Name                | Required | Type   | Description                                                 |
| ------------------- | -------- | ------ | ----------------------------------------------------------- |
| `requirements_text` | ✅        | string | Raw requirements text (paste the document content directly) |

### Returns

JSON object with generated test scenarios mapped to requirement sections.

### Example

```json
{
  "requirements_text": "The user must be able to reset their password by clicking 'Forgot Password' on the login page. The system sends a reset link to the user's registered email. The link expires after 24 hours. The user must set a password that is at least 8 characters, contains one number, and contains one special character."
}
```

This generates separate test cases for: the forgot password link, the email delivery, link expiry behavior, and password complexity validation.

### Related Tools

`generate_tests_from_excel` • `generate_tests_from_figma`

***

## `generate_tests_from_excel`

Parses an Excel or CSV file containing manual test cases and converts them into automated ContextQA tests. Useful for migrating existing manual test libraries.

**Category:** Test Generation **Authentication required:** Yes

### Parameters

| Name         | Required | Type   | Description                                       |
| ------------ | -------- | ------ | ------------------------------------------------- |
| `file_path`  | ✅        | string | Absolute local path to the `.xlsx` or `.csv` file |
| `sheet_name` | ❌        | string | Specific sheet to parse (default: first sheet)    |

### Returns

JSON object with generated test cases matched to spreadsheet rows.

### Expected spreadsheet format

The tool recognizes common test case template formats. For best results, include these columns:

* `Test Case Name` or `Title`
* `Steps` or `Test Steps`
* `Expected Result`
* `URL` (optional)
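
### Example

A hypothetical call pointing at a migrated spreadsheet (the path and sheet name are illustrative):

```json
{
  "file_path": "/Users/me/qa/manual_test_cases.xlsx",
  "sheet_name": "Regression"
}
```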

### Related Tools

`generate_tests_from_requirements` • `migrate_repo_to_contextqa`

***

## `generate_tests_from_swagger`

Ingests an OpenAPI/Swagger specification and generates comprehensive API contract and coverage tests.

**Category:** Test Generation **Authentication required:** Yes

### Parameters

| Name               | Required | Type   | Description                                               |
| ------------------ | -------- | ------ | --------------------------------------------------------- |
| `file_path_or_url` | ✅        | string | Local file path or URL to the OpenAPI spec (JSON or YAML) |

### Returns

JSON object with generated API test cases covering endpoints, methods, and response schemas.

### Coverage

For each endpoint discovered, the tool generates:

* **Happy path** — valid request with expected 2xx response
* **Authentication failure** — missing or invalid token → 401
* **Validation errors** — missing required fields → 400/422
* **Not found** — requests with non-existent resource IDs → 404

### Example

```json
{
  "file_path_or_url": "https://api.myapp.com/openapi.json"
}
```

### Related Tools

`generate_tests_from_requirements` • `execute_test_suite`

***

## `generate_tests_from_video`

Analyzes a screen recording (`.mp4`, `.webm`) of a user performing actions in the application and converts the observed user journey into an automated test.

**Category:** Test Generation **Authentication required:** Yes

### Parameters

| Name                  | Required | Type    | Description                                                                         |
| --------------------- | -------- | ------- | ----------------------------------------------------------------------------------- |
| `video_file_path`     | ✅        | string  | Absolute local path to the video file                                               |
| `extract_transcripts` | ❌        | boolean | Whether to use audio transcription to extract additional context (default: `false`) |

### Returns

JSON object with generated test cases derived from the video analysis.
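
### Example

An illustrative call with narration transcription enabled (the file path is made up):

```json
{
  "video_file_path": "/Users/me/recordings/checkout_walkthrough.mp4",
  "extract_transcripts": true
}
```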

### Tips

* Record at a standard browser resolution (1280×800 or 1920×1080) for best OCR accuracy.
* Keep recordings under 10 minutes; longer recordings may produce overly broad test cases.
* Enable `extract_transcripts: true` if your recording includes narration describing test intent.

### Related Tools

`generate_tests_from_requirements` • `create_test_case`

***

## `generate_tests_from_analytics_gap`

Converts a high-traffic, untested user flow identified by `analyze_coverage_gaps` into an automated test case.

**Category:** Test Generation / Analytics & Coverage **Authentication required:** Yes

### Parameters

| Name                  | Required | Type  | Description                                                                                            |
| --------------------- | -------- | ----- | ------------------------------------------------------------------------------------------------------ |
| `flow_event_sequence` | ✅        | array | Ordered list of analytics event names representing the user flow (from `analyze_coverage_gaps` output) |

### Returns

JSON object with the generated test case that covers the identified gap.

### Workflow

```
1. analyze_coverage_gaps(analytics_provider="mixpanel")
   → returns untested flows with event sequences

2. generate_tests_from_analytics_gap(
     flow_event_sequence=["page_view_home", "click_signup", "form_submit_register", "page_view_dashboard"]
   )
   → returns created test case
```

### Related Tools

`analyze_coverage_gaps` • `create_test_case`

***

## `generate_edge_cases`

Generates boundary and negative test scenarios for a given feature or component using AI inference. Produces test cases that typical happy-path test generation misses.

**Category:** Test Generation **Authentication required:** Yes

### Parameters

| Name            | Required | Type   | Description                                                        |
| --------------- | -------- | ------ | ------------------------------------------------------------------ |
| `context_query` | ✅        | string | Description of the feature or component to generate edge cases for |

### Returns

JSON object with edge case scenarios including:

* Boundary value tests (min/max inputs)
* Invalid data formats
* Concurrent access scenarios
* Session timeout handling
* Error recovery paths

### Example

```json
{
  "context_query": "User registration form with email, password, and phone number fields. Email must be unique. Password must be 8+ characters with one number and one symbol."
}
```

Generated edge cases include: duplicate email registration, password exactly at 8 characters, password at 7 characters (should reject), phone number with country code, special characters in email local part, etc.

### Related Tools

`generate_tests_from_requirements` • `create_test_case`

***

## `generate_contextqa_tests_from_n8n`

Generates ContextQA test cases from an n8n workflow. Tests the happy path through the workflow, triggering it and validating each node's execution result.

**Category:** Test Generation **Authentication required:** Yes

### Parameters

| Name               | Required | Type   | Description                                                                                                                                   |
| ------------------ | -------- | ------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
| `file_path_or_url` | ✅        | string | Local path to an n8n workflow JSON export, a direct JSON URL, or an n8n Cloud workflow page URL (requires `N8N_API_KEY` environment variable) |
| `app_url`          | ❌        | string | Base URL of the application the workflow interacts with                                                                                       |

### Returns

JSON object with `status` and an array of created test cases, one per workflow path.
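
### Example

A hypothetical call with a local workflow export (the path and URL are illustrative):

```json
{
  "file_path_or_url": "/Users/me/workflows/order-sync.json",
  "app_url": "https://staging.myapp.com"
}
```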

### n8n API key configuration

For n8n Cloud URLs, set the `N8N_API_KEY` environment variable on the MCP server before starting:

```bash
export N8N_API_KEY=your-n8n-api-key
```

### Related Tools

`create_test_case` • `execute_test_suite`
