# AI Test Generation

{% hint style="info" %}
**Who is this for?** All roles — developers, QA engineers, and managers — who want to generate test cases automatically from Jira tickets, Figma designs, Swagger specs, videos, or plain English descriptions.
{% endhint %}

ContextQA can generate complete test cases from 10 different source types. You do not need to write steps manually — supply a source artifact (a ticket, a design file, a video, a git diff) and the AI produces a ready-to-execute test case with all steps, assertions, and expected results filled in.

This page covers each generation method, when to use it, and how to invoke it — both from the ContextQA UI and from the MCP tool interface.

***

## 1. From natural language (most common)

The simplest and most flexible generation path. Describe what the test should do in plain English and ContextQA generates all the steps.

### In the UI

1. Navigate to **Test Development** in the left sidebar
2. Click the **+** button to open the test case creation dialog
3. Select **Start with AI Assistance** from the method selection screen
4. *(Optional)* Select **Prerequisites** — existing test cases to run before this test
5. Enter the **URL** of the application page you want to test
6. Type your test scenario in the **Description** field — describe the user journey in plain English
7. Select the **Target Platform** — **Web Application** or **Mobile**
8. *(Optional)* Expand **Advanced Settings** to configure AI behavior:
   * **Enable AI Smartness** — Choose `Expert` (thorough analysis), `Fast` (quick generation), `Strict` (follows description exactly), or `Organization Default`
   * **AI Action** — Choose `Create Steps`, `Dynamic Steps`, `Action`, or `Organization Default`
   * **Knowledge Base** — Select a knowledge base to provide application context
   * **Environments** — Select a target environment
9. Click **Generate & Execute Test Case** — ContextQA creates and runs the test case with all steps filled in
10. Review the generated test cases on the verification screen, then click **Save** or **Save All Test Cases**

{% hint style="info" %}
If the **Generate From Crawl** feature is enabled on your plan, you can click **Generate From Crawl** instead of **Generate & Execute Test Case**. This option crawls the target URL and generates test cases based on the discovered pages and interactions.
{% endhint %}

**Example task descriptions:**

```
Log in as admin@test.com with password Test123!, navigate to the
Products page, search for "wireless headphones", and verify that
at least one product appears in the search results.
```

```
Open the checkout page with one item in the cart, fill in the
shipping address form with valid UK address data, proceed to
payment, and verify the order summary shows the correct total.
```

### AI Smartness modes

| Mode                     | Behavior                                                                                                              |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------- |
| **Expert**               | The AI takes more time to analyze the application and produces thorough, detailed steps with comprehensive assertions |
| **Fast**                 | The AI prioritizes speed, generating steps quickly with less in-depth analysis                                        |
| **Strict**               | The AI follows your description exactly with minimal interpretation or additional steps                               |
| **Organization Default** | Uses the AI Smartness setting configured by your organization administrator                                           |

### Via MCP

```python
create_test_case(
    url="https://myapp.com/login",
    task_description="Log in as admin@test.com with password Test123!, navigate to the Products page, search for 'wireless headphones', and verify at least one product appears in the results.",
    name="Products - Search Smoke Test"
)
```

**Best for:**

* Creating new tests quickly when you know the user journey
* Exploratory testing where you are discovering application behavior
* One-off tests for specific regression scenarios
* Any scenario where you can describe the workflow in 1–3 sentences

**Tips for better results:**

* Include the starting URL in the task description or `url` parameter
* Mention specific field names, button labels, and page section names as they appear in the UI
* Include any setup state the test needs (e.g., "already logged in", "with an empty cart")
* For assertions, be specific: "verify the success message says 'Order placed'" is better than "verify success"
* Use **Expert** AI Smartness for complex flows where thoroughness matters more than speed
* Attach a **Knowledge Base** when the AI needs context about your application's domain or terminology

***

## 2. From Jira / Azure DevOps tickets

Generate tests directly from user stories and bug tickets. The AI reads the ticket description and acceptance criteria to create comprehensive test scenarios.

### Prerequisites

Connect your issue tracker via **Settings → Integrations → Bug Tracking → Jira** (or Azure DevOps). You will need:

* Jira base URL (e.g., `https://yourorg.atlassian.net`)
* Email address associated with your Jira account
* Jira API token (create one at id.atlassian.com/manage-profile/security/api-tokens)

### In the UI

1. Navigate to **Test Development → + New Test Case**
2. Select **Generate from Jira Ticket**
3. Enter the ticket ID (e.g., `MYAPP-123`)
4. Choose whether to include acceptance criteria
5. Click **Generate**

ContextQA creates one test case for the main scenario described in the ticket. If acceptance criteria are present and you enabled that option, it creates additional test cases — one per acceptance criterion — covering each condition separately.

### Via MCP

```python
generate_tests_from_jira_ticket(
    ticket_id="MYAPP-567",
    include_acceptance_criteria=True
)
```

**What the AI reads:**

* Ticket summary (title)
* Ticket description
* Acceptance criteria (if present)
* Labels and priority (used to suggest edge cases for high-priority or bug tickets)

**Best for:**

* Agile teams who want test cases that map directly to user stories
* Ensuring every acceptance criterion has a corresponding automated test
* Bug tickets — the AI generates a reproduction test case from the steps-to-reproduce section
* Teams that manage requirements in Jira and want to maintain traceability

**Example output for a ticket like:**

> MYAPP-123: As a user, I want to reset my password so I can regain access if I forget it.
>
> AC1: User can request a reset link by entering their email
>
> AC2: Reset link expires after 24 hours
>
> AC3: New password must be at least 8 characters

ContextQA generates three test cases:

* "Password Reset - Request Link Flow" (main scenario)
* "Password Reset - Link Expiry After 24 Hours" (AC2)
* "Password Reset - Password Minimum Length Validation" (AC3)
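
The per-criterion splitting can be approximated locally if you want to preview how a ticket will fan out. The sketch below is assumed logic for illustration, not ContextQA's actual parser — it extracts `AC<n>:` markers from a ticket body, one criterion per generated test case:

```python
import re

def split_acceptance_criteria(ticket_body: str) -> list[str]:
    """Extract the text of each ACn: criterion from a ticket body."""
    # Capture everything after "ACn:" up to the next "ACn:" marker or end of text
    pattern = re.compile(r"AC\d+:\s*(.*?)(?=\s*AC\d+:|$)", re.DOTALL)
    return [m.strip() for m in pattern.findall(ticket_body)]

ticket = (
    "As a user, I want to reset my password so I can regain access if I forget it. "
    "AC1: User can request a reset link by entering their email "
    "AC2: Reset link expires after 24 hours "
    "AC3: New password must be at least 8 characters"
)

criteria = split_acceptance_criteria(ticket)
print(len(criteria))   # 3
```

Each extracted criterion corresponds to one of the additional test cases listed above.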

***

## 3. From Figma designs

Generate tests from Figma design files before the application is even built. The AI analyzes design screens and creates tests matching the intended UX flows.

### Prerequisites

* A Figma URL with view access (the AI does not require edit access)
* For private files: Figma personal access token configured in **Settings → Integrations → Design**

### Via MCP

```python
generate_tests_from_figma(
    figma_url="https://www.figma.com/file/ABC123/My-App-Designs?node-id=1%3A2"
)
```

### How it works

The AI receives the Figma file and:

1. Identifies all screens and frames in the design
2. Analyzes the interactive elements (buttons, inputs, links, navigation)
3. Infers the intended user flows by connecting screens that share navigation patterns
4. Creates test cases for each distinct flow it identifies

For a four-screen checkout flow (cart → shipping → payment → confirmation), the AI creates a test case that navigates through each screen in sequence, fills the forms with realistic test data, and asserts the correct content on each screen.

**Best for:**

* Design-driven development — testing the intended UX before it is built
* Catching mismatches between design intent and implementation during QA
* Generating a test suite in parallel with development to reduce QA bottlenecks
* Design reviews — sharing generated test scenarios with stakeholders to validate flows

**Tips:**

* Use frames (not groups) for individual screens in Figma for best results
* Name your frames descriptively — "Step 1: Shipping Address" is more useful to the AI than "Frame 123"
* For complex flows, link the Figma file URL to the specific flow's starting screen using `node-id`

***

## 4. From Excel / CSV files

Migrate existing manual test case libraries into ContextQA. The AI parses your spreadsheet and maps columns to test steps, expected results, and metadata.

### Supported formats

* `.xlsx` (Excel 2007+)
* `.xls` (Excel 97-2003)
* `.csv` (comma-separated)

### Expected column structure

ContextQA recognizes common column names automatically:

| Detected Column                            | Mapped To          |
| ------------------------------------------ | ------------------ |
| `Test Case Name`, `Name`, `Title`          | Test case name     |
| `Step`, `Step Description`, `Action`       | Step description   |
| `Expected Result`, `Expected`, `Assertion` | Expected result    |
| `URL`, `Page`, `Base URL`                  | Test case URL      |
| `Tags`, `Labels`, `Category`               | Test case tags     |
| `Priority`, `Severity`                     | Test case priority |

If your column names do not match these patterns, ContextQA presents a mapping UI where you can assign each column to the appropriate field before importing.
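
If you import the same spreadsheets regularly, you may prefer to normalize headers once before upload instead of repeating the mapping UI. The sketch below is a local pre-processing convenience, not a ContextQA API — the alias names on the left are examples of non-standard headers you might encounter:

```python
import csv
import io

# Map common non-standard header names to names ContextQA detects automatically
HEADER_ALIASES = {
    "tc_name": "Test Case Name",
    "action_taken": "Action",
    "outcome": "Expected Result",
    "link": "URL",
}

def normalize_headers(csv_text: str) -> str:
    """Rewrite the header row of a CSV so columns match auto-detected names."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    rows[0] = [HEADER_ALIASES.get(h.strip().lower(), h) for h in rows[0]]
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)
    return out.getvalue()

raw = "tc_name,action_taken,outcome\nLogin,Open /login and submit,Dashboard is shown\n"
print(normalize_headers(raw).splitlines()[0])
# Test Case Name,Action,Expected Result
```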

### Via MCP

```python
generate_tests_from_excel(
    file_path="/path/to/test-cases.xlsx"
)
```

For remote files:

```python
generate_tests_from_excel(
    file_path="https://example.com/shared/test-cases.xlsx"
)
```

**Best for:**

* Migrating from manual QA processes to automation
* Teams that maintain test cases in shared spreadsheets
* One-time import of a large legacy test library
* Taking over a QA process from another team that used Excel

***

## 5. From Swagger / OpenAPI specifications

Generate API contract tests for every endpoint in your OpenAPI specification. ContextQA creates test cases that verify each endpoint's request/response contract, status codes, and data shapes.

### Supported spec formats

* OpenAPI 3.0 (`.json` or `.yaml`)
* OpenAPI 3.1
* Swagger 2.0

### Via MCP

```python
generate_tests_from_swagger(
    file_path_or_url="https://api.myapp.com/openapi.json"
)
```

Or from a local file:

```python
generate_tests_from_swagger(
    file_path_or_url="/path/to/openapi.yaml"
)
```

### What gets generated

For each endpoint, ContextQA creates:

* **Happy path test**: valid request with all required parameters, asserts 2xx response
* **Validation test**: missing required fields, asserts 4xx response
* **Response schema test**: verifies the response body matches the declared schema
* **Auth test** (if security schemes are defined): verifies unauthorized requests get 401/403
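
To sanity-check the expected coverage before generating, you can enumerate the operations in a spec yourself — roughly one happy-path test per operation, plus the validation, schema, and auth variants described above. A minimal sketch over a parsed OpenAPI document (the spec dict here is a trimmed illustrative example):

```python
# Enumerate (method, path) pairs from a parsed OpenAPI document
HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def list_operations(spec: dict) -> list[tuple[str, str]]:
    """Return every (METHOD, path) operation declared in the spec."""
    ops = []
    for path, item in spec.get("paths", {}).items():
        for method in item:
            if method.lower() in HTTP_METHODS:
                ops.append((method.upper(), path))
    return sorted(ops)

spec = {
    "openapi": "3.0.0",
    "paths": {
        "/users": {"get": {}, "post": {}},
        "/users/{id}": {"get": {}, "delete": {}},
    },
}
print(list_operations(spec))
# [('DELETE', '/users/{id}'), ('GET', '/users'), ('GET', '/users/{id}'), ('POST', '/users')]
```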

**Best for:**

* API testing coverage — ensure every endpoint has at least one automated test
* Contract testing — catch breaking changes when the API changes
* Microservice teams where API test coverage is a release gate requirement
* Generating a baseline test suite immediately after a new service is deployed

***

## 6. From video screen recordings

Convert screen recordings of user journeys into automated tests. The AI watches the video and extracts each distinct user action as a test step.

### Supported video formats

* `.mp4` (H.264 codec recommended)
* `.mov` (QuickTime)
* `.webm`

### Via MCP

```python
generate_tests_from_video(
    video_file_path="/path/to/user-journey-demo.mp4"
)
```

### How it works

The AI processes the video frame by frame to identify:

* Navigation events (URL changes, page loads)
* Click actions (identifies what was clicked based on visual context)
* Text input (captures what was typed, including field context)
* Assertions implied by visible state changes (a success banner appearing, a list populating)

Each identified action becomes a test step. The AI uses the visual context of each action (what is on screen, what the user interacted with) to write the step description in natural language.

**Best for:**

* Converting screen recordings from user research sessions into regression tests
* Capturing complex multi-page workflows that would take a long time to write manually
* Onboarding documentation — record a product demo once and generate tests from it
* Creating tests for legacy applications where no design files or specifications exist

**Tips for best video quality:**

* Record at normal speed — avoid fast-forwarding through steps
* Keep the cursor visible — use a cursor highlight tool if possible
* Pause briefly on each page before clicking to allow the AI to capture the page state
* Avoid recording over remote desktop connections (additional latency causes frame artifacts)

***

## 7. From requirements documents

Paste a requirements document in plain text format and ContextQA generates test scenarios covering all stated requirements.

### Via MCP

```python
generate_tests_from_requirements(
    requirements_text="""
    User Registration Requirements:

    1. The registration form shall collect: first name, last name, email address, and password.
    2. Email addresses must be unique across all user accounts.
    3. Passwords must be at least 8 characters, contain at least one uppercase letter,
       one lowercase letter, and one number.
    4. A verification email shall be sent to the provided address upon successful registration.
    5. The user shall be redirected to the dashboard after successful registration.
    6. If registration fails (duplicate email, invalid password), an error message shall
       be displayed inline next to the relevant field.
    """
)
```

### What gets generated

ContextQA generates test cases for:

* Each positive requirement (the happy path that must work)
* Validation rules (each constraint described in the requirements)
* Error states (each failure mode mentioned)

For the example above, it would generate tests for: successful registration, duplicate email error, password too short, password no uppercase, password no number, email verification sent, redirect after registration, and inline error message display.
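
The first step of that breakdown — splitting a numbered requirements document into individual requirements — can be approximated with a few lines of Python. This is a local illustration of the idea, not ContextQA's actual parser:

```python
import re

def extract_requirements(text: str) -> list[str]:
    """Split a numbered requirements document into individual requirement strings."""
    # Each requirement starts with "N." at the beginning of a line and may wrap
    chunks = re.split(r"(?m)^\s*\d+\.\s*", text)
    return [" ".join(c.split()) for c in chunks if c.strip()]

doc = """
1. The registration form shall collect: first name, last name, email address, and password.
2. Email addresses must be unique across all user accounts.
3. Passwords must be at least 8 characters.
"""
reqs = extract_requirements(doc)
print(len(reqs))   # 3
```

Each extracted requirement then seeds at least one positive test, plus validation and error-state variants where the wording implies them.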

**Best for:**

* Documentation-driven projects (government, regulated industries, large enterprises)
* Teams that maintain formal requirements specifications
* Converting BRDs (Business Requirements Documents) or FRDs into test suites
* Ensuring full traceability between requirements and tests

***

## 8. From code changes (PR-level testing)

Analyze a git diff and generate tests that specifically target the application flows affected by the changed code. This is the recommended approach for CI/CD integration.

### Via MCP

```python
generate_tests_from_code_change(
    diff_text="""
    diff --git a/src/checkout/payment.py b/src/checkout/payment.py
    index 3f4a1b2..8c9d0e1 100644
    --- a/src/checkout/payment.py
    +++ b/src/checkout/payment.py
    @@ -45,6 +45,12 @@ def process_payment(order_id, payment_method):
    +    if payment_method.type == 'crypto':
    +        raise PaymentMethodNotSupportedError('Crypto payments are not supported')
    """,
    app_url="https://staging.myapp.com"
)
```

### How it works

The AI analyzes the diff to understand:

* Which files changed and what those files are responsible for (route handlers, models, UI components)
* What the behavioral change is (new validation rule, new feature, bug fix, configuration change)
* Which user-facing flows are affected by the changed code paths

It then generates tests that specifically exercise those affected flows — in the example above, it generates a test that attempts to check out with a crypto payment method and asserts the application displays the appropriate error message.
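
The first step of that analysis — identifying which files changed — can be reproduced locally over the unified diff text. A sketch of the idea, not ContextQA's internals:

```python
def changed_files(diff_text: str) -> list[str]:
    """Return the file paths touched by a unified git diff."""
    files = []
    for line in diff_text.splitlines():
        # Each file's section starts with: diff --git a/<path> b/<path>
        if line.startswith("diff --git "):
            files.append(line.split(" b/")[-1])
    return files

diff = """diff --git a/src/checkout/payment.py b/src/checkout/payment.py
--- a/src/checkout/payment.py
+++ b/src/checkout/payment.py
@@ -45,6 +45,12 @@ def process_payment(order_id, payment_method):
+    if payment_method.type == 'crypto':
+        raise PaymentMethodNotSupportedError('Crypto payments are not supported')
"""
print(changed_files(diff))   # ['src/checkout/payment.py']
```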

**Best for:**

* CI/CD pipelines — run targeted tests on every pull request instead of the full suite
* Reducing test execution time in PRs while maintaining meaningful coverage
* Automated test generation as part of a code review process
* Ensuring every meaningful code change has a corresponding test

### GitHub Actions example

```yaml
- name: Generate tests from PR changes
  env:
    CONTEXTQA_USERNAME: ${{ secrets.CONTEXTQA_USERNAME }}
    CONTEXTQA_PASSWORD: ${{ secrets.CONTEXTQA_PASSWORD }}
  run: |
    # Write the diff to a file instead of interpolating it into the script,
    # so quotes and backslashes in the diff cannot break the Python syntax
    git diff origin/main...HEAD > pr.diff
    python - <<'EOF'
    from app.contextqa_client import ContextQAClient
    import os
    client = ContextQAClient(os.environ['CONTEXTQA_USERNAME'], os.environ['CONTEXTQA_PASSWORD'])
    with open('pr.diff') as f:
        diff_text = f.read()
    result = client.generate_tests_from_code_change(
        diff_text=diff_text,
        app_url='https://staging.myapp.com'
    )
    print(result)
    EOF
```

***

## 9. From n8n workflows

Map n8n automation workflow nodes to ContextQA test steps. Each node type in the n8n workflow becomes a corresponding verification step in the test case.

### Via MCP

```python
generate_contextqa_tests_from_n8n(
    file_path_or_url="/path/to/workflow.json"
)
```

Or from a published n8n workflow URL:

```python
generate_contextqa_tests_from_n8n(
    file_path_or_url="https://n8n.io/workflows/1234-my-workflow"
)
```

### Node mapping

| n8n Node Type    | Generated Test Step                                                    |
| ---------------- | ---------------------------------------------------------------------- |
| HTTP Request     | API call assertion: verify endpoint returns expected status and body   |
| Code             | Logic verification: verify output data matches expected transformation |
| AI / LLM         | AI response validation: verify response contains expected content      |
| Webhook          | Webhook trigger test: send test payload and verify processing          |
| Database         | Data assertion: verify database state after workflow execution         |
| Email            | Notification test: verify email delivery and content                   |
| Conditional (IF) | Branch coverage: generate separate test cases for each branch          |
| Loop             | Iteration test: verify correct behavior across N iterations            |
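
You can preview which of these mappings apply to a given workflow by listing the node types in its JSON export before generating. A minimal sketch — the export below is a simplified illustrative n8n workflow, not a full export:

```python
import json

def node_types(workflow_json: str) -> list[str]:
    """List the distinct node types present in an n8n workflow export."""
    workflow = json.loads(workflow_json)
    return sorted({node["type"] for node in workflow.get("nodes", [])})

export = json.dumps({
    "name": "Order sync",
    "nodes": [
        {"name": "Trigger", "type": "n8n-nodes-base.webhook"},
        {"name": "Fetch order", "type": "n8n-nodes-base.httpRequest"},
        {"name": "Branch", "type": "n8n-nodes-base.if"},
    ],
})
print(node_types(export))
# ['n8n-nodes-base.httpRequest', 'n8n-nodes-base.if', 'n8n-nodes-base.webhook']
```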

**Best for:**

* Teams using n8n for business automation who want to test their workflows
* Ensuring n8n workflow changes do not break downstream processes
* Validating webhook integrations end to end
* CI/CD testing for n8n workflow deployments

***

## 10. Edge case generation

Generate boundary conditions, invalid input scenarios, and error state tests for any feature described in natural language.

### Via MCP

```python
generate_edge_cases(
    context_query="user login with email address and password validation"
)
```

### Example output

For the login edge case query, ContextQA generates scenarios including:

* Empty email field submission
* Empty password field submission
* Email address without @ symbol
* Email address with consecutive dots
* Password exceeding maximum length (if any)
* Login with valid email but wrong password (N times, to trigger lockout if applicable)
* Login with correct credentials after account lockout
* Login with SQL injection attempt in the email field
* Login with XSS payload in the password field
* Login with Unicode characters in the password
* Login with leading/trailing whitespace in both fields
* Concurrent login from two different browsers
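
Scenarios like these translate naturally into data-driven negative tests. The sketch below pairs a few of the generated inputs with a deliberately simple email validator to show the shape — the validator stands in for your application under test and is illustrative, not ContextQA code:

```python
import re

def is_valid_email(email: str) -> bool:
    """Toy validator standing in for the application's email check."""
    if email != email.strip():          # leading/trailing whitespace
        return False
    if ".." in email:                   # consecutive dots
        return False
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None

edge_cases = [
    ("", False),                        # empty field
    ("adminexample.com", False),        # missing @ symbol
    ("admin..qa@example.com", False),   # consecutive dots
    (" admin@example.com", False),      # leading whitespace
    ("admin@example.com", True),        # valid baseline
]

for email, expected in edge_cases:
    assert is_valid_email(email) is expected, email
print("all edge cases behave as expected")
```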

**Best for:**

* QA engineers who want comprehensive negative test coverage without writing each scenario manually
* Security-minded teams who need to cover injection attacks and boundary conditions
* Feature completeness reviews before release — run edge case generation on every new feature
* Augmenting a happy path test suite with systematic negative testing

***

## Comparing generation methods

| Method           | Best Input Quality                    | Time to Generate | Test Quality                                      |
| ---------------- | ------------------------------------- | ---------------- | ------------------------------------------------- |
| Natural language | Any — works with vague descriptions   | Seconds          | High for clear descriptions, lower for vague ones |
| Jira ticket      | Well-written tickets with ACs         | Seconds          | Very high — traces directly to requirements       |
| Figma design     | Complete, screen-based designs        | 15-30 seconds    | High for UI flows                                 |
| Excel/CSV        | Structured, well-labeled spreadsheets | 30-60 seconds    | Depends on existing test quality                  |
| Swagger/OpenAPI  | Any valid spec                        | 30-60 seconds    | Very high for API tests                           |
| Video            | Clear, normal-speed recordings        | 1-3 minutes      | High — captures exact interactions                |
| Requirements     | Formal, numbered requirements         | Seconds          | Very high — systematic coverage                   |
| Code change      | Git diff with clear intent            | 15-30 seconds    | High for targeted regression                      |
| n8n workflow     | Valid n8n JSON export                 | 15-30 seconds    | High for integration tests                        |
| Edge cases       | Any feature description               | Seconds          | High breadth, AI-inferred scenarios               |

***

## After generation

Regardless of which method you use, after generation you should:

1. **Review on the verification screen** — when generating from the UI (Natural Language or Import File methods), the creation dialog presents a verification screen showing each generated test case with its title, description, steps, and expected result. Review each case and click **Save** individually or **Save All Test Cases** to save them all. Any skipped test cases appear with a reason explaining why they were omitted.
2. **Execute the test** — run it once to verify it works against your current application state. Use `execute_test_case` via MCP or the **Run** button in the UI.
3. **Review the results** — if any step fails on the first run, use AI root cause analysis to understand whether the failure is a test issue (step was generated incorrectly) or an application issue (a real bug).
4. **Add to a suite** — once the test passes, add it to the appropriate test suite so it runs as part of your regular regression cycle.

## Related pages

* [Autonomous Agent Pipeline](https://learning.contextqa.com/ai-features/autonomous-agent-pipeline) — how the AI executes your generated tests
* [Creating Test Cases](https://learning.contextqa.com/web-testing/creating-test-cases) — all four test case creation methods in the unified creation dialog
* [Knowledge Base](https://learning.contextqa.com/ai-features/knowledge-base) — provide application context to improve generation accuracy
* [Custom Agents](https://learning.contextqa.com/ai-features/custom-agents) — create domain-specific agents for specialized test generation
* [Running Tests](https://learning.contextqa.com/execution/running-tests) — execute your generated tests

{% hint style="info" %}
**70% less human effort with AI test generation and self-healing.** [**Book a Demo →**](https://contextqa.com/book-a-demo/) — See AI generate, execute, and maintain tests for your application.
{% endhint %}
