# Creating Test Cases

{% hint style="info" %}
**Who is this for?** QA engineers, testers, and developers who want to create web test cases using any of the four creation methods available in ContextQA.
{% endhint %}

## Overview

ContextQA provides a unified test case creation panel with four methods: **AI Assistance**, **Import File**, **Record & Play**, and **Manual creation**. Click the **+** icon in the test cases list, the left sidebar, or the global add button to open the creation panel and select your method.

***

## Opening the creation panel

{% stepper %}
{% step %}

#### Navigate to test development

Open <https://app.contextqa.com> and sign in to your workspace. Select **Test Development** from the left sidebar to open the test cases list.
{% endstep %}

{% step %}

#### Click the + icon

Click the **+** icon in the test cases list header, the **Create Case** button in the left sidebar, or the global add button. A full-height panel slides in from the right side of the screen.
{% endstep %}

{% step %}

#### Select a creation method

The panel presents a method selection screen with up to four options (availability depends on your plan and feature flags):

| Method            | Description                                                                   | Button Label                    |
| ----------------- | ----------------------------------------------------------------------------- | ------------------------------- |
| **AI Assistance** | Provide a description and let ContextQA generate test cases automatically     | **Start with AI Assistance**    |
| **Import File**   | Upload an Excel file, video, or Figma design to generate test cases           | **Import File**                 |
| **Record & Play** | Record your actions in the browser while ContextQA captures each step         | **Start Recording**             |
| **Manual**        | Create a test case with full control over metadata, fields, and configuration | **Manually Create a Test Case** |

After selecting a method, the panel switches to the corresponding view. Use the **back arrow** in the top-left corner to return to the method selection screen at any time.

{% hint style="info" %}
The **Import File** option requires the Upload Excel feature to be enabled on your plan. The **Record & Play** option requires a paid plan with the Record Playback feature enabled.
{% endhint %}
{% endstep %}
{% endstepper %}

***

## Creation methods in detail

{% tabs %}
{% tab title="AI Assistance" %}

#### AI-assisted test generation

Describe your test scenario in plain English. ContextQA's AI generates all steps, locators, and assertions automatically.

**Fields:**

| Field                      | Required | Description                                                                                                                               |
| -------------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| **Prerequisites**          | No       | Select existing test cases to run before this test. Available on paid plans.                                                              |
| **URL**                    | Yes      | The starting URL for the test (must begin with `http://` or `https://`). Hidden when prerequisites are selected or when targeting mobile. |
| **Description**            | Yes      | A plain-English description of the test scenario.                                                                                         |
| **Select Target Platform** | Yes      | Choose **Web Application** or **Mobile**. Mobile is available when mobile execution is enabled on your plan.                              |
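
The URL rule from the table above (must begin with `http://` or `https://`) can be sketched as a simple check; the helper name is ours, and the platform's actual validation may well be stricter.

```python
def is_valid_start_url(url: str) -> bool:
    """Return True when the URL uses an explicit scheme, per the URL field rule.

    A minimal sketch -- ContextQA's own validation is not documented here.
    """
    return url.startswith(("http://", "https://"))
```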

**Example description:**

```
Log in as admin@test.com with password Test123!, navigate to the
Products page, search for "wireless headphones", and verify that
at least one product appears in the search results.
```

**Advanced settings (optional):**

Expand the **Advanced Settings** section to configure AI behavior:

| Setting                 | Options                                                   | Default              |
| ----------------------- | --------------------------------------------------------- | -------------------- |
| **Enable AI Smartness** | Organization Default, Expert, Fast, Strict                | Organization Default |
| **AI Action**           | Organization Default, Create Steps, Dynamic Steps, Action | Organization Default |
| **Knowledge Base**      | Select a knowledge base to provide application context    | None                 |
| **Environments**        | Select a target environment                               | None                 |

Click **Generate & Execute Test Case** to create and run the test case. If the **Generate From Crawl** feature is enabled on your plan, you can click **Generate From Crawl** to create test cases by crawling the target URL.

The AI agent parses your description into discrete steps, generates locators for each element, handles dynamic content and waits automatically, and captures screenshots and video during execution.

**Best for:** Complex multi-page flows, form submissions, checkout workflows
{% endtab %}

{% tab title="Import File" %}

#### Import test cases or requirements

Upload files to generate test cases automatically. Two modes are available:

**Mode selection:**

| Mode                   | Description                                                                               |
| ---------------------- | ----------------------------------------------------------------------------------------- |
| **Import Test Cases**  | Upload an Excel spreadsheet (`.xlsx`, `.xls`) containing test case definitions            |
| **Import Requirement** | Upload requirements files or connect via Figma to generate test cases from specifications |

**Select Target Platform:**

| Platform            | Availability                              |
| ------------------- | ----------------------------------------- |
| **Web Application** | Always available                          |
| **Mobile**          | Always available                          |
| **API**             | Available in Import Requirement mode only |

**Publish mode:**

Select how imported test cases are published:

| Mode                  | Description                                                                                                |
| --------------------- | ---------------------------------------------------------------------------------------------------------- |
| **Auto Publish**      | Generated test cases are saved and published immediately                                                   |
| **Required Approval** | Generated test cases are saved in a pending state and require manual review and approval before publishing |

**Optional configuration:**

| Option                | Description                                                                                              |
| --------------------- | -------------------------------------------------------------------------------------------------------- |
| **Create Test Suite** | Automatically create a test suite containing the imported test cases. Enter a suite name.                |
| **Create Test Plan**  | Create a test plan for the imported cases. Enter a plan name. Automatically enables test suite creation. |
| **Execute Test Plan** | Run the test plan immediately after import. Available when Create Test Plan is enabled.                  |

**File upload — Import Test Cases mode:**

Upload `.xlsx` or `.xls` files. A **Download Sample Format** link is available to get the expected spreadsheet structure.

**File upload — Import Requirement mode:**

Accepted file types depend on the target platform:

* **Web / Mobile:** `.xlsx`, `.xls`, or video files (`.flv`, `.mov`, `.mpeg`, `.mpg`, `.mp4`, `.webm`, `.wmv`, `.3gp`)
* **API:** `.json` files

When uploading requirements, the import follows a guided multi-step flow:

1. **Inputs** — Upload your file and configure platform, publish mode, and optional settings
2. **Analysis** — ContextQA analyzes the uploaded document and extracts testable requirements
3. **Clarifications** — The AI may ask follow-up questions about ambiguous requirements. Answer them to improve test case accuracy.
4. **Generate** — Test cases are generated from the analyzed requirements
5. **Review** — Review the generated test cases before saving

{% hint style="info" %}
The Import Test Cases mode uses a shorter flow: **Inputs** → **Generate** → **Review**.
{% endhint %}

**Figma integration:**

When the Figma feature is enabled, select **Figma** as the source instead of file upload. Enter a Figma file URL (matching the pattern `https://www.figma.com/file/...`, `https://www.figma.com/proto/...`, or `https://www.figma.com/design/...`).
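
The three accepted URL shapes above can be mirrored with a small pattern check. This is an approximation for illustration only; the exact validation ContextQA performs is not documented.

```python
import re

# Mirrors the /file/, /proto/, and /design/ URL shapes listed above.
FIGMA_URL = re.compile(r"^https://www\.figma\.com/(file|proto|design)/\S+")

def looks_like_figma_url(url: str) -> bool:
    """Rough client-side check for an acceptable Figma file URL."""
    return bool(FIGMA_URL.match(url))
```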

Click **Create Test Cases** to start the generation process.

{% hint style="info" %}
When importing requirements, the AI may ask clarification questions before generating test cases. After generation, you can review AI analysis, coverage gaps, and metadata from the **Requirements Details** page. See [Requirements Management](https://learning.contextqa.com/web-testing/requirements-management) for the full workflow.
{% endhint %}

**Best for:** Migrating existing test libraries, generating tests from requirements documents, creating tests from design files
{% endtab %}

{% tab title="Record & Play" %}

#### Browser recording

Record your actions in the browser while ContextQA captures each interaction as a test step.

{% hint style="warning" %}
This method requires the **ContextQA Recorder** Chrome extension. If the extension is not detected, the panel displays installation instructions with a link to the Chrome Web Store.
{% endhint %}

**When the extension is installed:**

| Field             | Required | Description                                                                  |
| ----------------- | -------- | ---------------------------------------------------------------------------- |
| **Prerequisites** | No       | Select existing test cases to run before this test                           |
| **URL**           | Yes      | The starting URL for recording. Shows your open browser tabs as suggestions. |

Click **Create Test Case** to start recording. ContextQA opens the target URL with the recorder active. Navigate and interact with your application normally — the recorder captures each click, text entry, and navigation as a step. Click **Stop Recording** to finalize.

**When the extension is not installed:**

1. Click the link to install the **ContextQA Recorder** from the Chrome Web Store
2. Enable the extension in incognito mode
3. Close and reopen the creation panel

**Best for:** UI exploration, click-heavy workflows, onboarding flows
{% endtab %}

{% tab title="Manual Creation" %}

#### Manual test case creation

Create test cases directly within the creation panel with complete control over metadata and configuration. Select **Manually Create a Test Case** from the method selection screen to open the manual creation form.

**Form fields:**

| Field                | Required | Description                                                            |
| -------------------- | -------- | ---------------------------------------------------------------------- |
| **Name**             | Yes      | Descriptive test case name (4–250 characters)                          |
| **Priority**         | No       | Select from available priority levels                                  |
| **Type**             | No       | Select the test case type                                              |
| **Status**           | No       | Draft, Ready, In Review, Approved, Obsolete, or Rework. Default: Ready |
| **Prerequisites**    | No       | Select existing test cases to run before this test                     |
| **Tags/Labels**      | No       | Add tags for organization and filtering                                |
| **Testcase Timeout** | No       | Maximum execution time in minutes (1–40). Default: 20                  |
| **Description**      | No       | Rich text description of the test case                                 |
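
The two constrained fields above can be checked up front. The limits (name 4–250 characters, timeout 1–40 minutes) come from the table; the function name and error messages are illustrative, not ContextQA's API.

```python
def validate_manual_fields(name: str, timeout_minutes: int = 20) -> list[str]:
    """Collect validation errors for the constrained manual-creation fields."""
    errors = []
    if not (4 <= len(name.strip()) <= 250):
        errors.append("Name must be 4-250 characters")
    if not (1 <= timeout_minutes <= 40):
        errors.append("Testcase Timeout must be 1-40 minutes")
    return errors
```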

**Test data options:**

| Field                   | Description                                                                |
| ----------------------- | -------------------------------------------------------------------------- |
| **Test Data Profile**   | Select a data profile for parameterized testing                            |
| **Data Driven**         | Enable to run the test across multiple data sets from the selected profile |
| **Data Set**            | When not data-driven, select a specific data set from the profile          |
| **Iteration From / To** | When data-driven, define the range of data set iterations                  |
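
As a sketch of how the **Iteration From / To** range selects data sets, assuming the range is 1-based and inclusive (the docs only say the fields define a range, so confirm against your own profile):

```python
def select_iterations(data_sets: list, iter_from: int, iter_to: int) -> list:
    """Return the slice of profile data sets a data-driven run would cover.

    Assumes a 1-based, inclusive Iteration From/To range.
    """
    return data_sets[iter_from - 1:iter_to]
```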

**Additional toggles:**

| Toggle                        | Description                                         |
| ----------------------------- | --------------------------------------------------- |
| **Mobile Testing**            | Switch the test case type from web to mobile        |
| **Extension Used**            | Mark whether the ContextQA Chrome extension is used |
| **Avoid auto wait for steps** | Disable automatic wait insertion between steps      |

Click **Create** to save the test case and open the test case details screen.

**Best for:** Precise control over test metadata, data-driven testing configuration, complex assertions
{% endtab %}
{% endtabs %}

***

## Mobile platform selection

When you select **Mobile** as the target platform (from the AI Assistance or Import File tabs), a mobile device setup screen appears before execution starts.

**Concurrency indicators** appear in the header showing parallel execution slots and queue status.

| Field                    | Required | Description                                                                                            |
| ------------------------ | -------- | ------------------------------------------------------------------------------------------------------ |
| **Select Platform**      | Yes      | Choose **iOS** (IPA) or **Android** (APK)                                                              |
| **Select Device**        | Yes      | Pick a device from the available device pool (options load based on selected platform)                 |
| **App Build**            | Yes      | Select the app build to test. If no builds are found, click **Go to Upload Page** to upload one first. |
| **Desired Capabilities** | No       | Review and edit key-value capability pairs. Some fields (like `app_url` and `os_type`) are read-only.  |
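
The read-only behavior described above can be sketched as a merge of user edits over the capability pairs. Only `app_url` and `os_type` are named (and read-only) in the docs; every other key and value here is a hypothetical placeholder.

```python
# Capability keys the docs describe as read-only.
READ_ONLY_KEYS = {"app_url", "os_type"}

def merge_capability_edits(caps: dict, edits: dict) -> dict:
    """Apply user edits to capability pairs while preserving read-only values."""
    return {
        key: (value if key in READ_ONLY_KEYS else edits.get(key, value))
        for key, value in caps.items()
    }
```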

Click **Start Execution** to launch the mobile test. If maximum concurrency is reached, ContextQA queues the test; click **Skip and continue** to close the dialog while the test waits in the queue.

For more details on mobile test setup, see the [Mobile Testing](https://learning.contextqa.com/mobile-testing/mobile-testing) section.

***

## Verifying generated test cases

After AI generates test cases (from the AI Assistance or Import File methods), a verification screen displays the results.

Each generated test case shows:

* **Title** with an internal ID badge (e.g., `TC-1`)
* **Description** of the test scenario
* **Steps** with detailed actions
* **Expected Result** for the test case
* **Source badges** indicating where the test case was derived from

**Actions available:**

* Click **Save** on an individual test case to save it to your project
* Click **Save All Test Cases** to save all generated test cases at once
* Click **Cancel** to discard the generated test cases

If any test cases were skipped during generation, a **Skipped Test Cases** section appears with the reason each case was skipped.

***

## Frequently asked questions

<details>

<summary>How long does it take to create a test case?</summary>

With AI Assistance, describing a 10-step workflow takes under 60 seconds, and the AI generates and validates all steps automatically. Browser recording typically takes 2–5 minutes for a complete user journey. Manual step creation varies by complexity, but most teams create 20–30 step test cases in under 10 minutes.

</details>

<details>

<summary>Can I import existing test cases from spreadsheets?</summary>

Yes. Select **Import File** in the creation panel, choose **Import Test Cases** mode, and upload an `.xlsx` or `.xls` file. Download the sample format to see the expected column structure. ContextQA maps your spreadsheet columns to test case fields automatically.

</details>

<details>

<summary>What browsers and devices are supported?</summary>

ContextQA supports all major browsers for test execution:

* **Chrome** (desktop + headless)
* **Firefox**
* **Safari** (macOS)
* **Edge**
* **Mobile Chrome / Safari** (via device emulation)

For real-device mobile testing, see the [Mobile Testing](https://learning.contextqa.com/mobile-testing/mobile-testing) section.

</details>

<details>

<summary>Are screenshots and videos stored automatically?</summary>

Yes. Every test execution automatically captures:

* **Per-step screenshots** — A screenshot is taken after each step completes
* **Full session video** — Complete recording from start to finish (WebM format)
* **Playwright trace** — Detailed trace file with network logs, DOM snapshots, and timing (downloadable ZIP)

ContextQA stores and links all evidence directly in the execution results. No configuration required.

</details>

<details>

<summary>Can I run the same test case across multiple environments?</summary>

Yes. ContextQA supports environment-based test execution. You can define multiple environments (Development, Staging, Production) and run any test case against any environment by selecting it at execution time. Environment-specific variables (base URLs, credentials, API keys) are managed separately so your test cases remain portable.

</details>

<details>

<summary>How does self-healing work if my application changes?</summary>

ContextQA's AI Configuration (self-healing) continuously monitors element locators. When an element changes — for example, a button moves or an ID changes — the AI automatically finds the updated element using visual context, text content, and semantic analysis. The system flags failed locators for review and heals them with one click. See [Self-Healing Tests](https://learning.contextqa.com/web-testing/self-healing) for configuration details.

</details>

<details>

<summary>What are the AI Smartness modes?</summary>

AI Smartness controls how the AI generates test steps:

* **Expert** — The AI takes more time to analyze the application and produces thorough, detailed steps
* **Fast** — The AI prioritizes speed and generates steps quickly with less analysis
* **Strict** — The AI follows your description exactly with minimal interpretation
* **Organization Default** — Uses the AI Smartness setting configured by your organization administrator

</details>

<details>

<summary>What is the difference between Auto Publish and Required Approval?</summary>

When importing test cases via the **Import File** method, you can choose a publish mode:

* **Auto Publish** saves and publishes generated test cases immediately — they are ready to run right away.
* **Required Approval** saves generated test cases in a pending state. A team member must review and approve each test case before it becomes available for execution. This is useful for teams that require peer review of test content.

</details>

***

## Best practices

{% hint style="success" %}
**Name tests descriptively** — Use the format `[Page] - [Action] - [Expected Result]` (e.g., `Login Page - Valid Credentials - Dashboard Loads`). This makes test results immediately understandable.
{% endhint %}

{% hint style="info" %}
**Start with happy-path tests** — Create and validate your positive test cases first, then add negative scenarios (invalid inputs, error states) once the base flow is verified.
{% endhint %}

{% hint style="warning" %}
**Avoid hard-coded waits** — Don't use `Wait 5 seconds` steps. Instead, use `Wait for element` or `Wait for network idle` actions. ContextQA's AI handles timing automatically.
{% endhint %}

***

## Related documentation

* [Requirements Management](https://learning.contextqa.com/web-testing/requirements-management) — Upload requirements, review AI analysis, and track coverage gaps
* [Debugging Test Cases](https://learning.contextqa.com/web-testing/debugging-test-cases) — Step through execution with breakpoints and live variables
* [Test Steps Editor](https://learning.contextqa.com/web-testing/test-steps-editor) — Detailed guide to all available step actions
* [Version History](https://learning.contextqa.com/web-testing/version-history) — Track, compare, and restore previous test case versions
* [Managing Test Suites](https://learning.contextqa.com/web-testing/managing-test-suites) — Organize test cases into suites for batch execution
* [Self-Healing Tests](https://learning.contextqa.com/web-testing/self-healing) — AI-powered test maintenance
* [Test Data Management](https://learning.contextqa.com/web-testing/test-data-management) — Variables, CSV imports, and data-driven testing
* [AI Test Generation](https://learning.contextqa.com/ai-features/ai-test-generation) — All 10 AI test generation methods
* [Mobile Testing](https://learning.contextqa.com/mobile-testing/mobile-testing) — Mobile test creation and execution
* [Execution & Reporting](https://learning.contextqa.com/execution/execution) — Run tests, view results, set up CI/CD

***

{% hint style="info" %}
**Ready to create your first test?**

[Open ContextQA Platform](https://app.contextqa.com) · [View Test Steps Reference](https://learning.contextqa.com/web-testing/test-steps-editor) · [Book a Demo](https://contextqa.com/book-a-demo/)
{% endhint %}
