# Test Steps Editor

{% hint style="info" %}
**Who is this for?** Testers and SDETs who need fine-grained control over test step logic — including conditional branches, loops, API calls, and AI verification assertions.
{% endhint %}

The test steps editor is where you build, review, and refine the sequence of actions and assertions that make up a test case. Each step represents one atomic instruction — a click, a typed value, a navigation, an API call, or a verification. The editor supports multiple step types, giving you precise control over complex flows that include conditional logic, data iteration, API interactions, database verification, custom code, document generation, and AI-powered assertions.

## Prerequisites

* You have created or opened a test case.
* You understand the difference between action steps and verification steps (see [Core Concepts](https://learning.contextqa.com/getting-started/core-concepts)).

***

## Opening the Test Case Details Screen

From the Test Cases list, click on a test case name to open it. The test case details screen has three main areas:

* **Header bar** — Displays the test case name (click the edit icon to rename inline), status badge, priority badge, and action buttons (**Record**, **Run**, and a **More** menu with options like Debug, Duplicate, Delete, and Generate API test case). Warning badges appear if test data or prerequisites are missing.
* **Main content area** — Contains a collapsible **Prerequisites** section at the top, followed by a tab bar with **Test Steps** and **Relationships** tabs. The **Test Steps** tab is selected by default.
* **Side panel** (collapsible) — A right-side panel with three icon tabs:
  * **Test Case** — View and edit the description, metadata, labels, status, and AI settings (Default AI Action, AI Smartness level).
  * **Variables** — Manage local key-value parameters for the test case. Add, edit, delete, and bulk-delete variables directly from this panel.
  * **Run History** — View past execution results for this test case.

### Adding steps

Click **Add Step** at the bottom of the step list. A **Create a New Step** modal opens with a category sidebar on the left (see [Step builder categories](#step-builder-categories) below). You can also add a step at the first position using the **+** icon above the step list, or insert a step between two existing steps using the inline add action.

### Reordering steps

Click the **Reorder** button in the tab bar to enter reorder mode. Drag steps using the handles to rearrange them, then click **Done** to save or **Cancel** to discard changes.

> **Note**: Step group steps cannot be reordered from within the parent test case. Open the step group to reorder its internal steps.

### Bulk step actions

Select multiple steps using the checkboxes to reveal a floating toolbar at the bottom of the screen with these actions:

| Action        | Description                                             |
| ------------- | ------------------------------------------------------- |
| **Group**     | Wrap the selected steps into a new step group           |
| **Loop**      | Wrap the selected steps inside a new loop               |
| **Condition** | Wrap the selected steps inside a new conditional branch |
| **Duplicate** | Copy the selected steps                                 |
| **Delete**    | Remove the selected steps                               |

### Relationships tab

The **Relationships** tab shows two sub-views:

* **Test Plans** — Lists all test plans that include this test case.
* **Used as Prerequisite in** — Lists test cases that depend on this test case as a prerequisite, with an option to replace it.

***

## Step Builder Categories

When you click **Add Step**, the step builder modal opens with nine categories in the left sidebar. Select a category to see its builder form on the right.

| Category        | Description                                                                                                                                                                                               |
| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Actions**     | Standard browser actions — click, type, navigate, select, scroll, hover, upload. The default category.                                                                                                    |
| **Loops**       | Create For Loop or While Loop steps to repeat a set of inner steps.                                                                                                                                       |
| **Conditions**  | Create If / Else If / Else conditional branches based on variable comparisons.                                                                                                                            |
| **Database**    | Run database verification queries against a connected database.                                                                                                                                           |
| **API**         | Make REST API calls (GET, POST, PUT, PATCH, DELETE) and store responses in variables. Uses a three-step wizard: (1) configure the request, (2) store the response in a variable, (3) verify the response. |
| **AI Agent**    | Write a plain-English instruction for the AI execution engine to interpret and perform.                                                                                                                   |
| **Custom Code** | Execute custom JavaScript or TypeScript code within the test flow.                                                                                                                                        |
| **Document**    | Generate TXT or CSV documents with configurable delimiters, templates, and key-value data mappings.                                                                                                       |
| **Step Group**  | Insert a reusable step group by searching and selecting from available groups in the workspace.                                                                                                           |

### Edit in Depth

Right-click or use the step menu on any existing step and select **Edit in Depth** to open a full-screen editor. This modal provides access to the step's template selector, template variable fields, and advanced settings — useful for fine-tuning steps that require precise configuration beyond the inline editor.

***

## Step Types

The following sections describe each step type in detail. These correspond to the categories available in the step builder modal.

### AI Agent Step (Default)

The AI Agent step is the default step type and the most commonly used. You write a plain-English instruction and the AI execution engine interprets it and performs the corresponding browser action at runtime.

**When to use:** Any browser interaction — clicking, typing, selecting, hovering, scrolling, navigating — that can be described in a sentence.

**Action field examples:**

```
Click the Sign In button
Type test@example.com in the Email Address field
Select "United States" from the Country dropdown
Scroll down to the footer
Hover over the Profile menu
Press the Tab key
Upload the file /test-data/sample.pdf to the Document Upload area
```

**Tips:**

* Use the exact text that appears on the element in the UI (button labels, field placeholders, dropdown option text).
* For fields identified by placeholder or label text, reference that text: "Type hello in the field labelled Search products".
* For elements with no visible text, describe their visual appearance or position: "Click the blue Download icon next to the first row".

***

### Navigate

The Navigate step directs the browser to a specific URL. Unlike the AI Agent step, the Navigate step uses the browser's direct URL navigation rather than simulating a user click on a link.

**When to use:** When you need to jump to a specific URL directly — a deep link to a page that requires direct URL access, or an absolute URL that includes query parameters.

**Fields:**

* **URL** — the destination URL. Supports environment variable substitution: `${ENV.BASE_URL}/settings/billing`.

**Example:**

```
URL: ${ENV.BASE_URL}/admin/users?role=admin&active=true
```

***

### REST API Call

The REST API Call step makes an HTTP request from within the test, independent of the browser. The response can be stored in a variable and used in subsequent steps.

**When to use:** Setting up test data before the UI test (e.g., creating a user via API before testing the UI that displays that user), tearing down test data after a test, or verifying backend state that is not visible in the UI.

The step builder uses a three-step wizard for API steps:

**Step 1 — Configure the request:**

| Field         | Description                                                            |
| ------------- | ---------------------------------------------------------------------- |
| Method        | GET, POST, PUT, PATCH, DELETE                                          |
| URL           | The full endpoint URL. Supports variables: `${ENV.BASE_URL}/api/users` |
| Headers       | Key-value pairs. Use `${ENV.API_KEY}` for authorization tokens         |
| Authorization | No Auth, Bearer Token, or OAuth 2.0                                    |
| Request Body  | Raw JSON, form-urlencoded, or form-data. Supports variables            |
| Parameters    | Query parameters as key-value pairs                                    |
| Environment   | Select an environment to use its base URL                              |

**Step 2 — Store the response:**

| Field                      | Description                                       |
| -------------------------- | ------------------------------------------------- |
| Store response in variable | Name of the variable to store the response object |

**Step 3 — Verify the response (optional):**

| Field                | Description                                                    |
| -------------------- | -------------------------------------------------------------- |
| JSON path            | Path to a specific field in the response body for verification |
| Comparison           | Comparison operator (equals, contains, etc.)                   |
| Data type            | Expected data type of the response field                       |
| Expected value       | The value to compare against                                   |
| Expected status code | If set, the step fails if the response status does not match   |

**Using API response data in later steps:**

If you set "Store response in variable" to `loginResponse`, subsequent steps can reference:

* `${loginResponse.status}` — HTTP status code
* `${loginResponse.body.token}` — a field from the JSON response body
* `${loginResponse.body.user.id}` — nested JSON path access
* `${loginResponse.headers.content-type}` — a response header value
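
These dot-path references behave like plain JavaScript property access on the stored response object. The sketch below illustrates the idea; the `resolvePath` helper and the sample response shape are illustrative assumptions, not the actual engine internals:

```javascript
// Illustrative sketch only: how `${loginResponse.body.user.id}`-style
// references can resolve against a stored response object.
function resolvePath(obj, path) {
  // Walk each dot-separated segment: "body.user.id" -> obj.body.user.id
  return path
    .split(".")
    .reduce((current, key) => (current == null ? undefined : current[key]), obj);
}

const loginResponse = {
  status: 200,
  body: { token: "abc123", user: { id: 42 } },
  headers: { "content-type": "application/json" },
};

resolvePath(loginResponse, "status");               // 200
resolvePath(loginResponse, "body.user.id");         // 42
resolvePath(loginResponse, "headers.content-type"); // "application/json"
```

Note that `content-type` resolves as a single segment: only dots split the path, so the hyphen in the header name is not a separator.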

**Example flow:**

```
Step 1 (REST API): POST ${ENV.BASE_URL}/api/auth/login
  Body: { "email": "${ENV.ADMIN_EMAIL}", "password": "${ENV.ADMIN_PASSWORD}" }
  Store response in: authResponse

Step 2 (AI Agent): Navigate to ${ENV.BASE_URL}/dashboard
  (Uses the session cookie set by the API login)

Step 3 (REST API): GET ${ENV.BASE_URL}/api/users/${authResponse.body.userId}
  Headers: Authorization: Bearer ${authResponse.body.token}
  Store response in: userProfile

Step 4 (AI Verification): Verify the username displayed on the profile page
  matches ${userProfile.body.name}
```

***

### Conditional (If / Else If / Else)

The Conditional step creates branching logic in the test execution. You can chain multiple conditions together: an **If** branch, one or more **Else If** branches, and an optional **Else** branch. The test engine evaluates conditions from top to bottom and executes the steps inside the first branch whose condition is true. If no condition matches, the Else branch runs (when defined).

**When to use:** When the test must handle different application states — for example, a feature flag that changes which UI is shown, a user role that determines which dashboard loads, or a form field whose value depends on test data.

**Fields:**

* **Condition** — a variable comparison expression. Each If or Else If branch has its own condition:
  * `${localVar} == "expected value"`
  * `${apiResponse.body.status} == "active"`
  * `${ENV.FEATURE_FLAG} == "true"`
* **If-true steps** — the steps to execute when the If condition is met.
* **Else If condition and steps** — additional branches, each with its own condition and steps. You can add as many Else If branches as you need.
* **Else steps** — optional final branch that executes when no preceding condition is true.

**Creating a conditional step:**

1. Click **Add Step** and select the **Conditions** category in the step builder.
2. Define the If condition and add the steps for the If-true branch.
3. To add an Else If branch, hover over the If step in the step list and select **Else If** from the step menu. Define the new condition and its steps.
4. Repeat step 3 for additional Else If branches.
5. To add an Else branch, hover over the last Else If step and select **Else**.
6. Drag steps into the appropriate branch containers using reorder mode if needed.

> **Tip**: You can also create a conditional from existing steps. Select multiple steps using the checkboxes, then click **Condition** in the floating toolbar to wrap them inside a new If branch.

**Simple example (If / Else):**

```
If ${userType} == "admin"
  Then: Click the Admin Panel link
Else:
  Click the My Account link
```

**Multi-branch example (If / Else If / Else):**

```
If ${qualification} == "skills"
  Then: Select "Skill" from the Qualification dropdown
Else If ${qualification} == "education"
  Then: Select "Education" from the Qualification dropdown
Else If ${qualification} == "experience"
  Then: Select "Experience" from the Qualification dropdown
Else:
  Select "Other" from the Qualification dropdown
```

**How evaluation works:**

* The engine evaluates each condition from top to bottom.
* It stops at the first branch where the condition is true and executes those steps.
* All remaining branches are skipped.
* If no condition matches and an Else branch exists, the Else steps run.
* After the conditional block completes, execution continues with the next step in the test case.
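
In code terms, this evaluation order is the same as an ordinary if / else-if chain. A sketch mirroring the multi-branch example above (the `selectQualification` function is illustrative only):

```javascript
// Illustrative sketch: branch evaluation mirrors a plain if / else-if
// chain: the first true condition wins, all later branches are skipped.
function selectQualification(qualification) {
  if (qualification === "skills") {
    return 'Select "Skill" from the Qualification dropdown';
  } else if (qualification === "education") {
    return 'Select "Education" from the Qualification dropdown';
  } else if (qualification === "experience") {
    return 'Select "Experience" from the Qualification dropdown';
  } else {
    // Else branch: runs only when no condition above matched.
    return 'Select "Other" from the Qualification dropdown';
  }
}
```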

***

### For Loop

The For Loop step repeats a set of inner steps a fixed number of times or once per item in a data set.

**When to use:** When you need to add 5 items to a cart, dismiss a recurring modal, iterate over rows in a table, or repeat a form submission multiple times.

**Modes:**

**Count-based:** Repeat N times.

```
Repeat 5 times:
  Click the Add Item button
  Verify the cart count increased
```

**Data-based:** Iterate over a list variable. If `${productList}` contains `["Widget A", "Widget B", "Widget C"]`, the loop runs three times with `${item}` set to each value in turn:

```
For each item in ${productList}:
  Type ${item} in the Search field
  Click Search
  Verify at least one result is displayed
```
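
The data-based mode behaves like an ordinary for-each loop, with `${item}` rebound on every pass. A minimal sketch (the instruction strings are illustrative only):

```javascript
// Illustrative sketch: data-based iteration binds `item` to each value
// in turn, so the inner steps run once per entry in the list.
const productList = ["Widget A", "Widget B", "Widget C"];
const executedSteps = [];
for (const item of productList) {
  executedSteps.push(`Type ${item} in the Search field`);
  executedSteps.push("Click Search");
}
```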

***

### While Loop

The While Loop step repeats a set of inner steps until a specified condition becomes false (or until a maximum iteration count is reached, as a safety guard).

**When to use:** When you need to wait for an asynchronous process to complete — polling for a background job to finish, waiting for a status field to change, retrying until a dynamic element appears.

**Fields:**

* **Condition** — the loop continues while this condition is true: `${jobStatus} != "complete"`
* **Max iterations** — safety limit to prevent infinite loops (default: 10)
* **Inner steps** — the steps to execute on each iteration

**Example:**

```
While ${jobStatus} != "complete" (max 20 iterations):
  Wait 3 seconds
  REST API: GET ${ENV.BASE_URL}/api/jobs/${jobId}
    Store response in: jobPollResponse
  Set ${jobStatus} = ${jobPollResponse.body.status}
```
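
The max-iterations guard makes this equivalent to a bounded polling loop. A synchronous sketch, where `fetchJobStatus` is a hypothetical stand-in for the "Wait 3 seconds" plus GET request pair inside the loop:

```javascript
// Illustrative sketch of a While Loop with a max-iterations safety guard.
// `fetchJobStatus` stands in for the wait + REST API Call inner steps.
function pollUntilComplete(fetchJobStatus, maxIterations = 20) {
  let jobStatus = "pending";
  let iterations = 0;
  while (jobStatus !== "complete" && iterations < maxIterations) {
    iterations++;
    jobStatus = fetchJobStatus(); // the real step also updates ${jobStatus}
  }
  return { jobStatus, iterations };
}
```

If the job never completes, the loop exits after `maxIterations` passes instead of hanging forever, which is exactly the failure mode the guard exists to prevent.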

***

### Step Group

The Step Group step inserts a named, reusable step group at a specific point in the test case. At execution time, the step group expands and its individual steps run in sequence as if they were inline steps.

**When to use:** Any repeated setup or teardown pattern — login sequences, navigation to a module, closing persistent UI elements (cookie banners, chat widgets), completing a checkout flow that precedes the actual test assertion.

**Fields:**

* **Step Group** — search and select from the available step groups in the workspace.

**Behavior:**

* Changes to the step group are reflected in all test cases that use it without requiring the test cases to be re-saved.
* Step group steps appear individually in the execution report, not as a collapsed entry — each step shows its own pass/fail status and screenshot.

***

### AI Verification

The AI Verification step uses a vision-language AI model to evaluate a natural language condition against the current browser state. It is the most flexible assertion type, handling dynamic, non-deterministic content that cannot be verified with exact string matching.

**When to use:**

* Verifying that a dynamically generated value (order ID, timestamp, random token) exists and is plausible.
* Confirming visual states ("the chart rendered successfully", "the map shows a marker in London").
* Checking relative conditions ("the error message is displayed below the Password field").
* Any condition that requires understanding context rather than matching a literal string.

**Action field examples:**

```
Verify the dashboard loaded and the main navigation menu is visible
Verify a confirmation message is displayed indicating the order was placed
Verify the table contains at least three rows of data
Verify the user's profile photo is displayed in the top-right corner
Verify the error message states that the email address is already in use
Verify the chart displays data for the last 30 days
```

**How it differs from a standard AI Agent step:** A standard AI Agent step performs an action. An AI Verification step produces a pass/fail result based on evaluating the current screenshot against the stated condition. Use the AI Verification type explicitly for assertion steps so the execution report correctly categorizes them as verifications rather than actions.

***

## Step Editor Fields Reference

Every step, regardless of type, has the following common configuration fields:

| Field                       | Description                                                                                                                                                    |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Step Name / Description** | The plain-English instruction or assertion. This is what appears in the step list and in the execution report.                                                 |
| **Step Type**               | Dropdown selecting the step type (AI Agent, Navigate, REST API Call, Conditional, For Loop, While Loop, Step Group, or AI Verification).                       |
| **Wait Condition**          | Optional condition that must be satisfied before this step executes: Wait for element visible, Wait for URL to contain, Wait for network idle, Wait N seconds. |
| **Screenshot Capture**      | Controls when a screenshot is captured: Always (default), On Failure Only, Never.                                                                              |
| **Mark as Optional**        | If enabled, a failure on this step does not fail the overall test case. Useful for non-critical UI enhancements or known-flaky elements.                       |
| **Timeout**                 | Maximum time (in seconds) to wait for the step action to complete before marking it as failed. Default: 30 seconds.                                            |

***

## Variables in Steps

ContextQA supports four variable scopes. All variable types are referenced with the `${variableName}` syntax.

### Local Variables

Scoped to the test case. Defined in the **Variables** tab of the side panel or set dynamically by REST API Call steps.

```
${username}
${orderId}
${searchTerm}
```

**Setting a local variable dynamically:** In a REST API Call step, use "Store response in variable" to save the response. Individual fields are accessed via dot notation on the stored object.

### Global Variables

Workspace-scoped. Available to every test case in the workspace. Defined in Settings → Global Variables.

```
${globalAdminEmail}
${defaultTimeout}
${testProjectId}
```

Global variables are useful for values that are shared across many test cases but are not environment-specific — a default admin account email, a test project name, a default search term.

### Environment Parameters

Values from the currently selected environment. Referenced with the `${ENV.KEY}` prefix.

```
${ENV.BASE_URL}
${ENV.API_KEY}
${ENV.ADMIN_PASSWORD}
${ENV.DB_HOST}
```

Environment parameters allow the same test step to work against staging, QA, and production without modification. The environment is selected at the Test Plan level.

### Test Data Profile Variables

When a test case has a test data profile attached, each column in the profile becomes a variable available in the test steps.

```
${username}      ← from column "username" in the data profile
${password}      ← from column "password" in the data profile
${expectedRole}  ← from column "expectedRole" in the data profile
```

Each profile row is a separate test run. Variables are substituted per row.
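
All four scopes share the `${...}` reference syntax, so substitution can be pictured as a single template pass over the step text. A sketch; the lookup-table shape, the `ENV.` prefix handling, and leaving unknown references untouched are assumptions for illustration, not the actual engine behavior:

```javascript
// Illustrative sketch of ${...} substitution across scopes. The lookup
// shape and unknown-reference handling here are assumptions.
function substitute(template, vars, env) {
  return template.replace(/\$\{([^}]+)\}/g, (match, name) => {
    if (name.startsWith("ENV.")) {
      const key = name.slice(4); // strip the "ENV." prefix
      return key in env ? String(env[key]) : match;
    }
    return name in vars ? String(vars[name]) : match;
  });
}

substitute(
  "Type ${username} in the Search field on ${ENV.BASE_URL}",
  { username: "alice" },
  { BASE_URL: "https://staging.example.com" }
);
// -> "Type alice in the Search field on https://staging.example.com"
```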

***

## Tips & Best Practices

* **Keep AI Agent step descriptions action-oriented.** Start with a verb: "Click", "Type", "Select", "Scroll", "Verify", "Navigate". The AI execution engine recognizes these action verbs and dispatches them to the correct browser interaction handler.
* **Use AI Verification steps for all assertions.** Even when an assertion could theoretically be expressed as a standard AI Agent step ("verify the heading says Dashboard"), using the explicit AI Verification type ensures the step is tracked as an assertion in the execution report and counted in pass/fail statistics correctly.
* **Set timeouts explicitly for slow operations.** If your application has pages that take more than 30 seconds to load (reports, data exports, dashboard renders), increase the timeout on the relevant steps to avoid premature failures.
* **Mark setup steps as Optional when appropriate.** Cookie banner dismissal steps, onboarding tour dismissal, and similar incidental actions that may or may not appear can be marked Optional so they don't fail the test if the UI state doesn't require them.
* **Use For Loop for repetitive data entry.** When a test needs to add multiple rows to a table or submit a form multiple times with the same structure, use For Loop with a data set variable rather than duplicating steps.

## Troubleshooting

**An AI Agent step is clicking the wrong element**

Refine the step description to be more specific. Add context about the element's location ("in the header", "in the confirmation dialog"), its label ("the button labelled Submit Order, not the Cancel button"), or its visual appearance ("the blue primary action button").

**A REST API Call step fails with a 401 Unauthorized response**

Verify that the Authorization header is correctly formatted and that the token variable it references has been set by an earlier step. Check the network log in the execution report to see the exact request headers that were sent.

**A Conditional step is not taking the expected branch**

Add a temporary AI Agent step before the conditional that logs the value of the condition variable: "Verify the page shows the value of ${myVar}". Run the test and check the screenshot to see what value the variable actually holds.

**Step Group steps are not appearing individually in the report**

This behavior was changed in a recent release. If you are on an older workspace version, the step group may appear as a collapsed entry. Contact support to upgrade your workspace to the current execution engine.

**While Loop is hitting the max iteration limit**

The default maximum is 10 iterations. If your polling loop needs more iterations, increase the max iterations field. Also consider increasing the wait time between iterations to give the background process more time to complete before each poll.

## Related Pages

* [Creating Test Cases](https://learning.contextqa.com/web-testing/creating-test-cases)
* [Debugging Test Cases](https://learning.contextqa.com/web-testing/debugging-test-cases)
* [Database Steps](https://learning.contextqa.com/web-testing/database-steps)
* [Custom Code Steps](https://learning.contextqa.com/web-testing/custom-code-steps)
* [Document Generation Steps](https://learning.contextqa.com/web-testing/document-generation-steps)
* [Test Data Management](https://learning.contextqa.com/web-testing/test-data-management)
* [AI Self-Healing](https://learning.contextqa.com/web-testing/self-healing)
* [Core Concepts](https://learning.contextqa.com/getting-started/core-concepts)
* [Running Tests](https://learning.contextqa.com/execution/running-tests)

{% hint style="info" %}
**70% less manual test maintenance with AI self-healing.** [**Book a Demo →**](https://contextqa.com/book-a-demo/) — See ContextQA create and maintain tests for your web application.
{% endhint %}
