Test Steps Editor

A comprehensive reference for the ContextQA step editor, covering all step types, field options, variable syntax, AI verification steps, and conditional and loop logic.


Who is this for? Testers and SDETs who need fine-grained control over test step logic — including conditional branches, loops, API calls, and AI verification assertions.

The test steps editor is where you build, review, and refine the sequence of actions and assertions that make up a test case. Each step represents one atomic instruction — a click, a typed value, a navigation, an API call, or a verification. The editor supports eight distinct step types, giving you precise control over complex flows that include conditional logic, data iteration, API interactions, and AI-powered assertions.

Prerequisites

  • You have created or opened a test case.

  • You understand the difference between action steps and verification steps (see Core Concepts).


Opening the Steps Editor

From the Test Cases list, click on a test case name to open it. The steps editor occupies the center panel. The left side shows the ordered step list; clicking any step opens its configuration panel on the right.

To add a new step, click Add Step at the bottom of the step list. A new step is inserted after the currently selected step. Use the drag handles (⋮⋮) on the left of each step to reorder them by dragging.


Step Types

AI Agent Step (Default)

The AI Agent step is the default step type and the most commonly used. You write a plain-English instruction and the AI execution engine interprets it and performs the corresponding browser action at runtime.

When to use: Any browser interaction — clicking, typing, selecting, hovering, scrolling, navigating — that can be described in a sentence.

Action field examples:
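The action text is free-form plain English. The following are illustrative examples written against a hypothetical application, not instructions from a specific product:

```
Click the Log in button
Type admin@example.com in the Email field
Select United Kingdom from the Country dropdown
Hover over the profile avatar in the header
Scroll down until the Pricing section is visible
```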

Tips:

  • Use the exact text that appears on the element in the UI (button labels, field placeholders, dropdown option text).

  • For fields identified by placeholder or label text, reference that text: "Type hello in the field labelled Search products".

  • For elements with no visible text, describe their visual appearance or position: "Click the blue Download icon next to the first row".


Navigate

The Navigate step directs the browser to a specific URL. Unlike the AI Agent step, the Navigate step uses the browser's direct URL navigation rather than simulating a user click on a link.

When to use: When you need to jump to a specific URL directly — a deep link to a page that is not easily reachable by clicking, or an absolute URL that includes query parameters.

Fields:

  • URL — the destination URL. Supports environment variable substitution: ${ENV.BASE_URL}/settings/billing.

Example:
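A minimal Navigate step, assuming the selected environment defines a BASE_URL parameter:

```
Step type: Navigate
URL: ${ENV.BASE_URL}/settings/billing
```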


REST API Call

The REST API Call step makes an HTTP request from within the test, independent of the browser. The response can be stored in a variable and used in subsequent steps.

When to use: Setting up test data before the UI test (e.g., creating a user via API before testing the UI that displays that user), tearing down test data after a test, or verifying backend state that is not visible in the UI.

Fields:

  • Method — GET, POST, PUT, PATCH, or DELETE.

  • URL — the full endpoint URL. Supports variables: ${ENV.BASE_URL}/api/users.

  • Headers — key-value pairs. Use ${ENV.API_KEY} for authorization tokens.

  • Request Body — JSON body for POST/PUT/PATCH requests. Supports variables.

  • Store response in variable — the name of the variable that stores the response object.

  • Expected status code — if set, the step fails when the response status does not match.

Using API response data in later steps:

If you set "Store response in variable" to loginResponse, subsequent steps can reference:

  • ${loginResponse.status} — HTTP status code

  • ${loginResponse.body.token} — a field from the JSON response body

  • ${loginResponse.body.user.id} — nested JSON path access

  • ${loginResponse.headers.content-type} — a response header value

Example flow:
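A sketch of a setup-then-verify flow. The /api/login and /api/users endpoints, the credential variables, and the response fields are hypothetical placeholders for your own API:

```
Step 1 — REST API Call
  Method: POST
  URL: ${ENV.BASE_URL}/api/login
  Request Body: { "email": "${ENV.ADMIN_EMAIL}", "password": "${ENV.ADMIN_PASSWORD}" }
  Store response in variable: loginResponse
  Expected status code: 200

Step 2 — REST API Call
  Method: GET
  URL: ${ENV.BASE_URL}/api/users/${loginResponse.body.user.id}
  Headers: Authorization: Bearer ${loginResponse.body.token}
  Expected status code: 200
```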


Conditional (If / Else)

The Conditional step creates a branch in the test execution. If a specified condition evaluates to true, one set of steps runs; otherwise, an alternative set runs (or nothing runs if no else branch is defined).

When to use: When the test must handle two different application states — for example, a "Remember Me" dialog that only appears on first login, or a feature flag that changes which UI is shown.

Fields:

  • Condition — a variable comparison expression:

    • ${localVar} == "expected value"

    • ${apiResponse.body.status} == "active"

    • ${ENV.FEATURE_FLAG} == "true"

  • If-true steps — the steps to execute when the condition is met.

  • Else steps — optional steps to execute when the condition is not met.

Example:
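A sketch of a Conditional step handling a first-login onboarding tour. The requiresOnboarding field is a hypothetical value stored by an earlier REST API Call step:

```
Condition: ${loginResponse.body.requiresOnboarding} == "true"

If-true steps:
  1. Click the Skip tour button
  2. Verify the dashboard is visible

Else steps:
  1. Verify the dashboard is visible
```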


For Loop

The For Loop step repeats a set of inner steps a fixed number of times or once per item in a data set.

When to use: When you need to add 5 items to a cart, dismiss a recurring modal, iterate over rows in a table, or repeat a form submission multiple times.

Modes:

  • Count-based — repeat a fixed number of times (N).

  • Data-based — iterate over a list variable. If ${productList} contains ["Widget A", "Widget B", "Widget C"], the loop runs three times with ${item} set to each value in turn:
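A sketch of a data-based loop that adds each product to the cart. The search field and button labels are illustrative:

```
Loop mode: Data-based
Data source: ${productList}

Inner steps:
  1. Type ${item} in the search field
  2. Click the Add to cart button on the first result
```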


While Loop

The While Loop step repeats a set of inner steps until a specified condition becomes false (or until a maximum iteration count is reached, as a safety guard).

When to use: When you need to wait for an asynchronous process to complete — polling for a background job to finish, waiting for a status field to change, retrying until a dynamic element appears.

Fields:

  • Condition — the loop continues while this condition is true: ${jobStatus} != "complete"

  • Max iterations — safety limit to prevent infinite loops (default: 10)

  • Inner steps — the steps to execute on each iteration

Example:
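A polling sketch, assuming a hypothetical export job endpoint. On the first iteration the stored response is not yet set, so condition evaluation details may vary; treat this as an outline rather than a definitive recipe:

```
Condition: ${jobResponse.body.status} != "complete"
Max iterations: 20

Inner steps:
  1. Wait 5 seconds
  2. REST API Call — GET ${ENV.BASE_URL}/api/jobs/${jobId},
     store response in variable: jobResponse
```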


Step Group

The Step Group step inserts a named, reusable step group at a specific point in the test case. At execution time, the step group expands and its individual steps run in sequence as if they were inline steps.

When to use: Any repeated setup or teardown pattern — login sequences, navigation to a module, closing persistent UI elements (cookie banners, chat widgets), completing a checkout flow that precedes the actual test assertion.

Fields:

  • Step Group — search and select from the available step groups in the workspace.

Behavior:

  • Changes to the step group are reflected in all test cases that use it without requiring the test cases to be re-saved.

  • Step group steps appear individually in the execution report, not as a collapsed entry — each step shows its own pass/fail status and screenshot.


AI Verification

The AI Verification step uses a vision-language AI model to evaluate a natural language condition against the current browser state. It is the most flexible assertion type, handling dynamic, non-deterministic content that cannot be verified with exact string matching.

When to use:

  • Verifying that a dynamically generated value (order ID, timestamp, random token) exists and is plausible.

  • Confirming visual states ("the chart rendered successfully", "the map shows a marker in London").

  • Checking relative conditions ("the error message is displayed below the Password field").

  • Any condition that requires understanding context rather than matching a literal string.

Action field examples:
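Illustrative verification conditions, written against a hypothetical application:

```
Verify that an order confirmation number is displayed and looks like ORD- followed by digits
Verify the revenue chart has rendered with at least one data series
Verify the error message appears directly below the Password field
Verify the Last updated column shows today's date
```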

How it differs from a standard AI Agent step: A standard AI Agent step performs an action. An AI Verification step produces a pass/fail result based on evaluating the current screenshot against the stated condition. Use the AI Verification type explicitly for assertion steps so the execution report correctly categorizes them as verifications rather than actions.


Step Editor Fields Reference

Every step, regardless of type, has the following common configuration fields:

  • Step Name / Description — the plain-English instruction or assertion. This is what appears in the step list and in the execution report.

  • Step Type — dropdown selecting the step type (AI Agent, Navigate, REST API Call, etc.).

  • Wait Condition — optional condition that must be satisfied before this step executes: Wait for element visible, Wait for URL to contain, Wait for network idle, Wait N seconds.

  • Screenshot Capture — controls when a screenshot is captured: Always (default), On Failure Only, Never.

  • Mark as Optional — if enabled, a failure on this step does not fail the overall test case. Useful for non-critical UI enhancements or known-flaky elements.

  • Timeout — maximum time (in seconds) to wait for the step action to complete before marking it as failed. Default: 30 seconds.


Variables in Steps

ContextQA supports four variable scopes. All variable types are referenced with the ${variableName} syntax.

Local Variables

Scoped to the test case. Defined in the test case settings panel or set dynamically by REST API Call steps.

Setting a local variable dynamically: In a REST API Call step, use "Store response in variable" to save the response. Individual fields are accessed via dot notation on the stored object.

Global Variables

Workspace-scoped. Available to every test case in the workspace. Defined in Settings → Global Variables.

Global variables are useful for values that are shared across many test cases but are not environment-specific — a default admin account email, a test project name, a default search term.

Environment Parameters

Values from the currently selected environment. Referenced with the ${ENV.KEY} prefix.

Environment parameters allow the same test step to work against staging, QA, and production without modification. The environment is selected at the Test Plan level.
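For example, with hypothetical values, two environments might define the same key differently, and one step then works against both:

```
Staging:     BASE_URL = https://staging.example.com
Production:  BASE_URL = https://app.example.com

Step: Navigate to ${ENV.BASE_URL}/login
```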

Test Data Profile Variables

When a test case has a test data profile attached, each column in the profile becomes a variable available in the test steps.

Each profile row is a separate test run. Variables are substituted per row.
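A sketch of a hypothetical two-column profile and the steps that consume it; with two rows, the test case runs twice, once per credential pair:

```
Profile rows:
  username           | password
  alice@example.com  | pass-123
  bob@example.com    | pass-456

Steps:
  1. Type ${username} in the Email field
  2. Type ${password} in the Password field
  3. Click the Log in button
```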


Tips & Best Practices

  • Keep AI Agent step descriptions action-oriented. Start with a verb: "Click", "Type", "Select", "Scroll", "Verify", "Navigate". The AI execution engine recognizes these action verbs and dispatches them to the correct browser interaction handler.

  • Use AI Verification steps for all assertions. Even when an assertion could theoretically be expressed as a standard AI Agent step ("verify the heading says Dashboard"), using the explicit AI Verification type ensures the step is tracked as an assertion in the execution report and counted in pass/fail statistics correctly.

  • Set timeouts explicitly for slow operations. If your application has pages that take more than 30 seconds to load (reports, data exports, dashboard renders), increase the timeout on the relevant steps to avoid premature failures.

  • Mark setup steps as Optional when appropriate. Cookie banner dismissal steps, onboarding tour dismissal, and similar incidental actions that may or may not appear can be marked Optional so they don't fail the test if the UI state doesn't require them.

  • Use For Loop for repetitive data entry. When a test needs to add multiple rows to a table or submit a form multiple times with the same structure, use For Loop with a data set variable rather than duplicating steps.

Troubleshooting

An AI Agent step is clicking the wrong element

Refine the step description to be more specific. Add context about the element's location ("in the header", "in the confirmation dialog"), its label ("the button labelled Submit Order, not the Cancel button"), or its visual appearance ("the blue primary action button").

A REST API Call step fails with a 401 Unauthorized response

Verify that the Authorization header is correctly formatted and that the token variable it references has been set by an earlier step. Check the network log in the execution report to see the exact request headers that were sent.

A Conditional step is not taking the expected branch

Add a temporary AI Agent step before the conditional that logs the value of the condition variable: "Verify the page shows the value of ${myVar}". Run the test and check the screenshot to see what value the variable actually holds.

Step Group steps are not appearing individually in the report

This behavior was changed in a recent release. If you are on an older workspace version, the step group may appear as a collapsed entry. Contact support to upgrade your workspace to the current execution engine.

While Loop is hitting the max iteration limit

The default maximum is 10 iterations. If your polling loop needs more iterations, increase the max iterations field. Also consider increasing the wait time between iterations to give the background process more time to complete before each poll.

