Core Concepts

A practical explanation of every major building block in ContextQA, from workspaces and test cases to environments, self-healing, and the MCP server.

Who is this for? New and existing ContextQA users across all roles who want a clear mental model of how workspaces, test cases, suites, plans, and environments fit together.

ContextQA is organized around a small set of composable building blocks. Understanding how they relate to each other makes it much easier to structure your test suite, manage test data, and get the most from the AI capabilities. This page explains each concept with concrete examples drawn from realistic testing scenarios.

Prerequisites

  • You have a ContextQA account and have signed in at least once.

  • You have read the Introduction and understand the platform's high-level purpose.


The Building Block Hierarchy

Before diving into each concept individually, here is how they nest together:

A Test Plan pulls together one or more Test Suites, each containing Test Cases, each built from individual Steps (or reusable Step Groups). The plan specifies the Environment to run against and the browsers or devices to target.
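If it helps to see the nesting as data, here is a sketch of the hierarchy as a plain structure. The dict keys and names are illustrative only, not real ContextQA API objects:

```python
# Illustrative sketch of the building-block hierarchy; not a real ContextQA API.
plan = {
    "name": "Nightly Regression",        # Test Plan: the run configuration
    "environment": "Staging",            # which deployment to run against
    "browsers": ["Chrome", "Firefox"],   # browser/device targets
    "suites": [                          # Test Suites included in the plan
        {
            "name": "Regression_Checkout",
            "test_cases": [              # Test Cases inside the suite
                {
                    "name": "Add item to cart (guest)",
                    "steps": [           # ordered Steps inside the test case
                        "Search for 'blue t-shirt'",
                        "Click 'Add to Cart'",
                        "Verify cart icon shows 1 item",
                    ],
                },
            ],
        },
    ],
}
print(plan["suites"][0]["test_cases"][0]["steps"][-1])
```

Every concept below slots into one level of this structure.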


Workspace

A workspace is the top-level container for everything related to one application under test. It is isolated — the test cases, environments, variables, knowledge bases, and settings inside one workspace are completely separate from those in another.

When to create a new workspace:

Application               Workspace
Customer-facing web app   MyApp - Web
iOS mobile app            MyApp - iOS
Internal REST API         MyApp - API
Salesforce CRM instance   MyApp - Salesforce

Practical example: Your company has a SaaS product with a React web frontend, a React Native mobile app, and a public REST API. You create three workspaces — one per target. Each workspace has its own environments (staging vs production), its own test cases, and its own execution history. Team members can be invited to specific workspaces with different permission levels.


Test Case

A test case is the atomic unit of testing. It represents one complete user scenario — from opening a page to completing an action and asserting the expected outcome. Every test case has:

  • Name — a human-readable label

  • Starting URL — where the browser navigates when the test begins

  • Steps — an ordered list of actions and verifications

  • Tags — optional labels for filtering (e.g., smoke, regression, login)

  • Prerequisite test cases — other test cases that must pass before this one runs

  • Test Data Profile — optional parameterized data set for data-driven runs

Practical example: A test case named "Add item to cart — guest user" starts at https://shop.example.com, searches for "blue t-shirt", selects the first result, clicks "Add to Cart", and verifies the cart icon shows 1 item. This is one atomic scenario with a clear starting state, actions, and an assertion.

Prerequisite linking: If your "Checkout" test case requires a logged-in session that your "Login" test case establishes, you can link "Login" as a prerequisite of "Checkout". ContextQA runs prerequisites first and shares session state.
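The run order this produces can be pictured as a depth-first walk over the prerequisite links. The sketch below is a toy model, not ContextQA's actual scheduler (which also shares session state between the runs):

```python
def execution_order(test_name, prerequisites):
    """Return the order in which a test and its prerequisite chain run.

    `prerequisites` maps a test case name to the list of test cases that
    must pass before it. Toy model only; not ContextQA's scheduler.
    """
    order = []

    def visit(name):
        for dep in prerequisites.get(name, []):
            visit(dep)
        if name not in order:
            order.append(name)

    visit(test_name)
    return order

# "Login" is linked as a prerequisite of "Checkout", so it runs first.
print(execution_order("Checkout", {"Checkout": ["Login"]}))
# ['Login', 'Checkout']
```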


Step Groups

A step group is a reusable, named sequence of steps that can be inserted into any test case like a function call. When you update a step group, every test case that uses it automatically inherits the change.

Common step group patterns:

Step Group Name          Contents
SG_Login_Admin           Navigate to /login, type admin credentials, click Sign In, verify dashboard
SG_Login_Guest           Navigate to /login, type guest credentials, click Sign In
SG_Close_Cookie_Banner   Click the "Accept All" button if the cookie consent banner is visible
SG_Checkout_Payment      Fill card number, expiry, CVV, click Pay Now

Practical example: You have 40 test cases that all start with logging in. Instead of repeating the login steps in each test case, you create SG_Login once. When the login page redesigns and the button label changes from "Sign In" to "Log In", you update SG_Login in one place and all 40 test cases are fixed instantly.
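The "function call" analogy is literal: if step groups were code, SG_Login would be a shared helper. This is a sketch of the idea, not ContextQA syntax:

```python
def sg_login(steps):
    """SG_Login as a shared helper: edit it here and every caller changes."""
    steps += [
        "Navigate to /login",
        "Type admin credentials",
        "Click 'Log In'",   # was "Sign In"; one edit fixes every test case
        "Verify dashboard",
    ]

def test_add_to_cart():
    steps = []
    sg_login(steps)                    # inserted like a function call
    steps.append("Add item to cart")
    return steps

def test_view_orders():
    steps = []
    sg_login(steps)                    # same group reused unchanged
    steps.append("Open the Orders page")
    return steps
```

Both test cases pick up the relabelled button the next time they run, without being edited themselves.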

Creating a step group:

  1. Navigate to Test Development → Step Groups.

  2. Click + Create Step Group.

  3. Give it a name (prefix with SG_ by convention).

  4. Add steps exactly as you would in a test case.

Alternatively, use the three-dot menu on any test case → Clone → As Step Group to convert an existing test case into a reusable step group.


Test Suite

A test suite is a named grouping of test cases. Suites serve two purposes: they organize related tests logically, and they define what gets executed together when you run a suite.

ContextQA supports nested suites — a suite can contain sub-suites, creating a folder-like hierarchy for large test libraries.

Practical example: A top-level E-Commerce Regression suite can contain sub-suites such as Regression_Checkout, one per feature area. A nightly plan runs the whole tree, while a targeted plan runs only a single sub-suite.

Naming convention: Use a prefix that describes the suite's purpose and scope. Examples: Smoke_Auth, Regression_Checkout, API_Products, Mobile_Onboarding.


Test Plan

A test plan is the run configuration. It brings together everything needed to execute a meaningful batch of tests:

  • Which suites to include

  • Which browser(s) to target (Chrome, Firefox, Safari, Edge) or which device for mobile

  • Which environment to run against (Staging, Production, QA)

  • Parallel or sequential execution

  • Schedule (cron expression for recurring runs)

  • Notifications (Slack channel, email recipients for results)

Practical example: Your CI/CD pipeline has two test plans:

  1. Smoke Plan — runs the Smoke_Auth and Smoke_Checkout suites in Chrome against the Staging environment on every pull request. Runs in parallel, completes in under 5 minutes.

  2. Nightly Regression — runs the full E-Commerce Regression suite in Chrome, Firefox, and Edge against the Production environment every night at midnight. Runs in parallel across browsers.

One test case can appear in multiple test plans, running against different environments or browser configurations without duplication.


Environment

An environment is a named configuration that stores the base URL and a dictionary of key-value parameters for one deployment of your application.

Anatomy of an environment:

Key           Value                       Type
BASE_URL      https://staging.myapp.com   text
API_KEY       sk-staging-abc123           password (encrypted)
DB_HOST       db-staging.internal         text
ADMIN_EMAIL                               text

Using environment variables in steps: Reference them with the ${ENV.KEY} syntax anywhere in a step description, URL field, or API call configuration:

  • Navigate to ${ENV.BASE_URL}/dashboard

  • Set Authorization header to Bearer ${ENV.API_KEY}

Practical example: You have two environments: Staging and Production. Both have the same keys but different values. By selecting the environment in your test plan, you can run the exact same test suite against either environment without editing a single test step.
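A rough model of how the substitution behaves is sketched below. The exact resolution logic is internal to ContextQA; this sketch assumes unknown keys are left untouched, which also illustrates why case sensitivity matters:

```python
import re

def resolve_env(text, env):
    """Replace ${ENV.KEY} placeholders with values from an environment dict.

    Keys match case-sensitively; unknown keys are left as-is, which makes
    a misspelled placeholder easy to spot in the rendered step.
    """
    pattern = re.compile(r"\$\{ENV\.([A-Za-z_][A-Za-z0-9_]*)\}")
    return pattern.sub(lambda m: str(env.get(m.group(1), m.group(0))), text)

staging = {"BASE_URL": "https://staging.myapp.com", "API_KEY": "sk-staging-abc123"}
print(resolve_env("Navigate to ${ENV.BASE_URL}/dashboard", staging))
# Navigate to https://staging.myapp.com/dashboard
```

Selecting a different environment in the test plan effectively swaps the dictionary, which is why the same steps run unchanged against Staging or Production.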

Password-type parameters are stored encrypted and are never shown in plain text in the UI or in execution logs.


Test Data Profile

A test data profile is a parameterized data set that enables data-driven testing — running the same test case multiple times with different input values.

Structure: A profile is a table where each column is a named variable and each row is one test run.

username   password    expected_result
           Admin123!   Admin Dashboard
           User456!    User Dashboard
           Read789!    Read-Only Dashboard

When this profile is attached to the "Login" test case and executed, ContextQA runs the test three times — once per row — substituting ${username}, ${password}, and ${expected_result} in the steps.

Practical example: Your login form needs to be tested with 10 different user roles. Instead of creating 10 separate test cases, you create one test case and one profile with 10 rows. The profile runs produce 10 individual execution records with independent pass/fail results.
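The row-by-row expansion can be modelled like this. ContextQA performs the substitution internally; the sample row values below are hypothetical:

```python
import re

def expand(step, row):
    """Substitute ${column} placeholders in one step from one profile row."""
    return re.sub(r"\$\{(\w+)\}", lambda m: str(row.get(m.group(1), m.group(0))), step)

steps = [
    "Type ${username} into the email field",
    "Type ${password} into the password field",
    "Verify the page title is ${expected_result}",
]

# Hypothetical profile rows; each row becomes one independent execution record.
profile = [
    {"username": "admin@example.com", "password": "Admin123!",
     "expected_result": "Admin Dashboard"},
    {"username": "user@example.com", "password": "User456!",
     "expected_result": "User Dashboard"},
]

runs = [[expand(s, row) for s in steps] for row in profile]
print(runs[0][2])
# Verify the page title is Admin Dashboard
```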

Creating a data profile:

  1. Navigate to Test Development → Data Profiles.

  2. Click + Create Profile.

  3. Define column names (these become the variable names).

  4. Add rows of data.

  5. In your test plan, attach the profile to the test case.


Knowledge Base

A knowledge base is a set of plain-English instructions stored in your workspace that the AI test agent consults before and during execution. It is the mechanism for teaching the AI how to handle recurring situations specific to your application.

Common knowledge base entries:

  • "If a cookie consent banner appears, click the 'Accept All Cookies' button before proceeding."

  • "If a chat widget opens in the bottom-right corner, close it by clicking the X icon."

  • "The application may show a 'Session expired' modal. If it appears, click 'Stay logged in'."

  • "On the payment page, always use the test card number 4111 1111 1111 1111 with expiry 12/26 and CVV 123."

Practical example: Your application shows a GDPR cookie consent modal on first visit. Without a knowledge base entry, the AI might try to interact with the page before the modal is dismissed, causing failures. With the entry "Close the cookie banner before any other action", every test in the workspace automatically handles the modal correctly.

Knowledge bases are workspace-scoped — they apply to every test case in the workspace automatically.


Custom Agents

A custom agent is an AI persona with a specialized system prompt. While the default ContextQA AI agent is calibrated for general web testing, custom agents can be configured for domain-specific testing scenarios.

Use cases:

  • A Salesforce agent calibrated to understand Lightning component naming conventions

  • An accessibility testing agent that looks for ARIA violations on every page

  • A financial application agent that understands currency formatting and validation rules

  • A localization agent that tests right-to-left layouts and non-ASCII character handling

Custom agents are created in Settings → Custom Agents and can be selected when creating or executing a test case.


Execution

An execution is one complete run of a test case, test suite, or test plan. Each execution produces:

  • A unique execution_id (also called result_id)

  • Pass/fail status per step

  • Screenshots captured at each step

  • Full video recording of the browser session

  • Network HAR log (all HTTP requests and responses)

  • Browser console log (errors, warnings, info)

  • Playwright trace file (for deep debugging)

  • AI-generated root cause analysis (for failures)

Executions are stored indefinitely by default and can be accessed from the Execution History panel.


Self-Healing

Self-healing is the AI capability that automatically repairs a broken test step when the UI element it references has changed.

When healing occurs:

  1. A step tries to interact with an element (e.g., "Click the Sign In button").

  2. The element cannot be found at its previously known location.

  3. The AI scans the current page using visual analysis and DOM inspection.

  4. If a semantically equivalent element is found with a confidence score above 90%, the step is automatically updated.

  5. Execution continues without a failure being recorded.

What triggers self-healing:

  • Button text changes ("Sign In" → "Log In")

  • CSS class or ID attribute changes after a design system update

  • Element moved to a different DOM position

  • data-testid attribute renamed

What does not trigger self-healing:

  • Elements completely removed from the page (this is treated as a genuine failure)

  • Entire page flows replaced (the test needs to be rewritten)

  • Confidence below 90% (flagged for manual review instead)

After execution, the report shows healed steps with their original locator, the healed locator, and the confidence score.
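The behaviour described above boils down to a confidence threshold. Here is a minimal sketch; the candidate scoring itself comes from the AI's visual and DOM analysis, which this toy model simply takes as input:

```python
def healing_decision(candidates, threshold=0.90):
    """Decide what happens when a step's target element cannot be found.

    `candidates` is a list of {"locator": ..., "score": ...} dicts produced
    by scanning the current page. Toy model of the behaviour in this doc.
    """
    if not candidates:
        # Element removed entirely: treated as a genuine failure.
        return {"action": "fail"}
    best = max(candidates, key=lambda c: c["score"])
    if best["score"] > threshold:
        # Semantically equivalent element found with high confidence: heal.
        return {"action": "heal", "locator": best["locator"],
                "confidence": best["score"]}
    # Below the threshold: flag for manual review instead of auto-healing.
    return {"action": "manual_review", "locator": best["locator"],
            "confidence": best["score"]}

print(healing_decision([{"locator": "button:text('Log In')", "score": 0.97}]))
```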


MCP Server

The ContextQA MCP (Model Context Protocol) server exposes 67 platform tools to external AI assistants. Any MCP-compatible client, including Claude, Cursor, and other assistants with MCP support, can call these tools to create, run, and analyze tests programmatically.

Examples of what an AI assistant can do via MCP:

  • "Create a test case for the login flow of my app" → calls create_test_case

  • "Run the smoke suite and tell me the results" → calls execute_test_suite, then get_execution_status

  • "Which tests are currently failing?" → calls get_test_case_results

  • "Show me auto-healing suggestions for the last run" → calls get_auto_healing_suggestions

The MCP server is the foundation for the ContextQA integration with Claude Code, enabling test automation directly from your development environment.
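Under the hood, an MCP client invokes a tool with a JSON-RPC 2.0 `tools/call` request. The envelope below follows the MCP specification; the `suite_name` argument is an assumption, so check the actual schema each tool advertises via `tools/list`:

```python
import json

def tool_call_request(tool_name, arguments, request_id=1):
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Tool name taken from this page; the argument shape is hypothetical.
req = tool_call_request("execute_test_suite", {"suite_name": "Smoke_Auth"})
print(json.dumps(req))
```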


Tips & Best Practices

  • Keep test cases atomic — one scenario per test case. This makes failures easier to diagnose and keeps execution time predictable.

  • Build step groups early — identify repeated setup patterns (login, navigation to a module) and extract them as step groups before you have too many test cases to refactor.

  • Use environments from day one — even if you only have one environment initially, setting up the environment configuration makes it easy to add staging/production variants later.

  • Write knowledge base entries proactively — before creating test cases, add entries for any persistent UI elements that could interfere with automation (banners, modals, tours, chat widgets).

  • Tag every test case — consistent tagging (smoke, regression, critical, login) allows you to build targeted test plans for different CI/CD stages.

Troubleshooting

My test cases do not appear in a suite after adding them
Refresh the page. The suite member list updates asynchronously and may take a moment to reflect changes.

Environment variables are not being substituted in steps
Check that the variable name in the step exactly matches the key in the environment configuration, including case sensitivity. ${ENV.baseUrl} and ${ENV.BaseUrl} are different.

A step group I updated is not reflecting in test cases
Step groups are resolved at execution time. Re-run the affected test cases to see the updated step group content. There is no need to re-open or re-save the test cases.

The AI is not respecting my knowledge base instructions
Knowledge base entries must be written as clear, imperative instructions. Avoid ambiguous language. Instead of "handle cookie popups", write "If a cookie consent banner is visible, click the button labelled Accept All Cookies immediately before any other interaction."

Create your first test in 5 minutes — no code required. Start Free Trial → or Book a Demo → to see ContextQA with your application.
