Glossary

Comprehensive A-Z reference for all key terms used in ContextQA documentation, the platform UI, and the MCP server API.


A

Action Step A test case step that performs an interaction with the UI — clicking a button, typing in a field, selecting a dropdown option, scrolling the page, hovering over an element, or navigating to a URL. Contrast with a Verification Step, which checks state rather than performing an action.

AI Agent The autonomous execution engine at the heart of ContextQA. The AI agent interprets natural language test steps, locates UI elements using visual AI and DOM analysis, performs browser actions via Playwright, and self-heals broken locators. Multiple specialized agents collaborate in the 9-stage pipeline — each stage is handled by a dedicated sub-agent optimized for that task.

AI Reasoning The per-step log of the AI's decision-making process during a test execution. Accessible via the AI Insights tab in the execution report or the get_ai_reasoning MCP tool.

AI Verification A step type that uses visual AI to validate a condition described in plain English. Rather than checking a specific DOM attribute or CSS property, AI Verification analyzes the screenshot of the page and evaluates whether the described condition is true. Useful for dynamic content, visual states, and assertions that are difficult to express as DOM queries.

Assertion See Verification Step.

Auto-Healing See Self-Healing.


B

Browser Automation Layer The underlying technology through which the AI agent controls browser instances. ContextQA uses Playwright internally as the browser automation framework — supporting Chromium, Firefox, and WebKit (Safari). Users interact with this layer exclusively through natural language; they never write Playwright code directly unless they choose to export a test.


C

CI/CD Integration Connecting ContextQA test plan executions to a continuous integration/continuous deployment pipeline. Tests can be triggered on pull request open, push to a protected branch, or any pipeline event via the ContextQA REST API or MCP server. GitHub Actions, GitLab CI, Jenkins, and CircleCI are all supported.

Confidence Score A numeric score (0.0 to 1.0) produced by the AI when making decisions about element matching. The Self-Healing Agent uses confidence scores to determine whether to auto-apply a healing suggestion: scores at or above 0.90 are applied automatically; scores between 0.70 and 0.89 are applied with a warning flag; scores below 0.70 are not applied and require manual review.
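The three threshold bands above can be sketched as a small decision function — an illustrative sketch of the documented rule, not ContextQA's actual implementation:

```python
def healing_decision(confidence: float) -> str:
    # Thresholds as documented: >= 0.90 auto-apply,
    # 0.70-0.89 apply with a warning flag, < 0.70 manual review.
    if confidence >= 0.90:
        return "auto-applied"
    if confidence >= 0.70:
        return "applied-with-warning"
    return "manual-review"
```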

Custom Agent A user-defined AI persona with a custom system prompt that specializes the agent's behavior for a specific application or testing scenario. For example, a "Checkout Agent" might always use a specific test credit card number, or a "Mobile Agent" might always grant OS permissions when prompted. Custom agents are created via the UI or the create_custom_agent MCP tool and assigned to test cases or suites.


D

Data-Driven Testing A testing approach where the same test case is executed multiple times, each time using a different row from a Test Data Profile. Each profile row produces an independent test execution with its own pass/fail result, screenshots, and step log. Useful for testing registration flows with many email formats, checkout flows with multiple shipping addresses, or any parameterized scenario.
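The execution model can be sketched as follows — the runner function and profile data here are hypothetical, purely to show the one-row-per-execution shape:

```python
profile_rows = [
    {"EMAIL": "[email protected]", "ZIP": "10001"},
    {"EMAIL": "[email protected]", "ZIP": "94105"},
]

def run_data_driven(rows, execute_one):
    # One independent execution per profile row, each with its own result.
    return [execute_one(row) for row in rows]

results = run_data_driven(profile_rows, lambda row: {"data": row, "status": "PASSED"})
```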

DAST (Dynamic Application Security Testing) Security testing performed against a running application by simulating attack patterns. ContextQA integrates DAST scanning (based on OWASP ZAP) as a test type that can be triggered via the execute_security_dast_scan MCP tool. Scan profiles: passive (safe for production), standard (staging), full (dev/staging only).

Defect Ticket A bug report created in an external issue tracker (Jira or Azure DevOps) from a ContextQA test failure. Created manually via the test report UI or programmatically via the create_defect_ticket MCP tool. Populated automatically with the failure screenshot, error message, failing step, and a link to the ContextQA execution report.


E

Element A UI component on a web or mobile page that the AI agent can interact with or assert against — buttons, input fields, dropdowns, links, checkboxes, table rows, text nodes, images with alt text, etc. ContextQA identifies elements using both DOM analysis (HTML attributes, ARIA roles) and visual AI (appearance in the screenshot).

Element Repository (UI Elements) The workspace-level catalog of UI elements discovered during test executions. ContextQA records every element it interacts with, building a map of the application's interface over time. Accessible at Settings → UI Elements or via get_ui_elements MCP tool.

Environment A named configuration containing a base URL and one or more key-value parameters for a specific deployment target of the application. Common environments: Development, Staging, Production, QA. When executing a test plan, you select which environment to run against. Environment variables are referenced in test steps as ${ENV.KEY_NAME}.

Evidence Package The complete set of artifacts produced by a single test execution: per-step screenshots (JPG), full session video (WebM), network HAR log (JSON), browser console log (JSON), Playwright trace file (ZIP), and AI reasoning log. Every execution produces an evidence package regardless of whether it passed or failed.

Execution One complete run of a test case, test suite, or test plan. Each execution is identified by an execution ID and, upon completion, a result ID. The execution produces a status (PASSED / FAILED / ERROR / RUNNING) and a full evidence package.


F

Flaky Test A test that produces inconsistent results — passing on some runs and failing on others — without any change to the application or the test steps. Common causes include race conditions (an element appearing before an API response completes), timing-dependent UI animations, network latency, and non-deterministic test data. ContextQA's AI classifies failures as flaky vs. genuine to help teams prioritize.

Fix and Apply The fix_and_apply MCP tool workflow: given an execution ID and a repository URL, ContextQA diagnoses the failure, proposes a code-level fix, and applies it to the test case or the application code. Used in AI agent orchestration scenarios where the agent is responsible for both detecting and resolving failures.


G

Generation Source The artifact type used to create tests via AI generation. ContextQA supports 10 generation sources: natural language, Jira/ADO ticket, Figma design, Excel/CSV, Swagger/OpenAPI spec, video recording, requirements document, code diff, n8n workflow, and edge case specification.


H

HAR (HTTP Archive) A JSON-format log of all HTTP requests and responses made by the browser during a test execution. Captures: URL, method, status code, request headers, request body, response headers, response body, and timing. Used to diagnose API failures, authentication issues, missing network calls, and performance problems. Accessible via get_network_logs MCP tool or the Network tab in the execution report.
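Because a HAR file is plain JSON (per the HAR 1.2 spec, entries live under `log.entries[]` with `request` and `response` objects), failed calls can be filtered out in a few lines — a minimal sketch:

```python
import json

def failed_requests(har_text: str):
    # Return (url, status) for every entry whose response status is 4xx/5xx.
    entries = json.loads(har_text)["log"]["entries"]
    return [(e["request"]["url"], e["response"]["status"])
            for e in entries if e["response"]["status"] >= 400]
```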

Healing See Self-Healing.


I

Intent Parser Stage 3 of the AI execution pipeline. Converts each natural language test step into a structured action specification: action type (click, type, select, assert), target element reference, data value (if any), and confidence score. The structured action is passed to the Action Executor (Stage 4).


K

Knowledge Base A set of plain-English instructions that the AI agent reads before every test execution where the knowledge base is assigned. Knowledge bases encode application-specific testing knowledge: how to dismiss cookie consent banners, which test user accounts to use, how to navigate complex multi-step wizards. Created via the UI or create_knowledge_base MCP tool.


L

Locator The strategy used to identify a specific UI element in the DOM or screenshot. ContextQA manages locators automatically using AI. Users never write or manage locators directly.

LockDataGuard A feature access gate in ContextQA that restricts certain features based on subscription plan level. Features that may be gated include: execution results history, knowledge base, custom agents, workspace switcher, and UI elements. If a feature shows a lock icon, the restriction is at the subscription level — contact the workspace admin about the plan.


M

MCP (Model Context Protocol) An open standard protocol for AI models to call external tools in a structured, typed way. MCP defines how tool manifests are declared (name, description, input schema, output schema) and how tool calls are made (request/response format). ContextQA implements MCP to expose its platform capabilities to AI agents like Claude, Cursor, and VS Code Copilot.
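On the wire, MCP tool calls ride on JSON-RPC 2.0 using the `tools/call` method. A sketch of what a call to one of the tools named in this glossary might look like — the result ID is a made-up placeholder:

```python
import json

# Hypothetical tool-call request in the MCP "tools/call" shape (JSON-RPC 2.0).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_ai_reasoning",
        "arguments": {"result_id": "res_123"},  # placeholder ID
    },
}
wire = json.dumps(request)
```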

MCP Server The ContextQA server application (built with FastMCP and Python) that implements the Model Context Protocol and exposes 67 ContextQA platform tools to MCP-compatible AI clients. Can be run locally, in Docker, or deployed to a cloud host like Google Cloud Run.

Migration Platform The set of three MCP tools (analyze_test_repo, migrate_repo_to_contextqa, export_to_playwright) that support moving existing test suites between ContextQA and other frameworks (Playwright, Cypress, Selenium). The migration flow is: analyze → review the complexity report → migrate.


N

NLP Step A test step written in natural language that the AI interprets and executes. For example: "Click the Submit button", "Type '[email protected]' in the Email field", "Verify the success message is visible". NLP steps are the default step type in ContextQA and require no knowledge of CSS selectors, XPath, or browser automation APIs.

n8n Workflow An automation workflow built in the n8n low-code platform. ContextQA can generate test cases from n8n workflow JSON files, mapping each workflow node (HTTP Request, Code, AI, Webhook, etc.) to corresponding ContextQA verification steps. Access via generate_contextqa_tests_from_n8n.


P

Playwright Trace A binary archive file (.zip) in Playwright's trace format. Contains: complete DOM snapshots before and after every action, all network requests with full headers and bodies, console output, and screenshots synchronized to the action timeline. Viewable at trace.playwright.dev with no installation required. Accessible via get_trace_url MCP tool.


Pre-Requisite A dependency relationship between test cases. When Test B has Test A as a pre-requisite, Test B will only execute if Test A passed in the same execution run. Used to model test case dependencies: "only run checkout tests if the login test passed".
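The gating rule amounts to a one-line check — a sketch, assuming results are keyed by test name:

```python
def should_run(test_name, prerequisites, results):
    # Run only if every pre-requisite PASSED in the same execution run.
    return all(results.get(dep) == "PASSED"
               for dep in prerequisites.get(test_name, []))
```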

Profile Variable A variable reference to a column in a Test Data Profile. Written as ${VAR.COLUMN_NAME} in test steps. At execution time, the variable is replaced with the value from the current data row being executed.


R

Result ID The unique identifier for a specific test execution result artifact. Distinct from the execution ID — the execution ID identifies the run, while the result ID identifies the result object produced by that run. Required parameter for all telemetry MCP tools: get_execution_step_details, get_network_logs, get_console_logs, get_trace_url, get_ai_reasoning.

Root Cause Analysis An AI-generated explanation of why a test execution failed. Produced by analyzing all evidence artifacts (screenshots, video, HAR log, console log, DOM state). Returns: a plain-English failure summary, the affected step number, the evidence used in the analysis, a suggested fix, and a classification (test bug, application bug, flaky failure, environment issue). Accessible via get_root_cause MCP tool or the AI Insights button in the execution report.


S

Self-Healing Automatic repair of a broken test step when the target UI element cannot be found using the original locator. When the Action Executor fails to locate an element, the Self-Healing Agent searches the current DOM and screenshot for semantically equivalent alternatives. If a match is found with high confidence, the step is updated and execution continues — no manual intervention required. Healings can be reviewed via get_auto_healing_suggestions and approve_auto_healing.

Service Account A dedicated ContextQA user account created for CI/CD automation and MCP server connections, not associated with an individual person. Best practice is to use a service account (e.g., [email protected]) for automated tooling so that access can be revoked independently and audit logs remain clean.

Step Group A reusable collection of test steps that can be inserted into multiple test cases, functioning like a reusable function. Step groups are useful for common sequences like "log in as admin" or "navigate to the settings page" that appear across many test cases. Updating a step group updates every test case that uses it. Conventionally prefixed with SG_ in their names.


T

Test Case The atomic unit of testing in ContextQA. A single test scenario consisting of: a name, a starting URL, an ordered list of natural language steps (actions and assertions), and optional metadata (tags, priority, assigned environment, pre-requisites). Produces one execution result per run.

Test Data Profile A parameterized data table used for data-driven testing. Each column defines a variable name; each row defines one set of input values. When a test case is linked to a data profile and executed, one execution is created per data row, each using that row's values as the variable values for the test steps.

Test Plan An execution configuration that specifies which test suites to include, which browser(s) or device(s) to use, which environment to target, how many tests to run in parallel, and optionally a schedule (cron expression) for recurring runs. Test plans are the primary artifact used for CI/CD integration.

Test Suite A logical grouping of related test cases, typically organized by feature area (e.g., "Authentication", "Checkout", "User Profile") or execution purpose (e.g., "Smoke", "Regression", "Sanity"). Test suites are the unit added to a test plan. A test case can belong to multiple suites.

Trace File See Playwright Trace.


V

Variable A named placeholder used in test steps to decouple test data from test logic. ContextQA supports four variable scopes:

  • Local variable — scoped to a single test case execution; set in one step and used in later steps

  • Global variable — shared across all test cases in a workspace; defined in workspace settings

  • Environment parameter — stored in an Environment configuration, referenced as ${ENV.KEY_NAME}

  • Profile variable — from a Test Data Profile row, referenced as ${VAR.COLUMN_NAME}
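A rough sketch of how the two ${...} reference styles resolve at execution time — illustrative only; the real engine also handles local and global scopes:

```python
import re

def substitute(step, env, profile_row):
    # Replace ${ENV.KEY} from the environment and ${VAR.COL} from the data row.
    step = re.sub(r"\$\{ENV\.(\w+)\}", lambda m: env[m.group(1)], step)
    return re.sub(r"\$\{VAR\.(\w+)\}", lambda m: profile_row[m.group(1)], step)
```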

Version ID The workspace version identifier that appears in most ContextQA portal URL paths as :versionId. Each workspace has a primary version. The version ID is used internally when constructing API requests and route paths. In the UI, you typically do not need to know the version ID — it is embedded in the URL automatically when you navigate within a workspace.

Verification Step A test case step that asserts a condition rather than performing an action. Examples: "Verify the success message is visible", "Verify the URL contains '/dashboard'", "Verify the error message says 'Invalid email'". Verification steps use the Verification Agent (Stage 8 of the pipeline) which applies visual AI to evaluate the condition against the current page state.


W

Workspace The top-level isolated project environment in ContextQA. Contains its own test cases, test suites, test plans, environments, test data profiles, knowledge bases, custom agents, UI elements, integrations, user roster, and settings. Multiple workspaces are completely isolated — data does not cross workspace boundaries. Teams use separate workspaces to isolate access by product, team, or client.


X

XPath A query language for selecting nodes in an XML/HTML document tree. Used as a locator strategy of last resort by the ContextQA action executor. ContextQA prefers more stable strategies (data attributes, ARIA labels, text content) and only falls back to XPath when all other strategies fail. Users never write XPath directly in test steps.
