Tool Reference


Who is this for? SDETs, developers, and DevOps engineers integrating ContextQA with AI coding assistants (Claude, Cursor) or CI/CD pipelines.

This page provides a complete index of all 67 ContextQA MCP tools, organized by category. Each entry includes the tool name, a one-line description, and the key parameters you need to call it.

For the complete parameter schemas including optional fields, data types, and return value documentation, refer to the individual tool pages in this section.


Category 1: Test Case Management (8 tools)

These tools cover the full CRUD lifecycle for test cases — the atomic unit of testing in ContextQA. Each test case has a URL, a set of natural language steps, and produces a result when executed.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| create_test_case | Create a new test case from a URL and plain English task description | url, task_description, name |
| get_test_cases | List all test cases in the current workspace | workspace_id (optional filter) |
| get_test_case_steps | Retrieve all steps for a specific test case | test_case_id |
| update_test_case_step | Modify the description, expected result, or configuration of one step | test_case_id, step_id, description |
| delete_test_case | Permanently delete a test case and all its execution history | test_case_id |
| delete_test_case_step | Remove a single step from a test case | test_case_id, step_id |
| query_contextqa | Natural language search across all test cases and test suites | query |
| create_complex_test_step | Add an advanced step type: conditional logic, loop, API call, or data-driven branch | test_case_id, step_type, configuration |

Usage Notes

create_test_case is the most commonly called tool. Supply the full URL of the page you want to test and a plain English description of the user journey. The AI generates all steps automatically — you do not need to specify individual actions.

query_contextqa accepts free-form natural language. For example: "tests that cover the checkout flow" or "login tests that use admin credentials". It searches test case names, step descriptions, and tags.

create_complex_test_step supports step types beyond the standard NLP action. Supported types include api_assertion (verify a REST endpoint response), conditional (if/then branching), loop (repeat steps N times or until a condition), and data_source (inject values from a test data profile).
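
To make the shape of a complex step concrete, here is an illustrative pair of payloads for create_complex_test_step. The configuration field names below (max_iterations, until_condition, expected_status, and the tc_1234 ID) are assumptions for the sketch, not the documented schema; consult the tool's own page for the real fields.

```python
# Hypothetical payloads for create_complex_test_step. Field names inside
# "configuration" are illustrative assumptions, not the documented schema.
loop_step = {
    "test_case_id": "tc_1234",  # assumed ID format
    "step_type": "loop",
    "configuration": {
        "max_iterations": 5,
        "until_condition": "cart badge shows 5 items",
        "steps": ["Click the 'Add to cart' button"],
    },
}

api_assertion_step = {
    "test_case_id": "tc_1234",
    "step_type": "api_assertion",
    "configuration": {
        "method": "GET",
        "endpoint": "/api/cart",
        "expected_status": 200,
    },
}

print(loop_step["step_type"], api_assertion_step["step_type"])
```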


Category 2: Execution & Results (5 tools)

These tools trigger test runs and retrieve results. All executions are asynchronous — call execute_test_case to start a run, then poll get_execution_status until a terminal state is returned.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| execute_test_case | Start an execution of a single test case | test_case_id, environment_id (optional) |
| get_execution_status | Poll the status of a running or completed execution | test_case_id, number_of_executions |
| get_test_case_results | Retrieve the full result object for a completed execution | execution_id |
| get_test_step_results | Get per-step pass/fail status with screenshot URLs | result_id |
| fix_and_apply | Apply a code-level fix to a test case based on an execution failure | execution_id, repo_url |

Usage Notes

execute_test_case returns a number_of_executions value in its response. Pass this value to get_execution_status along with the test_case_id to poll for the result. The execution is complete when result is PASSED, FAILED, or ERROR.
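
The execute-then-poll pattern can be sketched as follows. The call_tool function stands in for whatever MCP client you use and is stubbed here so the control flow runs on its own; a real integration would issue MCP tools/call requests instead.

```python
import time

# Stubbed status sequence so the loop terminates when run standalone.
_statuses = iter(["RUNNING", "RUNNING", "PASSED"])

def call_tool(name, **params):
    # Stub standing in for a real MCP client call.
    if name == "execute_test_case":
        return {"number_of_executions": 42}
    if name == "get_execution_status":
        return {"result": next(_statuses)}
    raise ValueError(name)

TERMINAL = {"PASSED", "FAILED", "ERROR"}

run = call_tool("execute_test_case", test_case_id="tc_1234")
while True:
    status = call_tool(
        "get_execution_status",
        test_case_id="tc_1234",
        number_of_executions=run["number_of_executions"],
    )
    if status["result"] in TERMINAL:
        break
    time.sleep(0.01)  # use a longer interval (e.g. 5 s) against the real API

print(status["result"])
```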

get_test_case_results returns the execution-level summary: overall result, total duration, browser used, and environment. Use get_test_step_results to drill into individual step data.

fix_and_apply is used in code-sync scenarios where a ContextQA test failure reflects a real bug that has been fixed in the application code. It validates the fix by re-running the test against the patched code.


Category 3: Test Suites & Plans (6 tools)

Suites are logical groupings of test cases. Plans add execution configuration: browser, environment, schedule, and which suites to include. Use these tools to orchestrate regression runs and monitor plan-level status.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| get_test_suites | List all test suites in the current workspace | |
| execute_test_suite | Run all test cases in a suite | suite_id, environment_id (optional) |
| get_test_plans | List all test plans | |
| execute_test_plan | Trigger a full test plan execution | plan_id, environment_id (optional) |
| get_test_plan_execution_status | Poll the status of a test plan run | plan_id, execution_id |
| rerun_test_plan | Re-run a previously executed test plan | plan_id, execution_id |

Usage Notes

execute_test_suite runs all test cases in the suite concurrently (subject to concurrency limits). For sequential execution with dependencies, configure prerequisite relationships between test cases in the ContextQA UI before calling this tool.

rerun_test_plan is useful after applying fixes. It re-runs the same plan configuration against the same environment, allowing you to verify that previously failing tests now pass.


Category 4: Infrastructure & Config (8 tools)

These tools expose the configuration layer: environments (base URLs and variables), device farm inventory for mobile tests, UI element maps for live pages, and the custom agent and knowledge base systems.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| get_environments | List all configured test environments with their base URLs | |
| get_test_devices | List available mobile device configurations | platform (ios or android, optional) |
| get_mobile_concurrency | Check how many concurrent mobile test slots are available | |
| get_ui_elements | Discover all UI elements on a live page | url |
| list_custom_agents | List all custom AI agent personas | |
| create_custom_agent | Define a new agent persona with a custom system prompt | name, system_prompt |
| list_knowledge_bases | List all knowledge bases | |
| create_knowledge_base | Create a new knowledge base with AI behavioral instructions | name, content |

Usage Notes

get_environments returns environment IDs that you can pass to execution tools to target a specific environment (staging, production, QA). If no environment is specified when executing, ContextQA uses the default environment configured for the test case.
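
A typical helper resolves an environment ID by name before executing, falling back to the default when the name is not found. The response shape below (id, name, base_url per environment) is an assumption for illustration.

```python
# Assumed get_environments-style response for illustration.
environments = [
    {"id": "env_1", "name": "staging", "base_url": "https://staging.example.com"},
    {"id": "env_2", "name": "production", "base_url": "https://example.com"},
]

def environment_id(envs, name):
    """Return the id of the named environment, or None to let ContextQA
    fall back to the test case's default environment."""
    for env in envs:
        if env["name"] == name:
            return env["id"]
    return None

print(environment_id(environments, "staging"))
print(environment_id(environments, "qa"))  # not configured -> default used
```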

get_ui_elements performs a live page scan and returns a structured map of all interactive elements: buttons, inputs, links, dropdowns, checkboxes. This is useful for agents that need to verify an element exists before writing a test step for it.

create_custom_agent accepts a system_prompt string that instructs the AI execution engine on special behaviors. The prompt is prepended to every execution that uses this agent. Keep it focused — broad prompts can conflict with test step instructions.


Category 5: Test Data Profiles (5 tools)

Test data profiles allow parameterized testing: a single test case runs multiple times with different input values from a data table. Each row in the profile produces one execution.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| get_test_data_profiles | List all test data profiles in the workspace | |
| get_test_data_profile | Get the full content of a specific data profile | profile_id |
| create_test_data_profile | Create a new data profile with column definitions | name, columns, rows |
| update_test_data_profile | Add, remove, or modify rows and columns | profile_id, updates |
| delete_test_data_profile | Delete a data profile | profile_id |

Usage Notes

When a test case is linked to a data profile, executing the test case triggers one execution per row in the profile. Results are grouped under the test case with each row's values shown in the step output.
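
The fan-out can be modeled as a small function: each profile row, zipped with the column names, yields one execution's parameter set. The profile shape below is illustrative, not the documented schema.

```python
# Assumed data profile shape for illustration.
profile = {
    "columns": ["email", "password", "expected_message"],
    "rows": [
        ["user1@example.com", "Valid1!", "Welcome"],
        ["user2@example.com", "short", "Password too short"],
    ],
}

def expand_executions(test_case_id, profile):
    """Pair each row with its column names: one parameter set per
    planned execution of the linked test case."""
    return [
        {"test_case_id": test_case_id, "data": dict(zip(profile["columns"], row))}
        for row in profile["rows"]
    ]

runs = expand_executions("tc_login", profile)
print(len(runs))  # one execution per row
print(runs[1]["data"]["expected_message"])
```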

Data profiles support all standard data types: string, number, email, URL, date, boolean. They also support masked fields (for passwords and sensitive values) which are redacted in test reports.


Category 6: Test Generation (10 tools)

The generation tools are the most powerful entry point for AI-driven test creation. Each tool accepts a different source artifact and returns one or more fully formed test cases ready to execute.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| generate_contextqa_tests_from_n8n | Generate tests from an n8n workflow JSON file or URL | file_path_or_url |
| generate_tests_from_code_change | Generate regression tests from a git diff | diff_text, app_url |
| generate_tests_from_jira_ticket | Generate tests from a Jira or Azure DevOps ticket | ticket_id, include_acceptance_criteria |
| generate_tests_from_figma | Generate UX flow tests from a Figma design file | figma_url |
| generate_tests_from_excel | Generate tests from an Excel or CSV file of test cases | file_path |
| generate_tests_from_swagger | Generate API contract tests from an OpenAPI spec | file_path_or_url |
| generate_tests_from_video | Generate tests from a screen recording video file | video_file_path |
| generate_tests_from_requirements | Generate tests from a plain text requirements document | requirements_text |
| generate_tests_from_analytics_gap | Generate tests targeting identified coverage gaps | gap_id |
| generate_edge_cases | Generate boundary and negative test scenarios | context_query |

Usage Notes

generate_tests_from_code_change is specifically designed for CI/CD integration. Pass the output of git diff main...HEAD as the diff_text parameter. ContextQA analyzes which application flows are affected by the changed code and generates targeted tests for those specific areas.
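
In a CI job you would capture the output of git diff main...HEAD (for example via a subprocess call) and pass it verbatim as diff_text. The sketch below extracts the touched files from a diff string, which is useful for logging which areas triggered test generation; the diff content and file path are made up for the example.

```python
# Example diff text (made up for illustration); in CI this would come
# from `git diff main...HEAD`.
diff_text = """\
diff --git a/src/checkout/cart.ts b/src/checkout/cart.ts
--- a/src/checkout/cart.ts
+++ b/src/checkout/cart.ts
@@ -10,6 +10,7 @@
+  applyCoupon(code);
"""

def changed_files(diff):
    """List the file paths touched by a unified git diff."""
    return [
        line.split(" b/", 1)[1]
        for line in diff.splitlines()
        if line.startswith("diff --git ")
    ]

print(changed_files(diff_text))
# The diff itself is what the tool consumes, e.g.:
# payload = {"diff_text": diff_text, "app_url": "https://staging.example.com"}
```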

generate_tests_from_jira_ticket reads the ticket summary, description, and acceptance criteria. Set include_acceptance_criteria=True (the default) to generate one test case per acceptance criterion in addition to the main scenario.

generate_tests_from_video works with .mp4, .mov, and .webm recordings. The AI watches the video, identifies distinct user actions, and converts them into NLP test steps. For best results, record the video at normal speed without fast-forwarding.

generate_edge_cases does not require an existing test case. Supply a context_query describing the feature — for example, "user registration with email address validation" — and ContextQA generates a set of edge case scenarios covering boundaries, invalid inputs, and error states.


Category 7: Bug & Defect (3 tools)

These tools manage the failure-to-defect lifecycle: pushing failures to issue trackers, inspecting what changed in the UI, and applying automated fixes.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| create_defect_ticket | Create a Jira or Azure DevOps issue from a test failure | execution_id, project_id |
| get_auto_healing_suggestions | Get AI-proposed locator fixes for a failed step | execution_id |
| approve_auto_healing | Accept and apply a healing suggestion | healing_id, execution_id |

Usage Notes

create_defect_ticket automatically populates the issue with:

  • The name of the test case that failed

  • The specific step that failed and the error message

  • The failure screenshot as an attachment

  • A link to the ContextQA execution (for video and trace access)

  • The browser, OS, and environment used

get_auto_healing_suggestions returns a list of candidate fixes ranked by AI confidence. Each suggestion includes the original locator, the proposed new locator, and the confidence score (0.0 to 1.0). Only suggestions with confidence above 0.90 are recommended for automatic application.
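
An agent consuming these suggestions typically splits them at the 0.90 threshold: high-confidence fixes go to approve_auto_healing, the rest are queued for human review. The suggestion fields below are assumed for the sketch.

```python
# Assumed suggestion shape for illustration.
suggestions = [
    {"healing_id": "h1", "old": "#btn-buy", "new": "[data-test=buy]", "confidence": 0.97},
    {"healing_id": "h2", "old": ".nav > a", "new": "nav a.primary", "confidence": 0.72},
]

AUTO_APPLY_THRESHOLD = 0.90

auto = [s for s in suggestions if s["confidence"] > AUTO_APPLY_THRESHOLD]
review = [s for s in suggestions if s["confidence"] <= AUTO_APPLY_THRESHOLD]

print([s["healing_id"] for s in auto])    # pass these to approve_auto_healing
print([s["healing_id"] for s in review])  # these need a human look
```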

approve_auto_healing applies the selected suggestion. The test case step is updated immediately. The next execution of that test case will use the new locator.


Category 8: Advanced Testing (3 tools)

Beyond browser UI tests, ContextQA supports performance load testing and DAST security scanning from the same MCP interface.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| execute_performance_test | Run a load or performance test against a URL or API | target_url, concurrent_users, duration_seconds |
| execute_security_dast_scan | Run a DAST security scan against a live application | target_url, scan_profile |
| export_test_case_as_code | Export a ContextQA test case as executable code | test_case_id, format |

Usage Notes

execute_performance_test runs a configurable load pattern against the specified target. Key parameters include concurrent_users (virtual user count), duration_seconds (total test duration), and ramp_up_seconds (time to reach full load). Results include response time percentiles (p50, p90, p95, p99), error rate, and throughput (requests/second).
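
For readers post-processing raw latencies alongside the reported percentiles, the nearest-rank method is the usual way load-test reports compute them. This is a generic sketch, not ContextQA's documented algorithm; the latency values are made up.

```python
import math

def percentile(sorted_ms, p):
    """Nearest-rank percentile of an ascending list (p in 0-100)."""
    rank = max(1, math.ceil(p * len(sorted_ms) / 100))
    return sorted_ms[rank - 1]

# Made-up response times in milliseconds.
latencies = sorted([120, 95, 240, 110, 105, 130, 900, 115, 125, 100])
summary = {f"p{p}": percentile(latencies, p) for p in (50, 90, 95, 99)}
print(summary)  # note how one 900 ms outlier dominates p95 and p99
```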

execute_security_dast_scan launches a DAST scan using the OWASP ZAP engine. The scan_profile parameter accepts passive (non-intrusive, safe for production), standard (common vulnerability checks, safe for staging), or full (comprehensive attack simulation, staging/dev only).

export_test_case_as_code supports format values of playwright_typescript, playwright_javascript, and python. The exported code is a complete, runnable test file with all steps translated to the target framework's API.


Category 9: AI-Powered Analysis (3 tools)

These tools apply AI reasoning to understand test repositories, identify risk, and diagnose failures.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| get_root_cause | AI analysis of a specific test execution failure | execution_id |
| query_repository | Query the test repo for context about an application feature | query |
| analyze_test_impact | Identify which existing tests are impacted by a code change | diff_text |

Usage Notes

get_root_cause is the primary debugging tool. It analyzes the complete evidence package (screenshots, video, network logs, console logs, DOM state) and returns a structured explanation: what failed, why it failed, which step is affected, and a concrete suggested fix.

query_repository is designed for agents that need to understand what test coverage exists before creating new tests. For example: "what tests cover the user profile settings page?" or "are there any tests for the password reset flow?".

analyze_test_impact takes a git diff and identifies which test cases in your workspace are most likely to be affected by the changed code. Use this in CI pipelines to run a targeted subset of tests rather than the full suite on every commit.
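
A CI pipeline built on this looks roughly like the following. The analyze_test_impact call is stubbed and its response shape (an impacted_test_cases list) is an assumption; a real pipeline would call the MCP tool with the actual diff and feed the returned IDs to execute_test_case.

```python
def analyze_test_impact(diff_text):
    # Stub standing in for the MCP call; response shape is assumed.
    return {"impacted_test_cases": [
        {"test_case_id": "tc_checkout", "reason": "touches cart.ts"},
        {"test_case_id": "tc_coupons", "reason": "touches coupon logic"},
    ]}

impact = analyze_test_impact("diff --git a/src/cart.ts ...")
to_run = [t["test_case_id"] for t in impact["impacted_test_cases"]]
print(to_run)  # run this targeted subset instead of the full suite
```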


Category 10: Analytics & Coverage (2 tools)

These tools help identify what is not being tested and generate tests to close those gaps.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| analyze_coverage_gaps | Identify application flows and pages with no test coverage | app_url (optional) |
| generate_tests_from_analytics_gap | Create tests targeting a specific identified gap | gap_id |

Usage Notes

analyze_coverage_gaps crawls the application (if a URL is provided) and compares the discovered pages and flows against existing test cases. It returns a list of gap objects, each with a gap_id, a description of the uncovered flow, and a severity rating (high, medium, low based on traffic or criticality).

Pass a gap_id from the gap analysis to generate_tests_from_analytics_gap to automatically generate tests for that specific uncovered area.


Category 11: Custom Agents & Knowledge Bases (4 tools)

Custom agents and knowledge bases encode team-specific testing knowledge into the AI execution engine.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| list_custom_agents | List all custom AI agent personas in the workspace | |
| create_custom_agent | Create a new agent persona with a custom system prompt | name, system_prompt |
| list_knowledge_bases | List all knowledge bases | |
| create_knowledge_base | Create a knowledge base with AI behavioral instructions | name, content |

Usage Notes

Custom agents are best used for behaviors that should apply to every test in a suite — for example, a checkout agent that always uses the test payment gateway, or a mobile agent that always grants permissions when prompted.

Knowledge bases are better for factual instructions that the agent should reference as needed — for example, documentation of which test user accounts exist, or instructions for navigating a complex multi-step wizard.

Both can be assigned to a test case or test suite from the ContextQA UI. Multiple knowledge bases can be assigned simultaneously; multiple custom agents cannot (only one agent persona per execution).


Category 12: Telemetry (5 tools)

Every execution produces a complete evidence package. These tools expose each artifact in the package for programmatic access.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| get_execution_step_details | Per-step data including screenshot URL, duration, and healing status | result_id |
| get_network_logs | Full network HAR log for an execution | result_id |
| get_console_logs | Browser console output (errors, warnings, logs) | result_id |
| get_trace_url | URL to view the Playwright trace at trace.playwright.dev | result_id |
| get_ai_reasoning | Per-step AI confidence scores and locator decisions | result_id |

Usage Notes

The result_id parameter required by all telemetry tools is available in the response from get_test_case_results. It is distinct from the execution_id — the execution ID identifies the run, while the result ID identifies the result artifact within that run.
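
The two-ID hop can be sketched as follows: resolve the execution into a result via get_test_case_results, then key every telemetry call by the result_id. The call is stubbed and its response fields are assumptions for illustration.

```python
def get_test_case_results(execution_id):
    # Stub standing in for the MCP call; response fields are assumed.
    return {"execution_id": execution_id, "result_id": "res_789",
            "result": "FAILED", "duration_ms": 41500}

results = get_test_case_results("exec_123")   # execution_id identifies the run
result_id = results["result_id"]              # result_id keys the telemetry

# Every telemetry tool takes the result_id, not the execution_id.
telemetry_calls = [
    ("get_network_logs", result_id),
    ("get_console_logs", result_id),
    ("get_trace_url", result_id),
]
print(result_id, len(telemetry_calls))
```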

get_trace_url returns a URL that can be opened directly in any browser. The trace viewer at trace.playwright.dev is a web application that visualizes the complete DOM state, network waterfall, and action timeline for the execution. No installation required.

get_ai_reasoning is primarily useful for debugging flaky tests. The reasoning output shows, for each step, how the AI interpreted the test step and what actions it took. Use it to understand why a step passed or failed.


Category 13: Support-to-Fix (2 tools)

These tools connect support ticket reports to automated reproduction, allowing an agent to verify and diagnose user-reported bugs without manual steps.

| Tool | Description | Key Parameters |
| --- | --- | --- |
| reproduce_from_ticket | Reproduce a bug described in a support or issue ticket | ticket_id, app_url |
| investigate_failure | Deep investigation of a specific execution failure | execution_id |

Usage Notes

reproduce_from_ticket reads the ticket content (via the configured Jira or Azure DevOps integration), creates a test case from the reported steps, executes it, and returns the result. If the bug reproduces, the response includes the execution ID, the step that failed, and the failure screenshot.

investigate_failure goes deeper than get_root_cause: it correlates the failure with recent code changes (using git blame data if a repo is linked), checks for similar failures in execution history, and suggests whether the issue is a test problem (locator changed) or an application problem (behavior changed).


Category 14: Migration Platform (3 tools)

These tools handle the ingestion of existing test codebases from Playwright, Cypress, Selenium, and other frameworks into ContextQA.

Tool
Description
Key Parameters

analyze_test_repo

Analyze a test repository and report its structure and complexity

repo_url, branch

migrate_repo_to_contextqa

Convert existing test code to ContextQA natural language test cases

repo_url, branch, workspace_id

export_to_playwright

Export all ContextQA test cases in a workspace to Playwright TypeScript

workspace_id, output_path

Usage Notes

The recommended migration sequence is:

  1. Call analyze_test_repo first to get a migration complexity report

  2. Review the report — it flags test patterns that require special handling (page object models, shared fixtures, custom commands)

  3. Call migrate_repo_to_contextqa to perform the actual migration

  4. Run the migrated tests with execute_test_suite and compare pass rates against the original framework
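
The four steps above can be sketched as an orchestration script. Every function is a stub for the corresponding MCP tool, and the report and result fields (tests, flags, suite_id, passed, failed) are assumptions for the sketch; the repo URL is made up.

```python
def analyze_test_repo(repo_url, branch):
    # Stub: assumed complexity report fields.
    return {"tests": 120, "flags": ["page object models", "shared fixtures"]}

def migrate_repo_to_contextqa(repo_url, branch, workspace_id):
    return {"suite_id": "suite_42", "migrated": 120}

def execute_test_suite(suite_id):
    return {"passed": 117, "failed": 3}

# Step 1-2: analyze and review the flagged patterns.
report = analyze_test_repo("https://github.com/acme/e2e-tests", "main")
if report["flags"]:
    print("review before migrating:", report["flags"])

# Step 3-4: migrate, run, and compare pass rates.
migration = migrate_repo_to_contextqa(
    "https://github.com/acme/e2e-tests", "main", workspace_id="ws_1")
run = execute_test_suite(migration["suite_id"])
pass_rate = run["passed"] / (run["passed"] + run["failed"])
print("pass rate:", round(pass_rate, 3))
```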

export_to_playwright is useful for teams that want to use ContextQA as a test authoring tool but run tests via their existing Playwright infrastructure. The exported TypeScript files use the standard @playwright/test API and can be executed with npx playwright test without any ContextQA dependencies.


Finding the Right Tool

If you are unsure which tool to use, start with these high-level heuristics:

  • "I want to create a new test" → create_test_case or one of the generate_tests_from_* tools

  • "I want to run a test" → execute_test_case, execute_test_suite, or execute_test_plan

  • "I want to know if a test passed" → get_execution_status (for running), get_test_case_results (for completed)

  • "I want to know why a test failed" → get_root_cause, get_execution_step_details, get_network_logs

  • "I want to fix a broken test" → get_auto_healing_suggestions + approve_auto_healing, or fix_and_apply

  • "I want to find existing tests" → query_contextqa

  • "I want to bring tests from another framework" → analyze_test_repo + migrate_repo_to_contextqa

For detailed parameter schemas and return value documentation, refer to the individual tool pages in this section.


67 MCP tools — full platform control from any AI agent or CI/CD system. Book a Technical Demo → See the MCP server integrated with Claude or Cursor and your actual test infrastructure.
