MCP Server Overview
Who is this for? SDETs, developers, and DevOps engineers integrating ContextQA with AI coding assistants (Claude, Cursor) or CI/CD pipelines.
The ContextQA MCP Server exposes ContextQA's full test automation platform as 67 MCP (Model Context Protocol) tools. Any MCP-compatible AI agent — Claude, Cursor, VS Code Copilot, or a custom-built agent — can call these tools to create tests, run executions, analyze failures, and manage the full test lifecycle, all through natural language.
This means you can describe what you want to test in plain English, and an AI agent orchestrates the entire workflow: generating the test case, executing it, monitoring results, diagnosing failures, pushing defect tickets, and even applying self-healing patches — without writing a single line of Playwright or Selenium code.
What You Can Do
Create Tests from Any Source
Natural language task descriptions
Jira and Azure DevOps tickets (reads acceptance criteria automatically)
Figma design files (analyzes screens and UX flows)
Excel and CSV files (migrates manual test libraries)
Swagger / OpenAPI specifications (generates API contract tests)
Video screen recordings (extracts user journeys from .mp4 files)
Requirements documents (plain text or structured specs)
Code diffs from pull requests (generates targeted regression tests)
n8n workflow definitions (maps automation nodes to test steps)
Edge case generation (AI-inferred boundary and negative scenarios)
Execute Tests on Any Platform
Browser-based UI tests (Chrome, Firefox, Safari, Edge)
Mobile device tests (iOS and Android via device farm)
Performance load tests
Security DAST scans
Analyze and Debug Failures
Poll execution status programmatically until completion
Pull step-by-step results with per-step screenshots
Retrieve full browser console logs and network HAR logs
Access Playwright trace files for DOM-level debugging
Run AI root cause analysis that explains failures in plain English
Manage Defects and Self-Healing
Automatically push defect tickets to Jira or Azure DevOps
Retrieve self-healing suggestions when element locators break
Approve and apply healing patches programmatically
Migrate and Export
Analyze existing Cypress, Playwright, or Selenium repositories
Migrate test suites from those frameworks into ContextQA
Export ContextQA tests back to Playwright TypeScript
The 67 Tools by Category
Test Case Management (8 tools): create, read, update, delete, query
Execution & Results (5 tools): execute, poll, results, step details, fix
Test Suites & Plans (6 tools): list, execute, status, rerun
Infrastructure & Config (8 tools): environments CRUD, devices, UI elements
Test Data Profiles (5 tools): CRUD for parameterized data profiles
Test Generation (10 tools): n8n, code diff, Jira, Figma, Excel, Swagger, video, requirements, edge cases, Linear
Bug & Defect (3 tools): create ticket, get healing suggestions, approve healing
Advanced Testing (3 tools): performance load, DAST security, code export
AI-Powered Analysis (3 tools): root cause, repo query, impact analysis
Analytics & Coverage (2 tools): coverage gaps, generate from gap
Custom Agents & Knowledge Bases (4 tools): CRUD for agents and knowledge bases
Telemetry (5 tools): step results, network logs, console logs, trace URL, AI reasoning
Support-to-Fix (2 tools): reproduce from ticket, investigate failure
Migration Platform (3 tools): analyze repo, migrate to CQA, export to Playwright
Total: 67 tools
Tool Categories in Detail
Test Case Management (8 tools)
The core CRUD layer for test cases. You can create a test case from a URL and a plain English description, read back its steps, update individual steps, delete cases, and query the full test library using natural language search.
create_test_case: Create a new test case from URL + task description
get_test_cases: List all test cases in a workspace
get_test_case_steps: Get all steps for a specific test case
update_test_case_step: Modify an individual step
delete_test_case: Permanently delete a test case
delete_test_case_step: Remove one step from a test case
query_contextqa: Natural language search across all test cases
create_complex_test_step: Add an advanced step (conditional, loop, API call)
Execution & Results (5 tools)
Tools that trigger test runs and retrieve results. The typical pattern is: call execute_test_case, store the returned execution handle, poll get_execution_status until a terminal state, then fetch results.
execute_test_case: Run a single test case
get_execution_status: Poll for PASSED / FAILED / RUNNING
get_test_case_results: Get the complete result object for an execution
get_test_step_results: Retrieve per-step details including screenshots
fix_and_apply: Apply a code-level fix to a failing test
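The execute, poll, fetch pattern described above can be sketched in Python. Here `call_tool` is a stub standing in for whatever tool-call method your MCP client framework exposes, and it simulates a run that finishes on the third poll; the tool names are real, but the parameter and response field names (`execution_id`, `status`, `steps_failed`) are illustrative assumptions, not the documented schema.

```python
import time

def call_tool(name, arguments):
    """Stub for an MCP client's tool-call method.
    Simulates an execution that reaches PASSED on the third poll."""
    if name == "execute_test_case":
        return {"execution_id": "exec-123", "status": "RUNNING"}
    if name == "get_execution_status":
        call_tool.polls = getattr(call_tool, "polls", 0) + 1
        return {"status": "PASSED" if call_tool.polls >= 3 else "RUNNING"}
    if name == "get_test_case_results":
        return {"status": "PASSED", "steps_passed": 12, "steps_failed": 0}
    raise ValueError(f"unknown tool: {name}")

def run_and_wait(test_case_id, poll_interval=0.01, timeout=60):
    """Execute a test case, poll until a terminal state, then fetch results."""
    run = call_tool("execute_test_case", {"test_case_id": test_case_id})
    execution_id = run["execution_id"]
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = call_tool("get_execution_status",
                           {"execution_id": execution_id})["status"]
        if status in ("PASSED", "FAILED"):  # terminal states
            return call_tool("get_test_case_results",
                             {"execution_id": execution_id})
        time.sleep(poll_interval)
    raise TimeoutError(f"execution {execution_id} did not finish in {timeout}s")

result = run_and_wait("tc-42")
print(result["status"])
```

In a real agent the loop usually also caps the number of polls and backs off between them; the timeout here plays that role.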
Test Suites & Plans (6 tools)
Suites group related test cases. Plans add execution configuration: which browser, which environment, which schedule. These tools let an agent orchestrate full regression runs, not just individual cases.
get_test_suites: List all suites in a workspace
execute_test_suite: Run an entire suite
get_test_plans: List all test plans
execute_test_plan: Trigger a full plan execution
get_test_plan_execution_status: Poll plan-level execution status
rerun_test_plan: Re-run a plan (useful after fixes)
Infrastructure & Config (8 tools)
Manage environments (base URLs, variables) and mobile device configurations, and discover the UI element map of a live application.
get_environments: List all configured environments
get_test_devices: List available mobile device configurations
get_mobile_concurrency: Check how many concurrent mobile slots are available
get_ui_elements: Discover all UI elements on a live page
list_custom_agents: List all custom AI agent personas
create_custom_agent: Define a new agent persona with custom behavior
list_knowledge_bases: List all knowledge bases
create_knowledge_base: Create a new knowledge base with AI instructions
Test Generation (10 tools)
The generation tools are the most powerful entry point for an AI agent. Each one accepts a different source artifact and returns a fully formed test case ready to execute.
generate_contextqa_tests_from_n8n: Generate from an n8n workflow file or URL
generate_tests_from_code_change: Generate from a git diff
generate_tests_from_jira_ticket: Generate from a Jira/ADO ticket
generate_tests_from_figma: Generate from a Figma design URL
generate_tests_from_excel: Generate from an Excel/CSV file
generate_tests_from_swagger: Generate from an OpenAPI spec
generate_tests_from_video: Generate from a screen recording
generate_tests_from_requirements: Generate from a requirements document
generate_tests_from_analytics_gap: Generate tests to fill coverage gaps
generate_edge_cases: Generate boundary and negative scenarios
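Because each generation tool accepts a different source artifact, an agent typically routes the artifact to the matching tool before calling it. A minimal routing sketch, where the tool names come from the list above but the file-extension mapping and the argument keys (`workflow_file`, `spec_path`, and so on) are assumptions for illustration:

```python
# Map a source artifact to a matching generation tool. Tool names are from
# the list above; the argument keys are assumptions, not the documented schema.
GENERATORS = {
    ".json": ("generate_contextqa_tests_from_n8n", "workflow_file"),
    ".xlsx": ("generate_tests_from_excel", "file_path"),
    ".csv": ("generate_tests_from_excel", "file_path"),
    ".yaml": ("generate_tests_from_swagger", "spec_path"),
    ".mp4": ("generate_tests_from_video", "video_path"),
}

def pick_generator(artifact):
    """Return (tool_name, arguments) for a given source artifact."""
    for suffix, (tool_name, arg_key) in GENERATORS.items():
        if artifact.endswith(suffix):
            return tool_name, {arg_key: artifact}
    # Anything unrecognized is treated as a plain requirements document.
    return "generate_tests_from_requirements", {"text": artifact}

tool, args = pick_generator("checkout-spec.yaml")
print(tool)
```

In practice the AI model itself does this routing from the tool descriptions in the MCP manifest; the sketch only makes the decision explicit.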
Bug & Defect (3 tools)
Once a failure is confirmed, these tools handle the defect lifecycle: push an issue to the tracker with failure evidence, retrieve AI-proposed locator fixes, and apply an approved fix.
create_defect_ticket: Create a Jira/ADO issue with failure evidence
get_auto_healing_suggestions: Get AI-proposed locator fixes
approve_auto_healing: Accept and apply a healing suggestion
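A common guardrail is to approve a healing suggestion automatically only above a confidence threshold and leave the rest for human review. A sketch of that policy, where `call_tool` is a stub for an MCP client call and the response shape (suggestions carrying confidence scores) is an assumption:

```python
def call_tool(name, arguments):
    """Stub for an MCP client call; the suggestion/response
    shapes are assumptions, not the documented schema."""
    if name == "get_auto_healing_suggestions":
        return {"suggestions": [
            {"id": "h1", "locator": "#submit-btn", "confidence": 0.62},
            {"id": "h2", "locator": "button[data-testid='submit']", "confidence": 0.94},
        ]}
    if name == "approve_auto_healing":
        return {"applied": True, "suggestion_id": arguments["suggestion_id"]}
    raise ValueError(f"unknown tool: {name}")

def heal_if_confident(test_case_id, threshold=0.9):
    """Approve the best locator fix only when the AI is confident enough."""
    resp = call_tool("get_auto_healing_suggestions",
                     {"test_case_id": test_case_id})
    best = max(resp["suggestions"], key=lambda s: s["confidence"])
    if best["confidence"] < threshold:
        return None  # below threshold: leave it for human review
    return call_tool("approve_auto_healing", {"suggestion_id": best["id"]})

print(heal_if_confident("tc-42"))
```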
Advanced Testing (3 tools)
Beyond browser UI tests, ContextQA supports performance and security test types triggered from the same tool interface.
execute_performance_test: Run a load/performance test
execute_security_dast_scan: Run a DAST security scan against a live URL
export_test_case_as_code: Export a test case as runnable code
AI-Powered Analysis (3 tools)
These tools let the AI agent interrogate a test repository for intelligence — finding what changed, what is at risk, and what tests already exist.
get_root_cause: AI analysis of a specific test failure
query_repository: Query the test repo for context about a feature
analyze_test_impact: Given a code change, identify impacted tests
Analytics & Coverage (2 tools)
analyze_coverage_gaps: Identify application flows with no test coverage
generate_tests_from_analytics_gap: Create tests that close identified gaps
Custom Agents & Knowledge Bases (4 tools)
Custom agents and knowledge bases allow teams to encode institutional testing knowledge into the AI execution engine — for example, always skip the cookie consent banner, or always use the test credit card number on the payment page.
list_custom_agents: List all agent personas
create_custom_agent: Create a new agent with custom system prompt
list_knowledge_bases: List all knowledge bases
create_knowledge_base: Create a knowledge base with AI instructions
Telemetry (5 tools)
Every execution produces a rich evidence package. These tools expose each artifact individually so an AI agent can inspect exactly what happened at the network, DOM, and console level.
get_execution_step_details: Per-step data with screenshots
get_network_logs: Full HAR-format network log
get_console_logs: Browser console output
get_trace_url: URL to Playwright trace viewer
get_ai_reasoning: Per-step AI confidence scores and decisions
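An agent debugging a failure usually pulls all five artifacts in one pass before reasoning about them. A sketch of that collection step, where `call_tool` is a stub for an MCP client call and the fake response shapes are assumptions used only to make the example runnable:

```python
def call_tool(name, arguments):
    """Stub for an MCP client call returning assumed response shapes."""
    fake = {
        "get_execution_step_details": {"steps": [{"n": 3, "status": "FAILED"}]},
        "get_network_logs": {"har": {"entries": []}},
        "get_console_logs": {"lines": ["TypeError: x is undefined"]},
        "get_trace_url": {"url": "https://trace.example/abc"},
        "get_ai_reasoning": {"steps": [{"n": 3, "confidence": 0.41}]},
    }
    return fake[name]

def evidence_package(execution_id):
    """Fetch every telemetry artifact for one execution."""
    tools = ["get_execution_step_details", "get_network_logs",
             "get_console_logs", "get_trace_url", "get_ai_reasoning"]
    return {t: call_tool(t, {"execution_id": execution_id}) for t in tools}

pkg = evidence_package("exec-123")
print(sorted(pkg))
```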
Support-to-Fix (2 tools)
When a user reports a bug — in a support ticket, Slack message, or Jira issue — these tools allow an agent to directly reproduce the reported behavior and produce a structured failure report.
reproduce_from_ticket: Reproduce a bug described in a support ticket
investigate_failure: Deep investigation of a specific failure
Migration Platform (3 tools)
Teams migrating from Cypress, Playwright, or Selenium can use these tools to analyze their existing test code and port it to ContextQA's natural language format.
analyze_test_repo: Analyze a test repository and report its structure
migrate_repo_to_contextqa: Convert existing tests to ContextQA format
export_to_playwright: Export ContextQA tests to Playwright TypeScript
How MCP Works with ContextQA
The Model Context Protocol is an open standard that lets an AI model call external tools in a structured way. When you configure the ContextQA MCP server in your AI client (Claude Desktop, Cursor, a custom agent), the client receives a manifest of all 67 tool definitions — their names, descriptions, and parameter schemas. The AI model can then decide to call any tool at any point in a conversation.
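On the wire, an MCP tool call is a JSON-RPC 2.0 request using the standard `tools/call` method. A minimal sketch of what a client sends when the model decides to run a test; the tool name is real, but the argument key (`test_case_id`) is an illustrative assumption:

```python
import json

# JSON-RPC 2.0 request an MCP client sends to invoke a tool.
# "tools/call" is the standard MCP method name; the argument key
# ("test_case_id") is illustrative, not the documented schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "execute_test_case",
        "arguments": {"test_case_id": "tc-42"},
    },
}
print(json.dumps(request, indent=2))
```

The matching response carries the tool's structured result under `result`, which the model reads before deciding its next call.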
The ContextQA MCP server translates each tool call into the corresponding ContextQA REST API call, handles authentication, and returns structured JSON results that the AI can reason about and present to you.
This architecture means you never need to learn the ContextQA REST API directly. You interact with your AI assistant in natural language, and the AI handles all the API orchestration.
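For example, registering the server in a client such as Claude Desktop is a single config entry. The shape below follows the standard `mcpServers` format, but the command, package name, and environment variable are placeholders; see Installation & Setup for the real values:

```json
{
  "mcpServers": {
    "contextqa": {
      "command": "npx",
      "args": ["-y", "@contextqa/mcp-server"],
      "env": { "CONTEXTQA_API_KEY": "<your-api-key>" }
    }
  }
}
```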
Next Steps
Installation & Setup — get the MCP server running in under 5 minutes
Authentication — configure credentials for your deployment
Agent Integration Guide — learn how AI agents should use these tools effectively
Tool Reference — complete parameter documentation for all 67 tools