Execution & Results
Complete reference for MCP tools that trigger test runs, poll for completion, and retrieve results including screenshots and step details.
Who is this for? SDETs, developers, and DevOps engineers integrating ContextQA with AI coding assistants (Claude, Cursor) or CI/CD pipelines.
These tools form the core execution loop: trigger a run, poll until it finishes, and retrieve the full evidence package.
execute_test_case
Triggers a single test case execution and returns a monitoring URL.
Category: Execution & Results · Authentication required: Yes
Parameters
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| test_case_id | ✅ | integer | Numeric ID of the test case to run |
| run_in_portal | ❌ | boolean | Return a portal URL for live monitoring (default: true) |
| persona_id | ❌ | string | ID of a custom AI agent persona to use for this run |
| knowledge_id | ❌ | string | ID of a knowledge base to inject into this run |
Returns
When run_in_portal is true: a portal URL string like https://app.contextqa.com/td/cases/18688/dry_runs.
When run_in_portal is false: a terminal command string for local execution.
Execution pattern
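The pattern is: trigger once, then poll until a terminal status appears. A minimal sketch, assuming a generic call_tool(name, **args) helper that returns the tool's string response — call_tool is a hypothetical stand-in for however your MCP client invokes a tool, not part of the ContextQA API:

```python
import time

def wait_for_completion(call_tool, test_case_id, interval=5, timeout=600):
    """Trigger a test case, then poll get_execution_status until a
    terminal status line ("STATUS_COMPLETED" or "STATUS_FAILED") is
    returned, or raise TimeoutError after `timeout` seconds."""
    call_tool("execute_test_case", test_case_id=test_case_id)
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = call_tool("get_execution_status",
                           test_case_id=test_case_id,
                           number_of_executions=1)
        if "STATUS_COMPLETED" in status or "STATUS_FAILED" in status:
            return status
        time.sleep(interval)  # 5-10 s between polls avoids rate limiting
    raise TimeoutError(f"test case {test_case_id} did not finish in {timeout}s")
```

The returned status string carries the result_id needed by the evidence-retrieval tools below.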
Related Tools
get_execution_status • get_test_case_results • get_execution_step_details
get_execution_status
Polls for the current execution status of a test case. Call this repeatedly after execute_test_case until a terminal state is returned.
Category: Execution & Results · Authentication required: Yes
Parameters
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| test_case_id | ✅ | integer | The test case whose most recent execution you want to check |
| number_of_executions | ✅ | integer | Total number of executions triggered since the last check; typically 1 |
Returns
String containing the current status and result ID when complete. Possible statuses:
- `Execution in progress` – still running; poll again
- `STATUS_COMPLETED, result_id: <id>` – finished (check get_test_case_results for pass/fail)
- `STATUS_FAILED, result_id: <id>` – execution infrastructure failure
Notes
Recommended polling interval: 5–10 seconds to avoid rate limiting.
The result_id returned here is the ID to pass to get_test_case_results, get_execution_step_details, and other telemetry tools.
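Since the status arrives as a plain string, the result_id has to be parsed out before it can be passed on. A small sketch (the status format shown above is the only assumption):

```python
import re

def extract_result_id(status: str):
    """Pull the numeric result_id out of a get_execution_status response,
    e.g. "STATUS_COMPLETED, result_id: 18231" -> 18231.
    Returns None while the run is still in progress."""
    match = re.search(r"result_id:\s*(\d+)", status)
    return int(match.group(1)) if match else None
```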
Related Tools
execute_test_case • get_test_case_results
get_test_case_results
Returns the complete result object for a test execution, including overall pass/fail, step count, duration, and all evidence URLs.
Category: Execution & Results · Authentication required: Yes
Parameters
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| execution_id | ❌ | string | Execution ID (from the execute_test_case return or the portal URL) |
| result_id | ❌ | integer | Result ID (from the get_execution_status return) |
At least one of execution_id or result_id must be supplied.
Returns
JSON object with:
- result_id – unique result identifier
- status – PASSED or FAILED
- total_steps – total step count
- failed_steps – number of failed steps
- duration_ms – total execution time in milliseconds
- video_url – pre-signed S3 URL to the screen recording
- started_at / completed_at – ISO 8601 timestamps
- steps – array of step result summaries
Example
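A sketch of the returned object's shape. All values below are illustrative, not output from a real run:

```json
{
  "result_id": 18231,
  "status": "FAILED",
  "total_steps": 12,
  "failed_steps": 1,
  "duration_ms": 48250,
  "video_url": "<pre-signed S3 URL>",
  "started_at": "2025-01-15T10:02:11Z",
  "completed_at": "2025-01-15T10:03:00Z",
  "steps": [
    { "step_number": 1, "status": "PASSED" }
  ]
}
```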
Related Tools
get_execution_step_details • get_test_step_results • get_root_cause
get_execution_step_details
Returns a human-readable step-by-step breakdown of a test execution with screenshots, failure reasons, and the overall verdict.
Category: Execution & Results · Authentication required: Yes
Parameters
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| result_id | ✅ | integer | Result ID from get_execution_status or the portal URL |
Returns
JSON object with:
- test_case_name – name of the test case
- overall_status – PASSED or FAILED
- total_steps – total step count
- failed_steps – number that failed
- screen_recording_url – video URL
- steps – array of per-step objects, each with:
  - step_number (1-based)
  - action – the step description
  - status – PASSED or FAILED
  - screenshot_url – URL of the step screenshot
  - failure_reason – AI explanation of what went wrong (if failed)
Example
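A sketch of the returned object's shape. All values below are illustrative, not output from a real run:

```json
{
  "test_case_name": "Checkout flow",
  "overall_status": "FAILED",
  "total_steps": 3,
  "failed_steps": 1,
  "screen_recording_url": "<video URL>",
  "steps": [
    {
      "step_number": 3,
      "action": "Click the 'Place order' button",
      "status": "FAILED",
      "screenshot_url": "<screenshot URL>",
      "failure_reason": "Button remained disabled because the payment form failed validation"
    }
  ]
}
```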
Use case
This is the primary tool for a QA agent that wants to report test results to a developer. It provides all the evidence in a single structured call.
Related Tools
get_test_case_results • get_network_logs • get_console_logs • get_root_cause
fix_and_apply
Orchestrates the full failure-to-fix pipeline in a single call: fetches execution results, runs AI root cause analysis, queries the source repository for the responsible code, and returns a structured fix suggestion.
Category: Execution & Results · Authentication required: Yes
Parameters
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| execution_id | ✅ | string | ID of the failed execution |
| repo_url | ✅ | string | URL of the source code repository to query (GitHub, GitLab, Bitbucket) |
Returns
JSON object with:
- execution_id – echoed back for reference
- test_case_name – which test failed
- failure_description – plain-English description of the failure
- root_cause_analysis – AI analysis of what caused it
- code_context – relevant code snippets from the repository
- next_steps – specific recommended actions (file to edit, line to change)
Use case
Call this after a test failure when you have access to the source repository. The tool automatically:
1. Fetches the execution details
2. Identifies the failed step and its screenshot
3. Sends the evidence to the root cause analysis engine
4. Searches the repository for code related to the failed feature
5. Returns a correlated analysis with actionable fix suggestions
Example
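A sketch of the returned object's shape. All values below are illustrative, not output from a real run:

```json
{
  "execution_id": "exec-20250115-001",
  "test_case_name": "Checkout flow",
  "failure_description": "Step 3 failed: the 'Place order' button stayed disabled",
  "root_cause_analysis": "Client-side validation rejects the expiry date format entered by the test",
  "code_context": "src/checkout/PaymentForm.tsx: validateExpiry()",
  "next_steps": "Relax the expiry-date pattern in validateExpiry to accept MM/YY"
}
```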
Related Tools
get_root_cause • investigate_failure • get_execution_step_details