Execution & Results

Complete reference for MCP tools that trigger test runs, poll for completion, and retrieve results including screenshots and step details.

Who is this for? SDETs, developers, and DevOps engineers integrating ContextQA with AI coding assistants (Claude, Cursor) or CI/CD pipelines.

These tools form the core execution loop: trigger a run, poll until it finishes, and retrieve the full evidence package.


execute_test_case

Triggers a single test case execution and returns a monitoring URL.

Category: Execution & Results
Authentication required: Yes

Parameters

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| test_case_id | Yes | integer | Numeric ID of the test case to run |
| run_in_portal | No | boolean | Return a portal URL for live monitoring (default: true) |
| persona_id | No | string | ID of a custom AI agent persona to use for this run |
| knowledge_id | No | string | ID of a knowledge base to inject into this run |

Returns

When run_in_portal is true: a portal URL string like https://app.contextqa.com/td/cases/18688/dry_runs.

When false: a terminal command string for local execution.
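As an illustrative sketch (not part of the API), the numeric test case ID can be recovered from a returned portal URL; this assumes only the URL shape documented above:

```python
import re

def parse_portal_url(url: str) -> int:
    """Extract the numeric test case ID from a ContextQA portal URL.

    Assumes the documented shape https://app.contextqa.com/td/cases/<id>/dry_runs.
    """
    match = re.search(r"/cases/(\d+)/", url)
    if match is None:
        raise ValueError(f"Unrecognized portal URL: {url}")
    return int(match.group(1))

print(parse_portal_url("https://app.contextqa.com/td/cases/18688/dry_runs"))  # 18688
```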

Execution pattern

execute_test_case → get_execution_status → get_test_case_results → get_execution_step_details


get_execution_status

Polls for the current execution status of a test case. Call this repeatedly after execute_test_case until a terminal state is returned.

Category: Execution & Results
Authentication required: Yes

Parameters

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| test_case_id | Yes | integer | The test case whose most recent execution you want to check |
| number_of_executions | No | integer | Number of executions triggered since the last check (typically 1) |

Returns

String containing the current status and result ID when complete. Possible statuses:

  • Execution in progress — still running; poll again

  • STATUS_COMPLETED, result_id: <id> — finished (check get_test_case_results for pass/fail)

  • STATUS_FAILED, result_id: <id> — execution infrastructure failure

Notes

  • Recommended polling interval: 5–10 seconds to avoid rate limiting.

  • The result_id returned here is the ID to pass to get_test_case_results, get_execution_step_details, and other telemetry tools.
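A minimal polling sketch built only on the status strings documented above; `fetch_status` is a hypothetical zero-argument callable that wraps the get_execution_status MCP call and returns its raw string:

```python
import re
import time

def parse_status(status: str):
    """Parse a get_execution_status response string.

    Returns (state, result_id); result_id is None while the run is in progress.
    Assumes the documented shapes, e.g. "STATUS_COMPLETED, result_id: 4321".
    """
    match = re.match(r"(STATUS_COMPLETED|STATUS_FAILED), result_id: (\d+)", status)
    if match:
        return match.group(1), int(match.group(2))
    return "IN_PROGRESS", None

def poll_until_done(fetch_status, interval_s: float = 5.0, timeout_s: float = 600.0):
    """Poll every interval_s seconds (5-10s recommended) until a terminal state."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state, result_id = parse_status(fetch_status())
        if state != "IN_PROGRESS":
            return state, result_id
        time.sleep(interval_s)
    raise TimeoutError("Execution did not reach a terminal state in time")
```

The returned result_id is then handed to get_test_case_results or get_execution_step_details.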

Related: execute_test_case, get_test_case_results


get_test_case_results

Returns the complete result object for a test execution, including overall pass/fail, step count, duration, and all evidence URLs.

Category: Execution & Results
Authentication required: Yes

Parameters

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| execution_id | No | string | Execution ID (from the execute_test_case return value or the portal URL) |
| result_id | No | integer | Result ID (from the get_execution_status return value) |

At least one of execution_id or result_id must be supplied.

Returns

JSON object with:

  • result_id — unique result identifier

  • status — PASSED or FAILED

  • total_steps — total step count

  • failed_steps — number of failed steps

  • duration_ms — total execution time

  • video_url — pre-signed S3 URL to the screen recording

  • started_at / completed_at — ISO 8601 timestamps

  • steps — array of step result summaries

Example
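A sketch of consuming the result object, using the field names from the Returns list above; the sample values are illustrative, not real output:

```python
def summarize_result(result: dict) -> str:
    """One-line summary of a get_test_case_results payload."""
    passed = result["total_steps"] - result["failed_steps"]
    return (
        f"Result {result['result_id']}: {result['status']}, "
        f"{passed}/{result['total_steps']} steps passed "
        f"in {result['duration_ms'] / 1000:.1f}s"
    )

sample = {
    "result_id": 4321,
    "status": "FAILED",
    "total_steps": 12,
    "failed_steps": 2,
    "duration_ms": 48000,
    "video_url": "https://example.com/recording.mp4",
    "started_at": "2025-01-15T10:30:00Z",
    "completed_at": "2025-01-15T10:30:48Z",
    "steps": [],
}

print(summarize_result(sample))  # Result 4321: FAILED, 10/12 steps passed in 48.0s
```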

Related: get_execution_step_details, get_test_step_results, get_root_cause


get_execution_step_details

Returns a human-readable step-by-step breakdown of a test execution with screenshots, failure reasons, and the overall verdict.

Category: Execution & Results
Authentication required: Yes

Parameters

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| result_id | Yes | integer | Result ID from get_execution_status or the portal URL |

Returns

JSON object with:

  • test_case_name — name of the test case

  • overall_status — PASSED or FAILED

  • total_steps — total step count

  • failed_steps — number that failed

  • screen_recording_url — video URL

  • steps — array of per-step objects, each with:

    • step_number (1-based)

    • action — the step description

    • status — PASSED or FAILED

    • screenshot_url — URL of the step screenshot

    • failure_reason — AI explanation of what went wrong (if failed)

Example
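A sketch of turning the step breakdown into a developer-readable report, using the field names above; the sample payload is illustrative:

```python
def render_step_report(details: dict) -> str:
    """Render a get_execution_step_details payload as a plain-text report."""
    lines = [f"{details['test_case_name']}: {details['overall_status']}"]
    for step in details["steps"]:
        lines.append(f"  {step['step_number']}. [{step['status']}] {step['action']}")
        if step["status"] == "FAILED" and step.get("failure_reason"):
            lines.append(f"     reason: {step['failure_reason']}")
    return "\n".join(lines)

sample = {
    "test_case_name": "Checkout flow",
    "overall_status": "FAILED",
    "total_steps": 2,
    "failed_steps": 1,
    "screen_recording_url": "https://example.com/recording.mp4",
    "steps": [
        {"step_number": 1, "action": "Open the cart page", "status": "PASSED",
         "screenshot_url": "https://example.com/step1.png", "failure_reason": None},
        {"step_number": 2, "action": "Click 'Pay now'", "status": "FAILED",
         "screenshot_url": "https://example.com/step2.png",
         "failure_reason": "Button was disabled"},
    ],
}

print(render_step_report(sample))
```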

Use case

This is the primary tool for a QA agent that wants to report test results to a developer. It provides all the evidence in a single structured call.

Related: get_test_case_results, get_network_logs, get_console_logs, get_root_cause


fix_and_apply

Orchestrates the full failure-to-fix pipeline in a single call: fetches execution results, runs AI root cause analysis, queries the source repository for the responsible code, and returns a structured fix suggestion.

Category: Execution & Results
Authentication required: Yes

Parameters

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| execution_id | Yes | string | ID of the failed execution |
| repo_url | Yes | string | URL of the source code repository to query (GitHub, GitLab, or Bitbucket) |

Returns

JSON object with:

  • execution_id — echoed back for reference

  • test_case_name — which test failed

  • failure_description — plain English description of the failure

  • root_cause_analysis — AI analysis of what caused it

  • code_context — relevant code snippets from the repository

  • next_steps — specific recommended actions (which file to edit, which line to change)

Use case

Call this after a test failure when you have access to the source repository. The tool automatically:

  1. Fetches the execution details

  2. Identifies the failed step and its screenshot

  3. Sends evidence to the root cause analysis engine

  4. Searches the repository for code related to the failed feature

  5. Returns a correlated analysis with actionable fix suggestions

Example
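A sketch of assembling the call parameters and checking that a response carries every documented field; the helper names are hypothetical, and only the parameter and field names come from the tables above:

```python
# Documented response fields for fix_and_apply (from the Returns list above).
EXPECTED_FIELDS = {
    "execution_id", "test_case_name", "failure_description",
    "root_cause_analysis", "code_context", "next_steps",
}

def build_fix_request(execution_id: str, repo_url: str) -> dict:
    """Assemble the fix_and_apply parameter object."""
    return {"execution_id": execution_id, "repo_url": repo_url}

def validate_fix_response(payload: dict) -> dict:
    """Verify a fix_and_apply response carries every documented field."""
    missing = EXPECTED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"fix_and_apply response is missing: {sorted(missing)}")
    return payload
```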

Related: get_root_cause, investigate_failure, get_execution_step_details
