Bug, Defect & Advanced Testing

MCP tool reference for defect management, performance and security testing, code export, and AI-powered impact analysis in ContextQA.


Who is this for? SDETs, developers, and DevOps engineers integrating ContextQA with AI coding assistants (Claude, Cursor) or CI/CD pipelines.

This reference covers three capability groups: defect lifecycle tools that bridge test failures to your ALM, advanced testing tools for performance and security, and AI analysis tools that surface test impact and code context.


Bug & Defect


create_defect_ticket

Compiles failure evidence from an execution and pushes a structured defect ticket to the connected ALM (Jira or Azure DevOps); use this immediately after a failed execution to file a bug without leaving your AI assistant.

Category: Bug & Defect | Authentication required: Yes

Parameters

- execution_id (string): ID of the failed test execution to report.
- project_id (string): ALM project identifier to create the ticket in.

Returns

JSON with the created ticket's id, url, title, and the ALM system it was filed against.

Example
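A minimal MCP tools/call request for this tool might look like the following sketch; the request envelope follows the standard MCP tools/call shape, and the execution and project IDs are illustrative placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_defect_ticket",
    "arguments": {
      "execution_id": "exec-4821",
      "project_id": "PROJ-123"
    }
  }
}
```

In practice your AI assistant issues this call for you; a prompt such as "File a defect for execution exec-4821 in project PROJ-123" should produce an equivalent request.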

Related tools: get_test_case_results, investigate_failure, get_root_cause


get_auto_healing_suggestions

Retrieves AI-generated proposals for repairing broken locators detected in a failed execution; review suggestions before applying them with approve_auto_healing.

Category: Bug & Defect | Authentication required: Yes

Parameters

- execution_id (string): ID of the execution that produced broken locators.

Returns

JSON array of healing proposals. Each item includes healing_id, the original broken locator, the proposed replacement locator, a confidence score, and the affected test step.

Example
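An illustrative tools/call request (the execution ID is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_auto_healing_suggestions",
    "arguments": {
      "execution_id": "exec-4821"
    }
  }
}
```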

Related tools: approve_auto_healing, get_test_step_results, investigate_failure


approve_auto_healing

Applies a specific auto-healing patch to update the broken locator in the test case definition; always review the suggestion from get_auto_healing_suggestions first.

Category: Bug & Defect | Authentication required: Yes

Parameters

- healing_id (string): ID of the healing proposal to apply. Obtained from get_auto_healing_suggestions.
- execution_id (string): Execution context for the heal. Helps the engine track which run triggered the repair.

Returns

JSON confirmation with status, the updated locator value, and the test step that was modified.

Example
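A sketch of the tools/call request, assuming a healing_id returned by a prior get_auto_healing_suggestions call (both IDs are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "approve_auto_healing",
    "arguments": {
      "healing_id": "heal-17",
      "execution_id": "exec-4821"
    }
  }
}
```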

Related tools: get_auto_healing_suggestions, get_test_case_steps, update_test_case_step


Advanced Testing


execute_performance_test

Runs a load test against the flows covered by a test case using a configurable virtual user count, duration, and ramp-up period.

Category: Advanced Testing | Authentication required: Yes

Parameters

- test_case_id (integer): Numeric ID of the test case to use as the load scenario.
- virtual_users (integer): Number of concurrent virtual users to simulate.
- duration (integer): Total test duration in seconds.
- ramp_up (integer): Time in seconds to ramp up to the full virtual user count.

Returns

JSON with a performance execution ID, configuration echo, and a link to the live results dashboard.

Example
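An illustrative request that ramps to 50 virtual users over 60 seconds and holds the load for 5 minutes (the test case ID is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute_performance_test",
    "arguments": {
      "test_case_id": 1024,
      "virtual_users": 50,
      "duration": 300,
      "ramp_up": 60
    }
  }
}
```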

Related tools: get_test_cases, get_execution_status, execute_test_case


execute_security_dast_scan

Launches a DAST (Dynamic Application Security Testing) scan against the application exercised by a test case, using the specified scan profile to control scan depth and aggression.

Category: Advanced Testing | Authentication required: Yes

Parameters

- test_case_id (integer): Numeric ID of the test case whose flows define the scan target.
- scan_profile (string): Scan intensity profile. Accepted values: "standard", "aggressive". Use "standard" for CI pipelines; "aggressive" for dedicated security runs.

Returns

JSON with a DAST scan execution ID, scan profile confirmation, and a link to the vulnerability report once the scan completes.

Example
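A sketch of a CI-friendly request using the "standard" profile (test case ID illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute_security_dast_scan",
    "arguments": {
      "test_case_id": 1024,
      "scan_profile": "standard"
    }
  }
}
```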

Related tools: get_test_cases, get_execution_status, execute_test_case


export_test_case_as_code

Exports a test case as runnable code in a specified framework and language, writing the output to a local path.

Category: Advanced Testing | Authentication required: Yes

Parameters

- test_case_id (integer): Numeric ID of the test case to export.
- framework_type (string): Target framework, e.g. "playwright", "cypress", "selenium".
- language (string): Output language, e.g. "typescript", "javascript", "python".
- destination_path (string): Absolute local path where the generated file(s) should be written.

Returns

JSON confirmation with the destination_path, a download_url to retrieve the file from ContextQA, and the list of generated files.

Example
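An illustrative request that exports a test case as a Playwright/TypeScript spec (ID and path are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "export_test_case_as_code",
    "arguments": {
      "test_case_id": 1024,
      "framework_type": "playwright",
      "language": "typescript",
      "destination_path": "/home/user/exports"
    }
  }
}
```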

Related tools: export_to_playwright, get_test_cases, migrate_repo_to_contextqa


AI-Powered Analysis


get_root_cause

Sends failed step data, screenshots, and the session recording to the ContextQA root cause engine and returns a structured AI root cause analysis; use this as the first step in any failure investigation.

Category: AI Analysis | Authentication required: Yes

Parameters

- execution_id (string): ID of the failed execution to analyse.

Returns

JSON with rootCause (a plain-English summary), affectedStep, errorType, evidenceLinks (screenshots, recording), and suggested remediation.

Example
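A minimal request sketch (execution ID illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_root_cause",
    "arguments": {
      "execution_id": "exec-4821"
    }
  }
}
```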

Related tools: investigate_failure, get_ai_reasoning, get_test_step_results, create_defect_ticket


query_repository

Queries a connected source code repository with a natural-language question and returns the most relevant code context and file references; use this to understand how the application under test is implemented before writing or debugging tests.

Category: AI Analysis | Authentication required: Yes

Parameters

- question (string): Natural-language question about the codebase.
- repo_url (string): URL of the Git repository to query (HTTPS or SSH).
- limit_files (integer): Maximum number of file references to return. Defaults to 10.

Returns

JSON with an answer string, a files array of relevant file paths with matched snippets, and a confidence score.

Example
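An illustrative request; the question, repository URL, and file limit are placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_repository",
    "arguments": {
      "question": "Where is the login form submit handler implemented?",
      "repo_url": "https://github.com/acme/webapp.git",
      "limit_files": 5
    }
  }
}
```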

Related tools: analyze_test_impact, generate_tests_from_code_change, analyze_test_repo


analyze_test_impact

Determines which existing test cases are affected by a set of changed files (and optionally changed functions), and returns a prioritised list of tests to rerun; use this in CI after a PR diff is known.

Category: AI Analysis | Authentication required: Yes

Parameters

- changed_files (array of strings): List of file paths that changed in the current diff, relative to repo root.
- changed_functions (array of strings): List of function or method names that changed. Improves precision of impact analysis.

Returns

JSON with affectedTestCases (array of test case objects with IDs and names) and recommendedRerunIds (array of integers ready to pass to execute_test_case).

Example
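A sketch of a request driven by a PR diff; the file and function names are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_test_impact",
    "arguments": {
      "changed_files": ["src/auth/login.ts", "src/auth/session.ts"],
      "changed_functions": ["handleSubmit", "refreshSession"]
    }
  }
}
```

In CI, the changed_files list can be produced from the diff itself (e.g. git diff --name-only against the base branch) and the returned recommendedRerunIds passed straight to execute_test_case.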

Related tools: query_repository, generate_tests_from_code_change, execute_test_case, get_test_cases

