This reference covers three capability groups: defect lifecycle tools that bridge test failures to your ALM, advanced testing tools for performance and security, and AI analysis tools that surface test impact and code context.
Bug & Defect
create_defect_ticket
Compiles failure evidence from an execution and pushes a structured defect ticket to the connected ALM (Jira or Azure DevOps); use this immediately after a failed execution to file a bug without leaving your AI assistant.
Category: Bug & Defect | Authentication required: Yes
Parameters:
- ID of the failed test execution to report.
- ALM project identifier to create the ticket in.
Returns: JSON with the created ticket's id, url, title, and the ALM system it was filed against.
Related tools: get_test_case_results, investigate_failure, get_root_cause
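A minimal sketch of assembling a call to this tool. The argument names (`execution_id`, `project_key`) are illustrative assumptions, since the reference preserves only the parameter descriptions, not their actual names:

```python
import json

def build_create_defect_ticket_call(execution_id: int, project_key: str) -> str:
    """Build an MCP-style tool-call payload for create_defect_ticket.

    Argument names here are hypothetical; only the descriptions are documented.
    """
    payload = {
        "tool": "create_defect_ticket",
        "arguments": {
            "execution_id": execution_id,  # ID of the failed test execution to report
            "project_key": project_key,    # ALM project identifier to create the ticket in
        },
    }
    return json.dumps(payload)

print(build_create_defect_ticket_call(4821, "QA"))
```

Calling this immediately after a failed execution keeps the failure evidence and the filed ticket tied to the same execution ID.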
get_auto_healing_suggestions
Retrieves AI-generated proposals for repairing broken locators detected in a failed execution; review suggestions before applying them with approve_auto_healing.
Category: Bug & Defect | Authentication required: Yes
Parameters:
- ID of the execution that produced broken locators.
Returns: JSON array of healing proposals. Each item includes healing_id, the original broken locator, the proposed replacement locator, a confidence score, and the affected test step.
Related tools: approve_auto_healing, get_test_step_results, investigate_failure
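One way to review the returned array before approving anything is to shortlist high-confidence proposals. The field names (`healing_id`, `confidence`) follow the return shape described above; the 0.8 threshold is an arbitrary example, not a product default:

```python
def shortlist_healing_suggestions(suggestions, min_confidence=0.8):
    """Keep only proposals at or above the threshold, highest confidence first."""
    return sorted(
        (s for s in suggestions if s["confidence"] >= min_confidence),
        key=lambda s: s["confidence"],
        reverse=True,
    )

proposals = [
    {"healing_id": "h-1", "confidence": 0.95},
    {"healing_id": "h-2", "confidence": 0.55},
    {"healing_id": "h-3", "confidence": 0.88},
]
print([p["healing_id"] for p in shortlist_healing_suggestions(proposals)])
# → ['h-1', 'h-3']
```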
approve_auto_healing
Applies a specific auto-healing patch to update the broken locator in the test case definition; always review the suggestion from get_auto_healing_suggestions first.
Category: Bug & Defect | Authentication required: Yes
Parameters:
- ID of the healing proposal to apply. Obtained from get_auto_healing_suggestions.
- Execution context for the heal. Helps the engine track which run triggered the repair.
Returns: JSON confirmation with status, the updated locator value, and the test step that was modified.
Related tools: get_auto_healing_suggestions, get_test_case_steps, update_test_case_step
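A guarded approval sketch: only build the approve call when a proposal clears a reviewer-chosen confidence bar and actually changes the locator. Argument and field names (`healing_id`, `execution_id`, `proposed_locator`, `original_locator`) are assumptions based on the shapes described above:

```python
def approval_payload(proposal, execution_id, min_confidence=0.8):
    """Return an approve_auto_healing call dict, or None if the proposal should not be applied."""
    if proposal["confidence"] < min_confidence:
        return None  # leave low-confidence proposals for manual review
    if proposal["proposed_locator"] == proposal["original_locator"]:
        return None  # replacement is identical, nothing to apply
    return {
        "tool": "approve_auto_healing",
        "arguments": {
            "healing_id": proposal["healing_id"],
            "execution_id": execution_id,  # execution context that triggered the repair
        },
    }

proposal = {
    "healing_id": "h-1",
    "confidence": 0.95,
    "original_locator": "#submit-btn",
    "proposed_locator": "button[data-test='submit']",
}
print(approval_payload(proposal, execution_id=4821))
```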
Advanced Testing
Runs a load test against the flows covered by a test case using a configurable virtual user count, duration, and ramp-up period.
Category: Advanced Testing | Authentication required: Yes
Parameters:
- Numeric ID of the test case to use as the load scenario.
- Number of concurrent virtual users to simulate.
- Total test duration in seconds.
- Time in seconds to ramp up to the full virtual user count.
Returns: JSON with a performance execution ID, a configuration echo, and a link to the live results dashboard.
Related tools: get_test_cases, get_execution_status, execute_test_case
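A back-of-the-envelope illustration of how the virtual-user and ramp-up parameters interact, assuming a linear ramp from zero to the full user count over the ramp-up period. This only illustrates the parameter semantics; the actual ramp shape is determined by the load engine:

```python
def ramp_checkpoints(virtual_users, ramp_up_seconds, steps=5):
    """Sample (elapsed_seconds, active_users) at a few points along a linear ramp."""
    interval = ramp_up_seconds / steps
    return [
        (round(interval * (i + 1)), round(virtual_users * (i + 1) / steps))
        for i in range(steps)
    ]

# 100 virtual users ramped up over 60 seconds:
print(ramp_checkpoints(100, 60))
# → [(12, 20), (24, 40), (36, 60), (48, 80), (60, 100)]
```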
execute_security_dast_scan
Launches a DAST (Dynamic Application Security Testing) scan against the application exercised by a test case, using the specified scan profile to control scan depth and aggression.
Category: Advanced Testing | Authentication required: Yes
Parameters:
- Numeric ID of the test case whose flows define the scan target.
- Scan intensity profile. Accepted values: "standard", "aggressive". Use "standard" for CI pipelines; "aggressive" for dedicated security runs.
Returns: JSON with a DAST scan execution ID, scan profile confirmation, and a link to the vulnerability report once the scan completes.
Related tools: get_test_cases, get_execution_status, execute_test_case
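A small sketch of selecting and validating the scan profile. The two accepted values come from the reference above, and the CI heuristic mirrors its guidance; the function name is illustrative:

```python
VALID_PROFILES = {"standard", "aggressive"}  # the two documented values

def choose_scan_profile(running_in_ci: bool) -> str:
    """Pick "standard" for CI pipelines, "aggressive" for dedicated security runs."""
    profile = "standard" if running_in_ci else "aggressive"
    if profile not in VALID_PROFILES:
        raise ValueError(f"unsupported scan profile: {profile}")
    return profile

print(choose_scan_profile(running_in_ci=True))
# → standard
```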
export_test_case_as_code
Exports one or more test cases as runnable code in a specified framework and language, writing the output to a local path.
Category: Advanced Testing | Authentication required: Yes
Parameters:
- Numeric ID of the test case to export.
- Target framework, e.g. "playwright", "cypress", "selenium".
- Output language, e.g. "typescript", "javascript", "python".
- Absolute local path where the generated file(s) should be written.
Returns: JSON confirmation with the destination_path, a download_url to retrieve the file from ContextQA, and the list of generated files.
Related tools: export_to_playwright, get_test_cases, migrate_repo_to_contextqa
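A sketch of sanity-checking the export request before sending it, enforcing the absolute-path requirement above. The framework and language examples come from the reference; the argument names are assumptions:

```python
import os.path

def build_export_call(test_case_id, framework, language, destination_path):
    """Build an export_test_case_as_code call dict; reject relative output paths."""
    if not os.path.isabs(destination_path):
        raise ValueError("destination_path must be an absolute local path")
    return {
        "tool": "export_test_case_as_code",
        "arguments": {
            "test_case_id": test_case_id,
            "framework": framework,          # e.g. "playwright", "cypress", "selenium"
            "language": language,            # e.g. "typescript", "javascript", "python"
            "destination_path": destination_path,
        },
    }

call = build_export_call(42, "playwright", "typescript", "/tmp/exports")
print(call["arguments"]["framework"])
# → playwright
```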
AI-Powered Analysis
get_root_cause
Sends failed step data, screenshots, and the session recording to the ContextQA root cause engine and returns a structured AI root cause analysis; use this as the first step in any failure investigation.
Category: AI Analysis | Authentication required: Yes
Parameters:
- ID of the failed execution to analyze.
Returns: JSON with rootCause (a plain-English summary), affectedStep, errorType, evidenceLinks (screenshots, recording), and suggested remediation.
Related tools: investigate_failure, get_ai_reasoning, get_test_step_results, create_defect_ticket
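A triage sketch that routes the returned analysis to a follow-up tool. The field names (`errorType`, `rootCause`) follow the return shape above, but the `errorType` values checked here are hypothetical examples, not a documented enumeration:

```python
def next_tool(root_cause_result):
    """Pick a follow-up tool from a root-cause analysis result (illustrative routing)."""
    if root_cause_result.get("errorType") == "broken_locator":
        return "get_auto_healing_suggestions"  # locator breakage may be self-healable
    return "create_defect_ticket"              # anything else is likely a real defect

print(next_tool({"errorType": "broken_locator", "rootCause": "Selector changed in latest build"}))
# → get_auto_healing_suggestions
```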
query_repository
Queries a connected source code repository with a natural-language question and returns the most relevant code context and file references; use this to understand how the application under test is implemented before writing or debugging tests.
Category: AI Analysis | Authentication required: Yes
Parameters:
- Natural-language question about the codebase.
- URL of the Git repository to query (HTTPS or SSH).
- Maximum number of file references to return. Defaults to 10.
Returns: JSON with an answer string, a files array of relevant file paths with matched snippets, and a confidence score.
Related tools: analyze_test_impact, generate_tests_from_code_change, analyze_test_repo
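A sketch of a call that exercises the documented default of 10 file references. The argument names (`question`, `repo_url`, `max_files`) and the example repository URL are assumptions:

```python
def build_query_call(question, repo_url, max_files=10):
    """Build a query_repository call dict; max_files mirrors the documented default of 10."""
    return {
        "tool": "query_repository",
        "arguments": {
            "question": question,
            "repo_url": repo_url,   # HTTPS or SSH Git URL
            "max_files": max_files,
        },
    }

call = build_query_call("Where is the checkout total computed?",
                        "https://example.com/org/shop.git")
print(call["arguments"]["max_files"])
# → 10
```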
analyze_test_impact
Determines which existing test cases are affected by a set of changed files (and optionally changed functions) and returns a prioritized list of tests to rerun; use this in CI once the PR diff is known.
Category: AI Analysis | Authentication required: Yes
Parameters:
- List of file paths that changed in the current diff, relative to the repo root.
- List of function or method names that changed. Improves the precision of impact analysis.
Returns: JSON with affectedTestCases (an array of test case objects with IDs and names) and recommendedRerunIds (an array of integers ready to pass to execute_test_case).
Related tools: query_repository, generate_tests_from_code_change, execute_test_case, get_test_cases
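A sketch of consuming the result in CI: deduplicate recommendedRerunIds while preserving their prioritized order, so each affected test case is executed exactly once. The field name follows the return shape above:

```python
def rerun_plan(impact_result):
    """Return recommendedRerunIds with duplicates removed, original order kept."""
    seen, plan = set(), []
    for test_case_id in impact_result["recommendedRerunIds"]:
        if test_case_id not in seen:
            seen.add(test_case_id)
            plan.append(test_case_id)
    return plan

print(rerun_plan({"recommendedRerunIds": [7, 3, 7, 12, 3]}))
# → [7, 3, 12]
```

Each surviving ID can then be passed straight to execute_test_case.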