Telemetry & Custom Agents

MCP tool reference for execution telemetry, AI reasoning inspection, custom agent personas, and knowledge bases in ContextQA.


Who is this for? SDETs, developers, and DevOps engineers integrating ContextQA with AI coding assistants (Claude, Cursor) or CI/CD pipelines.

This reference covers two groups of tools: telemetry tools that surface low-level execution evidence (step results, network traffic, console logs, Playwright traces, AI reasoning), and custom agent and knowledge base tools that let you configure AI behaviour for test execution.


Telemetry


get_test_step_results

Retrieves raw per-step execution data for a given result, including the action performed, pass/fail status, screenshot URL, executed result detail, and AI reasoning summary.

Category: Telemetry | Authentication required: Yes

Parameters

| Name | Required | Type | Description |
|------|----------|------|-------------|
| result_id | Yes | integer | Numeric ID of the test case result to retrieve steps for. Obtain from get_test_case_results. |

Returns

JSON array of step records. Each record includes stepIndex, action, status (PASSED/FAILED/SKIPPED), screenshot_url, executedResult, and a brief aiReasoning field.

Example

```json
{
  "result_id": 1042
}
```

See also: get_test_case_results, get_ai_reasoning, get_network_logs, get_console_logs, investigate_failure


get_network_logs

Returns the HAR-format network log capturing all HTTP requests and responses made during a test execution result; use this to debug API calls, check response codes, and inspect payload content.

Category: Telemetry | Authentication required: Yes

Parameters

| Name | Required | Type | Description |
|------|----------|------|-------------|
| result_id | Yes | integer | Numeric ID of the test case result whose network traffic to retrieve. |

Returns

HAR-format JSON (log.entries array) with each entry containing request (method, URL, headers, body), response (status, headers, body), and timing data.

Example
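An illustrative request payload, following the same shape as the other telemetry tools (the result_id value is a placeholder):

```json
{
  "result_id": 1042
}
```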

See also: get_test_step_results, get_console_logs, get_trace_url, investigate_failure


get_console_logs

Returns all browser console entries recorded during a test execution result, including log, warn, error, and uncaught exception messages.

Category: Telemetry | Authentication required: Yes

Parameters

| Name | Required | Type | Description |
|------|----------|------|-------------|
| result_id | Yes | integer | Numeric ID of the test case result to retrieve console entries for. |

Returns

JSON array of console entries. Each entry includes level (log, warn, error, exception), message, source, and timestamp.

Example
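An illustrative request payload (the result_id value is a placeholder):

```json
{
  "result_id": 1042
}
```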

See also: get_network_logs, get_test_step_results, get_trace_url, investigate_failure


get_trace_url

Returns the Playwright Trace Viewer URL for a test execution result, enabling step-by-step visual replay with DOM snapshots and network activity.

Category: Telemetry | Authentication required: Yes

Note: Access to trace files requires the Trace Viewer add-on licence. If your workspace does not have this licence, the call returns a 403 error.

Parameters

| Name | Required | Type | Description |
|------|----------|------|-------------|
| result_id | Yes | integer | Numeric ID of the test case result to retrieve the trace for. |

Returns

JSON with trace_url (a direct link to the Playwright Trace Viewer) and expires_at (ISO 8601 timestamp indicating when the URL expires).

Example
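An illustrative request payload (the result_id value is a placeholder):

```json
{
  "result_id": 1042
}
```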

See also: get_test_step_results, get_network_logs, investigate_failure


get_ai_reasoning

Returns detailed per-step AI reasoning data for an execution result, including confidence scores, the locator strategy chosen, DOM similarity analysis, and any fallback decisions made by the AI agent.

Category: Telemetry | Authentication required: Yes

Parameters

| Name | Required | Type | Description |
|------|----------|------|-------------|
| result_id | Yes | integer | Numeric ID of the test case result to inspect AI reasoning for. |

Returns

JSON array of reasoning records, one per step. Each record includes stepIndex, confidence (0–1 float), locatorStrategy, domSimilarityScore, fallbackUsed (boolean), and reasoningNarrative.

Example
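An illustrative request payload (the result_id value is a placeholder):

```json
{
  "result_id": 1042
}
```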

See also: get_test_step_results, get_ai_insights, investigate_failure


get_ai_insights

Returns aggregated AI insights and analytics patterns across executions; optionally scoped to a specific result or workspace version to focus analysis.

Category: Telemetry | Authentication required: Yes

Parameters

| Name | Required | Type | Description |
|------|----------|------|-------------|
| result_id | No | integer | Scope insights to a specific test result. |
| version_id | No | string | Scope insights to a specific workspace version. |

Returns

JSON with aggregated metrics including flakiness_score, top_failure_categories, healing_frequency, and coverage_trend data points.

Example
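Both parameters are optional. An illustrative call scoped to a workspace version (the version_id value is a placeholder):

```json
{
  "version_id": "v3"
}
```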

See also: get_ai_reasoning, analyze_coverage_gaps, get_test_case_results


Custom Agents & Knowledge Base


list_custom_agents

Returns all custom agent personas defined in the workspace; use agent IDs as the persona_id parameter when executing test cases or test plans.

Category: Custom Agents | Authentication required: Yes

Parameters

None.

Returns

JSON array of agent persona objects. Each object includes id, name, description (the system prompt), and createdAt.

Example
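The tool takes no parameters, so the request payload is empty:

```json
{}
```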

See also: create_custom_agent, execute_test_case, execute_test_plan


create_custom_agent

Creates a new custom AI agent persona with a name and a system prompt that defines its behaviour and knowledge scope during test execution.

Category: Custom Agents | Authentication required: Yes

Parameters

| Name | Required | Type | Description |
|------|----------|------|-------------|
| name | Yes | string | Display name for the agent persona. |
| description | Yes | string | System prompt defining the agent's behaviour. This plain-English instruction set guides how the AI navigates and interacts with the application during execution. |

Returns

JSON of the created agent persona including its assigned id, which can then be used as persona_id in execution calls.

Example
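An illustrative request payload; the name and description values are placeholders, not prescribed conventions:

```json
{
  "name": "Cautious Checkout Tester",
  "description": "You are a careful tester. Always wait for loading spinners to disappear before interacting, and prefer visible button labels over icons when choosing elements."
}
```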

See also: list_custom_agents, execute_test_case, execute_test_plan


list_knowledge_bases

Returns all knowledge bases defined in the workspace; use knowledge base IDs as the knowledge_id parameter when executing test cases or test plans to give the AI agent additional domain context.

Category: Knowledge Base | Authentication required: Yes

Parameters

None.

Returns

JSON array of knowledge base objects. Each object includes id, title, prompt, and createdAt.

Example
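The tool takes no parameters, so the request payload is empty:

```json
{}
```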

See also: create_knowledge_base, execute_test_case, execute_test_plan


create_knowledge_base

Creates a new knowledge base containing plain-English instructions that are injected into the AI agent's context during test execution to provide domain-specific guidance.

Category: Knowledge Base | Authentication required: Yes

Parameters

| Name | Required | Type | Description |
|------|----------|------|-------------|
| title | Yes | string | Display name for the knowledge base. |
| prompt | Yes | string | Plain-English instructions for the AI agent. Describe domain rules, business logic, expected behaviours, or navigation patterns the agent should be aware of. |

Returns

JSON of the created knowledge base including its assigned id, which can then be used as knowledge_id in execution calls.

Example
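An illustrative request payload; the title and prompt values are placeholders describing a hypothetical domain:

```json
{
  "title": "Checkout Domain Rules",
  "prompt": "Orders under $10 do not qualify for free shipping. The coupon field only appears after an item is added to the cart."
}
```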

See also: list_knowledge_bases, execute_test_case, execute_test_plan, create_custom_agent

