# Execution & Results

{% hint style="info" %}
**Who is this for?** SDETs, developers, and DevOps engineers integrating ContextQA with AI coding assistants (Claude, Cursor) or CI/CD pipelines.
{% endhint %}

These tools form the core execution loop: trigger a run, poll until it finishes, and retrieve the full evidence package.

***

## `execute_test_case`

Triggers a single test case execution and returns a monitoring URL.

**Category:** Execution & Results • **Authentication required:** Yes

### Parameters

| Name            | Required | Type    | Description                                               |
| --------------- | -------- | ------- | --------------------------------------------------------- |
| `test_case_id`  | ✅        | integer | Numeric ID of the test case to run                        |
| `run_in_portal` | ❌        | boolean | Return a portal URL for live monitoring (default: `true`) |
| `persona_id`    | ❌        | string  | ID of a custom AI agent persona to use for this run       |
| `knowledge_id`  | ❌        | string  | ID of a knowledge base to inject into this run            |

### Returns

When `run_in_portal` is `true`: a portal URL string like `https://app.contextqa.com/td/cases/18688/steps`.

When `false`: a terminal command string for local execution.
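
For example, a hedged sketch of both modes (the `persona_id` and `knowledge_id` values are illustrative placeholders, not real IDs):

```python
# Portal run (default): returns a URL like https://app.contextqa.com/td/cases/18688/steps
portal_url = execute_test_case(test_case_id=18688)

# Local run with an optional persona and knowledge base attached:
# returns a terminal command string to run on your own machine
command = execute_test_case(
    test_case_id=18688,
    run_in_portal=False,
    persona_id="persona_123",   # illustrative placeholder
    knowledge_id="kb_456",      # illustrative placeholder
)
```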

### Execution pattern

```python
import re
import time

# 1. Trigger the run (returns a portal URL by default)
portal_url = execute_test_case(test_case_id=18688)

# 2. Poll until a terminal state is reported
while True:
    status = get_execution_status(test_case_id=18688, number_of_executions=1)
    if "STATUS_COMPLETED" in status or "STATUS_FAILED" in status:
        break
    time.sleep(5)  # 5-10 s between polls avoids rate limiting

# 3. Parse the result ID out of the status string and fetch the evidence
result_id = int(re.search(r"result_id:\s*(\d+)", status).group(1))
results = get_test_case_results(result_id=result_id)
```

### Related Tools

`get_execution_status` • `get_test_case_results` • `get_execution_step_details`

***

## `get_execution_status`

Polls for the current execution status of a test case. Call this repeatedly after `execute_test_case` until a terminal state is returned.

**Category:** Execution & Results • **Authentication required:** Yes

### Parameters

| Name                   | Required | Type    | Description                                                          |
| ---------------------- | -------- | ------- | -------------------------------------------------------------------- |
| `test_case_id`         | ✅        | integer | The test case whose most recent execution you want to check          |
| `number_of_executions` | ✅        | integer | Total number of executions triggered since last check; typically `1` |

### Returns

String containing the current status and result ID when complete. Possible statuses:

* `Execution in progress` — still running; poll again
* `STATUS_COMPLETED, result_id: <id>` — finished (check `get_test_case_results` for pass/fail)
* `STATUS_FAILED, result_id: <id>` — execution infrastructure failure

### Notes

* Recommended polling interval: 5–10 seconds to avoid rate limiting.
* The `result_id` returned here is the ID to pass to `get_test_case_results`, `get_execution_step_details`, and other telemetry tools.
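
A minimal polling helper along those lines (the wrapper function and timeout are our own additions; only the tool call and the status strings above come from this page):

```python
import re
import time

def wait_for_result(test_case_id: int, timeout_s: int = 600, interval_s: int = 10) -> int:
    """Poll get_execution_status until a terminal state, then return the result_id."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = get_execution_status(test_case_id=test_case_id, number_of_executions=1)
        if "STATUS_COMPLETED" in status or "STATUS_FAILED" in status:
            # e.g. "STATUS_COMPLETED, result_id: 47284"
            return int(re.search(r"result_id:\s*(\d+)", status).group(1))
        time.sleep(interval_s)  # within the recommended 5-10 s window
    raise TimeoutError(f"test case {test_case_id} did not finish within {timeout_s}s")
```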

### Related Tools

`execute_test_case` • `get_test_case_results`

***

## `get_test_case_results`

Returns the complete result object for a test execution, including overall pass/fail, step count, duration, and all evidence URLs.

**Category:** Execution & Results • **Authentication required:** Yes

### Parameters

| Name           | Required | Type    | Description                                                  |
| -------------- | -------- | ------- | ------------------------------------------------------------ |
| `execution_id` | ❌        | string  | Execution ID (from `execute_test_case` return or portal URL) |
| `result_id`    | ❌        | integer | Result ID (from `get_execution_status` return)               |

At least one of `execution_id` or `result_id` must be supplied.

### Returns

JSON object with:

* `result_id` — unique result identifier
* `status` — `PASSED` or `FAILED`
* `total_steps` — total step count
* `failed_steps` — number of failed steps
* `duration_ms` — total execution time in milliseconds
* `video_url` — pre-signed S3 URL to the screen recording
* `started_at` / `completed_at` — ISO 8601 timestamps
* `steps` — array of step result summaries
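
A sketch of consuming that object, assuming the tool returns parsed JSON as a Python dict with the fields listed above:

```python
results = get_test_case_results(result_id=47284)

if results["status"] == "FAILED":
    print(f"{results['failed_steps']} of {results['total_steps']} steps failed "
          f"in {results['duration_ms'] / 1000:.1f}s")
    print(f"Recording: {results['video_url']}")  # pre-signed S3 URL
else:
    print(f"Passed in {results['duration_ms'] / 1000:.1f}s")
```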

### Example

```json
{
  "result_id": 47284
}
```

### Related Tools

`get_execution_step_details` • `get_test_step_results` • `get_root_cause`

***

## `get_execution_step_details`

Returns a human-readable step-by-step breakdown of a test execution with screenshots, failure reasons, and the overall verdict.

**Category:** Execution & Results • **Authentication required:** Yes

### Parameters

| Name        | Required | Type    | Description                                             |
| ----------- | -------- | ------- | ------------------------------------------------------- |
| `result_id` | ✅        | integer | Result ID from `get_execution_status` or the portal URL |

### Returns

JSON object with:

* `test_case_name` — name of the test case
* `overall_status` — `PASSED` or `FAILED`
* `total_steps` — total step count
* `failed_steps` — number that failed
* `screen_recording_url` — video URL
* `steps` — array of per-step objects, each with:
  * `step_number` (1-based)
  * `action` — the step description
  * `status` — `PASSED` or `FAILED`
  * `screenshot_url` — URL of the step screenshot
  * `failure_reason` — AI explanation of what went wrong (if failed)

### Example

```json
{
  "result_id": 47284
}
```

### Use case

This is the primary tool for a QA agent that wants to report test results to a developer. It provides all the evidence in a single structured call.
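
A sketch of shaping that payload into a developer-facing report (the formatting is our own; field names match the Returns list above):

```python
details = get_execution_step_details(result_id=47284)

report = [
    f"{details['test_case_name']}: {details['overall_status']} "
    f"({details['failed_steps']}/{details['total_steps']} steps failed)",
    f"Recording: {details['screen_recording_url']}",
]
for step in details["steps"]:
    report.append(f"{step['step_number']}. [{step['status']}] {step['action']}")
    if step["status"] == "FAILED":
        report.append(f"   why: {step['failure_reason']}")
        report.append(f"   screenshot: {step['screenshot_url']}")

print("\n".join(report))
```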

### Related Tools

`get_test_case_results` • `get_network_logs` • `get_console_logs` • `get_root_cause`

***

## `fix_and_apply`

Orchestrates the full failure-to-fix pipeline in a single call: fetches execution results, runs AI root cause analysis, queries the source repository for the responsible code, and returns a structured fix suggestion.

**Category:** Execution & Results • **Authentication required:** Yes

### Parameters

| Name           | Required | Type   | Description                                                            |
| -------------- | -------- | ------ | ---------------------------------------------------------------------- |
| `execution_id` | ✅        | string | ID of the failed execution                                             |
| `repo_url`     | ✅        | string | URL of the source code repository to query (GitHub, GitLab, Bitbucket) |

### Returns

JSON object with:

* `execution_id` — echoed back for reference
* `test_case_name` — which test failed
* `failure_description` — plain English description of the failure
* `root_cause_analysis` — AI analysis of what caused it
* `code_context` — relevant code snippets from the repository
* `next_steps` — specific actions recommended (file to edit, line to change)

### Use case

Call this after a test failure when you have access to the source repository. The tool automatically:

1. Fetches the execution details
2. Identifies the failed step and its screenshot
3. Sends evidence to the root cause analysis engine
4. Searches the repository for code related to the failed feature
5. Returns a correlated analysis with actionable fix suggestions
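
A sketch of chaining this onto the execution loop (the execution ID and repository URL are illustrative and match the example below):

```python
REPO_URL = "https://github.com/myorg/myapp"   # illustrative
execution_id = "exec_abc123"                  # from the execute_test_case return / portal URL

results = get_test_case_results(execution_id=execution_id)
if results["status"] == "FAILED":
    fix = fix_and_apply(execution_id=execution_id, repo_url=REPO_URL)
    print(fix["root_cause_analysis"])
    print(fix["next_steps"])                  # e.g. file to edit, line to change
```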

### Example

```json
{
  "execution_id": "exec_abc123",
  "repo_url": "https://github.com/myorg/myapp"
}
```

### Related Tools

`get_root_cause` • `investigate_failure` • `get_execution_step_details`
