# Running Tests

{% hint style="info" %}
**Who is this for?** SDETs, QA managers, and engineering managers who need to run test cases, suites, and plans — manually, on a schedule, or triggered from a CI/CD pipeline.
{% endhint %}

ContextQA supports three granularities of test execution: a single test case, a test suite, and a full test plan. Each produces an execution record with the same evidence set — screenshots, video, network logs, console logs, and root cause analysis. Executions can be triggered manually from the portal, from CI/CD pipelines via the API, or from AI coding assistants via the MCP server.

## Prerequisites

* You have at least one test case created and saved.
* You have an environment configured with a valid base URL.
* For test plan execution: you have a test plan configured with at least one suite and an environment selected.

***

## Running a Single Test Case

Running a single test case is the primary feedback loop during test authoring. Use it to verify a new test case immediately after creation, or to investigate a failing step in isolation.

### Steps

1. Open the test case from **Test Development → Test Cases**.
2. In the test case editor, click the **Run** button (▶) in the top toolbar.
3. A dialog appears asking you to confirm the execution environment. Select the environment from the dropdown and click **Run**.
4. The test is queued immediately. A banner appears at the top of the editor showing the execution status.
5. Click **View Live Execution** (or the execution ID link in the banner) to open the live execution viewer.

The test case executes step by step. Each step shows a real-time pass/fail indicator as it completes.

***

## Running a Test Suite

Running a suite executes all test cases within it as a batch. This is the standard way to run a group of related tests during development or before a deployment.

### Steps

1. Navigate to **Test Development → Test Suites**.
2. Click on the suite name to open it.
3. Click the **Run Suite** button in the suite header.
4. Select the execution environment from the dialog.
5. Click **Run**.

The suite execution begins and all test cases in the suite are queued. Execution order depends on the suite's execution mode:

* **Sequential**: test cases execute one at a time in listed order.
* **Parallel**: test cases execute simultaneously in separate browser instances.

A suite execution record appears in **Execution History** with an aggregate pass/fail status and a breakdown by test case.
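The aggregate status can be thought of as a simple fold over the per-test-case results. The sketch below is illustrative only (the function name and status strings mirror the report's PASSED / FAILED / PARTIALLY FAILED labels, not an actual ContextQA API):

```python
def aggregate_status(case_results):
    """Derive a suite-level status from per-test-case statuses.

    Illustrative sketch: mirrors the statuses shown in the execution
    report, not the platform's actual aggregation logic.
    """
    statuses = set(case_results)
    if statuses == {"PASSED"}:
        return "PASSED"
    if statuses == {"FAILED"}:
        return "FAILED"
    return "PARTIALLY FAILED"

print(aggregate_status(["PASSED", "PASSED"]))            # PASSED
print(aggregate_status(["PASSED", "FAILED", "PASSED"]))  # PARTIALLY FAILED
```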

***

## Running a Test Plan

A test plan execution is the most complete form of test execution. It runs multiple suites against configured browsers and an environment, and is the entry point for CI/CD automation and scheduled runs.

### Steps

1. Navigate to **Test Development → Test Plans**.
2. Click on the test plan name to open it.
3. Click the **Execute** button.
4. A confirmation dialog shows the plan configuration:
   * Suites included
   * Target browsers or devices
   * Selected environment
5. Click **Confirm Execution**.

The plan begins executing. All configured suites start based on the plan's parallelism settings. Results are grouped by suite and test case in the execution report.
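The suite-and-test-case grouping in the report can be sketched as a simple group-by over flat result rows. The row shape below is a hypothetical example, not the actual report payload:

```python
from collections import defaultdict

# Hypothetical flat result rows: (suite, test case, status).
rows = [
    ("Smoke", "Login", "PASSED"),
    ("Smoke", "Signup", "FAILED"),
    ("Checkout", "Add to cart", "PASSED"),
]

# Group results by suite, as the execution report does.
by_suite = defaultdict(list)
for suite, case, status in rows:
    by_suite[suite].append((case, status))

for suite, cases in by_suite.items():
    failed = sum(1 for _, s in cases if s == "FAILED")
    print(f"{suite}: {len(cases)} cases, {failed} failed")
```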

***

## Live Execution View

When any test executes, clicking the execution link opens the live execution viewer. The viewer updates in real time via a WebSocket connection.

**Layout:**

```
┌─────────────────────────┬─────────────────────────────────────────┐
│   Step List             │   Live Browser Screenshot               │
│                         │                                         │
│ ✓ Step 1: Navigate      │   [Current browser state shown here]   │
│ ✓ Step 2: Type email    │                                         │
│ ✓ Step 3: Type password │                                         │
│ ▶ Step 4: Click Sign In │                                         │
│   Step 5: Verify dash   │                                         │
│                         │                                         │
├─────────────────────────┴─────────────────────────────────────────┤
│   Network Log: GET /api/session 200 OK   POST /api/auth 200 OK    │
└───────────────────────────────────────────────────────────────────┘
```

* **Left panel** — the full step list with real-time status icons:
  * ✓ Green — step passed
  * ✗ Red — step failed
  * ▶ Blue — step currently executing
  * ○ Grey — step not yet reached
* **Right panel** — a live screenshot of the browser at the current step. The screenshot refreshes after each step completes.
* **Bottom panel** — the network request log, showing each HTTP request as it is made during execution.

The live view remains open after execution completes, allowing you to scroll through screenshots and network entries without leaving the view.
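The left panel's icon legend can be expressed as a small mapping. The state names below are assumptions for illustration; only the icons come from the view itself:

```python
# Map step states to the live-view status icons shown above.
# State names ("RUNNING", "PENDING", ...) are illustrative assumptions.
ICONS = {
    "PASSED": "✓",
    "FAILED": "✗",
    "RUNNING": "▶",
    "PENDING": "○",
}

def render_step(index, description, state):
    """Render one line of the step list, as in the live view's left panel."""
    return f"{ICONS[state]} Step {index}: {description}"

print(render_step(4, "Click Sign In", "RUNNING"))  # ▶ Step 4: Click Sign In
```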

***

## Viewing Results After Execution

When execution completes, click **View Detailed Report** from the execution summary banner or from the Execution History list.

The detailed report contains:

### Execution Summary

* Overall status: PASSED / FAILED / PARTIALLY FAILED
* Duration: total execution time
* Browser and version
* Environment used
* Number of steps: total, passed, failed, skipped

### Step-by-Step Breakdown

Each step is listed with:

* Pass/fail status
* Step description
* Screenshot captured at that step (click to view full size)
* Duration for that step
* Self-healing indicator (if the step was auto-healed)
* For failed steps: the AI-generated root cause analysis

### Video Recording

A full MP4 recording of the browser session is embedded in the report. Use the video to see exactly what happened in context — especially useful for failures that are hard to diagnose from static screenshots alone.

### Network Log

The complete HAR log is accessible from the **Network** tab in the report. Each request shows method, URL, status code, response time, and the full request/response body. Use the network log to:

* Identify 4xx or 5xx responses that caused UI failures
* Verify that the correct API endpoints were called with the correct payloads
* Diagnose authentication token expiry
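Because the log is standard HAR (JSON), you can scan it for failing requests with a few lines of Python. The sketch below uses a minimal inline HAR sample; in practice you would load the file downloaded from the **Network** tab:

```python
import json

# Minimal inline HAR 1.2 sample; replace with json.load(open("run.har"))
# for a HAR file downloaded from the report's Network tab.
har = json.loads("""{
  "log": {"entries": [
    {"request": {"method": "GET", "url": "/api/session"},
     "response": {"status": 200}},
    {"request": {"method": "POST", "url": "/api/orders"},
     "response": {"status": 500}}
  ]}
}""")

# Collect every 4xx/5xx response: these are the usual suspects behind UI failures.
failures = [
    (e["request"]["method"], e["request"]["url"], e["response"]["status"])
    for e in har["log"]["entries"]
    if e["response"]["status"] >= 400
]
for method, url, status in failures:
    print(f"{method} {url} -> {status}")
```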

### Console Log

The **Console** tab shows all browser console entries captured during execution, timestamped and correlated with the active step. Look for `console.error` entries to find JavaScript exceptions that may explain unexpected UI behavior.
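Filtering the captured entries down to errors is a one-liner once they are in hand. The entry shape below (field names and values) is a hypothetical example, not the actual export format:

```python
# Hypothetical console entries as captured in the Console tab;
# the field names are assumptions for illustration.
entries = [
    {"level": "info",  "step": 3, "message": "auth ok"},
    {"level": "error", "step": 4, "message": "TypeError: user is undefined"},
]

# Keep only error-level entries, each correlated with its active step.
errors = [e for e in entries if e["level"] == "error"]
for e in errors:
    print(f"step {e['step']}: {e['message']}")
```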

### Root Cause Analysis

For any failed step, the **Root Cause** tab shows the AI-generated analysis. The root cause analysis:

* Identifies the most likely reason for the failure (element not found, assertion mismatch, network error, JavaScript exception)
* Distinguishes between test-level failures (the step was wrong) and application-level failures (the app has a bug)
* Suggests specific corrective actions (update the step description, fix the element reference, investigate the API error)

***

## Parallel Execution

Parallel execution runs multiple test cases or test suites simultaneously across multiple browser instances, reducing total execution time.

### Enabling Parallel Execution for a Suite

1. Open the test suite.
2. Click the **Settings** (gear icon).
3. Set **Execution Mode** to **Parallel**.
4. Save.

### Enabling Parallel Execution at the Test Plan Level

1. Open the test plan.
2. In the plan settings, set **Suite Execution Mode** to **Parallel** to run all suites simultaneously, or **Sequential** to run suites one at a time.
3. Save the plan.

**Concurrent browser limits** are enforced per workspace based on your subscription tier. If all concurrent slots are occupied, new test cases queue and start as slots become available.
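The queueing behavior can be modeled as a FIFO dispatch against a fixed number of slots. This is an illustrative model of the behavior described above, not the actual scheduler:

```python
from collections import deque

def dispatch(queued, slots):
    """Start as many queued executions as free slots allow, FIFO.

    Illustrative model of per-workspace concurrency limits.
    """
    running = []
    pending = deque(queued)
    while pending and len(running) < slots:
        running.append(pending.popleft())
    return running, list(pending)

# With 2 concurrent slots, the first two test cases start immediately
# and the rest wait for a slot to free up.
running, waiting = dispatch(["tc-1", "tc-2", "tc-3", "tc-4"], slots=2)
print(running)  # ['tc-1', 'tc-2']
print(waiting)  # ['tc-3', 'tc-4']
```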

***

## Execution from MCP / API

ContextQA exposes execution capabilities through the MCP server, allowing AI coding assistants and CI/CD scripts to trigger and monitor tests programmatically.

### Execute a Single Test Case

```python
# Via MCP tool call
execute_test_case(test_case_id=1234)

# Returns:
# { "execution_id": 9876, "status": "QUEUED" }
```

### Execute a Test Suite

```python
execute_test_suite(test_suite_id=456)

# Returns:
# { "execution_id": 9877, "status": "QUEUED", "test_case_count": 12 }
```

### Execute a Test Plan

```python
execute_test_plan(test_plan_id=789)

# Returns:
# { "execution_id": 9878, "status": "QUEUED", "suite_count": 3 }
```

### Polling for Execution Completion

```python
import time

# Poll until the execution reaches a terminal status.
# A deadline guard prevents the loop from waiting forever if an
# execution hangs or queues indefinitely.
execution_id = 9876
deadline = time.time() + 30 * 60  # give up after 30 minutes

while True:
    result = get_execution_status(execution_id=execution_id)
    if result["status"] in ["PASSED", "FAILED", "ERROR"]:
        break
    if time.time() > deadline:
        raise TimeoutError(f"Execution {execution_id} did not finish in time")
    time.sleep(10)

print(f"Execution completed: {result['status']}")
print(f"Steps passed: {result['steps_passed']}/{result['steps_total']}")
```

### Retrieving Step Results

```python
# Get detailed step-by-step results
steps = get_test_step_results(execution_id=9876)

for step in steps:
    print(f"Step {step['index']}: {step['description']} — {step['status']}")
    if step["status"] == "FAILED":
        print(f"  Root cause: {step['root_cause']}")
        print(f"  Screenshot: {step['screenshot_url']}")
```

***

## Re-running Failed Tests

After reviewing a failed execution, you can re-run either the full test case or resume from a specific step.

**Re-run full test case:**

1. Open the execution report.
2. Click **Re-run Test Case** in the report header.

**Re-run from a specific step:**

1. Open the execution report.
2. Locate the first failed step.
3. Click the three-dot menu on that step → **Re-run from This Step**.

Re-running from a step is useful when the first few steps are setup steps that are known to be correct — skipping them saves time when iterating on a failing assertion.

***

## Tips & Best Practices

* **Always verify a new test case with a manual run before adding it to a test plan.** Running the test case once from the editor catches obvious issues before they pollute automated suite results.
* **Use parallel execution for independent tests.** If your test cases do not share state (each one starts from a fresh browser session), parallel execution provides the fastest feedback. Sequential execution should be reserved for cases with explicit state dependencies.
* **Monitor network logs for intermittent failures.** If a test fails sometimes and passes other times (flaky behavior), the network log often reveals an intermittent API timeout or a race condition in the application's data loading.
* **Set up Slack notifications for critical plan executions.** In the test plan settings, configure a Slack channel to receive execution results. This ensures failures are surfaced immediately rather than discovered during the next manual review.
* **Use the video recording for stakeholder communication.** When reporting a genuine application bug found by ContextQA, include the video recording in the bug report. Stakeholders who are not familiar with the test tool can immediately see what went wrong from the video.

## Troubleshooting

**Execution is stuck in QUEUED status for more than 5 minutes**

Your workspace may have exhausted its concurrent execution slots. Check the Execution History list to see how many executions are currently RUNNING. If multiple long-running test plans are occupying all slots, wait for them to complete or contact support to increase your concurrency limit.

**Steps are failing due to "element not found" but the element is visible in the screenshot**

The screenshot captures the browser state after the step attempted to act, not before. The element may have been present before the step but changed state (disappeared, became disabled, or was replaced by a loading spinner) during the step's execution. Check the network log for a slow API call that might have caused a loading state at the critical moment.

**The video recording is not playing in the report**

Video recordings are processed asynchronously after execution completes. If you access the report within 30–60 seconds of execution finishing, the video may not yet be ready. Refresh the report page after a minute. If the video is still unavailable after several minutes, contact support.

**Parallel execution is producing intermittent failures that do not reproduce in sequential mode**

This typically indicates that the test cases are sharing state they should not be sharing — for example, using the same user account in multiple parallel tests. Each parallel test should use a distinct user account or dataset to avoid conflicts.

## Related Pages

* [Scheduling Tests](https://learning.contextqa.com/execution/scheduling)
* [Configuring Environments](https://learning.contextqa.com/execution/environments)
* [AI Self-Healing](https://learning.contextqa.com/web-testing/self-healing)
* [Platform Architecture](https://learning.contextqa.com/getting-started/architecture-overview)
* [MCP Server](https://github.com/indivatools/gitbooks-docs/blob/main/docs/mcp-server/README.md)

{% hint style="info" %}
**10× faster with parallel execution across browsers and devices.** [**Book a Demo →**](https://contextqa.com/book-a-demo/) — See ContextQA run your full test suite in parallel CI/CD execution.
{% endhint %}
