# Interactive Demo

Connect any MCP-compatible AI client (Claude Desktop, Cursor, VS Code Copilot) to your ContextQA workspace and control your entire test suite through conversation. This demo shows the four most common workflows an AI agent can perform using the ContextQA MCP server.

{% tabs %}
{% tab title="Run Smoke Tests" %}
{% stepper %}
{% step %}

#### User Prompt

The user asks the AI agent to run a specific test suite and report the results.

**User:** Run the checkout smoke tests and tell me if they pass.

{% hint style="info" %}
**AI Agent:** Calls `get_test_suites` to retrieve available suites in the workspace, then identifies the matching suite by name.
{% endhint %}
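Under the hood, every MCP tool call is a JSON-RPC 2.0 `tools/call` request. A minimal Python sketch of this step — the response payload, suite names, and IDs are illustrative, not real ContextQA data:

```python
import json

# The MCP "tools/call" request shape (JSON-RPC 2.0); arguments are empty here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_test_suites", "arguments": {}},
}

# Hypothetical response: the suites available in the workspace.
suites = [
    {"id": 41, "name": "Login Tests"},
    {"id": 42, "name": "Checkout Smoke Tests"},
]

# Match the user's phrase against suite names, case-insensitively.
target = next(s for s in suites if "checkout smoke" in s["name"].lower())
print(json.dumps(request), "->", target["id"])
```

The agent never needs a suite ID from the user — it resolves the natural-language name against the workspace listing.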
{% endstep %}

{% step %}

#### Execute the Suite

The agent triggers execution of the Checkout Smoke Tests suite (8 test cases) and monitors its status.

{% hint style="info" %}
**AI Agent:** Calls `execute_test_suite` to start the run, then polls `get_execution_status` until the run reaches a completed state.
{% endhint %}
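The execute-then-poll pattern can be sketched as follows. `call_tool` is a stand-in for a real MCP client, and the payload shapes (`run_id`, `status`) are assumptions:

```python
import time

def call_tool(name, arguments):
    """Stand-in for a real MCP client; returns canned responses.
    Tool names match the ContextQA server, but payload shapes are assumptions."""
    if name == "execute_test_suite":
        return {"run_id": 9001, "status": "RUNNING"}
    if name == "get_execution_status":
        return {"run_id": arguments["run_id"], "status": "COMPLETED"}
    raise ValueError(f"unknown tool: {name}")

# Kick off the run, then poll until it reaches a terminal state.
run = call_tool("execute_test_suite", {"suite_id": 42})
status = run["status"]
while status not in ("COMPLETED", "FAILED"):
    time.sleep(0)  # a real agent would back off a few seconds between polls
    status = call_tool("get_execution_status", {"run_id": run["run_id"]})["status"]

print(status)
```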
{% endstep %}

{% step %}

#### Fetch & Summarize Results ✓

The agent retrieves the per-test results and returns a plain-English summary.

**Agent response:**

> I ran the **Checkout Smoke Tests** suite (8 tests). Here are the results:
>
> ✅ **7 passed** — all core checkout flows working
> ❌ **1 failed** — "Apply discount code" (Test Case #18754)
>
> The failure was on step 3: the discount code field returned a 422 Unprocessable Entity error when applying code "SAVE10". This looks like an application bug — the promo code validation API is rejecting valid codes. I can fetch the full root cause analysis if you'd like.

{% hint style="success" %}
**AI Agent:** Calls `get_test_case_results` to retrieve pass/fail counts and failed step details, then synthesizes a clear summary without requiring the user to navigate the ContextQA UI.
{% endhint %}
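Summarizing is plain data-shaping over the results payload. A sketch, assuming a hypothetical per-test result shape (the real `get_test_case_results` schema may differ):

```python
# Seven passing results plus the one failure described above; IDs are illustrative.
results = [{"id": 18747 + i, "status": "PASSED"} for i in range(7)]
results.append({
    "id": 18754,
    "name": "Apply discount code",
    "status": "FAILED",
    "failed_step": 3,
    "error": "422 Unprocessable Entity",
})

# Partition by status and build the headline the agent reports back.
passed = [r for r in results if r["status"] == "PASSED"]
failed = [r for r in results if r["status"] == "FAILED"]
summary = f"{len(passed)} passed, {len(failed)} failed"
print(summary)
```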
{% endstep %}
{% endstepper %}
{% endtab %}

{% tab title="Generate Tests" %}
{% stepper %}
{% step %}

#### User Prompt

The user asks the agent to create tests for a new feature.

**User:** Generate tests for the new discount code feature.

{% hint style="info" %}
**AI Agent:** Calls `get_test_suites` to find the relevant suite where the new tests should be placed.
{% endhint %}
{% endstep %}

{% step %}

#### Create Test Cases

The agent generates three test cases covering the happy path, invalid input, and edge cases.

{% hint style="info" %}
**AI Agent:** Calls `create_test_case` three times — valid code, invalid code, and expired code scenarios — each with a distinct description and assigned to the Promotions suite.
{% endhint %}
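The three calls are identical in shape, differing only in the description. A sketch of the request batch — the argument names (`suite`, `description`) are illustrative assumptions about the tool's schema:

```python
# One create_test_case request per scenario, all targeting the Promotions suite.
scenarios = [
    "Apply a valid discount code and verify the discount is applied",
    "Apply an invalid discount code and verify an error message is shown",
    "Apply an expired discount code and verify an expiry message is shown",
]
calls = [
    {
        "jsonrpc": "2.0",
        "id": i + 1,
        "method": "tools/call",
        "params": {
            "name": "create_test_case",
            "arguments": {"suite": "Promotions", "description": desc},
        },
    }
    for i, desc in enumerate(scenarios)
]
print(len(calls))
```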
{% endstep %}

{% step %}

#### Confirm & Offer to Run ✓

**Agent response:**

> I generated **3 test cases** for the discount code feature and added them to the "Promotions" suite:
>
> 1. **TC #18760** — Apply valid code, verify discount applied (6 steps)
> 2. **TC #18761** — Invalid code shows error message (4 steps)
> 3. **TC #18762** — Expired code shows expiry message (4 steps)
>
> Would you like me to run them now to validate they pass against your staging environment?

{% hint style="success" %}
**AI Agent:** Test cases are immediately available in the ContextQA portal and can be run, edited, or added to a test plan — all without leaving the conversation.
{% endhint %}
{% endstep %}
{% endstepper %}
{% endtab %}

{% tab title="Analyze Failure" %}
{% stepper %}
{% step %}

#### User Prompt

The user asks why a specific test failed.

**User:** Why did test case 18750 fail in the last run?

{% hint style="info" %}
**AI Agent:** Calls `get_test_case_results` to retrieve the execution result for the most recent run of test case 18750.
{% endhint %}
{% endstep %}

{% step %}

#### Fetch Root Cause & Logs

The agent retrieves the AI root cause analysis and browser console logs for the failed execution.

{% hint style="info" %}
**AI Agent:** Calls `get_root_cause` for the AI-classified failure summary, then calls `get_console_logs` to gather supporting browser evidence.
{% endhint %}
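The two evidence calls can be combined like this. `call_tool` is a stand-in for a real MCP client, and the response shapes are canned assumptions:

```python
def call_tool(name, arguments):
    """Stand-in for a real MCP client; responses are canned for illustration."""
    if name == "get_root_cause":
        return {
            "classification": "APPLICATION_BUG",
            "failed_step": 4,
            "summary": "Login button not found; form component failed to render",
        }
    if name == "get_console_logs":
        return [
            {"level": "error", "message": "TypeError: Cannot read properties of undefined"},
            {"level": "info", "message": "Navigation to /login"},
        ]
    raise ValueError(f"unknown tool: {name}")

root_cause = call_tool("get_root_cause", {"test_case_id": 18750})
logs = call_tool("get_console_logs", {"test_case_id": 18750})

# Keep only error-level entries as supporting evidence for the explanation.
evidence = [entry for entry in logs if entry["level"] == "error"]
print(root_cause["classification"], len(evidence))
```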
{% endstep %}

{% step %}

#### Explain the Failure ✓

**Agent response:**

> **Test Case #18750** failed on **Step 4** ("Click the Sign In button").
>
> **AI Classification: APPLICATION BUG**
>
> The login button was not found on the page. The console log shows a JavaScript error that suggests the login form component failed to render.
>
> **Suggested fix:** Check the component for the login page — the form may not be mounting correctly if a required dependency is throwing an error on page load. Review the browser console in your staging environment.
>
> **Evidence available:** Screenshot at step 4, full console log, HAR network trace. Want me to create a Jira ticket for this?

{% hint style="success" %}
**AI Agent:** The full evidence chain — screenshot, console log, network HAR — is attached to the execution result and accessible via additional MCP tool calls if needed.
{% endhint %}
{% endstep %}
{% endstepper %}
{% endtab %}

{% tab title="Create Test Plan" %}
{% stepper %}
{% step %}

#### User Prompt

The user asks the agent to create a test plan combining two suites.

**User:** Create a test plan for the login and checkout flows.

{% hint style="info" %}
**AI Agent:** Calls `get_test_suites` to identify the Login Tests and Checkout Smoke Tests suites, then calls `get_environments` to list available target environments.
{% endhint %}
{% endstep %}

{% step %}

#### Configure & Create the Plan

The agent creates a test plan named "Login + Checkout Gate" with both suites, targeting the staging environment with 5 parallel workers.

{% hint style="info" %}
**AI Agent:** Calls `save_test_plan` with the suite IDs, environment ID, and parallelism settings — the plan is created in a single API call.
{% endhint %}
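The single call bundles every setting into one request. A sketch of the payload — the argument names (`suite_ids`, `environment_id`, `parallel_workers`) and IDs are illustrative assumptions about the tool's schema:

```python
# One save_test_plan call carrying suites, environment, and parallelism together.
plan_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "save_test_plan",
        "arguments": {
            "name": "Login + Checkout Gate",
            "suite_ids": [41, 42],  # Login Tests + Checkout Smoke Tests
            "environment_id": 3,    # staging
            "parallel_workers": 5,
        },
    },
}
print(plan_request["params"]["arguments"]["name"])
```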
{% endstep %}

{% step %}

#### Confirm & Offer Next Steps ✓

**Agent response:**

> ✅ **Test Plan created: "Login + Checkout Gate"** (Plan ID: 156)
>
> **Configuration:**
>
> * Suites: Login Tests (12 cases) + Checkout Smoke Tests (8 cases)
> * Environment: staging
> * Parallel workers: 5
> * Estimated duration: \~3 minutes
>
> I can run it now, schedule it to run on every commit via your CI/CD webhook, or set up a nightly schedule. What would you like to do?

{% hint style="success" %}
**AI Agent:** Test plans created via MCP are immediately available in the ContextQA portal. They can be triggered via webhook, scheduled, or run on demand — all controllable through the same conversation.
{% endhint %}

| Capability                 | Detail                                                  |
| -------------------------- | ------------------------------------------------------- |
| MCP Tools Available        | 67                                                      |
| AI Clients Supported       | Claude Desktop, Cursor, VS Code Copilot, any MCP client |
| Lines of Code to Integrate | 0                                                       |
| Setup Time                 | \~10 minutes                                            |
{% endstep %}
{% endstepper %}
{% endtab %}
{% endtabs %}

***

{% hint style="success" %}
**Try it yourself** — [🚀 Start Free Trial →](https://app.contextqa.com/signup) · [Book a Demo](https://contextqa.com/book-a-demo/)
{% endhint %}
