Interactive Demo

See how Claude and other AI agents use ContextQA's MCP tools to create tests, run them, and analyze failures — all from a natural language conversation.

Connect any MCP-compatible AI client (Claude Desktop, Cursor, VS Code Copilot) to your ContextQA workspace and control your entire test suite through conversation. This demo shows the four most common workflows an AI agent can perform using the ContextQA MCP server.
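For example, Claude Desktop reads MCP server definitions from its `claude_desktop_config.json` file under an `mcpServers` key. A minimal sketch of such an entry is shown below — the package name `@contextqa/mcp-server` and the `CONTEXTQA_API_KEY` variable are illustrative assumptions, not the documented setup; consult the ContextQA setup guide for the actual command and credentials.

```json
{
  "mcpServers": {
    "contextqa": {
      "command": "npx",
      "args": ["-y", "@contextqa/mcp-server"],
      "env": { "CONTEXTQA_API_KEY": "<your-api-key>" }
    }
  }
}
```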

1

User Prompt

The user asks the AI agent to run a specific test suite and report the results.

User: Run the checkout smoke tests and tell me if they pass.

AI Agent: Calls get_test_suites to retrieve available suites in the workspace, then identifies the matching suite by name.

2

Execute the Suite

The agent triggers execution of the Checkout Smoke Tests suite (8 test cases) and monitors its status.

AI Agent: Calls execute_test_suite to start the run, then polls get_execution_status until the run reaches a completed state.

3

Fetch & Summarize Results

The agent retrieves the per-test results and returns a plain-English summary.

Agent response:

I ran the Checkout Smoke Tests suite (8 tests). Here are the results:

✅ 7 passed — all core checkout flows working
❌ 1 failed — "Apply discount code" (Test Case #18754)

The failure was on step 3: the discount code field returned a 422 Unprocessable Entity error when applying code "SAVE10". This looks like an application bug — the promo code validation API is rejecting valid codes. I can fetch the full root cause analysis if you'd like.
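The summarization step above reduces to grouping per-test results by status and formatting the failures with their identifiers and errors. A minimal sketch, assuming a result-record shape (`status`, `name`, `case_id`, `error`) that is illustrative rather than ContextQA's documented schema:

```python
# Hypothetical sketch: turning per-test results into a plain-English
# summary like the agent response shown above.

def summarize(suite_name: str, results: list[dict]) -> str:
    passed = [r for r in results if r["status"] == "passed"]
    failed = [r for r in results if r["status"] != "passed"]
    lines = [
        f"I ran the {suite_name} suite ({len(results)} tests).",
        f"{len(passed)} passed.",
    ]
    if failed:
        lines.append(f"{len(failed)} failed:")
        for r in failed:
            lines.append(f'- "{r["name"]}" (Test Case #{r["case_id"]}): {r["error"]}')
    return "\n".join(lines)

# Mocked results mirroring the demo: 7 passes and 1 failure.
results = [
    {"status": "passed", "name": f"Checkout step {i}", "case_id": i}
    for i in range(1, 8)
]
results.append({
    "status": "failed",
    "name": "Apply discount code",
    "case_id": 18754,
    "error": "422 Unprocessable Entity on step 3",
})
print(summarize("Checkout Smoke Tests", results))
```

In practice the LLM writes the final prose itself; structured aggregation like this just keeps the counts and test-case IDs it reports grounded in the actual tool output.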
