Interactive Demo
See how Claude and other AI agents use ContextQA's MCP tools to create tests, run them, and analyze failures — all from a natural language conversation.
Connect any MCP-compatible AI client (Claude Desktop, Cursor, VS Code Copilot) to your ContextQA workspace and control your entire test suite through conversation. This demo shows the four most common workflows an AI agent can perform using the ContextQA MCP server.
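For Claude Desktop, connecting usually means adding a server entry to its `claude_desktop_config.json`. The package name and environment variable below are illustrative placeholders, not the actual ContextQA values — copy the real command and credentials from your ContextQA workspace settings:

```json
{
  "mcpServers": {
    "contextqa": {
      "command": "npx",
      "args": ["-y", "@contextqa/mcp-server"],
      "env": { "CONTEXTQA_API_KEY": "<your-api-key>" }
    }
  }
}
```

After restarting the client, the ContextQA tools appear in its tool list and can be invoked from any conversation.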
Fetch & Summarize Results ✓
The agent retrieves the per-test results and returns a plain-English summary.
Agent response:
I ran the Checkout Smoke Tests suite (8 tests). Here are the results:
✅ 7 passed — all core checkout flows working
❌ 1 failed — "Apply discount code" (Test Case #18754)
The failure was on step 3: the discount code field returned a 422 Unprocessable Entity error when applying code "SAVE10". This looks like an application bug — the promo code validation API is rejecting valid codes. I can fetch the full root cause analysis if you'd like.
AI Agent: Calls get_test_case_results to retrieve pass/fail counts and failed step details, then synthesizes a clear summary without requiring the user to navigate the ContextQA UI.
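Under the hood, each of these tool invocations is a standard MCP `tools/call` request. Here is a minimal sketch of the JSON-RPC 2.0 message an MCP client sends; the `suite_id` argument is a hypothetical parameter name — the real input schema comes from the server's `tools/list` response:

```python
import json


def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })


# "suite_id" is a hypothetical argument name for illustration only;
# consult the tool's declared schema for the actual parameters.
request = build_tool_call("get_test_case_results", {"suite_id": 1234})
print(request)
```

The AI client handles this framing automatically — the point is that every workflow in this demo is plain JSON-RPC, which is why any MCP-compatible client works.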
Confirm & Offer to Run ✓
The agent creates the test cases in the workspace, confirms what was added, and offers to run them.
Agent response:
I generated 3 test cases for the discount code feature and added them to the "Promotions" suite:
TC #18760 — Apply valid code, verify discount applied (6 steps)
TC #18761 — Invalid code shows error message (4 steps)
TC #18762 — Expired code shows expiry message (4 steps)
Would you like me to run them now to validate they pass against your staging environment?
AI Agent: Test cases are immediately available in the ContextQA portal and can be run, edited, or added to a test plan — all without leaving the conversation.
Explain the Failure ✓
The agent classifies the failure, cites the captured evidence, and suggests a fix.
Agent response:
Test Case #18750 failed on Step 4 ("Click the Sign In button").
AI Classification: APPLICATION BUG
The login button was not found on the page. The console log shows a JavaScript error that suggests the login form component failed to render.
Suggested fix: Check the component for the login page — the form may not be mounting correctly if a required dependency is throwing an error on page load. Review the browser console in your staging environment.
Evidence available: Screenshot at step 4, full console log, HAR network trace. Want me to create a Jira ticket for this?
AI Agent: The full evidence chain — screenshot, console log, network HAR — is attached to the execution result and accessible via additional MCP tool calls if needed.
Confirm & Offer Next Steps ✓
The agent confirms the plan's configuration and offers to run, schedule, or wire it into CI.
Agent response:
✅ Test Plan created: "Login + Checkout Gate" (Plan ID: 156)
Configuration:
Suites: Login Tests (12 cases) + Checkout Smoke Tests (8 cases)
Environment: staging
Parallel workers: 5
Estimated duration: ~3 minutes
I can run it now, schedule it to run on every commit via your CI/CD webhook, or set up a nightly schedule. What would you like to do?
AI Agent: Test plans created via MCP are immediately available in the ContextQA portal. They can be triggered via webhook, scheduled, or run on demand — all controllable through the same conversation.
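As a sketch, a CI step could trigger the plan with a single POST to its webhook. The URL and payload fields below are hypothetical — copy the real webhook URL and body from the plan's page in the ContextQA portal:

```python
import json
import urllib.request

# Hypothetical webhook URL and payload shape, for illustration only.
WEBHOOK_URL = "https://example.contextqa.test/api/webhooks/test-plans/156/run"
payload = json.dumps({"environment": "staging"}).encode("utf-8")

req = urllib.request.Request(
    WEBHOOK_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually fire the webhook from a CI job:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```

Because the trigger is just an HTTP call, it drops into any CI/CD system — a GitHub Actions step, a Jenkins stage, or a post-deploy hook.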
MCP Tools Available: 67
AI Clients Supported: Claude Desktop, Cursor, VS Code Copilot, any MCP client
Lines of Code to Integrate: 0
Setup Time: ~10 minutes
Try it yourself — 🚀 Start Free Trial · Book a Demo