Knowledge Base
How to create and use Knowledge Base entries that give ContextQA's AI agent persistent, application-specific instructions during test execution.
Who is this for? All roles — especially QA engineers who want to teach ContextQA's AI how to handle application-specific UI patterns, consent banners, popups, and test conventions.
A Knowledge Base is a set of plain-English instructions stored at the workspace level that the ContextQA AI agent reads before and during every test execution. It is the mechanism for encoding persistent, application-specific knowledge into the test engine — so you write the instruction once and every test in the workspace benefits automatically.
Why You Need a Knowledge Base
The ContextQA AI agent is trained to test general web and mobile applications. But every production application has quirks: a GDPR consent modal that appears on first load, a live chat widget that opens over content, a feature tour that appears on every login, a staging-only banner that covers part of the UI.
Without guidance, the AI agent either attempts to interact with these overlays (causing false failures) or gets confused by them. With a knowledge base entry like "If a cookie consent modal appears, click 'Accept All Cookies' before any other interaction", the AI handles it correctly on every run.
Accessing the Knowledge Base
Open your ContextQA workspace.
In the left navigation, go to Knowledge Base (route: /td/:versionId/Knowledge_Base).
The Knowledge Base list shows all entries for this workspace.
Access note: Knowledge Base requires a plan that includes AI features. If the Knowledge Base menu item is grayed out or missing, contact your workspace administrator.
Creating a Knowledge Base Entry
From the Knowledge Base list, click + New Knowledge Base.
Enter a Title — a short label describing what this entry handles (e.g., "Cookie consent banner", "Chat widget dismissal", "Payment test card").
Enter the Prompt — the plain-English instruction the AI will follow. See the Writing Effective Prompts section below.
Click Save.
The entry is immediately active for all test executions in this workspace.
Writing Effective Prompts
The AI interprets prompts as instructions to follow whenever the described condition is encountered. Write prompts as imperative sentences.
✅ Good prompt patterns
Cookie consent modal
If a cookie consent banner, GDPR notice, or privacy consent dialog is visible, click the button labelled "Accept All Cookies" or "Accept" immediately before any other action.
Live chat widget
If a live chat widget, help bubble, or Intercom button opens in the bottom corner of the screen, close it by clicking the X or minimize button before interacting with other page elements.
Feature tour / product walkthrough
If a product tour, onboarding guide, or "Get started" wizard appears as an overlay or modal, click "Skip", "Dismiss", or "Close" to exit it before proceeding.
Test payment card
On any payment form, always use the test credit card number 4111 1111 1111 1111, expiry date 12/29, and CVV 123. These are test credentials that bypass real payment processing.
Two-factor authentication
If a two-factor authentication prompt appears, enter the code 123456. This code is accepted in the staging environment.
Loading indicators
If a loading spinner, skeleton screen, or "Please wait" overlay is present, wait for it to disappear before interacting with the page.
❌ Avoid these patterns
Handle cookie popups
Too vague — the AI doesn't know what "handle" means
The app sometimes shows a popup
Not an instruction; no action specified
Be careful on the payment page
Ambiguous — no concrete behavior described
Click X to close the chat
Too specific — the selector may change; describe the widget type instead
Prompt length and scope
Keep each knowledge base entry focused on one specific situation.
For complex applications, create multiple entries (one per pattern) rather than one long combined entry.
The AI reads all entries before each run, so there's no performance cost to having many entries.
Scoping Knowledge Bases to Specific Runs
When a knowledge base is workspace-scoped, it applies to every test execution. To apply a knowledge base only to specific runs:
When executing a test case, click the Settings icon next to Run.
Under Knowledge Base, select the specific entry to apply.
Click Run.
Via the MCP server:
Use list_knowledge_bases() to retrieve the ID for a specific entry, then pass that ID as knowledge_id when calling execute_test_case or execute_test_plan.
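As a rough sketch of what this two-step flow looks like on the wire, assuming the ContextQA MCP server follows the standard MCP JSON-RPC tools/call shape (the test-case ID, knowledge-base ID, and the test_case_id argument name below are placeholder assumptions, not documented values):

```python
import json

def mcp_tool_call(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 'tools/call' request as used by MCP servers."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Step 1: list all entries to find the ID of the one to scope.
list_request = mcp_tool_call("list_knowledge_bases", {})

# Step 2: attach that entry to a single run.
# "tc_123" / "kb_456" are placeholder IDs for illustration only.
run_request = mcp_tool_call(
    "execute_test_case",
    {"test_case_id": "tc_123", "knowledge_id": "kb_456"},
    request_id=2,
)

print(json.dumps(run_request, indent=2))
```

The same pattern applies to execute_test_plan: swap the tool name and pass the plan's identifier alongside knowledge_id.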
Managing Knowledge Base Entries
Edit an entry: click the entry name → Edit.
Delete an entry: click the three-dot menu → Delete.
Disable without deleting: not directly supported; delete the entry and recreate it when needed.
Duplicate: not directly supported; create a new entry with similar content.
MCP Tools for Knowledge Bases
list_knowledge_bases: get all knowledge bases and their IDs.
create_knowledge_base: create a new knowledge base programmatically.
execute_test_case(knowledge_id=...): attach a knowledge base to a single test run.
execute_test_plan(knowledge_id=...): attach a knowledge base to a full plan execution.
Create via MCP
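A minimal sketch of creating an entry through create_knowledge_base, again assuming the standard MCP JSON-RPC tools/call shape; the argument names "title" and "prompt" are assumptions that mirror the fields in the UI, not confirmed parameter names:

```python
import json

def mcp_tool_call(name, arguments):
    """Build a JSON-RPC 2.0 'tools/call' request as used by MCP servers."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# "title" and "prompt" mirror the UI fields; the real argument
# names may differ, so check your MCP server's tool schema.
create_request = mcp_tool_call(
    "create_knowledge_base",
    {
        "title": "Cookie consent banner",
        "prompt": (
            "If a cookie consent banner, GDPR notice, or privacy consent "
            "dialog is visible, click the button labelled 'Accept All "
            "Cookies' or 'Accept' immediately before any other action."
        ),
    },
)

print(json.dumps(create_request, indent=2))
```

Once created, the entry behaves exactly like one created in the UI: it is immediately active for all test executions in the workspace.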
Common Use Cases
E-commerce applications: cookie consent banners and test payment cards.
SaaS applications with onboarding: product tours and "Get started" wizards.
Multi-tenant applications: tenant-specific dialogs and layouts.
Staging environments: staging-only banners and test 2FA codes.