AI Context Graph
ContextQA AI context graph — how ContextQA builds and applies accumulated application knowledge through UI Elements, Knowledge Bases, Custom Agents, and AI Data Analyst.
Who is this for? All roles — engineers, QA managers, and technical leads who want to understand how ContextQA builds and applies accumulated application knowledge to improve test accuracy over time.
AI context graph: The conceptual model describing how ContextQA accumulates, organizes, and applies knowledge about an application under test — combining element repository data, explicitly authored knowledge bases, domain-specific custom agents, and natural-language query capabilities into a unified application intelligence layer.
AI-powered test automation is only as good as the AI's understanding of the application it is testing. A generic AI that knows nothing about your application's domain, terminology, or UI patterns will produce brittle tests and poor failure analysis. ContextQA addresses this through a layered knowledge model that grows over time. This page describes each layer of that model and how they work together.
What is the AI context graph?
"Context graph" is a conceptual term for the sum of knowledge ContextQA holds about a specific application under test. The UI does not have a single screen labeled "Context Graph." Instead, the knowledge is distributed across four components that each contribute a different dimension of understanding:
UI Elements repository — machine-learned knowledge about the application's interface
Knowledge Bases — human-authored domain knowledge provided to the AI
Custom Agents — domain-specific AI personas with tailored reasoning behavior
AI Data Analyst — a natural-language interface for querying the accumulated test data
Together, these components give ContextQA an increasingly accurate model of what your application does, how it behaves, and what failures mean in context.
UI Elements: machine-learned interface knowledge
Every time ContextQA executes a test case, it observes the application's DOM, records element attributes, and updates the element repository. The repository stores selectors, element types, labels, and contextual relationships between elements.
This accumulated element data is accessible at Settings → UI Elements (route: /td/:versionId/elements). The UI Elements view shows every element ContextQA has encountered during executions, along with its selector history and the test cases that interact with it.
This repository is the foundation of ContextQA's self-healing capability. When an element changes — its ID, class, or position shifts between deployments — ContextQA searches the element repository to find the best matching selector, rather than treating the change as a hard failure. The more executions ContextQA has run against an application, the richer the element repository becomes, and the more accurate self-healing is.
The element repository also informs test generation. When ContextQA generates test cases from requirements or Figma designs, it consults the element repository to use selectors that are already known to work, rather than generating selectors from scratch.
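The selector-matching idea behind self-healing can be sketched in a few lines. This is an illustrative toy, not ContextQA's actual algorithm or schema: the field names, weights, and threshold are assumptions chosen to show how a repository of observed attributes lets a changed ID resolve to a known element instead of failing hard.

```python
from difflib import SequenceMatcher

# Hypothetical repository entries; field names are illustrative,
# not ContextQA's actual element schema.
repository = [
    {"selector": "#checkout-btn", "label": "Checkout", "type": "button"},
    {"selector": "#login-submit", "label": "Log in", "type": "button"},
]

def similarity(observed: dict, record: dict) -> float:
    """Score attribute overlap between an observed element and a stored record."""
    label_score = SequenceMatcher(None, observed["label"], record["label"]).ratio()
    type_score = 1.0 if observed["type"] == record["type"] else 0.0
    return 0.7 * label_score + 0.3 * type_score  # weights are arbitrary for the sketch

def heal(observed: dict, threshold: float = 0.6):
    """Return the best-matching stored selector instead of treating the change as a hard failure."""
    best = max(repository, key=lambda rec: similarity(observed, rec))
    return best["selector"] if similarity(observed, best) >= threshold else None

# A deployment renamed the element's ID; label and type still match a known record.
print(heal({"label": "Check out", "type": "button"}))  # → #checkout-btn
```

The more records the repository holds, the more candidates a matcher like this has to choose from, which is why accuracy improves with execution volume.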
Knowledge Bases: human-authored domain context
The UI Elements repository captures structural knowledge about the interface. Knowledge Bases capture semantic knowledge about the application's domain — business rules, terminology, test patterns, and application-specific conventions that the AI cannot infer from DOM observation alone.
A Knowledge Base is a structured document that you provide to ContextQA to inform its AI reasoning. Examples of what a Knowledge Base might contain:
Business rule documentation: "A checkout is only valid if the user has confirmed their email address. Tests that check checkout behavior should always begin with a verified user account."
Terminology glossary: "In this application, 'account' refers to a merchant account, not a customer account. Customer-facing concepts use 'profile' instead."
Known limitations: "The date picker component on the scheduling page requires a two-step interaction: first click the field to open the calendar, then click the date. Clicking the date without opening the calendar first will not work."
Test data conventions: "All test users use the email pattern testuser+<scenario>@example.com. The password is always the same across test environments."
To create a Knowledge Base:
Use the create_knowledge_base MCP tool, providing the name and content.
Alternatively, navigate to the Knowledge Bases section in the portal and use the creation form.
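As a concrete sketch, a creation call might carry a payload like the following. The parameter names (`name`, `content`) are assumptions based on this page's description of the tool; check the create_knowledge_base tool's schema before relying on them.

```python
# Illustrative payload for the create_knowledge_base MCP tool.
# Parameter names are assumed from the description above, not a published schema.
knowledge_base = {
    "name": "Checkout business rules",
    "content": "\n".join([
        "# Checkout",
        "- A checkout is only valid if the user has confirmed their email address.",
        "- Tests that check checkout behavior should begin with a verified user account.",
        "- Test users follow the pattern testuser+<scenario>@example.com.",
    ]),
}
```

Keeping the content in plain, declarative statements like these makes the document easy for both the AI and human reviewers to consume.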
Once created, a Knowledge Base is referenced by ContextQA's AI when generating tests, analyzing failures, and making self-healing decisions for the associated application version. The list_knowledge_bases MCP tool returns all configured knowledge bases for the current project.
Knowledge Bases are versioned alongside your application. As your application evolves, update the relevant knowledge bases to reflect changes in business rules or UI conventions. Stale knowledge bases can mislead the AI — treat them with the same discipline as code documentation.
Custom Agents: domain-specific AI personas
Custom Agents extend the context graph with behavioral directives. Where a Knowledge Base provides facts, a Custom Agent provides a persona — a specific reasoning approach, tone, or domain focus that shapes how the AI operates when that agent is active.
A Custom Agent might specify:
"When analyzing failures in the payment flow, prioritize checking network response codes before examining DOM state."
"When generating tests for the admin panel, always include negative test cases for permission boundaries."
"Use the language and terminology of an insurance underwriter when describing test scenarios in this project."
Create a Custom Agent with the create_custom_agent MCP tool. List existing agents with list_custom_agents. Custom Agents can be assigned to specific test suites or used as defaults for particular application areas.
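The layering of persona directives on top of domain facts can be pictured as assembling a reasoning context per task. This is a minimal sketch of the idea described above, with made-up names; it is not ContextQA's internal implementation.

```python
# Hypothetical agent and knowledge records; structure is illustrative only.
agent = {
    "name": "payments-analyst",
    "directives": [
        "When analyzing failures in the payment flow, check network "
        "response codes before examining DOM state.",
    ],
}
knowledge = [
    "A checkout is only valid if the user has confirmed their email address.",
]

def build_context(task: str) -> str:
    """Stack the agent's persona directives on top of domain facts for a task."""
    parts = [
        "## Directives", *agent["directives"],
        "## Domain knowledge", *knowledge,
        "## Task", task,
    ]
    return "\n".join(parts)

print(build_context("Explain why the checkout test failed."))
```

The directives shape *how* the AI reasons, while the knowledge supplies *what* it reasons about — the distinction this section draws between agents and knowledge bases.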
The combination of a rich Knowledge Base and a well-defined Custom Agent gives ContextQA a domain-specific reasoning context that significantly improves the relevance of AI-generated test cases and failure analyses compared to using the generic AI without customization.
AI Data Analyst: querying accumulated knowledge
The AI Data Analyst provides a natural-language interface for querying the test data and execution history that ContextQA has accumulated. Access it at:
Route: /td/:versionId/AI_Data_Analyst
Rather than navigating through report pages and filtering tables, you can ask questions directly:
"Which test cases have failed more than three times in the last two weeks?"
"What is the pass rate for the checkout flow test suite this month versus last month?"
"Show me the test cases that are most often classified as flaky failures."
"What is the average execution time for test plan 'Nightly Regression'?"
The AI Data Analyst formulates and executes the appropriate data queries based on your natural-language input and returns structured answers with supporting data. It also has access to the Knowledge Base content for your application, so it can interpret query results in the context of your domain.
The query_contextqa MCP tool provides programmatic access to the same querying capability — useful for building custom reporting integrations or populating dashboards with AI-interpreted test data.
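The kind of aggregation the analyst performs for a question like "which test cases fail most often?" can be sketched over a record set. The records and field names below are simulated for illustration — they are not ContextQA's export schema — and a real integration would obtain the data via query_contextqa rather than a local list.

```python
from collections import Counter

# Simulated execution records of the kind the AI Data Analyst queries.
# Field names are illustrative, not ContextQA's actual schema.
executions = [
    {"test": "checkout-happy-path", "status": "passed"},
    {"test": "checkout-happy-path", "status": "failed"},
    {"test": "login", "status": "passed"},
    {"test": "checkout-happy-path", "status": "failed"},
]

def failure_counts(records):
    """Answer 'which test cases fail most often?' over a record set."""
    return Counter(r["test"] for r in records if r["status"] == "failed")

print(failure_counts(executions).most_common(1))  # → [('checkout-happy-path', 2)]
```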
How the layers combine
The power of the context graph is in the combination of layers:
Execution data → UI Elements — Each test run refines element knowledge, making future runs more resilient.
Knowledge Bases → AI reasoning — Domain facts improve test generation accuracy and failure interpretation.
Custom Agents → behavior — Persona directives shape how the AI applies its knowledge in specific contexts.
AI Data Analyst → insight — Natural-language querying makes the accumulated data accessible without requiring BI tools or SQL knowledge.
A new ContextQA project starts with minimal context. After dozens of executions, a few well-authored Knowledge Bases, and a custom agent scoped to the application's domain, ContextQA's AI operates with a model of the application that is meaningfully differentiated from a generic AI assistant.
Frequently Asked Questions
Does the UI actually show something called a "context graph"?
No. "Context graph" is a conceptual label for the combined knowledge model. The individual components — UI Elements, Knowledge Bases, Custom Agents, and AI Data Analyst — each have their own portal locations. There is no single "context graph" screen.
How often should I update my Knowledge Bases?
Update a Knowledge Base whenever a significant application change affects the facts it contains — for example, a renamed feature, a changed business rule, or a new UI convention. A practical cadence is to review Knowledge Bases at the end of each sprint and update anything that has changed during that sprint's development work.
Can multiple Custom Agents be active at the same time?
A single Custom Agent is typically assigned per test suite or application area. Using multiple conflicting agents simultaneously can produce inconsistent AI behavior. Design agents to be complementary and assign each to a specific scope rather than stacking agents with overlapping directives.
What data does the AI Data Analyst have access to?
The AI Data Analyst has access to execution history, test case pass/fail data, failure categories, execution timing, and Knowledge Base content for the current application version. It does not have access to the raw application data under test, only to ContextQA's own test execution records.