# For SDETs

{% hint style="info" %}
**Who is this for?** Software Development Engineers in Test (SDETs), senior automation engineers, and QA engineers who write and maintain test frameworks.
{% endhint %}

You've written the frameworks, built the CI/CD integrations, and maintained the test suites. You know the real cost: every sprint brings selector rot, environment drift, and another afternoon debugging why a locator stopped working. ContextQA augments your existing expertise with AI infrastructure that handles the brittle parts — so you focus on test architecture, coverage strategy, and toolchain integration.

***

## What ContextQA Adds to Your Stack

| Capability                 | Your Gain                                                                 |
| -------------------------- | ------------------------------------------------------------------------- |
| 67 MCP tools               | Full platform control from Claude, Cursor, or any MCP-compatible AI agent |
| `export_to_playwright`     | Export any ContextQA test as runnable Playwright TypeScript code          |
| `export_test_case_as_code` | Get the raw step definitions for custom framework integration             |
| AI self-healing            | Zero selector maintenance — AI repairs broken locators automatically when fix confidence exceeds 90% |
| Evidence API               | Programmatic access to screenshots, HAR, console logs, Playwright traces  |
| Parallel execution         | Run full regression in minutes across browsers and devices                |
| CI/CD native               | REST trigger + polling pattern works with any pipeline tool               |

***

## MCP Server Integration

ContextQA exposes a Model Context Protocol server at your configured endpoint. Every platform capability is available as a tool call from any MCP-compatible AI client.

**Key SDET tools:**

```python
# Create a test case from a URL + natural language description
create_test_case(
    url="https://staging.yourapp.com/checkout",
    description="Complete a purchase with a valid credit card and verify the order confirmation",
    workspace_version_id=YOUR_WORKSPACE_VERSION_ID
)

# Execute a test case and get the execution ID
execute_test_case(test_case_id=18750, workspace_version_id=YOUR_WV_ID)

# Poll for completion
get_execution_status(number_of_executions=1, test_case_id=18750)

# Get results with step-level detail
get_test_case_results(execution_id=26242)
get_test_step_results(result_id=26242)

# AI root cause analysis for failures
get_root_cause(result_id=26242)
```
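
The execute-then-poll flow above can be wrapped in a small helper. This is a minimal sketch, not ContextQA's API: the `execute` and `get_status` callables stand in for the `execute_test_case` and `get_execution_status` tools (inject however your MCP client exposes them), and the terminal status names are assumptions.

```python
import time

def run_and_wait(execute, get_status, test_case_id, workspace_version_id,
                 poll_interval=5.0, timeout=600.0, sleep=time.sleep):
    """Trigger a test case, then poll until it reaches a terminal state.

    `execute` and `get_status` are injected callables standing in for the
    execute_test_case and get_execution_status MCP tools, which keeps this
    helper client-agnostic and unit-testable.
    """
    execution = execute(test_case_id=test_case_id,
                        workspace_version_id=workspace_version_id)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(number_of_executions=1, test_case_id=test_case_id)
        # Terminal status names are assumptions; check your deployment's enum.
        if status.get("status") in ("COMPLETED", "FAILED"):
            return execution, status
        sleep(poll_interval)
    raise TimeoutError(f"test case {test_case_id} did not finish within {timeout}s")
```

Injecting `sleep` as a parameter also lets you stub out the delay entirely when testing the helper itself.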

→ [MCP Server Overview](https://learning.contextqa.com/mcp-server/overview) | [Tool Reference](https://learning.contextqa.com/mcp-server/tool-reference)

***

## Exporting Tests as Playwright Code

Any test case created in ContextQA can be exported as Playwright TypeScript for use in your existing framework:

```python
# Via MCP tool
export_to_playwright(test_case_id=18750)
```

The exported code includes:

* Page object model structure
* Resilient locator strategies (role-based + text-based + attribute fallbacks)
* Explicit wait patterns matching ContextQA's execution behavior
* Assertion calls using Playwright's `expect()` API

→ [Exporting Reports](https://learning.contextqa.com/reporting/exporting-reports)

***

## CI/CD Integration Patterns

ContextQA's execution API follows an async trigger + polling pattern compatible with any CI/CD system:

{% tabs %}
{% tab title="GitHub Actions" %}

```yaml
- name: Trigger ContextQA Test Plan
  id: trigger
  run: |
    RESULT=$(curl -s -X GET \
      "https://server.contextqa.com/api/test-plans/$TEST_PLAN_ID/execute" \
      -H "x-api-token: ${{ secrets.CONTEXTQA_TOKEN }}")
    echo "plan_result_id=$(echo "$RESULT" | jq -r '.testPlanResultId')" >> "$GITHUB_OUTPUT"

- name: Poll for Completion
  run: |
    # Bounded poll: 80 attempts x 15 s gives a 20-minute ceiling instead of an infinite loop
    for _ in $(seq 1 80); do
      STATUS=$(curl -s \
        "https://server.contextqa.com/api/test-plan-results/${{ steps.trigger.outputs.plan_result_id }}" \
        -H "x-api-token: ${{ secrets.CONTEXTQA_TOKEN }}" | jq -r '.status')
      [ "$STATUS" = "STATUS_COMPLETED" ] && exit 0
      sleep 15
    done
    echo "Timed out waiting for ContextQA test plan result" >&2
    exit 1
```

{% endtab %}

{% tab title="Jenkins" %}

```groovy
stage('ContextQA') {
    steps {
        script {
            def trigger = sh(
                script: """curl -s -X GET \\
                    "${CONTEXTQA_HOST}/api/test-plans/${TEST_PLAN_ID}/execute" \\
                    -H "x-api-token: ${CONTEXTQA_TOKEN}" """,
                returnStdout: true
            )
            env.PLAN_RESULT_ID = readJSON(text: trigger).testPlanResultId
        }
    }
}
```

{% endtab %}

{% tab title="GitLab CI" %}

```yaml
contextqa:
  stage: test
  script:
    - |
      RESULT_ID=$(curl -s -X GET \
        "${CONTEXTQA_HOST}/api/test-plans/${TEST_PLAN_ID}/execute" \
        -H "x-api-token: ${CONTEXTQA_TOKEN}" | jq -r '.testPlanResultId')
      echo "PLAN_RESULT_ID=$RESULT_ID" >> build.env
  artifacts:
    reports:
      dotenv: build.env
```

{% endtab %}
{% endtabs %}

→ [GitHub Actions](https://learning.contextqa.com/integrations/github-actions) | [Jenkins](https://learning.contextqa.com/integrations/jenkins) | [GitLab CI](https://learning.contextqa.com/integrations/gitlab-ci)
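
For pipelines where a shell one-liner gets unwieldy, the same trigger + polling pattern can be scripted. This sketch uses only the Python standard library; the URL paths, the `x-api-token` header, and the `STATUS_COMPLETED` value come from the examples above, while everything else (function names, timeout defaults) is an assumption.

```python
import json
import time
import urllib.request

def _get_json(url, token):
    """GET a ContextQA endpoint and decode the JSON response body."""
    req = urllib.request.Request(url, headers={"x-api-token": token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def run_test_plan(host, plan_id, token, fetch=_get_json,
                  poll_interval=15, timeout=1200, sleep=time.sleep):
    """Trigger a test plan, then poll its result until STATUS_COMPLETED."""
    result_id = fetch(f"{host}/api/test-plans/{plan_id}/execute",
                      token)["testPlanResultId"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch(f"{host}/api/test-plan-results/{result_id}", token)["status"]
        if status == "STATUS_COMPLETED":
            return result_id
        sleep(poll_interval)
    raise TimeoutError(f"test plan result {result_id} still pending after {timeout}s")
```

The `fetch` parameter exists so the HTTP layer can be swapped out for a stub in tests, or replaced with your team's preferred HTTP client.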

***

## Test Architecture Best Practices

### Step Groups as Reusable Libraries

Build a `SG_Auth` step group containing your login flow. Reference it in every test case that requires authentication. When the login form changes, update `SG_Auth` once — all test cases inherit the fix automatically.

### Environments for Multi-Stage Testing

Define `staging`, `qa`, and `production` environments with their respective base URLs and API keys. Test Plans reference an environment by name — the same plan runs against any stage without modification.

→ [Environments](https://learning.contextqa.com/execution/environments)

### Knowledge Base for Application Context

Add known UI quirks to the Knowledge Base:

* *"Always dismiss the cookie consent banner before interacting with the page"*
* *"The loading spinner takes up to 8 seconds on the checkout page"*
* *"Use credentials <testuser@corp.com> / TestPass123 for MFA bypass in staging"*

The AI reads these instructions before every execution — reducing false failures from environment-specific behavior.

→ [Knowledge Base](https://learning.contextqa.com/ai-features/knowledge-base)

### Custom Agents for Domain Logic

Create a Custom Agent with a tailored system prompt for complex scenarios:

* A Salesforce-aware agent that understands Lightning UI navigation patterns
* An accessibility agent that verifies ARIA labels on every step
* A performance agent that flags any network request exceeding 2 seconds

→ [Custom Agents](https://learning.contextqa.com/ai-features/custom-agents)

***

## Evidence & Debugging API

Every execution produces a queryable evidence package:

| Tool                    | Returns                                                  |
| ----------------------- | -------------------------------------------------------- |
| `get_test_step_results` | Per-step pass/fail, screenshot URL, assertion detail     |
| `get_console_logs`      | Browser console entries (errors, warnings, info)         |
| `get_network_logs`      | Full HAR network log for the execution                   |
| `get_trace_url`         | Playwright trace viewer URL (`.zip` downloadable)        |
| `get_root_cause`        | AI classification + suggested fix + affected step number |
| `get_ai_reasoning`      | Full AI reasoning chain for the execution                |
| `get_ai_insights`       | Pattern-based insights across multiple executions        |

→ [Execution & Results tools](https://learning.contextqa.com/mcp-server/tool-reference/execution-and-results)
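
A first triage pass usually just needs the failing steps and their screenshots. This sketch assumes `get_test_step_results` returns a list of dicts; the field names (`step_number`, `status`, `screenshot_url`) are plausible guesses, not documented keys — adjust to your actual payload.

```python
def failing_steps(step_results):
    """Collect (step number, screenshot URL) for every non-passing step.

    Field names mirror a plausible get_test_step_results payload; treat
    them as assumptions and adapt to the response you actually receive.
    """
    return [
        (step.get("step_number"), step.get("screenshot_url"))
        for step in step_results
        if step.get("status") != "PASSED"
    ]
```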

***

## Flaky Test Management

ContextQA automatically classifies failures into four categories:

* **Test bug** — the test assertion is incorrect
* **Application bug** — the application has a regression
* **Flaky failure** — the test passes on retry, likely a timing issue
* **Environment issue** — infrastructure or network problem

Use `get_root_cause` to retrieve this classification programmatically and route failures to the correct team automatically.
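
Routing on that classification can be a simple lookup. In this sketch, both the classification strings and the team names are hypothetical — the real `get_root_cause` response format is not specified here, so normalize against whatever your deployment actually returns.

```python
# Hypothetical mapping from the four failure classes to owning teams;
# the classification strings are assumptions, not documented enum values.
ROUTING = {
    "test_bug": "qa-automation",
    "application_bug": "product-dev",
    "flaky_failure": "qa-automation",
    "environment_issue": "infrastructure",
}

def route_failure(root_cause, default="triage"):
    """Pick a destination team from a get_root_cause-style result dict."""
    key = str(root_cause.get("classification", "")).lower().replace(" ", "_")
    return ROUTING.get(key, default)
```

Anything unrecognized falls through to a `triage` queue rather than being silently dropped.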

→ [Flaky Test Detection](https://learning.contextqa.com/reporting/flaky-test-detection)

***

{% hint style="success" %}
**Recommended next steps for SDETs:**

1. [MCP Server Installation](https://learning.contextqa.com/mcp-server/installation-and-setup) — connect your AI agent in 10 minutes
2. [Tool Reference](https://learning.contextqa.com/mcp-server/tool-reference) — full 67-tool catalog with parameters
3. [Agent Integration Guide](https://learning.contextqa.com/mcp-server/agent-integration-guide) — Claude/Cursor integration patterns
4. [CI/CD integrations](https://learning.contextqa.com/integrations/github-actions) — drop-in pipeline configs
{% endhint %}

***

{% hint style="info" %}
**See the platform from an SDET's perspective.** [**Book a Technical Demo →**](https://contextqa.com/book-a-demo/) — A 45-minute deep-dive into MCP tooling, API patterns, and CI/CD integration with your actual test infrastructure.

*SDETs using ContextQA report 70% less time spent on test maintenance.*
{% endhint %}
