# For Developers

{% hint style="info" %}
**Who is this for?** Frontend developers, full-stack engineers, and backend developers who want fast feedback on their changes without becoming QA experts.
{% endhint %}

You're not a tester — but you're responsible for not breaking things. ContextQA integrates directly into your development workflow: generate tests from your tickets before you write the first line of code, run them from your IDE via MCP, and get AI-generated root cause reports in your terminal when something fails.

***

## Developer Workflow Integration

### Generate Tests from Your Jira Ticket

Before writing code, generate test cases from the acceptance criteria in your Jira ticket:

```python
# In Claude / Cursor with ContextQA MCP connected
generate_tests_from_jira_ticket(
    ticket_url="https://yourcompany.atlassian.net/browse/PROJ-456",
    workspace_version_id=YOUR_WV_ID
)
# → Creates 3 test cases covering the acceptance criteria
# → Test Case IDs: 18850, 18851, 18852
```

This gives you a test suite to run against your branch before opening a PR — no QA involvement required for the initial coverage.

→ [AI Test Generation](https://learning.contextqa.com/ai-features/ai-test-generation)

### Run Tests from Your Terminal

With the ContextQA MCP server connected to your AI coding assistant:

```bash
# Ask Claude/Cursor directly:
"Run test case 18850 and tell me if it passes"

# Or use the MCP tool directly:
execute_test_case(test_case_id=18850, workspace_version_id=YOUR_WV_ID)
```

You get back a pass/fail result with screenshot and video evidence — no context switching to a browser.

### Get Root Cause in Seconds

When a test fails, don't stare at a cryptic error. Ask for root cause:

```python
get_root_cause(result_id=26300)
# Returns:
# {
#   "failure_summary": "The 'Add to Cart' button was not found on the product page.",
#   "affected_step": 4,
#   "classification": "APPLICATION_BUG",
#   "suggested_fix": "Check that the ProductCard component renders the button when stock > 0.",
#   "evidence_used": ["screenshot_step_4", "console_log", "har"]
# }
```

The `classification` field tells you immediately: is this your bug or a test problem?
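As a sketch of how you might act on that field in a script or CI hook. Note that only `APPLICATION_BUG` appears in the example above; the other classification value and the action names below are hypothetical placeholders, not the exact API vocabulary:

```python
# Hedged sketch: route a get_root_cause() payload to a next action.
# "APPLICATION_BUG" comes from the example above; "TEST_ISSUE" and the
# action names are illustrative placeholders.
def triage(root_cause: dict) -> str:
    """Map a root-cause payload to a follow-up action."""
    classification = root_cause.get("classification", "UNKNOWN")
    if classification == "APPLICATION_BUG":
        return "file_defect"      # your code broke: file a bug, fix the app
    if classification == "TEST_ISSUE":
        return "fix_test"         # the test itself needs maintenance
    return "investigate"          # unclear: review the evidence manually

print(triage({"classification": "APPLICATION_BUG"}))  # → file_defect
```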

→ [Failure Analysis](https://learning.contextqa.com/reporting/failure-analysis)

***

## CI/CD Quality Gate

Add a ContextQA quality gate to your pull request checks. Tests run automatically when you push — the PR is blocked until all tests pass.

**GitHub Actions example:**

```yaml
name: ContextQA Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Test Plan
        id: trigger
        run: |
          RESULT=$(curl -s -X GET \
            "https://server.contextqa.com/api/test-plans/${{ vars.TEST_PLAN_ID }}/execute" \
            -H "x-api-token: ${{ secrets.CONTEXTQA_TOKEN }}")
          echo "result_id=$(echo "$RESULT" | jq -r '.testPlanResultId')" >> "$GITHUB_OUTPUT"

      - name: Wait for Results
        run: |
          # Poll every 15s, up to 15 minutes, instead of looping forever
          for _ in $(seq 1 60); do
            STATUS=$(curl -s \
              "https://server.contextqa.com/api/test-plan-results/${{ steps.trigger.outputs.result_id }}" \
              -H "x-api-token: ${{ secrets.CONTEXTQA_TOKEN }}" | jq -r '.status')
            echo "Status: $STATUS"
            [ "$STATUS" = "STATUS_COMPLETED" ] && exit 0
            sleep 15
          done
          echo "Timed out waiting for test plan to complete"
          exit 1

      - name: Check Pass Rate
        run: |
          RESULT=$(curl -s \
            "https://server.contextqa.com/api/test-plan-results/${{ steps.trigger.outputs.result_id }}" \
            -H "x-api-token: ${{ secrets.CONTEXTQA_TOKEN }}")
          FAILED=$(echo "$RESULT" | jq -r '.failedCount')
          [ "$FAILED" -gt "0" ] && exit 1 || exit 0
```

→ [GitHub Actions Integration](https://learning.contextqa.com/integrations/github-actions)

***

## Generating Tests from Your Codebase

If you have code changes that need test coverage, generate tests based on the diff:

```python
generate_tests_from_code_change(
    # Describe the change or paste the diff summary
    description="Added a discount code field to the checkout form. Valid codes apply 10-20% off.",
    workspace_version_id=YOUR_WV_ID
)
```

For API changes, generate tests directly from your OpenAPI/Swagger spec:

```python
generate_tests_from_swagger(
    swagger_url="https://staging.yourapp.com/api/docs/swagger.json",
    workspace_version_id=YOUR_WV_ID
)
```

→ [Creating API Tests](https://learning.contextqa.com/api-testing/creating-api-tests)

***

## API Testing for Backend Developers

ContextQA supports REST API testing natively — no browser required:

{% tabs %}
{% tab title="Create API Test" %}

```
Step 1: POST https://api.yourapp.com/users
Body: {"email": "test@example.com", "password": "SecurePass123"}
Assert: Status 201
Assert: Response body contains "userId"
```

{% endtab %}

{% tab title="Chain API Calls" %}
API Chaining lets you use response values in subsequent requests:

```
Step 1: POST /auth/login → extract token from response
Step 2: GET /users/profile → use token in Authorization header
Step 3: Assert profile.email equals "test@example.com"
```

{% endtab %}

{% tab title="Validate Responses" %}
Assert on:

* HTTP status codes
* Response body fields (exact match, contains, regex)
* Response headers
* Response time (SLA validation)
* Schema compliance (JSON Schema validation)
{% endtab %}
{% endtabs %}
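To make the last assertion type concrete, here is a minimal, illustrative sketch of what a schema-compliance check does. ContextQA's own validator supports full JSON Schema; this toy version only checks required keys and basic Python types:

```python
# Toy schema checker: every required field must be present and have the
# expected type. Illustrative only -- real JSON Schema validation covers
# much more (nesting, formats, patterns, etc.).
def check_schema(body: dict, schema: dict) -> list[str]:
    """Return a list of violations (empty list = compliant)."""
    errors = []
    for field, expected in schema.items():
        if field not in body:
            errors.append(f"missing required field: {field}")
        elif not isinstance(body[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

# Validate a user-creation response like the one in the first tab
schema = {"userId": str, "email": str}
print(check_schema({"userId": "u_123", "email": "test@example.com"}, schema))  # → []
print(check_schema({"email": 42}, schema))
# → ['missing required field: userId', 'email: expected str']
```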

→ [API Testing Overview](https://learning.contextqa.com/api-testing/api-testing)

***

## Connecting Your AI Coding Assistant

Add the ContextQA MCP server to Claude Desktop, Cursor, or any MCP-compatible client:

**Claude Desktop (`~/.claude/claude_desktop_config.json`):**

```json
{
  "mcpServers": {
    "contextqa": {
      "command": "docker",
      "args": ["run", "-i", "--rm",
        "-e", "API_TOKEN=your-token-here",
        "contextqa/mcp-server:latest"
      ]
    }
  }
}
```

Once connected, you can ask your AI assistant to:

* *"Run the smoke test suite on the staging environment"*
* *"Generate tests for the new payment flow I just built"*
* *"Why did test case 18850 fail in the last run?"*
* *"Create a test plan that covers all checkout tests"*

→ [MCP Installation & Setup](https://learning.contextqa.com/mcp-server/installation-and-setup) | [Agent Integration Guide](https://learning.contextqa.com/mcp-server/agent-integration-guide)

***

## Reproducing Bugs from Tickets

When a bug is reported in Jira, reproduce it automatically:

```python
reproduce_from_ticket(
    ticket_url="https://yourcompany.atlassian.net/browse/BUG-789",
    workspace_version_id=YOUR_WV_ID
)
# Analyzes the bug description and steps to reproduce
# Generates and executes a reproduction test case
# Returns: pass/fail + evidence if reproduced
```

→ [Bug, Defect & Advanced Testing tools](https://learning.contextqa.com/mcp-server/tool-reference/bug-defect-and-advanced)

***

{% hint style="success" %}
**Recommended path for developers:**

1. [MCP Installation](https://learning.contextqa.com/mcp-server/installation-and-setup) — connect to your IDE in 10 minutes
2. [AI Test Generation](https://learning.contextqa.com/ai-features/ai-test-generation) — generate tests from Jira tickets
3. [Jira Integration](https://learning.contextqa.com/integrations/jira) — connect Jira for automatic defect filing
4. [GitHub Actions](https://learning.contextqa.com/integrations/github-actions) — add a CI quality gate
5. [API Testing](https://learning.contextqa.com/api-testing/api-testing) — test your endpoints directly
{% endhint %}

***

{% hint style="info" %}
**Ship faster without breaking things.** [**Book a Developer Demo →**](https://contextqa.com/book-a-demo/) — See how to integrate ContextQA into your PR workflow in under 15 minutes.

*Development teams using ContextQA catch 80% more regressions before they reach production.*
{% endhint %}
