# GitHub Actions

{% hint style="info" %}
**Who is this for?** SDETs, developers, and engineering managers who want to run ContextQA tests automatically on every pull request or push from their GitHub Actions workflow.
{% endhint %}

Trigger ContextQA tests automatically from your GitHub Actions CI/CD pipeline. When a developer opens a pull request or pushes to a protected branch, ContextQA runs the configured test plan on its cloud infrastructure, reports results back to the workflow, and can block merges when tests fail — all without requiring any browser installation on the GitHub Actions runner.

***

## How It Works

1. A GitHub Actions workflow step calls the ContextQA API to trigger a test plan execution
2. The workflow polls ContextQA for the execution result (ContextQA handles the browser execution in the cloud)
3. When execution completes, the workflow reports the result:
   * If tests passed: the workflow step exits with code 0 (success)
   * If tests failed: the workflow step exits with code 1, failing the workflow job

Because ContextQA runs tests on its own infrastructure, your GitHub Actions runner needs no browser, no Playwright installation, and no X display server — just the ability to make outbound HTTPS requests.

***

## Prerequisites

Before setting up the workflow:

1. A ContextQA workspace with at least one configured test plan
2. Your ContextQA credentials stored as GitHub encrypted secrets:
   * Go to your GitHub repository → **Settings → Secrets and variables → Actions → New repository secret**
   * Add `CONTEXTQA_USERNAME` with your ContextQA account email
   * Add `CONTEXTQA_PASSWORD` with your ContextQA account password
3. Your test plan ID (visible in the ContextQA UI under **Test Execution → Test Plans**)
   * Store this as a repository variable: **Settings → Secrets and variables → Actions → Variables → New repository variable**
   * Add `CONTEXTQA_PLAN_ID` with your plan ID
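
The steps above can also be scripted with the GitHub CLI (`gh`), run from a clone of the repository with an authenticated `gh` session; the values shown are placeholders:

```shell
# Store ContextQA credentials as encrypted repository secrets
gh secret set CONTEXTQA_USERNAME --body "you@example.com"
gh secret set CONTEXTQA_PASSWORD --body "your-password"

# Store the test plan ID as a (non-secret) repository variable
gh variable set CONTEXTQA_PLAN_ID --body "12345"
```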

***

## Workflow: Run Tests on Push and Pull Request

The following workflow triggers a ContextQA test plan on every push to `main` or `develop` and on every pull request targeting `main`. It authenticates, starts the execution, polls for completion, and fails the workflow if tests fail.

```yaml
name: Run ContextQA Tests
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  test:
    name: ContextQA Test Plan
    runs-on: ubuntu-latest

    steps:
      - name: Run ContextQA Test Plan
        id: test
        env:
          CONTEXTQA_USERNAME: ${{ secrets.CONTEXTQA_USERNAME }}
          CONTEXTQA_PASSWORD: ${{ secrets.CONTEXTQA_PASSWORD }}
          CONTEXTQA_PLAN_ID: ${{ vars.CONTEXTQA_PLAN_ID }}
        run: |
          pip install requests
          python -c "
          import requests, os, time, sys

          username = os.environ['CONTEXTQA_USERNAME']
          password = os.environ['CONTEXTQA_PASSWORD']
          plan_id = os.environ['CONTEXTQA_PLAN_ID']

          # Authenticate
          auth_resp = requests.post(
              'https://api.contextqa.com/auth/login',
              json={'username': username, 'password': password},
              timeout=30
          )
          auth_resp.raise_for_status()
          token = auth_resp.json()['token']
          headers = {'Authorization': f'Bearer {token}'}

          # Trigger test plan execution
          run_resp = requests.post(
              f'https://api.contextqa.com/test-plans/{plan_id}/run',
              headers=headers,
              timeout=30
          )
          run_resp.raise_for_status()
          execution_id = run_resp.json()['id']
          print(f'Execution started: {execution_id}')

          # Poll for result (up to 30 minutes)
          for attempt in range(60):
              time.sleep(30)
              status_resp = requests.get(
                  f'https://api.contextqa.com/executions/{execution_id}',
                  headers=headers,
                  timeout=30
              )
              status_resp.raise_for_status()
              data = status_resp.json()
              result = data.get('result', 'RUNNING')
              print(f'Attempt {attempt + 1}: {result}')

              if result == 'SUCCESS':
                  print('All tests passed.')
                  print(f\"Results: {data.get('report_url', '')}\")
                  sys.exit(0)
              elif result in ['FAILURE', 'ERROR', 'ABORTED']:
                  print(f'Tests failed: {result}')
                  print(f\"Results: {data.get('report_url', '')}\")
                  sys.exit(1)

          print('Timeout: execution did not complete within 30 minutes.')
          sys.exit(1)
          "
```

***

## Workflow: Generate Tests from a Pull Request Diff

For teams that want to automatically generate and run tests specifically targeting the code changes in a PR:

```yaml
name: Generate and Run Tests from PR
on:
  pull_request:
    branches: [main]

jobs:
  generate-and-test:
    name: Generate Tests from PR Changes
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Install ContextQA MCP
        run: |
          pip install uv
          git clone https://github.com/indivatools/cqa-mcp.git /tmp/cqa-mcp
          cd /tmp/cqa-mcp && uv sync

      - name: Generate tests from PR diff
        env:
          CONTEXTQA_USERNAME: ${{ secrets.CONTEXTQA_USERNAME }}
          CONTEXTQA_PASSWORD: ${{ secrets.CONTEXTQA_PASSWORD }}
        run: |
          git diff origin/main...HEAD > /tmp/pr.diff
          cd /tmp/cqa-mcp
          uv run python -c "
          import os, sys

          # Make the bundled client importable
          sys.path.insert(0, '.')
          from app.contextqa_client import ContextQAClient

          client = ContextQAClient(
              username=os.environ['CONTEXTQA_USERNAME'],
              password=os.environ['CONTEXTQA_PASSWORD']
          )

          # Read the diff from a file rather than interpolating it into the
          # script via shell expansion, which breaks on quotes in the diff
          diff_text = open('/tmp/pr.diff').read()
          app_url = 'https://staging.myapp.com'

          # Generate tests from the diff
          result = client.generate_tests_from_code_change(
              diff_text=diff_text,
              app_url=app_url
          )

          print(f'Generated {len(result[\"test_cases\"])} test cases')

          # Execute each generated test
          all_passed = True
          for test in result['test_cases']:
              exec_result = client.execute_test_case(test_case_id=test['id'])
              print(f\"Test {test['name']}: {exec_result['result']}\")
              if exec_result['result'] != 'PASSED':
                  all_passed = False

          sys.exit(0 if all_passed else 1)
          "
```

***

## Posting Results as a PR Comment

Add a step after the test run to post the result back to the pull request as a comment, giving reviewers a direct link to the full ContextQA report. The snippet reads the previous step's outcome via `steps.test.outcome`, so the step that runs the tests must declare `id: test`:

```yaml
      - name: Comment on PR with results
        if: always() && github.event_name == 'pull_request'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          STATUS="${{ steps.test.outcome }}"
          if [ "$STATUS" = "success" ]; then
            ICON="All ContextQA tests passed."
          else
            ICON="ContextQA tests failed. Review the execution report for details."
          fi
          gh pr comment "${{ github.event.pull_request.number }}" \
            --body "$ICON"
```
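
The comment can also link directly to the report if the test step publishes it as a step output. GitHub Actions collects step outputs from the file named by the `GITHUB_OUTPUT` environment variable; a minimal helper the polling script could call before exiting (the `report_url` output name is an assumption):

```python
import os

def set_step_output(name, value):
    """Append a name=value pair to the file GitHub Actions reads step outputs from."""
    path = os.environ.get('GITHUB_OUTPUT')
    if not path:  # running outside GitHub Actions; nothing to record
        return False
    with open(path, 'a') as fh:
        fh.write(f'{name}={value}\n')
    return True
```

In the workflow, call `set_step_output('report_url', data.get('report_url', ''))` just before `sys.exit(...)`, give the test step `id: test`, and reference `${{ steps.test.outputs.report_url }}` in the comment body.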

***

## Blocking Merges on Test Failure

To require ContextQA tests to pass before a PR can be merged:

1. In your GitHub repository, go to **Settings → Branches → Branch protection rules**
2. Create or edit the rule for your target branch (e.g., `main`)
3. Enable **Require status checks to pass before merging**
4. Search for and add the job name from your workflow (e.g., `ContextQA Test Plan` or `test`)
5. Optionally enable **Require branches to be up to date before merging**
6. Save the rule

With this configuration, GitHub blocks the merge button until the ContextQA workflow job reports a successful exit.
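
The same rule can be applied from the command line through GitHub's branch-protection REST API (a sketch using the GitHub CLI; replace `OWNER/REPO`, and note the endpoint requires all four top-level keys even when unused):

```shell
gh api -X PUT repos/OWNER/REPO/branches/main/protection \
  --input - <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["ContextQA Test Plan"] },
  "enforce_admins": false,
  "required_pull_request_reviews": null,
  "restrictions": null
}
EOF
```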

***

## Running Specific Suites or Test Cases

To run a specific test suite instead of a full plan, modify the API call in the workflow:

```python
# Run a specific test suite
run_resp = requests.post(
    f'https://api.contextqa.com/test-suites/{suite_id}/run',
    headers=headers
)

# Run a single test case
run_resp = requests.post(
    f'https://api.contextqa.com/test-cases/{test_case_id}/run',
    headers=headers
)
```

The polling pattern is identical — use the returned `execution_id` to poll for the result.
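
That shared pattern can be factored into one helper, sketched here with the status lookup injected as a callable so the same loop serves plans, suites, and single test cases (the result strings mirror those used earlier on this page):

```python
import time

def poll_execution(fetch_status, interval=30, max_attempts=60):
    """Poll until the execution reaches a terminal result.

    fetch_status: zero-argument callable returning the current result
    string (e.g. 'RUNNING', 'SUCCESS', 'FAILURE').
    Returns the terminal result, or 'TIMEOUT' if it never completes.
    """
    terminal = {'SUCCESS', 'FAILURE', 'ERROR', 'ABORTED'}
    for attempt in range(max_attempts):
        result = fetch_status()
        print(f'Attempt {attempt + 1}: {result}')
        if result in terminal:
            return result
        time.sleep(interval)
    return 'TIMEOUT'
```

In the workflow, `fetch_status` would wrap the `GET https://api.contextqa.com/executions/{execution_id}` call and return `data.get('result', 'RUNNING')`.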

***

## Matrix Builds: Running Across Multiple Browsers

To run the same test plan across multiple browsers in parallel:

```yaml
jobs:
  test:
    name: ContextQA Tests (${{ matrix.browser }})
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        browser: [chrome, firefox, safari]

    steps:
      - name: Run ContextQA Tests on ${{ matrix.browser }}
        env:
          CONTEXTQA_USERNAME: ${{ secrets.CONTEXTQA_USERNAME }}
          CONTEXTQA_PASSWORD: ${{ secrets.CONTEXTQA_PASSWORD }}
        run: |
          pip install requests
          python -c "
          import requests, os, time, sys

          # ... auth code ...

          run_resp = requests.post(
              f'https://api.contextqa.com/test-plans/{plan_id}/run',
              headers=headers,
              json={'browser': '${{ matrix.browser }}'}
          )
          # ... polling code ...
          "
```

Each browser runs in its own parallel job. With `fail-fast: false` set, all browsers run to completion even if one fails, giving you a complete cross-browser result set.

***

## Using the ContextQA MCP Server in CI

For CI workflows that use an AI agent or need access to the full 67-tool interface (not just plan execution), you can install and run the ContextQA MCP server on the GitHub Actions runner:

```yaml
      - name: Set up ContextQA MCP Server
        run: |
          git clone https://github.com/indivatools/cqa-mcp.git /tmp/cqa-mcp
          cd /tmp/cqa-mcp
          pip install uv
          uv sync

      - name: Start MCP Server
        env:
          CONTEXTQA_USERNAME: ${{ secrets.CONTEXTQA_USERNAME }}
          CONTEXTQA_PASSWORD: ${{ secrets.CONTEXTQA_PASSWORD }}
        run: |
          cd /tmp/cqa-mcp
          uv run python -m app.fastmcp_server &  # run inside the uv-managed environment
          sleep 3  # Wait for server to start
          curl http://localhost:8080/health  # Verify it's up

      - name: Run custom agent workflow
        run: |
          cd /tmp/cqa-mcp
          uv run python -c "
          # Full access to all 67 MCP tools via Python
          from app.contextqa_client import ContextQAClient
          import os

          client = ContextQAClient(
              username=os.environ['CONTEXTQA_USERNAME'],
              password=os.environ['CONTEXTQA_PASSWORD']
          )

          # Generate tests from PR diff
          import subprocess
          # Diff the checked-out repo (GITHUB_WORKSPACE), not /tmp/cqa-mcp
          diff = subprocess.check_output(
              ['git', 'diff', 'origin/main...HEAD'],
              cwd=os.environ['GITHUB_WORKSPACE']
          ).decode()

          tests = client.generate_tests_from_code_change(
              diff_text=diff,
              app_url='https://staging.myapp.com'
          )

          for test in tests['test_cases']:
              result = client.execute_test_case(test_case_id=test['id'])
              if result['result'] == 'FAILED':
                  # Auto-create Jira ticket for failures
                  client.create_defect_ticket(
                      execution_id=result['execution_id'],
                      project_id='MYAPP'
                  )
          "
        env:
          CONTEXTQA_USERNAME: ${{ secrets.CONTEXTQA_USERNAME }}
          CONTEXTQA_PASSWORD: ${{ secrets.CONTEXTQA_PASSWORD }}
```

***

## Environment-Specific Runs

For workflows that deploy to different environments (staging vs. production), pass the environment ID to target the correct base URL:

```python
# Get the environment ID for staging from ContextQA
envs = client.get_environments()
staging_env = next(e for e in envs if e['name'] == 'Staging')

# Run tests against staging
run_resp = requests.post(
    f'https://api.contextqa.com/test-plans/{plan_id}/run',
    headers=headers,
    json={'environment_id': staging_env['id']}
)
```

This ensures your CI tests run against the environment that was just deployed, using the correct base URL and configuration.
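
One caveat: `next(...)` without a default raises `StopIteration` when no environment matches, which surfaces in CI as a confusing traceback. A small lookup helper with an explicit miss (a sketch) makes the failure mode obvious:

```python
def find_environment_id(environments, name):
    """Return the id of the environment with the given name, or None if absent."""
    for env in environments:
        if env.get('name') == name:
            return env.get('id')
    return None
```

Call `find_environment_id(envs, 'Staging')` and fail fast with a clear error message when it returns `None`.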

***

## Storing Credentials Securely

Never hardcode ContextQA credentials in workflow files. Always use GitHub's encrypted secrets:

| Secret Name          | Value                           |
| -------------------- | ------------------------------- |
| `CONTEXTQA_USERNAME` | Your ContextQA account email    |
| `CONTEXTQA_PASSWORD` | Your ContextQA account password |

For organization-wide use, store these as organization-level secrets rather than per-repository secrets so they are available across all repositories without duplication.

For extra security, create a dedicated ContextQA service account for CI (not your personal login). This allows you to revoke CI access independently and keeps the audit log clean.

***

## Troubleshooting

**Authentication fails in CI but works locally:**

* Verify the secrets are named exactly `CONTEXTQA_USERNAME` and `CONTEXTQA_PASSWORD` (case-sensitive)
* Check that the secrets are available to the repository (organization secrets may require explicit repository access)
* Test the credentials manually: `curl -X POST https://api.contextqa.com/auth/login -H 'Content-Type: application/json' -d '{"username":"...", "password":"..."}'`

**Workflow times out before tests complete:**

* Increase the polling duration or the number of polling attempts
* Check if the test plan contains very long-running tests — consider splitting into multiple plans
* Verify ContextQA execution infrastructure is reachable from the GitHub Actions runner network

**Tests pass locally but fail in CI:**

* Check if the application is deployed and accessible at the URL being tested before the tests run
* Add a health check step before triggering ContextQA to verify the staging environment is up
* Check if the application requires VPN or IP allowlisting — GitHub Actions runners use ephemeral IPs that change each run
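
For example, a pre-flight step can block until the deployed app answers before the ContextQA trigger runs (the URL and `/health` path are assumptions for your app):

```yaml
      - name: Wait for staging to be reachable
        run: |
          # Retry for up to ~60 seconds; --fail makes curl exit non-zero
          # on HTTP errors, failing the job if staging never comes up
          curl --silent --show-error --fail \
               --retry 10 --retry-delay 6 --retry-all-errors \
               https://staging.myapp.com/health
```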

## Related Pages

* [Jenkins](https://learning.contextqa.com/integrations/jenkins) — alternative CI/CD integration
* [GitLab CI](https://learning.contextqa.com/integrations/gitlab-ci) — alternative CI/CD integration
* [Environments](https://learning.contextqa.com/execution/environments) — configure base URLs and variables for CI runs
* [Running Tests](https://learning.contextqa.com/execution/running-tests) — understand test execution options
* [MCP Server](https://learning.contextqa.com/mcp-server/overview) — trigger tests programmatically from AI agents

{% hint style="info" %}
**Connect ContextQA to your CI/CD pipeline in 15 minutes.** [**Book a Demo →**](https://contextqa.com/book-a-demo/) — See the full integration walkthrough for your existing toolchain.
{% endhint %}
