# CircleCI

{% hint style="info" %}
**Who is this for?** SDETs, developers, and engineering managers who use CircleCI and want to run ContextQA test plans as part of their CI workflow and fail builds on test failures.
{% endhint %}

> **CircleCI ContextQA integration:** A CI/CD configuration where a CircleCI workflow job triggers a ContextQA test plan via the REST API, polls for execution completion, and fails the CircleCI workflow when tests fail — providing automated release gating in CircleCI pipelines.

CircleCI workflows coordinate builds, deployments, and validation steps. Adding ContextQA as a dedicated job in your workflow ensures that every deployment to staging or production is validated by your full test suite before the workflow proceeds. This page provides a complete `config.yml` example, explains credential management in CircleCI, and covers result publication.

## Prerequisites

Before configuring the CircleCI integration:

* A ContextQA account with at least one configured test plan
* The test plan ID (visible in the URL when viewing the plan: `/test-plans/<plan_id>`)
* A ContextQA service account (dedicated email and password — do not use a personal account)
* A CircleCI project connected to your repository
* CircleCI CLI or access to the CircleCI web interface for managing environment variables

## Storing credentials as CircleCI environment variables

CircleCI environment variables are encrypted at rest and masked in job output. Store ContextQA credentials at the project level:

1. In CircleCI, navigate to your **Project Settings**.
2. Click **Environment Variables** in the left sidebar.
3. Click **Add Environment Variable**.
4. Set **Name** to `CONTEXTQA_USERNAME` and **Value** to your service account email. Click **Add Environment Variable**.
5. Repeat with **Name** `CONTEXTQA_PASSWORD` and your service account password.

For organization-wide reuse across multiple projects, store the variables in a CircleCI **Context**:

1. Navigate to **Organization Settings → Contexts**.
2. Create a context named `contextqa` (or any name that fits your naming convention).
3. Add `CONTEXTQA_USERNAME` and `CONTEXTQA_PASSWORD` to the context.
4. Reference the context in your workflow using the `context` key.

## Complete config.yml example

```yaml
version: 2.1

executors:
  contextqa-executor:
    docker:
      - image: cimg/base:stable
    resource_class: small

jobs:
  build:
    executor: contextqa-executor
    steps:
      - checkout
      - run:
          name: Build application
          command: echo "Build steps here"

  deploy_staging:
    executor: contextqa-executor
    steps:
      - run:
          name: Deploy to staging
          command: echo "Deploy steps here"

  contextqa_test:
    executor: contextqa-executor
    environment:
      CONTEXTQA_BASE_URL: https://server.contextqa.com
      CONTEXTQA_PLAN_ID: <your_test_plan_id>
    steps:
      - run:
          name: Authenticate with ContextQA
          command: |
            AUTH_RESPONSE=$(curl -s -X POST "${CONTEXTQA_BASE_URL}/api/v1/auth/login" \
              -H "Content-Type: application/json" \
              -d "{\"email\":\"${CONTEXTQA_USERNAME}\",\"password\":\"${CONTEXTQA_PASSWORD}\"}")
            # Use .get() and suppress parse errors so the empty-token check below
            # fires instead of the step dying on a Python traceback
            TOKEN=$(echo "$AUTH_RESPONSE" | python3 -c "import sys,json; print(json.load(sys.stdin).get('token',''))" 2>/dev/null || true)
            if [ -z "$TOKEN" ]; then
              echo "ERROR: Authentication failed."
              echo "Response: $AUTH_RESPONSE"
              exit 1
            fi
            echo "export CONTEXTQA_TOKEN=${TOKEN}" >> $BASH_ENV
            echo "Authentication successful."

      - run:
          name: Trigger ContextQA test plan
          command: |
            # Test plan execute endpoint uses GET, not POST
            TRIGGER_RESPONSE=$(curl -s -X GET "${CONTEXTQA_BASE_URL}/api/v1/testplans/${CONTEXTQA_PLAN_ID}/execute" \
              -H "Authorization: Bearer ${CONTEXTQA_TOKEN}")
            EXECUTION_ID=$(echo "$TRIGGER_RESPONSE" | python3 -c "import sys,json; print(json.load(sys.stdin).get('executionId',''))" 2>/dev/null || true)
            if [ -z "$EXECUTION_ID" ]; then
              echo "ERROR: Failed to start test plan execution."
              echo "Response: $TRIGGER_RESPONSE"
              exit 1
            fi
            echo "ContextQA execution started: ${EXECUTION_ID}"
            echo "export CONTEXTQA_EXECUTION_ID=${EXECUTION_ID}" >> $BASH_ENV

      - run:
          name: Poll for test completion
          command: |
            STATUS="RUNNING"
            ATTEMPT=0
            MAX_ATTEMPTS=60

            while [ "$STATUS" = "RUNNING" ] || [ "$STATUS" = "PENDING" ]; do
              if [ "$ATTEMPT" -ge "$MAX_ATTEMPTS" ]; then
                echo "ERROR: ContextQA timed out after $((MAX_ATTEMPTS * 30)) seconds."
                exit 1
              fi
              sleep 30
              ATTEMPT=$((ATTEMPT + 1))
              STATUS_RESPONSE=$(curl -s -X GET \
                "${CONTEXTQA_BASE_URL}/api/v1/executions/${CONTEXTQA_EXECUTION_ID}/status" \
                -H "Authorization: Bearer ${CONTEXTQA_TOKEN}")
              # Treat a transient API or parse error as still-running so the loop retries
              STATUS=$(echo "$STATUS_RESPONSE" | python3 -c "import sys,json; print(json.load(sys.stdin).get('status','RUNNING'))" 2>/dev/null || echo "RUNNING")
              echo "Attempt ${ATTEMPT}: ContextQA status = ${STATUS}"
            done

            REPORT_URL="https://app.contextqa.com/executions/${CONTEXTQA_EXECUTION_ID}"
            echo "Final status: ${STATUS}"
            echo "Report: ${REPORT_URL}"

            # Write result to a file for the artifact step
            mkdir -p /tmp/contextqa-results
            echo "Status: ${STATUS}" > /tmp/contextqa-results/result.txt
            echo "Execution ID: ${CONTEXTQA_EXECUTION_ID}" >> /tmp/contextqa-results/result.txt
            echo "Report URL: ${REPORT_URL}" >> /tmp/contextqa-results/result.txt

            echo "export CONTEXTQA_STATUS=${STATUS}" >> $BASH_ENV
            echo "export CONTEXTQA_REPORT_URL=${REPORT_URL}" >> $BASH_ENV

            if [ "$STATUS" = "FAILED" ]; then
              echo "ContextQA tests FAILED. See report: ${REPORT_URL}"
              exit 1
            fi

      - store_artifacts:
          path: /tmp/contextqa-results
          destination: contextqa-results

workflows:
  build_test_deploy:
    jobs:
      - build
      - deploy_staging:
          requires:
            - build
      - contextqa_test:
          requires:
            - deploy_staging
          context:
            - contextqa
```

## How the config.yml works

**Executor:** The `cimg/base:stable` image includes `curl` and Python 3, which are used for HTTP calls and JSON parsing respectively. The `small` resource class is sufficient — the CircleCI job itself does nothing computationally intensive; it only issues HTTP requests and sleeps between polls.

**Environment variables:** The `CONTEXTQA_BASE_URL` and `CONTEXTQA_PLAN_ID` are set as non-secret environment variables directly in the job definition. The secret credentials (`CONTEXTQA_USERNAME`, `CONTEXTQA_PASSWORD`) come from the `contextqa` CircleCI Context referenced in the workflow definition.

**$BASH\_ENV for variable passing:** CircleCI steps within the same job share environment through `$BASH_ENV`, a file that CircleCI sources before each step. Appending a line such as `echo 'export VAR=value' >> $BASH_ENV` therefore makes `VAR` available in every subsequent step of the same job. The `CONTEXTQA_TOKEN`, `CONTEXTQA_EXECUTION_ID`, and final status variables are all passed this way.
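
A minimal two-step sketch of this mechanism (step names and the `GREETING` variable are illustrative):

```yaml
steps:
  - run:
      name: Export a value for later steps
      command: echo 'export GREETING="hello from step one"' >> "$BASH_ENV"
  - run:
      name: Use the value
      # CircleCI sources $BASH_ENV before this step runs, so GREETING is set here
      command: echo "GREETING is ${GREETING}"
```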

**GET request for test plan execution:** The trigger step uses `curl -X GET`. This is correct — ContextQA's execute endpoint is a GET request. Using POST to this endpoint returns a 405 error.

**Polling logic:** The polling step sleeps 30 seconds between attempts and allows a maximum of 60 attempts (30 minutes total). Adjust `MAX_ATTEMPTS` to suit your suite's typical run time. The loop exits when status is neither `RUNNING` nor `PENDING`. A `FAILED` status causes the step to exit with code 1, which fails the CircleCI job.
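
To size `MAX_ATTEMPTS` for a different time budget, a small helper (hypothetical, assuming the 30-second interval from the config above) converts minutes into attempts:

```shell
#!/usr/bin/env bash
POLL_INTERVAL=30  # seconds between polls, matching the example config

# Convert a timeout budget in minutes into a MAX_ATTEMPTS value
minutes_to_attempts() {
  local minutes=$1
  echo $(( minutes * 60 / POLL_INTERVAL ))
}

minutes_to_attempts 30   # prints 60, the default in the example config
minutes_to_attempts 45   # prints 90, for a 45-minute suite
```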

**Workflow dependency:** The `contextqa_test` job lists `deploy_staging` in its `requires` block. CircleCI will not start the test job until the deploy job completes successfully. If the deploy fails, the test job is skipped automatically.

## Storing the execution result as a CircleCI artifact

The `store_artifacts` step publishes the `/tmp/contextqa-results/result.txt` file to CircleCI. This file contains the execution status, execution ID, and report URL. After the job completes, navigate to the **Artifacts** tab of the job in the CircleCI dashboard to find the file.

The result file is written before the failing `exit 1` in the polling step. Be aware that CircleCI skips a job's remaining steps once a step fails, so a `store_artifacts` step placed after a failing `run` step may not execute. If the artifact matters on failure, defer the failing exit until after `store_artifacts`. The report URL is also printed in the step output, so it remains accessible even when tests fail.
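
If you need the artifact uploaded even when tests fail, one pattern is to skip the `exit 1` in the polling step and fail in a dedicated final step instead. A sketch, reusing the `CONTEXTQA_STATUS` and `CONTEXTQA_REPORT_URL` variables exported via `$BASH_ENV`:

```yaml
      # Polling step writes the result file but does NOT exit 1 on FAILED
      - store_artifacts:
          path: /tmp/contextqa-results
          destination: contextqa-results
      - run:
          name: Fail job on ContextQA test failure
          command: |
            if [ "$CONTEXTQA_STATUS" = "FAILED" ]; then
              echo "ContextQA tests FAILED. See report: ${CONTEXTQA_REPORT_URL}"
              exit 1
            fi
```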

## Using a CircleCI Context for shared credentials

If multiple projects in your CircleCI organization need to access ContextQA, use a Context rather than per-project environment variables:

1. Navigate to **Organization Settings → Contexts → Create Context**.
2. Name the context `contextqa`.
3. Add `CONTEXTQA_USERNAME` and `CONTEXTQA_PASSWORD`.
4. In each project's `config.yml`, reference the context under the workflow job definition using the `context` key as shown in the example.

Contexts are governed by security groups in CircleCI. You can restrict which projects and team members can use the `contextqa` context, preventing unauthorized access to your test infrastructure.

## Failing the workflow on test failure

When the `contextqa_test` job fails (exit code 1 from the polling step), CircleCI marks the job as failed and the overall workflow status becomes failed. Any jobs in the workflow that list `contextqa_test` in their `requires` block will not run. This provides automatic release gating: a production deploy job that requires the test job will never run if tests fail.
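
For example, extending the workflow above with a hypothetical `deploy_production` job gated on the test job:

```yaml
workflows:
  build_test_deploy:
    jobs:
      - build
      - deploy_staging:
          requires: [build]
      - contextqa_test:
          requires: [deploy_staging]
          context: [contextqa]
      - deploy_production:
          # Runs only if contextqa_test (and everything before it) succeeded
          requires:
            - contextqa_test
```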

## Frequently Asked Questions

### Why use Python 3 for JSON parsing instead of jq?

The `cimg/base:stable` image ships Python 3 by default. `jq` is not always present and would require an installation step. Python's standard library `json` module is reliable and avoids adding dependencies. If you prefer `jq`, add `sudo apt-get update && sudo apt-get install -y jq` to a `run` step before the authentication step.
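
For comparison, both approaches extract the same field from a sample auth response (the JSON shape follows the example above; the token value is illustrative):

```shell
RESPONSE='{"token":"abc123"}'

# Python 3 (preinstalled in cimg/base):
echo "$RESPONSE" | python3 -c "import sys,json; print(json.load(sys.stdin)['token'])"
# prints: abc123

# jq equivalent, after installing jq:
# echo "$RESPONSE" | jq -r '.token'
```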

### Can I reuse the ContextQA token across multiple pipeline runs?

No. Obtain a fresh token at the start of each job. ContextQA tokens have a limited lifetime, and storing a token in a CircleCI environment variable that persists across builds would eventually result in authentication failures.

### How do I trigger a ContextQA test plan on a schedule in CircleCI rather than on every push?

Use CircleCI's scheduled pipelines feature. In the CircleCI web UI, navigate to **Project Settings → Triggers** and create a scheduled trigger, choosing the frequency and the branch to run against. The `contextqa_test` job will run as part of that scheduled workflow without requiring a code push.
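
CircleCI also accepts a schedule declared directly in `config.yml` via a workflow `triggers` block (an older mechanism that 2.1 configs still support; the cron expression and branch name below are illustrative):

```yaml
workflows:
  nightly_contextqa:
    triggers:
      - schedule:
          cron: "0 2 * * *"   # daily at 02:00 UTC
          filters:
            branches:
              only: main
    jobs:
      - contextqa_test:
          context:
            - contextqa
```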

### Can I run multiple test plans in the same CircleCI workflow?

Yes. Add additional jobs following the `contextqa_test` pattern, each with a different `CONTEXTQA_PLAN_ID`. List them as parallel jobs under the workflow, or chain them sequentially using `requires` if you need them to run in order.
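
With CircleCI 2.1 job parameters, the job can be defined once and invoked per plan. A sketch (the `plan_id` parameter, job names, and plan ID placeholders are illustrative; the trigger command is reduced to an echo):

```yaml
jobs:
  contextqa_test:
    parameters:
      plan_id:
        type: string
    executor: contextqa-executor
    environment:
      CONTEXTQA_BASE_URL: https://server.contextqa.com
    steps:
      - run:
          name: Trigger ContextQA test plan
          command: |
            # << parameters.plan_id >> is substituted at config compile time
            echo "Would trigger plan << parameters.plan_id >>"

workflows:
  test_all_plans:
    jobs:
      - contextqa_test:
          name: smoke-suite
          plan_id: "<smoke_plan_id>"
          context: [contextqa]
      - contextqa_test:
          name: regression-suite
          plan_id: "<regression_plan_id>"
          context: [contextqa]
```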

## Related

* [GitHub Actions integration](https://learning.contextqa.com/integrations/github-actions)
* [Jenkins integration](https://learning.contextqa.com/integrations/jenkins)
* [GitLab CI integration](https://learning.contextqa.com/integrations/gitlab-ci)
* [Azure DevOps integration](https://learning.contextqa.com/integrations/azure-devops)
* [Running tests and test plans](https://learning.contextqa.com/execution/running-tests)

{% hint style="info" %}
**Connect ContextQA to your CI/CD pipeline in 15 minutes.** [**Book a Demo →**](https://contextqa.com/book-a-demo/) — See the full integration walkthrough for your existing toolchain.
{% endhint %}
