# Mobile Test Plans

{% hint style="info" %}
**Who is this for?** SDETs and QA managers who need to run the same mobile test suite across multiple device and OS configurations in a single orchestrated execution.
{% endhint %}

A well-structured mobile test plan lets you validate the same behavior across a matrix of devices and OS versions in one orchestrated run. This page covers building mobile-specific test suites, assembling multi-device test plans, using mobile concurrency for parallel runs, configuring environments, and scheduling automated execution.

## Creating a Mobile Test Suite

A test suite groups related test cases into a logical unit. For mobile testing, suites should be scoped to a platform (Android or iOS) or a functional area (login, checkout, notifications) to make results and failure triage easier.

### Steps

1. Navigate to **Test Development** from the main workspace.
2. Select the **Test Suites** section.
3. Click **Create Test Suite**.
4. Enter a descriptive name for the suite (for example, "Android Regression - Checkout Flow").
5. Set **Platform Type** to **Mobile**. This ensures the suite is configured for mobile execution contexts.
6. Add the test cases you want to include. Search by name or use the filter to show only Mobile-type test cases.
7. Optionally, add a label to categorize the suite (for example, "regression", "smoke", "nightly").
8. Click **Save and Create**.

The suite appears in the Test Suites list, ready to be added to a test plan.

### Querying Test Suites via MCP

```python
# MCP tool call — list existing test suites
get_test_suites(query="android regression")
```

## Building a Multi-Device Test Plan

A test plan defines which suites run, on which devices, with what app build, and in what configuration. A single test plan can target multiple device/OS combinations, making it the primary tool for cross-device compatibility testing.

### Steps

1. Navigate to **Test Development → Test Plans**.
2. Click **Create Test Plan** (labeled **Test Plan** in some workspace versions).
3. Enter a clear name that identifies the scope, for example "Mobile Regression — Android 13 + Android 14 + iOS 17".
4. In the **Parallel Nodes** field, set the number of simultaneous streams you want. This must not exceed your workspace's available mobile concurrency slots. Check availability first:

```python
# MCP tool call — check mobile slot availability before configuring the plan
get_mobile_concurrency()
```

5. Optionally, add email addresses to receive execution results when the plan completes.

### Adding the Test Suite

In the **Test Machine and Suite** section:

1. Select the checkbox for the test suite you created.
2. Click **Add Machine and Devices**.

### Configuring Device Entries

Each device entry in a test plan represents one device/OS/app-build combination. Add one entry per configuration you want to cover.

**For each device configuration:**

1. Enter a descriptive name for the machine entry (for example, "Pixel 5 - Android 13", "iPhone 14 - iOS 17").
2. Open the device dropdown and select the target device from the farm.
3. Open the **App Build** dropdown and select the uploaded APK or IPA for this configuration. Different device entries can reference different builds.
4. Click **Create** to save the device configuration.

**Example multi-device configuration for a typical regression plan:**

| Machine Name         | Device    | OS Version | App Build             |
| -------------------- | --------- | ---------- | --------------------- |
| Pixel 5 - Android 13 | Pixel 5   | Android 13 | clinic-v2.4-debug.apk |
| Pixel 7 - Android 14 | Pixel 7   | Android 14 | clinic-v2.4-debug.apk |
| iPhone 13 - iOS 16   | iPhone 13 | iOS 16.6   | clinic-v2.4-adhoc.ipa |
| iPhone 14 - iOS 17   | iPhone 14 | iOS 17.2   | clinic-v2.4-adhoc.ipa |

After configuring all device entries, click **Create** to finalize the test plan.
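A device matrix like the one above can also be represented as plain data for review or pre-flight checks. The sketch below uses illustrative field names (not a ContextQA API schema) to verify that each entry's app build matches its platform, i.e. Android entries reference an APK and iOS entries an IPA:

```python
# Illustrative device matrix mirroring the table above.
# Field names are examples, not a ContextQA API schema.
DEVICE_ENTRIES = [
    {"name": "Pixel 5 - Android 13", "os": "Android 13", "build": "clinic-v2.4-debug.apk"},
    {"name": "Pixel 7 - Android 14", "os": "Android 14", "build": "clinic-v2.4-debug.apk"},
    {"name": "iPhone 13 - iOS 16", "os": "iOS 16.6", "build": "clinic-v2.4-adhoc.ipa"},
    {"name": "iPhone 14 - iOS 17", "os": "iOS 17.2", "build": "clinic-v2.4-adhoc.ipa"},
]

def build_matches_platform(entry):
    """Android entries should reference an APK, iOS entries an IPA."""
    if entry["os"].startswith("Android"):
        return entry["build"].endswith(".apk")
    if entry["os"].startswith("iOS"):
        return entry["build"].endswith(".ipa")
    return False

# Collect any entries whose build does not match the platform.
mismatches = [e["name"] for e in DEVICE_ENTRIES if not build_matches_platform(e)]
```

Running a check like this before finalizing the plan catches the common mistake of attaching an APK to an iOS device entry (or vice versa).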

## Mobile Concurrency: Parallel Execution

Mobile concurrency controls how many devices run simultaneously within a test plan execution.

### How It Works

When a test plan runs, ContextQA distributes the device configurations across available concurrency slots. If you have four device entries and four available slots, all four devices start simultaneously. If you have four device entries but only two available slots, the first two start immediately while the other two queue until a slot frees up.

Setting the **Parallel Nodes** value on the test plan controls the maximum degree of parallelism requested. Set it equal to the number of device entries to request full parallel execution; the actual degree is still capped by the slots available at run time.
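As an illustration of the queueing behavior, the number of sequential "waves" a plan needs is roughly the device-entry count divided by the effective parallelism, which is the smaller of the Parallel Nodes setting and the free slots. A quick sketch of that arithmetic:

```python
import math

def execution_waves(device_entries: int, parallel_nodes: int, available_slots: int) -> int:
    """Rough number of sequential waves before every device entry has run."""
    effective = min(parallel_nodes, available_slots)
    if effective <= 0:
        raise ValueError("no concurrency available")
    return math.ceil(device_entries / effective)

# Four entries, four free slots: everything runs at once.
execution_waves(4, 4, 4)  # 1
# Four entries, only two free slots: two waves.
execution_waves(4, 4, 2)  # 2
```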

### Checking Availability

```python
# MCP tool call — returns total slots and currently occupied slots
get_mobile_concurrency()
```

Example response:

```json
{
  "total_slots": 5,
  "occupied_slots": 1,
  "available_slots": 4
}
```

If `available_slots` is less than your intended parallel node count, either reduce the node count or schedule the run when more capacity is available.
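Given a response shaped like the example above, you can pick a safe Parallel Nodes value before creating the plan. A minimal sketch, assuming the response keys shown in the example (in practice the dictionary would come from the `get_mobile_concurrency` call):

```python
def safe_parallel_nodes(concurrency: dict, desired: int) -> int:
    """Clamp the requested Parallel Nodes to the workspace's free mobile slots."""
    available = concurrency["available_slots"]
    if available == 0:
        raise RuntimeError("no mobile concurrency slots free; schedule the run later")
    return min(desired, available)

# Example response from get_mobile_concurrency(), as shown above.
concurrency = {"total_slots": 5, "occupied_slots": 1, "available_slots": 4}
safe_parallel_nodes(concurrency, 6)  # 4, capped at the available slots
```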

## Environment Setup for Mobile

Environments in ContextQA store configuration parameters that vary between deployments — base URLs for backend APIs, API keys, feature flags, and other runtime values your app reads from its configuration.

### Creating a Mobile Environment

1. Navigate to **Settings → Environments** (or manage environments via the MCP tools).
2. Click **Create Environment**.
3. Provide a name (for example, "Mobile Staging", "Mobile Production").
4. Add parameters as key-value pairs. Common mobile environment parameters:

| Parameter Key         | Example Value                     | Purpose                                  |
| --------------------- | --------------------------------- | ---------------------------------------- |
| `API_BASE_URL`        | `https://api-staging.example.com` | Backend API endpoint                     |
| `AUTH_TOKEN`          | `Bearer eyJhbGciOi...`            | Pre-set authentication token             |
| `FEATURE_FLAG_NEW_UI` | `true`                            | Feature toggle for A/B variants          |
| `APP_ENV`             | `staging`                         | Environment identifier passed to the app |

5. Save the environment.
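Before saving, it can help to sanity-check the parameter keys and values. The sketch below enforces an illustrative convention (uppercase snake-case keys, non-empty values); adjust the rules to your team's standards:

```python
import re

# Illustrative convention: keys look like API_BASE_URL, APP_ENV, etc.
KEY_PATTERN = re.compile(r"^[A-Z][A-Z0-9_]*$")

def validate_parameters(params: dict) -> list:
    """Return a list of problems; an empty list means the parameters look sane."""
    problems = []
    for key, value in params.items():
        if not KEY_PATTERN.match(key):
            problems.append(f"key {key!r} is not UPPER_SNAKE_CASE")
        if not str(value).strip():
            problems.append(f"key {key!r} has an empty value")
    return problems

params = {
    "API_BASE_URL": "https://api-staging.example.com",
    "APP_ENV": "staging",
}
validate_parameters(params)  # []
```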

### Using MCP to Manage Environments

```python
# MCP tool call — list existing environments
get_environments(query="mobile")

# MCP tool call — create a new mobile environment
create_environment(
    name="Mobile Staging",
    env_type="mobile",
    description="Staging environment for mobile regression runs",
    parameters={
        "API_BASE_URL": "https://api-staging.example.com",
        "APP_ENV": "staging"
    }
)
```

### Attaching an Environment to a Test Plan

When creating or editing a test plan, select the environment from the **Environment** dropdown. All test cases in the plan will receive the environment's parameter values at runtime.
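Conceptually, attaching an environment means every test case in the plan sees the environment's key-value pairs at runtime. A sketch of that merge, assuming (purely for illustration) that any per-run overrides win over the environment's defaults:

```python
def runtime_parameters(environment: dict, overrides=None) -> dict:
    """Environment values form the base; per-run overrides win. Illustrative only."""
    merged = dict(environment)
    merged.update(overrides or {})
    return merged

env = {"API_BASE_URL": "https://api-staging.example.com", "APP_ENV": "staging"}
merged = runtime_parameters(env, {"APP_ENV": "staging-eu"})
```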

## Running and Scheduling a Test Plan

### Immediate Execution

Click the **Run** button on the test plan. ContextQA starts distributing the device configurations across available concurrency slots. Track progress in the **View** section of the test plan.

Via MCP:

```python
# MCP tool call — trigger test plan execution
execute_test_plan(test_plan_id=42)

# MCP tool call — poll execution status
get_test_plan_execution_status(execution_id=7890)
```

The `execute_test_plan` response returns an `execution_id`. Poll `get_test_plan_execution_status` until the status is `COMPLETED`, `PASSED`, or `FAILED`.
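A polling loop around these two calls might look like the following sketch. The terminal statuses are those listed above; the interval and timeout are arbitrary choices, and `get_status` stands in for a wrapper that calls `get_test_plan_execution_status` and extracts the status field from its response:

```python
import time

TERMINAL_STATUSES = {"COMPLETED", "PASSED", "FAILED"}

def wait_for_execution(get_status, execution_id, interval=10.0, timeout=1800.0):
    """Poll get_status(execution_id) until a terminal status or until timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(execution_id)
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval)
    raise TimeoutError(f"execution {execution_id} did not finish in {timeout}s")
```

Passing `get_status` as a callable keeps the loop testable and independent of how the MCP client is wired up.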

### Viewing Results by Device

After execution, click **View** on the completed run. Results are broken out by device entry — you see separate pass/fail status for each configured device. Click into any device's result to view:

* Per-step status and screenshots
* AI logs and locator decisions
* Video recording of the full session on that device

### Scheduling Automated Runs

Test plans can be scheduled to run automatically on a recurring basis — useful for nightly regression runs or daily smoke tests.

1. On the test plan, click the **Schedule** icon.
2. Provide a schedule name and optional description (for example, "Nightly Mobile Regression - 9 AM").
3. Set the **Start Date** and **Start Time**.
4. Choose the **Frequency**: daily, weekly, or custom cron.
5. Click **Schedule** to confirm.

The test plan will execute automatically at the defined cadence. Results are sent to any email addresses configured on the plan. Scheduled runs appear in the Run History alongside on-demand executions.
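For a daily schedule, the next run is simply the configured time of day on the first day that has not yet passed. ContextQA computes this server-side; the standard-library sketch below just illustrates the calculation:

```python
from datetime import datetime, timedelta

def next_daily_run(start: datetime, now: datetime) -> datetime:
    """Next occurrence of start's time-of-day at or after now, for a daily schedule."""
    candidate = now.replace(hour=start.hour, minute=start.minute,
                            second=0, microsecond=0)
    if candidate < now:
        candidate += timedelta(days=1)
    return candidate

# "Nightly Mobile Regression - 9 AM", checked mid-morning two days later.
start = datetime(2024, 6, 1, 9, 0)
next_daily_run(start, datetime(2024, 6, 3, 10, 30))  # 2024-06-04 09:00
```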

### Re-running a Failed Plan

If a plan run contains failures and you want to re-execute after a fix:

```python
# MCP tool call — re-run the most recent test plan execution
rerun_test_plan(execution_id=7890)
```

This re-executes the same device configurations and app builds without requiring a new manual trigger.
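Combining the status check with the re-run call gives a simple conditional retry. A sketch, where `get_status` and `rerun` stand in for wrappers around the MCP calls shown above:

```python
def rerun_if_failed(execution_id, get_status, rerun):
    """Re-trigger a plan execution only when its last run failed."""
    status = get_status(execution_id)
    if status == "FAILED":
        return rerun(execution_id)  # e.g. wraps rerun_test_plan(execution_id=...)
    return None
```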

## Summary: End-to-End Mobile Test Plan Workflow

```
1. Upload APK/IPA → Test Development → Uploads
2. Create test cases → Test Development → New Test Case (Mobile Application)
3. Create test suite → Test Development → Test Suites (Platform: Mobile)
4. Check concurrency → get_mobile_concurrency()
5. Create test plan → Test Development → Test Plans → add devices + app builds
6. Configure environment → Settings → Environments → attach to test plan
7. Run or schedule → Test Plans → Run / Schedule
8. Review results → Test Plans → View → per-device breakdown + AI logs
```

For large-scale runs spanning both iOS and Android, consider maintaining separate test plans per platform (Android Plan, iOS Plan) with shared test suites. This keeps result reporting clean and makes it easier to identify platform-specific failures.

{% hint style="info" %}
**Test iOS and Android in parallel — same workflow as web.** [**Book a Demo →**](https://contextqa.com/book-a-demo/) — See ContextQA automate mobile testing for your iOS and Android apps.
{% endhint %}
