# Creating Mobile Tests

{% hint style="info" %}
**Who is this for?** SDETs and QA managers who want to create and execute automated tests for iOS and Android apps using natural language — the same workflow as web testing.
{% endhint %}

Mobile test cases in ContextQA follow the same overall structure as web test cases, with key differences in how the app is launched (via an uploaded binary rather than a URL) and how interactions are expressed (gestures rather than browser events). This page covers the full lifecycle: creation, AI generation, device selection, execution, and results review.

## Creating a Mobile Test Case

### Manually via the UI

1. Navigate to **Test Development** from the main workspace.
2. Click the **+** (plus) icon to create a new test case.
3. Select **Start with AI Assistant** or choose manual step entry.
4. When prompted for the target platform, select **Mobile Application**.

Selecting **Mobile Application** changes the test case context: instead of entering a URL, you will configure an app build and device when running the test.

### Filtering Existing Mobile Test Cases

To see only mobile test cases in a project with mixed test types:

1. In **Test Development**, click the **Filter** option.
2. Under **Test Type**, select **Mobile**.
3. Apply the filter. The list updates to show only mobile-type test cases.

This filter persists for the duration of your session, which helps when managing large test libraries.

## AI Test Generation for Mobile Apps

ContextQA's AI Assistant generates test steps from a plain-English description of the user journey you want to automate.

### Generation Workflow

1. In the new test case dialog, select **AI Assistant**.
2. Select **Mobile Application** as the platform.
3. Enter your test prompt in plain English. Be specific about the actions and assertions. For example:

   > "Open the clinic app, log in with username 'doctor@example.com' and password 'Test1234', navigate to the Appointments tab, and verify that at least one upcoming appointment is listed."
4. If the test requires prior app state (for example, the user must already be logged in), select the appropriate prerequisite test case from the list. If no prerequisites are needed, leave the field empty.
5. Click **Generate & Execute Test Case**.

### Selecting Platform and App Build

A device configuration dialog appears after you click **Generate & Execute Test Case**:

1. Select the platform: **Android** or **iOS**.
2. Open the **App Build** dropdown and choose the APK or IPA you previously uploaded.
3. Select the target device from the available device list.

Once configured, click **Start Execution**. ContextQA installs the app on the selected device and begins executing the generated steps.

### Reviewing the Generated Script

After execution completes, click **Back to Test Case**. The AI-generated English prompt is converted into a structured Spark script — ContextQA's internal step format. Each step is visible and editable. Review the steps to verify they accurately capture the intended test logic, and edit any step descriptions as needed.

## Supported Gestures

Mobile test steps support the following interactions:

| Gesture        | Description                                                                      |
| -------------- | -------------------------------------------------------------------------------- |
| **Tap**        | Single touch on an element or coordinate                                         |
| **Long Press** | Touch and hold on an element                                                     |
| **Swipe**      | Directional drag (up, down, left, right) across the screen or a specific element |
| **Scroll**     | Vertical or horizontal scroll within a scrollable container                      |
| **Pinch**      | Two-finger pinch inward (zoom out)                                               |
| **Zoom**       | Two-finger spread outward (zoom in)                                              |
| **Type**       | Enter text into a focused input field                                            |
| **Assert**     | Verify that an element is visible, contains specific text, or matches a state    |

When writing AI prompts, use natural gesture language: "swipe left on the card", "scroll down to the footer", "long press the message to open the context menu". The AI maps these descriptions to the appropriate gesture commands.
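To build intuition for how phrasing maps to gesture commands, here is a deliberately simplified Python sketch. The keyword table and `map_gesture` helper are hypothetical illustrations only; ContextQA's actual AI mapping is far more sophisticated than keyword matching.

```python
# Illustrative sketch: mapping plain-English step phrases to gesture commands.
# The keyword table and map_gesture() helper are hypothetical, not ContextQA's
# real implementation.

GESTURE_KEYWORDS = {
    "long press": "long_press",  # multi-word phrases are checked first
    "swipe": "swipe",
    "scroll": "scroll",
    "pinch": "pinch",
    "zoom": "zoom",
    "tap": "tap",
    "type": "type",
    "verify": "assert",
}

def map_gesture(step: str) -> str:
    """Return the gesture command implied by a plain-English step."""
    lowered = step.lower()
    for phrase, command in GESTURE_KEYWORDS.items():
        if phrase in lowered:
            return command
    return "unknown"

print(map_gesture("Swipe left on the card"))                   # swipe
print(map_gesture("Long press the message to open the menu"))  # long_press
```

The point of the sketch is only that each supported gesture has a natural-language trigger, so prompts written the way you would describe the action to a person translate cleanly into steps.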

## Device and OS Version Selection

When configuring a test case execution or setting up a test plan device entry, you select from devices available on the device farm. Use the `get_test_devices` MCP tool to see the full current list:

```python
# MCP tool call — list available mobile devices
get_test_devices(query="iOS")
```

Example response fields:

```json
{
  "devices": [
    { "name": "iPhone 14", "platform": "iOS", "os_version": "17.2", "status": "available" },
    { "name": "iPhone 13", "platform": "iOS", "os_version": "16.6", "status": "available" },
    { "name": "Pixel 7", "platform": "Android", "os_version": "14", "status": "available" },
    { "name": "Pixel 5", "platform": "Android", "os_version": "13", "status": "available" }
  ]
}
```

Choose the device that matches your target audience or the configuration specified in your test plan requirements.

## Executing a Mobile Test Case

### From the UI

1. Open the test case in **Test Development**.
2. Click **Run** or the execution button.
3. In the execution dialog, select the **Platform** (Android or iOS), the target **Device**, and the **App Build**.
4. Review the concurrency status indicators at the top of the dialog (see [Concurrency status](#concurrency-status) below).
5. Click **Start Execution**.

Live execution begins immediately. The left panel shows the execution log with step-by-step status updates. The right panel displays a live view of the device screen as the test runs.

### Concurrency status

The execution dialog displays two real-time status indicators that show how your workspace's mobile device slots are being used:

| Indicator            | Format         | Meaning                                                          |
| -------------------- | -------------- | ---------------------------------------------------------------- |
| **Parallel** (green) | `Parallel X/Y` | X tests are running out of Y maximum concurrent device slots     |
| **Queued** (orange)  | `Queued X/Y`   | X tests are waiting in the queue out of Y maximum queue capacity |

These indicators update automatically every five seconds while the dialog is open, so you always see the current state before starting a new execution.
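In practice the two indicators answer one question: will a new run start now, queue, or have nowhere to go? A minimal sketch of that decision, assuming the `Parallel X/Y` and `Queued X/Y` string formats shown above (ContextQA performs this check server-side; the `slot_decision` helper is purely illustrative):

```python
# Decide whether a new execution starts immediately or is queued, based on
# the "Parallel X/Y" / "Queued X/Y" indicator formats described above.
# Illustration only; the real check happens server-side.

def slot_decision(parallel: str, queued: str) -> str:
    """parallel/queued are strings like 'Parallel 2/5' and 'Queued 0/10'."""
    running, max_parallel = map(int, parallel.split()[1].split("/"))
    waiting, max_queue = map(int, queued.split()[1].split("/"))
    if running < max_parallel:
        return "starts immediately"
    if waiting < max_queue:
        return "queued (Skip and continue)"
    return "queue full: wait for a slot"

print(slot_decision("Parallel 2/5", "Queued 0/10"))  # starts immediately
print(slot_decision("Parallel 5/5", "Queued 3/10"))  # queued (Skip and continue)
```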

### Execution warnings

The dialog validates your configuration and displays warnings when action is needed:

| Condition                               | Warning                                                                                                 | What to do                                                                                    |
| --------------------------------------- | ------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
| No app builds uploaded                  | "No App Build Found" with a **Go to Upload Page** link                                                  | Navigate to **Test Development → Uploads** and upload your APK or IPA before running the test |
| Required fields missing                 | "Please select platform, device and app build before starting execution."                               | Complete all required fields in the execution dialog                                          |
| Test case has no prerequisites          | "This test case has no prerequisites. Do you want to run execution anyway?"                             | Confirm to proceed, or cancel and add a prerequisite test case if prior app state is needed   |
| All parallel slots occupied             | Shows the current running and queued counts and warns that the new execution may take longer than usual | Click **Skip and continue** to add the execution to the queue, or wait for a slot to free up  |
| App build missing required capabilities | "Required capabilities are missing in the selected build. Please re-upload your app."                   | Re-upload the app build with the correct capabilities from the Uploads page                   |

> **Note**: When all parallel device slots are in use, the **Start Execution** button label changes to **Skip and continue**. The execution is added to the queue and starts automatically when a slot becomes available.

### Via MCP

```python
# MCP tool call — execute a specific mobile test case
execute_test_case(test_case_id=1234)

# MCP tool call — poll until execution completes
get_execution_status(test_case_id=1234, number_of_executions=1)
```

The `execute_test_case` response includes a portal URL; open it if you want to watch the live execution in the ContextQA portal.
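Polling can be wrapped in a simple loop so a script blocks until the run finishes. A sketch under stated assumptions: the stub below stands in for the real `get_execution_status` MCP call, and the `{"status": ...}` response shape with `RUNNING`/`QUEUED` states is hypothetical.

```python
import time

def get_execution_status(test_case_id: int, number_of_executions: int = 1) -> dict:
    # Stub standing in for the real MCP tool call; the {"status": ...}
    # response shape and status names are assumptions for illustration.
    return {"status": "PASSED"}

def wait_for_completion(test_case_id: int, timeout_s: int = 600,
                        interval_s: int = 10) -> str:
    """Poll until the latest execution leaves the RUNNING/QUEUED states."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_execution_status(test_case_id)["status"]
        if status not in ("RUNNING", "QUEUED"):
            return status
        time.sleep(interval_s)
    raise TimeoutError(f"Test case {test_case_id} did not finish in {timeout_s}s")

print(wait_for_completion(1234))  # PASSED (with the stub above)
```

A timeout matters here: queued executions (see the concurrency indicators above) can wait before starting, so a bare loop without a deadline could block indefinitely.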

## Reviewing Execution Results

### Run History

After execution, navigate to the **Run History** page for the test case. Each run is listed with:

* Overall pass/fail status
* Execution date and duration
* The device and OS version used
* A video recording of the session

### AI Logs

Click on a specific run to open its detailed results. The **AI Logs** section shows:

* Per-step status (passed, failed, skipped)
* Screenshots captured at each step
* AI confidence scores and locator decisions
* Any assertion failures with expected vs. actual values

### Investigating Failures via MCP

```python
# MCP tool call — get step-by-step results for a failed execution
get_test_case_results(execution_id=9876)

# MCP tool call — get AI root cause analysis for the failure
get_root_cause(execution_id=9876)

# MCP tool call — check for auto-healing suggestions on broken locators
get_auto_healing_suggestions(execution_id=9876)
```

If the AI identifies that an element locator has changed between app versions, `get_auto_healing_suggestions` returns proposed fixes. Apply them with `approve_auto_healing` to update the test step without touching the UI.
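The triage flow above can be strung together as one script. A sketch with stubbed MCP calls; every response shape here, including the `suggestion_id` field and the `approve_auto_healing` signature, is an assumption for illustration:

```python
# Failure-triage sketch: results -> auto-healing suggestions -> approval.
# The three stubs stand in for the real MCP tools; all response shapes
# (including "suggestion_id") are assumptions for illustration.

def get_test_case_results(execution_id: int) -> dict:
    return {"status": "FAILED", "failed_step": 4}

def get_auto_healing_suggestions(execution_id: int) -> list[dict]:
    return [{"suggestion_id": 1, "step": 4, "new_locator": "btn_login_v2"}]

def approve_auto_healing(execution_id: int, suggestion_id: int) -> dict:
    return {"applied": True}

def triage(execution_id: int) -> int:
    """Apply every auto-healing suggestion for a failed run; return the count."""
    if get_test_case_results(execution_id)["status"] != "FAILED":
        return 0
    applied = 0
    for s in get_auto_healing_suggestions(execution_id):
        if approve_auto_healing(execution_id, s["suggestion_id"])["applied"]:
            applied += 1
    return applied

print(triage(9876))  # 1 (with the stubs above)
```

Keeping triage scripted this way means locator drift between app builds becomes a review-and-approve step rather than a manual rewrite of the test.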

## Exporting Mobile Test Results

Individual execution results are accessible through the Run History view. For defect tracking integration, failed executions can be pushed directly to your ALM tracker:

```python
# MCP tool call — create a defect ticket from a failed execution
create_defect_ticket(execution_id=9876, project_id="MOBILE")
```

This bundles the failure step, screenshot, console output, and AI diagnosis into a structured ticket in Jira or Azure DevOps.

## Next Steps

Once you have mobile test cases working, organize them into suites and plans for structured regression runs. See [Mobile Test Plans](https://learning.contextqa.com/mobile-testing/mobile-test-plans) for multi-device orchestration and scheduling.

{% hint style="info" %}
**Test iOS and Android in parallel — same workflow as web.** [**Book a Demo →**](https://contextqa.com/book-a-demo/) — See ContextQA automate mobile testing for your iOS and Android apps.
{% endhint %}
