Tutorial: Data-Driven Testing

Build a parameterized login test that runs automatically with multiple user roles — one test case, one data profile, four executions, zero duplication.

Who is this for? Testers and SDETs new to data-driven testing in ContextQA. You will build one test case and run it across multiple data sets without duplicating anything.

Data-driven testing runs the same test case multiple times, each time with a different set of input values. Instead of creating separate test cases for "login as admin", "login as manager", and "login as viewer", you create one test case and a test data profile that supplies the credentials and expected outcomes for each role.
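The concept can be sketched in plain Python (an illustrative analogy only; ContextQA does this for you in the UI, and the `check_login` function and usernames below are hypothetical):

```python
# Each dict is one "data row"; the test logic is written once.
roles = [
    {"username": "admin", "expected_title": "Admin Dashboard"},
    {"username": "manager", "expected_title": "Team Dashboard"},
    {"username": "viewer", "expected_title": "Reports"},
]

def check_login(row):
    # Hypothetical stand-in for the real login test steps.
    return f"logged in as {row['username']}, saw '{row['expected_title']}'"

# One test, N rows -> N independent runs.
results = [check_login(row) for row in roles]
for r in results:
    print(r)
```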

In this tutorial you will:

  1. Create a login test case with variable placeholders

  2. Create a test data profile with four user roles

  3. Attach the profile to a test plan

  4. Execute the plan and review per-row results

End result: One test case produces four independent execution records — one per data row — each with its own screenshots, video, and pass/fail status.

Prerequisites


Step 1: Create a test case with variable placeholders

Instead of hard-coding a username and password into your test steps, use ${variable} placeholders that ContextQA resolves at runtime from the data profile.

  1. In the left sidebar, click the Test Development icon.

  2. Click the + button and select Start with AI Assistance.

  3. Enter your application's login page URL in the Application URL field.

  4. In the Task Description field, write a parameterized description that instructs the AI to log in with ${username} and ${password} and verify the page title matches ${expected_title}.

  5. Click Generate. The AI creates steps that reference ${username}, ${password}, and ${expected_title} as variable placeholders.

  6. Review the generated steps. Confirm each ${variable} reference appears exactly as you typed it — spelling and case must match the data profile column names you create in the next step.

Tip: You can also add variable placeholders to an existing test case. Open the step editor, select a step, and replace any hard-coded value with ${variableName}.
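Placeholder resolution works like template substitution: at run time, each step's `${variable}` references are filled in from the current data row. A minimal sketch using Python's `string.Template`, which happens to share the `${name}` syntax (the row values here are illustrative, not ContextQA internals):

```python
from string import Template

# One row from the data profile: column name -> value (illustrative).
row = {"username": "qa_admin", "password": "Admin123!",
       "expected_title": "Admin Dashboard"}

steps = [
    Template("Type ${username} into the username field"),
    Template("Type ${password} into the password field"),
    Template("Verify the page title is ${expected_title}"),
]

resolved = [s.substitute(row) for s in steps]
print(resolved[0])

# substitute() raises KeyError when a placeholder has no matching
# column, which mirrors why spelling and case must match exactly.
```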


Step 2: Create a test data profile

A test data profile is a table where each column is a variable name and each row is one complete test run.

  1. In the left sidebar, navigate to Test Development.

  2. Click the Data Profiles tab.

  3. Click + Create Profile.

  4. Enter a profile name: LoginScenarios_MultiRole.

  5. Click + Add Column and enter username. Repeat to add columns for password and expected_title.

Important: Column names are case-sensitive. username and Username are different variables. Match the exact spelling you used in your test steps.

  6. Click + Add Row four times to create four data rows. Fill in the values:

| username | password | expected_title |
| --- | --- | --- |
|  | Admin123! | Admin Dashboard |
|  | Manager456! | Team Dashboard |
|  | Viewer789! | Reports |
|  | Billing000! | Billing Overview |

  7. Click Save.

You now have a profile with four rows. When attached to your test case, ContextQA runs the test four times — once per row.
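The row-per-run behavior can be modeled as a loop over the profile (a conceptual sketch; the `run_test` function and usernames are hypothetical):

```python
profile = [
    {"username": "u_admin",   "password": "Admin123!",   "expected_title": "Admin Dashboard"},
    {"username": "u_manager", "password": "Manager456!", "expected_title": "Team Dashboard"},
    {"username": "u_viewer",  "password": "Viewer789!",  "expected_title": "Reports"},
    {"username": "u_billing", "password": "Billing000!", "expected_title": "Billing Overview"},
]

def run_test(row):
    # Hypothetical stand-in for one execution of the test case.
    return {"user": row["username"], "status": "pass"}

executions = [run_test(row) for row in profile]
print(len(executions))  # 4 rows -> 4 independent executions

# Column names are case-sensitive: "Username" is not "username".
try:
    profile[0]["Username"]
except KeyError:
    print("no column named 'Username'")
```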


Step 3: Create a test plan and attach the profile

Test data profiles are attached at the test plan level, not at the test case level. This keeps your test cases reusable — the same test case can run with different profiles in different plans.

  1. Navigate to Test Development → Test Plans.

  2. Click + Create Test Plan.

  3. Enter a plan name: Login — Multi-Role Validation.

  4. Under Test Suites, select the suite that contains your login test case. If the test case is not in a suite yet, add it to one first (see Managing test suites).

  5. Locate your login test case in the plan's test case list.

  6. In the Test Data Profile dropdown next to the test case, select LoginScenarios_MultiRole.

  7. Configure the remaining plan settings:

    • Browser: Select your target browser (e.g., Chrome).

    • Environment: Select the environment that points to your application.

  8. Click Save.

Step 4: Execute the test plan

  1. Open the test plan you created.

  2. Click Run.

  3. ContextQA queues four executions — one for each row in the data profile.

Watch the execution progress. Each row runs independently, and each execution captures its own screenshots, video recording, and network logs.


Step 5: Review per-row results

When all four executions complete:

  1. Navigate to the test plan's execution results.

  2. Each data row appears as a separate execution record with its own pass/fail status.

  3. Click any execution to see the step-by-step breakdown, including:

    • The resolved variable values used for that row

    • Per-step screenshots showing the actual credentials typed

    • Video recording of the full browser session

    • Root cause analysis if the row failed

What to look for:

| Result | What it means |
| --- | --- |
| All four rows pass | Your login flow works correctly for all tested roles |
| One row fails, others pass | That specific role has a unique issue — check the failure's root cause analysis |
| All rows fail on step 1 | The login URL or page structure may have changed — the issue is not data-specific |


Summary

You built a data-driven test in four steps:

  1. Created a test case with ${variable} placeholders instead of hard-coded values

  2. Created a test data profile with columns matching those variable names and rows for each scenario

  3. Attached the profile to the test case inside a test plan

  4. Executed the plan and reviewed independent results per data row

One test case now covers four user roles. To add a fifth role, add a row to the profile — no test case changes needed.

Next steps

  • Scale your profile: Add more rows for edge cases — empty passwords, special characters in usernames, expired accounts. Each row becomes an automatic test run.

  • Import from a spreadsheet: If you have test data in Excel or CSV, import it directly into a data profile instead of typing each row manually. See Test data management.

  • Combine with environments: Use ${ENV.BASE_URL} for the application URL and ${username} from the data profile for credentials. This lets you run the same data-driven test against staging and production by switching environments in the test plan. See Configuring environments.

  • Add negative scenarios: Create a second profile with invalid credentials and expected error messages. Attach it to the same test case in a different test plan to validate error handling without duplicating test steps.
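For the spreadsheet route, a data profile maps directly onto a CSV: the header row supplies the column (variable) names and each following row is one run. A hedged sketch of such a file and a quick pre-import sanity check (the username values are illustrative):

```python
import csv
import io

# Illustrative CSV: header row = variable names, each data row = one run.
csv_text = """username,password,expected_title
u_admin,Admin123!,Admin Dashboard
u_manager,Manager456!,Team Dashboard
u_viewer,Viewer789!,Reports
u_billing,Billing000!,Billing Overview
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
print(len(rows))                   # 4 data rows -> 4 runs
print(rows[0]["expected_title"])   # Admin Dashboard

# Sanity-check before importing: every row has every column filled.
assert all(all(v for v in row.values()) for row in rows)
```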

Run the same test across 100 data sets — no code required. Start Free Trial → or Book a Demo → to see data-driven testing with your application.
