Test Generation & Execution
Recording Salesforce test cases in ContextQA, managing test data with profiles and parameters, running tests in parallel, and reviewing execution logs.
Who is this for? SDETs and QA managers who need to record, parameterize, and execute Salesforce test cases — including data-driven runs and parallel execution across sandboxes.
This page covers the full lifecycle of a Salesforce test case in ContextQA: recording, data management, execution, and reviewing logs.
Recording a Salesforce test session
ContextQA creates test cases by recording your interactions with the Salesforce UI. The recorder captures not just the element locator but also metadata — label, role, surrounding context — so the AI can re-locate elements even after Salesforce regenerates its DOM identifiers.
In ContextQA, click the plus (+) icon to start a new scenario.
Select Start recording (or choose an existing scenario to extend).
Perform the Salesforce workflow you want to automate: navigate to a record, fill in fields, click action buttons, submit forms.
Stop the recording when the workflow is complete.
Click the Edit icon (three dots) on any step to review the captured locator and metadata. ContextQA stores both, which is what makes the test resilient to UI changes between Salesforce releases.
Because ContextQA records metadata alongside locators, you rarely need to update test steps after a Salesforce patch or seasonal update. The AI resolves the correct element using stored context even when the locator has changed.
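The idea of metadata-assisted re-location can be illustrated with a small sketch. This is hypothetical code, not ContextQA's actual algorithm: when a recorded locator no longer matches, score the candidate elements on the page by how much of the stored context (label, role, and so on) they still share, and pick the best match.

```python
# Hypothetical sketch of metadata-based element re-location (not ContextQA
# internals): fall back to scoring candidates by stored context when the
# recorded locator has been regenerated by Salesforce.

def relocate(candidates, recorded):
    """Return the candidate element that best matches the recorded metadata."""
    def score(el):
        return sum(
            el.get(key) == value
            for key, value in recorded.items()
            if key != "locator"  # the locator itself may have changed
        )
    best = max(candidates, key=score)
    return best if score(best) > 0 else None

# Example: Salesforce regenerated the id, but label and role still match.
recorded = {"locator": "id=input-42", "label": "Account Name", "role": "textbox"}
candidates = [
    {"locator": "id=input-97", "label": "Account Name", "role": "textbox"},
    {"locator": "id=input-98", "label": "Phone", "role": "textbox"},
]
print(relocate(candidates, recorded)["locator"])  # id=input-97
```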
Executing a recorded test case
Open the test case in ContextQA.
Click Run.
Wait for execution to complete. ContextQA's intelligent wait state handling ensures each step proceeds only after the required Salesforce element or page is ready — no fixed delays needed.
Review the results in the execution view. A passing test confirms the workflow completed successfully despite any Salesforce UI changes since the last run.
Managing test data with profiles
Hardcoding values like usernames, passwords, or record field data into test steps makes tests brittle and difficult to maintain. ContextQA separates test data from test logic using test data profiles, parameters, and variables.
Parameters vs. variables
Parameter
An input value injected from a test data profile
Usernames, passwords, record field values provided before the test runs
Variable
A value captured during execution and reused in later steps
Order IDs, confirmation numbers, or any value generated by the application at runtime
Creating a test data profile
A test data profile is a table of input values — each row is one data set. For a Salesforce login test, a profile might contain several rows, each with a different email address and password.
Navigate to the test data profiles section in ContextQA to view existing profiles or create a new one for your Salesforce module.
Connecting a test case to a test data profile
Open the Salesforce test case you want to parameterize.
Click Add step and hover over the menu icon.
Choose For loop. Select your test data profile, set the loop boundaries to Start Loop and End Loop, and click Create.
Click Reorder and drag all relevant test steps underneath the for-loop step so they execute within the loop.
Click Update to save the reordered test case.
Replacing hardcoded values with parameters
Find a step that contains a hardcoded value you want to replace (for example, a username or password field).
Click the Edit icon on that step.
Remove the hardcoded value from the field.
Click Parameter and select the parameter that corresponds to the column in your test data profile (for example,
user_emailorpassword).Click Update to save the step.
Repeat for every step that contains data you want to drive from the profile.
Running with a test data profile
Click Run on the test case.
Choose execution from the run options.
ContextQA fetches each row from the test data profile and executes the for-loop steps once per row. The execution history shows one result per data set, making it easy to see which inputs passed and which failed.
Parallel execution and test plans
For large Salesforce test suites, parallel execution reduces total run time and provides faster feedback.
Setting up a test plan for parallel execution
Navigate to Test Development in ContextQA.
Click Test Plans and open the test plan that contains your Salesforce test cases.
Click the Edit icon on the test plan.
Go to the Test Machine and Issue section. Add your test suite and select a browser (for example, Chrome).
Click Add to save the machine configuration.
Open Test Plan Settings and set the parallel execution count — options include 5, 10, 15, 20, or more depending on your project.
Click Update to save, then click Run to execute the test plan.
ContextQA runs the configured number of test cases simultaneously. Each test case manages its own wait state independently, so a test that is waiting for a Salesforce record to save does not block other tests from proceeding.
Reading execution logs
After a test plan runs, open Run History to review results.
The run history shows the pass/fail status for every test case in the plan.
Click any test case entry to open the detailed execution log, which includes each step, its pass/fail status, and timestamps.
For failed steps, the log shows what was expected and what actually occurred. Common causes in Salesforce tests: a field was not found because of a UI change, a record was not saved before the next step attempted to read it, or test data was invalid for the target environment.
Use the timestamps to identify slow steps that might indicate Salesforce performance issues or timing problems in a specific environment. If a test is consistently failing in a particular sandbox but passing elsewhere, compare the environment configuration and data state rather than assuming a test logic error.
AI-assisted Salesforce testing without Salesforce test expertise. Book a Demo → — See ContextQA automate Salesforce Lightning UI testing for your org.
Last updated
Was this helpful?