Testing makes up 13% of the total score on the Salesforce Platform Lifecycle and Deployment Architect exam. The topic covers testing methodologies, automated testing, unit testing, and testing strategy.
NOTE
Most of the content in this work was generated with the assistance of AI and carefully reviewed, edited, and curated by the author. If you have found any issues with the content on this page, please do not hesitate to contact me at support@issacc.com.
🧠 Testing in Salesforce – Overview
Goal: Choose and run the right testing methodology (manual vs automated; unit → UAT → performance), meet coverage requirements, and manage representative, secure test data across the lifecycle.
🧱 Two Main Categories of Testing
Category | What it is | When to use | Pros | Cons |
---|---|---|---|---|
Manual | Humans execute tests (click paths, subjective checks) | Exploratory, UX/usability, one-off scenarios | Human judgment, flexible | Slow, repetitive, error-prone, hard to scale |
Automated | Tools/scripts run tests at scale | Regression, CI/CD, data-driven, repeatable flows | Fast, consistent, scalable | Upfront setup, maintenance of scripts |
Tools
Automated examples: Selenium, Copado, Provar, ACCELQ, UTAM, WebdriverIO, Jest (LWC).
🧪 Testing Methodologies (What to run)
Method | Purpose | Typical Owner | Where |
---|---|---|---|
Unit | Verify smallest pieces (Apex methods, flows, controllers) in isolation | Devs | Dev/SBX |
Integration | Verify interactions (Apex ↔ external APIs, components) | Devs | SBX |
System | End-to-end of full Salesforce app/config | QA/Devs | SBX |
Functional | Feature works per spec/requirements | QA/BA | SBX |
UAT | Meets business needs in real scenarios | End users/PO | Full/partial copy SBX |
Regression | New changes don't break old behavior | QA/DevOps | CI/CD SBX |
Production (Smoke) | Verify deployment worked in prod | DevOps | PROD (lightweight only) |
Performance | App performance & scalability under load | Perf/DevOps | SBX only |
Load | Expected traffic/volume | Perf/DevOps | SBX |
Stress | Beyond expected limits to find breaking points | Perf/DevOps | SBX |
🧩 Unit Testing (Apex) – Must-knows
- `@isTest` classes/methods; test classes don't count toward the 6 MB Apex code size limit.
- Code coverage: ≥ 75% overall required to deploy Apex; all code must compile, and every trigger needs some coverage.
- No emails, no real callouts in tests.
- Use `HttpCalloutMock` + `Test.setMock()` for HTTP callouts.
- Reset limits: `Test.startTest()` … code under test … `Test.stopTest()` → fresh governor limits for the block.
- Data isolation: avoid `@IsTest(SeeAllData=true)`; generate your own test data.
- Helpful: `Limits` methods to inspect consumption (heap, DML, callouts, etc.).
Minimal test flow
1) Create test data → 2) `Test.startTest()` → 3) invoke method under test → 4) `Test.stopTest()` → 5) assert expected results. Asserting after `Test.stopTest()` ensures any asynchronous work has finished; see the sketch below.
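To tie these must-knows together, here is a minimal sketch of such a test. `OrderService` and `placeOrder()` are hypothetical stand-ins; `HttpCalloutMock`, `Test.setMock()`, `Test.startTest()`/`Test.stopTest()`, and the `Limits` methods are the standard platform APIs.

```apex
// Hypothetical service under test: OrderService.placeOrder(Id) is assumed
// to make one HTTP callout and return the status from the response body.
@isTest
private class OrderServiceTest {

    // Mock that stands in for the real endpoint; no callout leaves the org.
    private class OrderApiMock implements HttpCalloutMock {
        public HttpResponse respond(HttpRequest req) {
            HttpResponse res = new HttpResponse();
            res.setHeader('Content-Type', 'application/json');
            res.setBody('{"status":"CONFIRMED"}');
            res.setStatusCode(200);
            return res;
        }
    }

    @isTest
    static void placeOrderReturnsConfirmedStatus() {
        // 1) Create test data; never rely on SeeAllData.
        Account acct = new Account(Name = 'Test Account');
        insert acct;

        // Register the mock before the code under test runs.
        Test.setMock(HttpCalloutMock.class, new OrderApiMock());

        // 2) startTest() gives the code under test fresh governor limits.
        Test.startTest();
        String status = OrderService.placeOrder(acct.Id); // hypothetical method
        // Limits methods report consumption inside the test block.
        System.debug('Callouts used: ' + Limits.getCallouts());
        Test.stopTest();

        // 3) Assert behavior, not just coverage.
        System.assertEquals('CONFIRMED', status, 'Order should be confirmed');
    }
}
```

Because the mock is registered via `Test.setMock()`, the runtime routes the callout to `OrderApiMock.respond()` instead of the network, which is how tests avoid real endpoints.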
👩‍💼 UAT – Don't miss requirements
- Run by end users in full/partial copy sandboxes.
- Use a Requirements Traceability Matrix (RTM) to ensure every requirement has test cases and nothing is missed.
Why RTM?
Maps requirements → test cases so coverage is explicit and auditable.
📈 Performance / Load / Stress Testing (Salesforce rules)
- Sandbox only (multi-tenant platform).
- Request approval from Salesforce Support ≥ 2 weeks in advance with business justification.
- Salesforce monitors activity; it doesn't design or interpret results.
- Build a test plan (scenarios, success criteria).
- Use built-ins for insight: Developer Console, Lightning Usage App, Salesforce Optimizer.
- Keep governor limits and integration constraints in mind.
Production is off-limits
Performance, load, and stress testing are not permitted in production.
🤖 Automated Testing in CI/CD
- Run unit + functional suites on merge to main branches to prevent regression.
- For deployments via the Metadata API, you can run a subset of tests using `RunSpecifiedTests` (pass test classes, not methods, in `DeployOptions`).
- Fail the deployment if any specified test fails or if changed Apex has < 75% coverage.
🧰 Test Execution – Tools & Options
Area | Options / Notes |
---|---|
Run tests | Apex Test Execution page, Apex Classes page (Run All), Dev Console, VS Code, Code Builder, API (Tooling REST/SOAP) |
Groupings | Single class, suite, set of classes, all tests |
Deploy test levels | Default / Run Local Tests / Run All Tests / Run Specified Tests (Metadata API) |
Coverage math | CCP = covered / (covered + uncovered) × 100; comments/braces/class names/debug lines don't count |
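Worked example: a class with 40 covered and 10 uncovered executable lines sits at 40 / (40 + 10) × 100 = 80% coverage.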
Quick deploy time saver
Validate first, then use quick deploy (Change Sets) when appropriate; with the Metadata API, use `RunSpecifiedTests`.
⚡ Testing Lightning
🔷 Lightning Web Components (LWC) with Jest
- Runs locally (not in browser, no org), fast feedback with watch mode.
- Test public API, DOM, events, simple interactions.
- In each test: `createElement('c-foo', { is: Foo })` → `document.body.appendChild(el)` → assert via `el.shadowRoot`.
- Reset the DOM in `afterEach()`.
- For wire: use `@salesforce/sfdx-lwc-jest` test utilities and local JSON mocks.
- Test async DOM updates with Promises/`await`.
🟣 Aura Components
- JS testing with Jest/UTAM/Mocha/WebdriverIO/Selenium.
- Accessibility checks (missing `alt`, header `scope`, etc.) in JavaScript and WebDriver environments.
- Server-side Apex logic tested separately with Apex test classes.
📊 Unified Test Data Strategy (Representative + Secure)
Why: Consistent, realistic, and compliant test data across Dev → Test → Staging → Prod.
Element | Practices |
---|---|
Representativeness | Mirror prod shape/volume (entities, relationships, edge cases) |
Security & Compliance | Data Mask in sandboxes; masking/anonymization/pseudonymization for PII/PHI (GDPR/HIPAA) |
Creation | Data factories (see the sketch below), scripts; Bulk API/Data Loader for volume |
Automation | Scheduled loads for reliability with minimal manual intervention |
Relationships at scale | REST Composite & sObject Tree requests insert multiple related records in one transaction |
Sandbox strategy | Choose Dev, Dev Pro, Partial, Full based on stage & data needs |
Refresh | Regularly refresh from prod; define refresh cadence & approvals |
Cleanup | Batch Apex to purge test data (e.g., nightly reset of partial copy) |
Monitoring/Auditing | Track access/usage; periodic audits for policy adherence |
Common pitfalls
Using live prod data unmasked, relying on `SeeAllData=true`, stale sandbox data, and missing parent-child relationships in bulk loads.
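To make the "Creation" and "Relationships at scale" rows concrete, here is a minimal Apex data-factory sketch; `TestDataFactory` and the field values are illustrative, and plain DML stands in for the REST Composite/sObject Tree calls you would use for external bulk loads.

```apex
// Illustrative factory; class and method names are assumptions.
@isTest
public class TestDataFactory {

    // Builds accounts with related contacts so parent-child
    // relationships are preserved in bulk-created data.
    public static List<Account> createAccountsWithContacts(
            Integer accountCount, Integer contactsPerAccount) {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < accountCount; i++) {
            accounts.add(new Account(Name = 'Test Account ' + i));
        }
        insert accounts;

        List<Contact> contacts = new List<Contact>();
        for (Account acct : accounts) {
            for (Integer j = 0; j < contactsPerAccount; j++) {
                contacts.add(new Contact(
                    LastName = 'Test Contact ' + j,
                    AccountId = acct.Id));
            }
        }
        insert contacts;
        return accounts;
    }
}
```

A test then calls `TestDataFactory.createAccountsWithContacts(5, 3)` instead of hand-building records, keeping data shape consistent across suites.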
🧩 Scenario Playbook (What to recommend)
Scenario | Recommendation |
---|---|
High daily volume REST service (150k orders, up to 20 line items) | Performance + load testing in full copy sandbox with expected volumes; pre-approved by Salesforce |
Apex HTTP callouts in tests | Implement `HttpCalloutMock`, then `Test.setMock()` → `Test.startTest()` → invoke → `Test.stopTest()` → assert |
CI/CD with shorter deploys | `RunSpecifiedTests` in `DeployOptions` (classes only) |
UAT completeness | Maintain RTM to ensure all requirements are covered |
Hitting governor limits in tests | Wrap critical invocations between `startTest()`/`stopTest()` |
Automated browser tests for LWC | Update tests when DOM changes across Salesforce releases |
Sandbox nightly reset | Batch Apex job to delete ~2 GB of test data on schedule (see the sketch below) |
Multi-object test data | Use REST Composite or sObject Tree to insert related graphs in one call |
Prod slowdown concerns | Never perf-test in prod; monitor via Optimizer/Usage App; tune queries/triggers |
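For the "Sandbox nightly reset" row, a purge job might look like the sketch below; the object, the `Is_Test_Data__c` flag, and the class name are assumptions for illustration.

```apex
// Illustrative cleanup batch; the query and flag field are assumptions.
public class TestDataPurgeBatch implements Database.Batchable<SObject> {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Select only records explicitly flagged as test data.
        return Database.getQueryLocator(
            'SELECT Id FROM Account WHERE Is_Test_Data__c = true');
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Each chunk (up to 200 records by default) is deleted in its own transaction.
        delete scope;
    }

    public void finish(Database.BatchableContext bc) {
        // Optionally send a summary notification or chain another batch.
    }
}
```

Running it nightly assumes a companion `Schedulable` whose `execute()` calls `Database.executeBatch(new TestDataPurgeBatch())`.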
📋 Quick Checklists
Apex Unit Test Essentials
- No real emails/callouts
- Use mocks for HTTP
- Generate test data (no `SeeAllData`)
- `startTest()`/`stopTest()` to reset limits
- Assertions for behavior, not just coverage
- ≥ 75% coverage to deploy
Performance Testing Rules
- Sandbox only
- Approval ≥ 2 weeks ahead
- Salesforce monitors, doesn't interpret
- Define scenarios + success criteria
- Respect governor limits
UAT Quality
- Business users drive tests
- Use RTM
- Focus on real workflows & edge cases
🔀 Flow Charts
1) Pick the right testing type
```mermaid
flowchart TD
    A[Testing need] --> B[Unit]
    A --> C[Integration]
    A --> D[System]
    A --> E[User acceptance]
    A --> F[Regression]
    A --> G[Performance]
    G --> H[Load]
    G --> I[Stress]
```
2) Apex unit test flow
```mermaid
flowchart TD
    A[Start test] --> B[Create test data]
    B --> C[Start test block]
    C --> D{Need HTTP callout mock?}
    D -->|Yes| E[Set mock object]
    D -->|No| F[Invoke method under test]
    E --> F
    F --> G[Stop test block]
    G --> H[Assert expected results]
    H --> I[Done]
```
3) Deployment with run specified tests
```mermaid
flowchart TD
    A[Begin deployment] --> B[Choose test level]
    B --> C[Run specified tests]
    B --> D[Run local or all tests]
    C --> E[List test classes in deploy options]
    E --> F[Deploy]
    D --> F
    F --> G[Check failures and coverage]
    G --> H[Fail deployment]
    G --> I[Succeed deployment]
```
4) Performance testing approval
```mermaid
flowchart TD
    A[Plan scenarios and metrics] --> B[Use sandbox only]
    B --> C[Prepare business justification]
    C --> D[Submit for approval two weeks ahead]
    D --> E[Approval granted]
    D --> F[Approval not granted]
    E --> G[Execute load or stress tests]
    G --> H[Monitor with tools]
    H --> I[Analyze results]
    I --> J[Optimize and retest]
    F --> C
```
5) Unified test data strategy
```mermaid
flowchart TD
    A[Define data policy] --> B[Model representative data]
    B --> C[Mask or anonymize sensitive data]
    C --> D[Choose sandbox type]
    D --> E[Create data with factories]
    D --> F[Bulk load with data loader]
    D --> G[Generate mock data]
    E --> H[Insert related records with composite]
    F --> H
    G --> H
    H --> I[Automate loads and refresh]
    I --> J[Batch apex cleanup]
    J --> K[Monitor and audit access]
```
6) UAT with requirements traceability
```mermaid
flowchart TD
    A[Prepare UAT sandbox] --> B[Build traceability matrix]
    B --> C[Map requirements to test cases]
    C --> D[Prepare UAT testers]
    D --> E[Execute UAT scenarios]
    E --> F[Record defects]
    F --> G[Fix and redeploy]
    G --> H[Retest until pass]
    H --> I[Business sign off]
```
7) LWC unit testing with Jest
```mermaid
flowchart TD
    A[Set up Jest] --> B[Create test file]
    B --> C[Create element]
    C --> D[Append to document]
    D --> E[Simulate user action]
    E --> F[Assert shadow DOM]
    F --> G[Wait for async if needed]
    G --> H[Clean up after each]
    H --> I[Run in watch mode]
```
8) Aura component test strategy
```mermaid
flowchart TD
    A[Plan Aura tests] --> B[Choose JavaScript framework]
    A --> C[Choose WebDriver for end to end]
    B --> D[Write client side tests]
    C --> E[Run accessibility checks]
    D --> F[Test Apex controllers with Apex tests]
    E --> G[Report issues and fix]
    F --> G
```
🎴 Flashcards
🧩 Testing Categories
What are the two main categories of testing in Salesforce?
🖐 Manual testing and 🤖 Automated testing
When should you use manual testing?
When human judgment or subjective evaluation is needed (UX, exploratory, one-off scenarios)
When should you use automated testing?
For repeatable scenarios (e.g. regression, CI/CD pipelines, data-driven tests)
Give examples of automated testing tools
⚙️ Selenium, Copado, Provar, ACCELQ, UTAM, WebdriverIO, Jest
🧪 Testing Types
What does unit testing validate?
The smallest testable parts (e.g. individual Apex methods) work correctly in isolation.
What does integration testing validate?
Components work together correctly (e.g. Apex ↔ external API)
What does system testing validate?
The entire Salesforce system works as expected end-to-end
What does UAT (user acceptance testing) validate?
The system meets business needs in real-world use, run by end users in a full/partial copy sandbox
What is regression testing?
Checks that new features/updates don't break existing functionality
What is performance/load/stress testing?
Evaluates app performance, scalability, and limits under expected or extreme usage
⚙️ Apex Unit Testing Essentials
What is the minimum code coverage required to deploy Apex code?
✅ 75% overall (and every trigger must have some coverage)
Can unit tests send emails or make HTTP callouts?
❌ No – must use `HttpCalloutMock` with `Test.setMock()` for callouts; emails are blocked
What are `Test.startTest()` and `Test.stopTest()` for?
They reset governor limits for the code between them
Should tests use `@IsTest(SeeAllData=true)`?
⚠️ Generally no – tests should create their own data to avoid dependency on org data
What are `Limits` methods for?
To check governor usage (e.g. heap, DML, callouts) during a test
📊 Test Execution & Coverage
What tools can run Apex tests?
🧪 Apex Test Execution page, Apex Classes page, Developer Console, Visual Studio Code, Code Builder, Tooling REST/SOAP API
How to shorten deployment time when running tests?
Use `RunSpecifiedTests` (class-level only) in `DeployOptions` when deploying via the Metadata API
How is code coverage calculated?
(covered lines) / (covered + uncovered lines) × 100 – comments, braces, and debug lines don't count
Where to view code coverage?
In the Tests tab of Developer Console (rerun tests to refresh)
⚡ Testing Lightning Components
How to unit test Lightning Web Components (LWC)?
Use Jest locally: create element, append to document, assert shadow DOM, simulate user actions, run in watch mode
How to test Aura components?
Use JS frameworks like Jest/Mocha/WebdriverIO and run accessibility tests (JavaScript and WebDriver environments)
How to test server-side logic used by Aura components?
Write Apex test classes for the Apex controllers
📈 Performance Testing Rules
Can performance testing be done in production?
❌ No – only in sandbox (Salesforce is multi-tenant)
What approval is needed for performance testing?
Must request approval from Salesforce Support at least 2 weeks ahead with a business justification
Does Salesforce interpret the results?
❌ No – they only monitor activity; you analyze the results yourself
🧩 Unified Test Data Strategy
Why use a unified test data strategy?
To ensure test data is representative, secure, consistent, and compliant across all stages
How to protect sensitive data in test environments?
Use Salesforce Data Mask to mask data in sandboxes
How to insert related records in one transaction?
Use REST Composite or sObject Tree requests
How to clean up large volumes of test data?
Schedule a Batch Apex job to delete it (e.g. nightly reset)
Why avoid using production data directly?
⚠️ Risk of PII/PHI exposure and regulatory non-compliance