Building accounts for 14% of the total score on the Salesforce Platform Lifecycle and Deployment Architect exam. The topic covers source control, Salesforce DX and source tracking, development models, and environments.
NOTE
Most of the content in this work was generated with the assistance of AI and carefully reviewed, edited, and curated by the author. If you have found any issues with the content on this page, please do not hesitate to contact me at support@issacc.com.
Source Control, Testing & Quality (Salesforce)
TL;DR
- Git as source of truth; pick a branching model by team size, cadence, and risk.
- Own your test data; cover positive/negative/permission cases; mock externals.
- Prefer package-based + scratch orgs for modular CI/CD; use org-based + sandboxes for simpler/legacy setups.
- Enforce standards, PRs, reviews, and static analysis in CI/CD.
1) Source Control Essentials (Git/GitHub)
Why Git: history, rollback, parallel work, reviews, CI hooks.
Key terms: repo, branch, commit, merge, tag
Merge types: fast-forward vs. recursive (merge commit)
Conflicts: overlapping edits to the same lines (or delete vs. edit)
Branching Models – Quick Chooser
- Centralized: 1–3 devs, super simple.
- GitHub Flow: CI/CD, short-lived features → PR → `main`.
- Gitflow: schedules, multiple trains; `main` + `develop` + feature/release/hotfix.
- Forking: open-source, strict control.
- Trunk-Based Dev (TBD): tiny branches, daily merges; feature flags.
Flashcard: What triggers a merge conflict?
Same lines edited on different branches (or delete vs. edit). Resolve locally, then merge.
Flashcard: When to tag?
At releases, using Semantic Versioning.
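The merge and tagging basics above can be tried in a throwaway repo; a minimal sketch (branch names and the `v1.0.0` tag are illustrative):

```shell
# Throwaway repo demonstrating a fast-forward merge and an annotated SemVer tag.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main                      # -b main needs git >= 2.28
git config user.email demo@example.com
git config user.name "Demo"
echo "v1" > app.txt
git add app.txt && git commit -qm "initial commit"

git switch -qc feature                   # short-lived feature branch
echo "v2" > app.txt
git commit -qam "feature work"

git switch -q main
git merge --ff-only feature              # fast-forward: main moves to the feature tip
git tag -a v1.0.0 -m "release 1.0.0"     # annotated tag at the release point
git log --oneline --decorate | head -1
```

Because the merge is a fast-forward, `main` and `feature` end up pointing at the same commit and no merge commit is created.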
2) Salesforce DX & Source Tracking
- VCS = source of truth; DX enables source-driven development.
- Salesforce CLI for pull/push, tests, and deploys.
- Scratch orgs: disposable, configurable, perfect for packages & CI.
- Source Tracking: on by default in scratch orgs; can be enabled for Developer and Developer Pro sandboxes (some metadata still requires manual tracking).
Flashcard: Why scratch orgs for packages?
Clean, reproducible environments; automation-friendly; minimal config drift.
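Scratch orgs are created from a definition file checked into the repo; a minimal sketch of a `config/project-scratch-def.json` (the org name and settings shown are placeholders), used with a command like `sf org create scratch -f config/project-scratch-def.json -a my-scratch`:

```json
{
  "orgName": "Acme Dev",
  "edition": "Developer",
  "settings": {
    "lightningExperienceSettings": {
      "enableS1DesktopEnabled": true
    }
  }
}
```

Keeping this file in version control is what makes every scratch org reproducible and drift-free.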
3) Test Data & Unit Testing (Apex)
Always create test data → repeatable, environment-agnostic tests.
Methods: brute force, `@TestSetup`, Test Data Factory, CSV static resource (great for large volumes).
Default: tests don't see org data → avoid `SeeAllData=true` (rare exceptions).
Test types
- Positive: valid in → expected out.
- Negative: invalid/edge → graceful handling.
- Permission-based: `System.runAs()` to verify CRUD/FLS/permission behavior.
- Mocks/Stubs: `HttpCalloutMock`, `Test.setMock()`, `StubProvider`.
Flashcard: When to use a CSV static resource?
Seeding large data volumes fast (scale/perf tests).
Flashcard: What belongs in `@TestSetup`?
Reusable baseline records shared by all tests (reset between methods).
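The patterns above (isolated data via `@TestSetup`, positive/negative paths, and `System.runAs()`) can be sketched in one test class; `OpportunityServiceTest` and the asserted messages are illustrative, not a definitive implementation:

```apex
@IsTest
private class OpportunityServiceTest {
    @TestSetup
    static void makeData() {
        // Baseline records; rolled back after each test method.
        List<Account> accts = new List<Account>();
        for (Integer i = 0; i < 5; i++) {
            accts.add(new Account(Name = 'Test Account ' + i));
        }
        insert accts;
    }

    @IsTest
    static void positive_validOpportunityInserts() {
        Account a = [SELECT Id FROM Account LIMIT 1];
        Opportunity opp = new Opportunity(Name = 'Deal', AccountId = a.Id,
            StageName = 'Prospecting', CloseDate = Date.today().addDays(30));
        Test.startTest();
        insert opp;
        Test.stopTest();
        System.assertNotEquals(null, opp.Id, 'Valid opportunity should insert');
    }

    @IsTest
    static void negative_missingRequiredFieldThrows() {
        try {
            insert new Opportunity(Name = 'Bad Deal'); // no stage/close date
            System.assert(false, 'Expected a DmlException');
        } catch (DmlException e) {
            System.assert(e.getMessage().contains('REQUIRED_FIELD_MISSING'));
        }
    }

    @IsTest
    static void permission_runAsSwitchesUserContext() {
        Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
        User u = new User(Alias = 'tuser', Email = 'tuser@example.com',
            EmailEncodingKey = 'UTF-8', LastName = 'User', LanguageLocaleKey = 'en_US',
            LocaleSidKey = 'en_US', ProfileId = p.Id, TimeZoneSidKey = 'GMT',
            UserName = 'tuser' + DateTime.now().getTime() + '@example.com');
        System.runAs(u) {
            // Code here executes as the restricted user; assert CRUD/FLS behavior.
            System.assertEquals(u.Id, UserInfo.getUserId());
        }
    }
}
```

For large volumes, `Test.loadData(Account.sObjectType, 'csvResourceName')` seeds records from a CSV static resource instead of loops.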
4) Development Models & Environments
Org-Based Model
- Sandboxes + VCS; more manual change tracking.
- Good for smaller/legacy teams.
Package-Based Model
- Modular (unlocked/2GP) packages, immutable versions, declared dependencies.
- Needs Dev Hub (with Unlocked Packages/2GP enabled).
- Works best with scratch orgs, CI, and automated tests.
Environments
- Scratch orgs: temporary, automated, ideal for features/CI/packages.
- Sandboxes: dev/integration/testing/training with prod-like data.
- Developer Edition vs. Partner Developer Edition: Partner DE = more API calls/storage/licenses (great for partners & app builds).
Tip
Many teams & modules + CI → package-based + scratch orgs.
Simpler/legacy → org-based + sandboxes.
5) Ensuring Code Quality
Standards & frameworks
- Team naming/style conventions; trigger framework (e.g., Kevin O'Hara's), error logging, bypass via custom permissions/settings.
- Apex best practices: bulkify, avoid SOQL/DML in loops, use the Limits class, write clear comments.
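Bulkification as listed above means collecting Ids first, then issuing one SOQL query and no DML per loop iteration; a minimal sketch (the handler class and field usage are illustrative):

```apex
public with sharing class OpportunityTriggerHandler {
    // Bulk-safe: one query total, regardless of how many records fire the trigger.
    public static void stampAccountIndustry(List<Opportunity> newOpps) {
        Set<Id> accountIds = new Set<Id>();
        for (Opportunity opp : newOpps) {
            if (opp.AccountId != null) {
                accountIds.add(opp.AccountId);   // collect Ids; no SOQL inside the loop
            }
        }
        Map<Id, Account> accounts = new Map<Id, Account>(
            [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]); // single query
        for (Opportunity opp : newOpps) {
            Account acct = accounts.get(opp.AccountId);
            if (acct != null) {
                // In a before-insert/update context, field writes need no extra DML.
                opp.Description = 'Industry: ' + acct.Industry;
            }
        }
    }
}
```

The anti-pattern this avoids is a `[SELECT ...]` or `update` statement inside the `for` loop, which burns governor limits linearly with batch size.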
PRs & Reviews
- Every change via PR → automated checks + human review.
- Use a checklist: standards, limits, security (CRUD/FLS), performance, readability.
- Keep feedback respectful & actionable; fix before merge.
Static Analysis
- Tools: PMD, Salesforce CLI Scanner, CodeScan, Clayton.
- Run in CI on every PR; block on criticals.
Security (AppExchange)
- Validate against OWASP threats; enforce CRUD/FLS (a common security-review failure).
Flashcard: What runs in CI before merge?
Static analysis + unit tests (plus build/package steps).
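A CI job wiring static analysis and tests to every PR might look like this hypothetical GitHub Actions fragment; the scanner and scratch-org commands assume the Salesforce CLI with the Code Analyzer plugin, so treat it as a sketch, not a verified pipeline:

```yaml
name: pr-checks
on: pull_request
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Salesforce CLI
        run: npm install -g @salesforce/cli
      - name: Static analysis (block on criticals)
        run: sf scanner run --target force-app --severity-threshold 1
      - name: Deploy and test in a scratch org
        run: |
          sf org create scratch -f config/project-scratch-def.json -a ci-org
          sf project deploy start -o ci-org
          sf apex run test -o ci-org --wait 10 --code-coverage
```

Failing the scanner or test step blocks the merge, which is the "block on criticals" gate described above.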
Scenario Playbook (Quick Picks)
Multiple teams, different timelines
GitHub Flow or Gitflow; feature branches + PRs; protect `main`.
Need releases + hotfixes
Gitflow with release/hotfix branches.
External contributors
Forking; PRs from forks; maintainers control merges.
Rapid CI/CD, minimal branch complexity
TBD with feature flags; small daily merges.
Tests broke after record type changes
Update `@TestSetup`; query record type Ids dynamically (no hard-coding).
Ops volume up (150 → 300/hr)
Double the test data; use a factory/CSV; add negative & permission tests.
One org, independent releases
Package-based + scratch orgs; enable Dev Hub and Unlocked/2GP.
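Querying record type Ids dynamically, as the record-type scenario above recommends, avoids hard-coded Ids that break between orgs; `Partner_Account` is a hypothetical developer name:

```apex
// Resolve a record type Id at runtime instead of hard-coding an org-specific Id.
Id partnerRtId = Schema.SObjectType.Account
    .getRecordTypeInfosByDeveloperName()
    .get('Partner_Account')          // hypothetical developer name
    .getRecordTypeId();
Account acct = new Account(Name = 'Acme Partner', RecordTypeId = partnerRtId);
```

Developer names are stable across environments, while record type Ids are not, which is why tests and `@TestSetup` methods should always resolve them this way.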
Mini-Checklists
PR Checklist
- Tests (positive/negative/permission) updated
- Static analysis passes (no new criticals)
- CRUD/FLS + governor limits respected
- Naming/format/trigger framework followed
- Release tag/rollback plan ready
Test Checklist
- `@TestSetup` or factory used
- No `SeeAllData=true` (unless justified)
- Mock callouts; stub internals
- Asserts verify behavior & messages
- Data volumes approximate reality
Flow Charts
Branching Strategy Chooser

```mermaid
flowchart TD
  A[Start] --> B{Team size / cadence}
  B -->|Tiny team, simple| C[Centralized<br/>single main]
  B -->|Rapid CI/CD| D[GitHub Flow<br/>short-lived branches + PRs]
  B -->|Multiple releases / hotfixes| E[Gitflow<br/>main + develop + feature/release/hotfix]
  B -->|External contributors| F[Forking<br/>PRs from forks]
  B -->|Daily merges, flags| G[Trunk-Based Dev<br/>tiny branches + feature flags]
```

GitHub Flow (happy path)

```mermaid
flowchart LR
  A[Create branch from main] --> B[Commit changes]
  B --> C[Open Pull Request]
  C --> D[CI: tests & static analysis]
  D -->|Pass| E[Review & Approve]
  D -->|Fail| B
  E --> F[Merge to main]
  F --> G[Delete branch & deploy]
```

Gitflow Overview

```mermaid
flowchart LR
  subgraph Primary
    M[main] --- D[develop]
  end
  subgraph "Feature Cycle"
    D --> F1[feature/*]
    F1 --> D
  end
  subgraph "Release Cycle"
    D --> R[release/*]
    R --> M
    R --> D
  end
  subgraph Hotfix
    M --> H[hotfix/*]
    H --> M
    H --> D
  end
```

Trunk-Based Development

```mermaid
flowchart TD
  A[Work in trunk or very short-lived branch] --> B[Small commits daily]
  B --> C[CI runs: build, tests, lint]
  C -->|Green| D[Merge quickly to main]
  C -->|Red| A
  D --> E[Feature flags hide incomplete work]
  E --> F[Continuous deploy]
```

Apex Testing & Test Data Map

```mermaid
flowchart LR
  T1["@TestSetup / Factory / CSV"] --> T2["Isolated Test Data"]
  T2 --> T3["Positive Tests"]
  T2 --> T4["Negative Tests"]
  T2 --> T5["Permission-based Tests (runAs)"]
  T2 --> T6["Mocks & Stubs"]
  T3 --> T7["Assertions: expected outputs"]
  T4 --> T8["Try/Catch + expected exceptions"]
  T5 --> T9["CRUD/FLS & sharing verified"]
  T6 --> T10["External deps isolated"]
```

Merge Types (quick reference)

```mermaid
flowchart LR
  subgraph "Fast-Forward"
    A1[main] --> B1[feature tip]
  end
  subgraph "Recursive Merge"
    A2[main] --- B2[feature]
    B2 --> C2[merge commit on main]
  end
```
Flashcards
Git & Branching
What's the difference between Git and GitHub?
Git: local version control tool (commits, branches, merges).
GitHub: hosted collaboration platform (PRs, issues, reviews, CI hooks).
When should I use the Centralized workflow?
Tiny team (1–3)
Simple collaboration, minimal branching
Low risk of conflicting changes
When should I use GitHub Flow?
CI/CD with rapid releases
Short-lived feature branches → PR → `main`
High collaboration, fast feedback
When should I use Gitflow?
Multiple release trains / scheduled releases
Need release and hotfix branches
Large/complex projects
When should I use Forking?
Open-source or external contributors
Strict control over what gets merged
When should I use Trunk-Based Development (TBD)?
Very fast delivery
Tiny, short-lived branches; daily merges to `main`
Use feature flags for incomplete work
What causes a merge conflict?
- Two edits to the same lines (or delete vs. edit) across branches.
When should I create a tag?
- At releases; prefer Semantic Versioning (e.g., `v2.3.1`).
Fast-forward vs. recursive (merge commit)?
Fast-forward: the target branch pointer moves to the feature tip; no merge commit.
Recursive: creates a merge commit to combine histories.
Salesforce DX & Environments
What's the source of truth in modern Salesforce dev?
- The version control system (Git), not any individual org.
Why use scratch orgs?
Disposable, configurable, reproducible
Great for packages, automation, CI
What is Source Tracking?
- Automatically tracks changes between the local project and a scratch org (and eligible sandboxes when enabled). Some metadata still requires manual tracking.
Sandboxes vs. scratch orgs: quick rule?
Scratch: short-lived, automated; feature/CI/package work.
Sandbox: longer-lived; integration/testing/training with prod-like data.
When to use Partner Developer Edition over Developer Edition?
- When you need more API calls/storage/licenses for partner app builds or managed beta testing.
Dev Models
When to prefer org-based development?
Smaller/legacy teams
Heavier use of sandboxes, more manual change tracking
Why move to package-based development?
Modularity & immutable versions
Independent releases and rollbacks
Strong CI/CD fit; declared dependencies across packages
What must be enabled to use packages?
- Dev Hub, Unlocked Packages, Second-Generation Managed Packages.
Apex Testing & Test Data
Why must tests create their own data?
Ensures repeatability; no hidden dependency on org data.
Avoid `SeeAllData=true` except in rare cases.
Four ways to create test data?
Brute-force inserts
`@TestSetup` method
Test Data Factory
CSV static resource (great for large volumes)
What are positive tests?
- Valid inputs → confirm expected outputs via asserts.
What are negative tests?
- Invalid/edge inputs → ensure graceful handling (exceptions caught and verified).
What are permission-based tests?
- Verify behavior under different users/permissions via `System.runAs()`; check CRUD/FLS/sharing.
How to isolate external dependencies?
`HttpCalloutMock` + `Test.setMock()` for callouts
`StubProvider` for class/method behavior
When to use a CSV static resource?
- Need large data volumes quickly for scale/perf tests.
What belongs in `@TestSetup`?
- Reusable baseline records shared by tests (reset each method).
Quality: Standards, PRs, Reviews, Static Analysis
Why adopt coding standards?
- Consistency, maintainability, fewer bugs, faster onboarding.
What's a good trigger framework?
- Single trigger per object, handler class, recursion prevention, bulk-safe, reusable, traceable (e.g., Kevin O'Hara's pattern).
Which static analysis tools are common?
- PMD, Salesforce CLI Scanner, CodeScan, Clayton.
Where should static analysis run?
- In CI on every PR; block on critical findings.
What's the top AppExchange security failure?
- Missing CRUD/FLS enforcement. Also handle OWASP threats (XSS, injection, etc.).
PR checklist essentials?
Tests (positive/negative/permission)
Static analysis passes
CRUD/FLS & governor limits respected
Standards/format/framework followed
Tag/release notes ready
Scenario Quick Hits
Multiple teams, different timelines: branching?
GitHub Flow (fast) or Gitflow (scheduled)
Feature branches + PRs; protect `main`.
Need hotfixes during heavy dev?
- Gitflow with release and hotfix branches.
Tests broke after record type changes: fix?
- Update `@TestSetup` to query record type Ids dynamically; never hard-code them.
Volume jump (e.g., 150 → 300/hr): what to do?
Increase test data accordingly (factory/CSV)
Add negative and permission test coverage.
One org, independent releases for many teams?
- Move to package-based + scratch orgs; enable Dev Hub & 2GP.