Pro-Owner perspective: This document frames your systems as a technical estate: an asset to be stewarded, documented, and bequeathed. Treat these steps as craftsmanship: protect the continuity, auditability, and transferability of your digital legacy.
What it is
A structured process for validating that completed features meet defined acceptance criteria before deployment. Criteria cover functionality (does it work?), performance (is it fast enough?), security (is it safe?), and usability (can users operate it?). Evidence artifacts (screenshots, logs, test results) prove the criteria are met, and stakeholder sign-off is required before deployment.
Unlike informal "looks good to me" reviews, acceptance testing requires explicit criteria (defined during planning) and documented evidence (captured during testing).
Why it matters
Without acceptance criteria, "done" is subjective. Acceptance testing makes "done" measurable: "Feature X is accepted when [specific conditions] are met with [specific evidence]." This prevents rework ("that's not what I asked for"), incomplete features ("it works on my machine"), and deployment delays ("we need to test that again").
Acceptance testing also creates audit trails: stakeholders can't claim they never approved a feature when evidence shows their sign-off.
How we do it
- Define criteria (during planning):
  - Functionality: What user actions must succeed? What edge cases must be handled?
  - Performance: What are acceptable load times, throughput, resource usage?
  - Security: What security controls must be present (auth, input validation, logging)?
  - Usability: What user flows must be intuitive? What errors must be clear?
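The four criteria categories above can be captured in a machine-readable form so that tooling can track status and evidence per criterion. A minimal sketch; the `Criterion` class, its field names, and the login-feature criteria are illustrative assumptions, not part of any existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One measurable acceptance criterion in 'When [action], then [outcome]' form."""
    category: str      # "functionality" | "performance" | "security" | "usability"
    action: str        # the user or system action under test
    outcome: str       # the specific, observable result that counts as "met"
    met: bool = False
    evidence: list = field(default_factory=list)  # artifact links, filled during testing

# Hypothetical criteria for a login feature:
criteria = [
    Criterion("functionality", "user submits valid credentials", "dashboard loads"),
    Criterion("performance", "100 concurrent logins", "p95 latency stays under 200 ms"),
    Criterion("security", "user fails login 5 times", "account locks and event is logged"),
    Criterion("usability", "login fails", "error message names the failing field"),
]
```

Keeping `met` and `evidence` on the criterion itself means the acceptance checklist can be generated directly from this list rather than maintained by hand.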
- Gather evidence (during testing):
  - Screenshots: Showing successful user flows, error states, edge cases.
  - Logs: Showing security events (auth, access), performance metrics, error handling.
  - Test results: Unit test coverage, integration test pass rate, E2E test videos.
  - Checklists: Manual validation of non-automatable criteria (accessibility, copy accuracy).
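One way to keep these artifact types cross-referenced with the criteria they validate is an evidence manifest. This is a sketch under assumptions: the criterion IDs (`FUNC-1`, `SEC-1`, etc.), file paths, and manifest shape are hypothetical, not a prescribed schema:

```python
# Hypothetical evidence manifest: one entry per artifact, tagged with the
# criteria it validates, so the gallery and checklist can cross-reference them.
evidence_manifest = [
    {"type": "screenshot",   "path": "evidence/login/success-path.png",
     "criteria": ["FUNC-1"]},
    {"type": "log",          "path": "evidence/login/auth-events.log",
     "criteria": ["SEC-1", "SEC-2"]},
    {"type": "test_results", "path": "evidence/login/e2e-report.html",
     "criteria": ["FUNC-1", "PERF-1"]},
    {"type": "checklist",    "path": "evidence/login/a11y-checklist.md",
     "criteria": ["USE-1"]},
]

def artifacts_for(criterion_id: str) -> list:
    """Return every artifact that claims to validate the given criterion."""
    return [a for a in evidence_manifest if criterion_id in a["criteria"]]
```

Tagging artifacts with criterion IDs (rather than the reverse) lets one screenshot or log serve several criteria without duplication.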
- Stakeholder review:
  - Present the evidence gallery to stakeholders (product owner, security lead, compliance).
  - Address any gaps (missing criteria, insufficient evidence, unclear outcomes).
  - Obtain sign-off: "I confirm this feature meets acceptance criteria and is approved for deployment."
- Archive evidence: Store in version control (Git LFS for media) and link to the feature ticket (Jira, Linear).
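Because the sign-off record is the audit trail, it helps to bind it to the exact evidence set that was reviewed. A minimal sketch, assuming a SHA-256 digest over the sorted evidence paths is an acceptable tamper-evidence mechanism; the record fields are illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone

def sign_off_record(approver: str, role: str, evidence_paths: list) -> dict:
    """Build a sign-off record: who approved, when, and a digest of the
    exact evidence set reviewed, so later tampering is detectable."""
    digest = hashlib.sha256(
        json.dumps(sorted(evidence_paths)).encode("utf-8")
    ).hexdigest()
    return {
        "statement": ("I confirm this feature meets acceptance criteria "
                      "and is approved for deployment."),
        "approver": approver,
        "role": role,
        "approved_at": datetime.now(timezone.utc).isoformat(),
        "evidence_digest": digest,  # recompute over archived evidence to audit
    }

# Hypothetical usage:
record = sign_off_record("j.doe", "product owner",
                         ["evidence/login/success-path.png",
                          "evidence/login/auth-events.log"])
```

Sorting the paths before hashing makes the digest independent of evidence ordering, so a later audit only has to re-list the archived files and recompute.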
What you receive
- Acceptance checklist: All criteria, status (met/not met), evidence links.
- Evidence gallery: Organized artifacts (screenshots by user flow, logs by event type, test results by suite).
- Sign-off record: Who approved, when, based on what evidence.
- Template for next feature: Pre-filled with common criteria (adapt per feature).
All artifacts are stored in the feature management tool (Jira, Linear, GitHub Issues) with immutable timestamps.
Evidence
Interactive evidence gallery:
- Evidence objects: Cards showing screenshots, logs, checklists, test results.
- Click to expand: Each object opens a detail drawer with full context (user flow, expected vs. actual, pass/fail status).
- Acceptance criteria mapping: Each evidence object tagged with criteria it validates.
- Example scenarios:
  - User login flow (screenshot series showing success path + error states)
  - API performance logs (showing sub-200ms response times under load)
  - Security checklist (input validation, auth, logging verified)
Download acceptance testing package (templates + evidence collection guide + stakeholder forms): [Link]
Failure modes & guardrails
Failure mode: Criteria too vague ("should work well")
Guardrail: Criteria must be measurable. Reject vague criteria. Use format: "When [action], then [specific outcome]."
Failure mode: Evidence incomplete
Guardrail: Each criterion requires at least one evidence artifact. No sign-off without complete evidence.
Failure mode: Stakeholder sign-off rubber-stamped
Guardrail: Random sampling of approved features. If sampled feature doesn't meet criteria, escalate to manager.
Failure mode: Criteria changed after feature completion
Guardrail: Criteria locked when development starts. Post-development changes create new feature request.
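The first two guardrails above (measurable criteria, complete evidence) can be automated as pre-sign-off checks. A sketch, assuming criteria are plain strings in the "When [action], then [outcome]" format and evidence is tracked as a criterion-to-artifacts mapping; the function names and regex are illustrative:

```python
import re

# Guardrail 1: reject criteria that don't follow
# "When [action], then [specific outcome]."
WHEN_THEN = re.compile(r"^When .+, then .+", re.IGNORECASE)

def lint_criterion(text: str) -> bool:
    """True only if the criterion is phrased in measurable When/then form."""
    return bool(WHEN_THEN.match(text))

# Guardrail 2: no sign-off unless every criterion has at least one artifact.
def ready_for_sign_off(evidence_by_criterion: dict) -> bool:
    """True only if each criterion maps to one or more evidence artifacts."""
    return all(len(artifacts) >= 1
               for artifacts in evidence_by_criterion.values())

assert lint_criterion("When the user submits valid credentials, then the dashboard loads")
assert not lint_criterion("Should work well")
assert not ready_for_sign_off({"FUNC-1": ["success.png"], "SEC-1": []})
```

Running these checks in CI before the stakeholder review means vague criteria and evidence gaps surface early, rather than during the sign-off meeting.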