
Write strong acceptance criteria for a user story

Your team writes test cases at sprint kickoff instead of acceptance criteria, and a third of the surprises that hit QA trace back to scope you never debated up front. This prompt drafts crisp acceptance criteria for a single story — testable pass/fail conditions, edge cases, negative criteria — so the team agrees on "done" before code starts, not at release.


Acceptance Criteria: The Cheapest Place to Catch a Bug Is in the Story

A bug found at QA review costs roughly an order of magnitude more than the same bug prevented during story refinement, and a bug found in production costs another order of magnitude. Acceptance criteria — the testable pass/fail conditions written before a story enters a sprint — are where that prevention actually happens. Teams that skip them, or that paper over the gap by writing test cases at sprint kickoff, end up debating scope in code review and triaging it during release week. The Agile Alliance and product organizations like Atlassian and Aha! have spent two decades documenting the same pattern: when engineering and product disagree about what "done" means, the disagreement surfaces late and expensive.

ESI International's much-cited industry survey reported that half of projects fail primarily because of poor requirements planning and communication. Numbers vary across studies, but the qualitative finding is durable — most of what looks like a delivery problem at release is actually a requirements problem at refinement.

Why test cases are not a substitute for acceptance criteria

Many teams write test cases early in a sprint and treat them as acceptance criteria's understudy. The two artifacts have overlapping content but different jobs. Test cases describe how QA will verify behavior; they belong to QA and read like procedures. Acceptance criteria describe what the team has agreed must be true for the story to be accepted; they belong to product and the whole team and read like contracts. The difference matters at three moments:

  • Sprint planning: engineers need a contract to estimate against, not a test plan they cannot yet write. Acceptance criteria let estimation start; test cases assume the estimate is settled.
  • Code review: a reviewer needs the agreed scope to push back on creep. Test cases tell the reviewer what was tested, not what should have been built.
  • QA handoff: QA writes test cases from acceptance criteria, layering verification logic on top of agreed behavior. Skipping the AC step forces QA to invent the contract while writing the verifier — a known source of escaped scope and late-discovered ambiguity.
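
To make the split concrete, consider one criterion and one verifier for a hypothetical password-reset story. The helper names below (issue_reset_link, redeem, the app_clock fixture) are invented for illustration, not drawn from any real codebase.

    The acceptance criterion (the contract, owned by product):

        "A reset link expires 24 hours after it is issued."

    One pytest-style test case QA might later derive from it (the verifier):

        from datetime import timedelta

        def test_reset_link_rejected_after_24_hours(app_clock, user):
            # issue_reset_link and redeem are hypothetical app helpers;
            # app_clock is a hypothetical fixture for controlling time.
            link = issue_reset_link(user)
            app_clock.advance(timedelta(hours=24, seconds=1))
            assert redeem(link).status == "expired"

The criterion can be agreed at refinement, before any code exists; the test cannot be written until the surfaces it calls exist. That gap is why one artifact cannot stand in for the other.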

The Agile Alliance glossary frames acceptance criteria as the conditions a story must satisfy to be accepted by the user or stakeholder. The Definition of Done, per its own glossary entry, is the standing bar every story must clear regardless of feature. The two work together — story-specific AC plus team-wide DoD — and neither replaces the other.

How the "Write strong acceptance criteria for a user story" Prompt Works

The prompt runs in six steps. Step 1 validates the story shape itself: if "As a [user], I want [capability] so that [outcome]" has a vague user, capability, or outcome, the story is fixed first. Most weak acceptance criteria trace to a weak underlying story; the prompt refuses to paper over the source.
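
For example, with an invented story used only to illustrate the fix:

    Vague:  As a user, I want better search so that I can find things faster.
    Fixed:  As a support agent, I want to filter open tickets by status and
            assignee so that I can clear my own queue first.

Every downstream criterion gets easier to write once the user, capability, and outcome are this specific.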

Step 2 produces a rule-oriented checklist of 6–10 conditions. The prompt enforces three discipline rules: each condition is independently pass/fail (no compound clauses smuggling two tests into one bullet), uses shared vocabulary, and describes WHAT must be true rather than HOW it should be implemented. ProductPlan's glossary entry on acceptance criteria flags the same trap — when criteria leak implementation, they over-constrain engineering and under-specify behavior.
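
A sketch of what those rules produce, for a hypothetical channel-muting story:

    Compound (fails the first rule):
        User can mute a channel and muted channels send no push notifications.

    Split into independently pass/fail conditions:
        1. A mute control is available on every channel the user follows.
        2. A muted channel sends that user no push notifications.
        3. Muting a channel takes effect on all of the user's devices.

Each line states WHAT must be true; none names a storage mechanism, endpoint, or component, so the HOW stays with engineering.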

Step 3 adds 2–3 Given/When/Then scenarios for the highest-risk paths. The Given/When/Then format is the more rigorous of the two common templates because it forces a precondition, an action, and an observable outcome — a structure that maps directly to executable tests. Aha!'s requirements guide recommends Given/When/Then for scenarios where timing or sequence of state changes matters, and the rule-oriented checklist for everything else. The prompt uses both because most non-trivial stories need both.
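
A minimal illustration for a hypothetical checkout story, with a pytest skeleton showing how directly the format maps to a test (expired_session, submit_checkout, and orders are invented for the example):

    Given a signed-in user whose session token has expired
    When they submit the checkout form
    Then no order is created
    And they are redirected to sign-in with their cart preserved

        def test_checkout_with_expired_session(expired_session):
            # Given: the (hypothetical) expired_session fixture sets the precondition.
            response = submit_checkout(expired_session)         # When
            assert orders.count(expired_session.user) == 0      # Then
            assert response.redirect_target == "/sign-in"       # And
            assert response.cart_preserved is True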

Step 4 surfaces edge cases and negative criteria — the empty input, the boundary maximum, the permission-denied path, the network drop, the concurrent edit. These are the scenarios that bite in production when QA only tested the happy path. The step explicitly asks for 2–3 "out of scope" lines because, in practice, a clear out-of-scope list prevents more bugs than a thorough in-scope list. Engineers who know what NOT to build stop building it.
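
For a hypothetical CSV-export story, the step's output might read:

    Edge and negative criteria:
        • Exporting an empty list yields a file with headers only, not an error.
        • A user without the export permission sees a disabled control, not a failure page.
        • A connection drop mid-export discards the partial file.

    Out of scope:
        • Scheduled or recurring exports.
        • Formats other than CSV.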

Step 5 cross-checks the story against the team's standing Definition of Done — tests written, docs updated, telemetry shipped, accessibility checked. Acceptance criteria are story-specific; DoD is the standing bar. Aha!'s comparison page explains the layering: a story can satisfy its AC and still fail DoD if no tests were written. Treating the two as one artifact — the most common confusion — lets one of them quietly slip.
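
Continuing the export example, the layering looks like this (the DoD items shown are typical, not prescriptive):

    Story-specific AC:  all of the export criteria above pass.
    Team-wide DoD:      unit tests written, docs updated, export events emit
                        telemetry, accessibility check on the new control.

The story can satisfy every AC line and still be blocked by the missing unit tests, which is exactly the failure mode the comparison page describes.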

Step 6 names the people who must agree before the story enters the sprint: engineering lead, designer, QA, and any stakeholder mentioned in the story's user role. This is the moment that converts the artifact from a doc into a contract. Without sign-off, the criteria function as one PM's wish list.

Format choice: rule-oriented vs Given/When/Then vs hybrid

Atlassian's user story guide describes acceptance criteria as the necessary conditions for a story to be considered complete. It is silent on which format to use, because format choice is contextual:

  • Rule-oriented checklists are best for stories with many small, independent conditions (settings panels, validation rules, list views).
  • Given/When/Then scenarios are best for stories with branching behavior or stateful transitions (checkout flows, async jobs, permission boundaries).
  • Hybrid is the realistic default — checklist for the breadth of conditions, Given/When/Then for the 2–3 scenarios where timing or sequence determines correctness (a sketch follows this list).
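
A hybrid layout for a hypothetical sign-in lockout story might look like:

    Checklist (breadth):
        • Five failed attempts within ten minutes lock the account.
        • The lockout message states the remaining wait time.
        • Lockouts are recorded in the audit log.

    Given/When/Then (sequence-critical):
        Given a user who has failed sign-in five times in ten minutes
        When they submit a sixth attempt with the correct password
        Then the attempt is rejected and the remaining wait time is shown

The scenario earns its format: only a sequenced precondition captures that even a correct password is rejected during lockout.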

Teams that pick one format dogmatically tend to over- or under-specify. The prompt asks for both because most stories need both. Mountain Goat Software's writing on the user story template makes the same point about the underlying story: format decisions are downstream of clarity about user, capability, and outcome.

When to Use It

  • Your team writes test cases at sprint start instead of acceptance criteria, and you have felt the cost at QA or release.
  • A new PM is joining and you want to install a refinement standard before drift sets in.
  • A complex story keeps surfacing late edge cases in production — the AC for that class of work is too thin.
  • A cross-functional team includes designers, engineers, and QA, and the team needs one artifact each function reads.
  • You are moving from a vendor-driven spec process to product-owned story refinement.

Common Pitfalls

  • Compound criteria. "User can save and the save persists across sessions" is two conditions hiding as one. Split them — each condition must be independently pass/fail.
  • Implementation in disguise. "Use a debounced API call with 300ms delay" describes how, not what. Acceptance criteria describe observable behavior; the implementation belongs to engineering.
  • No out-of-scope list. A 10-line in-scope list with no out-of-scope lines lets engineering quietly extend the story. The out-of-scope list is the negative space that defines the shape.

Sources

  1. User stories with examples and a template (Atlassian)
  2. What are acceptance criteria? (Aha!)
  3. Acceptance criteria vs definition of done (Aha!)
  4. Acceptance criteria glossary (ProductPlan)
  5. Acceptance criteria glossary entry (Agile Alliance)
  6. Definition of Done glossary entry (Agile Alliance)
  7. Advantages of the user story template (Mountain Goat Software)
