AI PRD Review & Improvement
Get an AI-powered review of your PRD that checks for completeness, clarity, technical feasibility gaps, and missing edge cases. Returns a scorecard with specific improvement suggestions.
The PRD That Launched a Feature Nobody Wanted
True story from a friend at a growth-stage startup. She wrote a 14-page PRD. It went through three rounds of review. Engineering estimated it, design mocked it up, the team built it over two sprints. The feature launched to 0.3% adoption after six weeks.
Looking back, the PRD had all the right sections. Problem statement, user personas, requirements, success metrics. But it was missing something obvious — nobody had validated the core assumption that users actually had the problem described on page one. The PRD was structurally complete and strategically hollow.
Why PRD Review Is Everyone's Blind Spot
Marty Cagan has been saying this for years: the biggest risk isn't building the thing wrong, it's building the wrong thing. Yet most PRD review processes focus almost entirely on the former. They check for clear requirements, edge cases, and technical feasibility. They rarely pressure-test whether the problem is real, the target user is correctly defined, or the success metrics would actually move if the feature worked perfectly.
According to Pendo's 2024 State of Product Leadership report, 80% of features are rarely or never used. Eight out of ten things product teams build don't matter to users. That's not an engineering problem — it's a PRD problem. The decisions were wrong upstream, and the review process didn't catch them.
The second failure mode is vagueness disguised as flexibility. Requirements like "the experience should feel intuitive" or "loading should be fast" pass review because everyone nods along. Then engineering interprets "fast" as 2 seconds, the PM meant 200 milliseconds, and nobody discovers the gap until QA.
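One way to close that gap is to turn "fast" into an explicit, testable budget before the PRD ships. A minimal sketch, with illustrative numbers and metric names that are assumptions, not anything from an actual PRD:

```python
# Hypothetical performance budgets, in milliseconds. Writing the number
# down forces the 2-seconds-vs-200-milliseconds conversation to happen
# at review time instead of in QA.
PERF_BUDGETS_MS = {
    "dashboard_initial_load_p95": 200,  # PM intent: 200 ms, not 2 s
    "search_results_p95": 500,
}

def within_budget(metric: str, observed_ms: float) -> bool:
    """Return True if an observed latency meets the agreed budget."""
    return observed_ms <= PERF_BUDGETS_MS[metric]
```

The point isn't the code, it's that a requirement either compiles down to a check like this or it's still ambiguous.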
How This Prompt Helps
This prompt acts as a Staff PM reviewer for your PRD. It evaluates across eight dimensions: problem clarity, user definition, success metrics quality, requirement specificity, edge case coverage, technical feasibility signals, scope control, and stakeholder alignment. Each dimension gets a score and specific feedback.
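To make the scorecard concrete, here's a minimal sketch of its shape. The eight dimension names come from the description above; the field names, score scale, and class structure are assumptions about one reasonable way to represent the output:

```python
from dataclasses import dataclass, field

# The eight review dimensions named in the article.
DIMENSIONS = [
    "problem_clarity", "user_definition", "success_metrics_quality",
    "requirement_specificity", "edge_case_coverage",
    "technical_feasibility_signals", "scope_control",
    "stakeholder_alignment",
]

@dataclass
class DimensionReview:
    score: int           # assumed scale: 1 (weak) to 5 (strong)
    feedback: str        # specific, actionable note
    follow_up: str = ""  # pressure-testing question, if any

@dataclass
class PRDScorecard:
    reviews: dict[str, DimensionReview] = field(default_factory=dict)

    def overall(self) -> float:
        """Average score across all reviewed dimensions."""
        return sum(r.score for r in self.reviews.values()) / len(self.reviews)
```

A scorecard like this makes it easy to spot the pattern that matters: a 5 on requirement specificity next to a 2 on problem clarity is exactly the "structurally complete, strategically hollow" PRD from the opening story.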
The real teeth are in the follow-up questions. Instead of just saying "your success metric is weak," it asks "If this feature works exactly as designed, what specific number changes? By how much? In what timeframe? If you can't answer that, the metric isn't a metric — it's a hope."
When to Reach for This
- Before sharing a PRD with engineering to catch gaps that waste build time
- When you've been staring at a document for days and need fresh, critical eyes
- As a self-review checklist before submitting PRDs in a formal review process
- When you're a new PM and want to calibrate your PRD quality against a high bar
- After a feature underperforms and you want to retrospectively audit the PRD to learn what to check next time
What Good Looks Like
A strong output delivers a scorecard with honest scores — not everything gets a 5/5. The most valuable feedback identifies what's missing, not what's wrong with what's there. Strong reviews flag implicit assumptions ("you're assuming users already understand concept X — is that validated?") and suggest specific rewrites for vague requirements. The goal is a PRD that an engineer can read and start building without asking clarifying questions.
Sources
- Inspired: How to Create Tech Products Customers Love — Marty Cagan / SVPG
- State of Product Leadership 2024 — Pendo
- The PRD is Dead, Long Live the PRD — Lenny's Newsletter