
Run a DRICE prioritization session with diligence scoring

RICE gave you a ranked list — and three of your top bets flopped last quarter because nobody pressure-tested the reach or impact numbers. DRICE adds a Diligence step that forces evidence for every R/I/C input before scoring, so the ideas that rise to the top earn it and your experiment win rate stops swinging on wishful guesses.

Delivery
5 uses · Published 4/17/2026 · Updated 4/17/2026

DRICE: Why the D (Diligence) Separates Win Rates

RICE scoring looks rigorous and routinely isn't — teams T-shirt-size Reach and Impact based on intuition and then treat the ranked output as evidence. The predictable result is a top-three list where one or two ideas collapse in execution because the reach or impact numbers were wrong from the start. Intercom's original RICE writeup flagged the pattern: RICE is a structuring tool, not an evidence-generation tool. Adding a Diligence step — a forced audit of each R/I/C input against real data before re-scoring — separates the ideas that survive first contact with reality from the ones that don't.

How the "Run a DRICE prioritization session" Prompt Works

The prompt runs a two-pass process. Pass one is classic RICE on the full backlog to surface the top 5-8 candidates. Pass two is a diligence audit on each top candidate:

  • Reach audit from actual analytics, not assumption
  • Impact audit anchored on a historical comparable change
  • Confidence audit naming the two riskiest assumptions and the cheapest test to resolve each
  • Effort audit with engineer re-estimation after looking at the real code area
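The two-pass flow can be sketched in code. This is an illustrative model, not part of the prompt itself: the `Idea` fields, the RICE impact scale, and the top-8 cutoff are assumptions pulled from standard RICE practice.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    reach: float       # users affected per quarter (from analytics, not assumption)
    impact: float      # standard RICE scale: 0.25 minimal .. 3 massive
    confidence: float  # 0.0 .. 1.0
    effort: float      # person-months

def rice_score(idea: Idea) -> float:
    """Classic RICE: (Reach x Impact x Confidence) / Effort."""
    return idea.reach * idea.impact * idea.confidence / idea.effort

def pass_one(backlog: list[Idea], n: int = 8) -> list[Idea]:
    """Pass one: score the full backlog and surface the top candidates."""
    return sorted(backlog, key=rice_score, reverse=True)[:n]
```

Pass two then replaces each top candidate's R/I/C/E fields with audited values and re-scores.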

After diligence, most ideas change tier — some drop off entirely. The prompt forces explicit kill/park decisions on the dropped ideas so they don't silently reappear next quarter. Nielsen Norman Group's prioritization matrix research documents the same core insight: without evidence-anchored inputs, prioritization matrices produce the same answer as intuition with more steps.
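One way to make the kill/park step explicit is to record a decision and a reason for every audited idea, so a dropped idea carries its rationale into next quarter. The thresholds below are hypothetical policy knobs, not numbers from the prompt.

```python
from dataclasses import dataclass

@dataclass
class DiligenceDecision:
    idea: str
    pre_score: float   # RICE score from pass one
    post_score: float  # score after the audited R/I/C/E inputs
    decision: str      # "keep", "park", or "kill"
    reason: str        # written down so the idea doesn't silently return

def decide(idea: str, pre: float, post: float, reason: str,
           keep_floor: float = 100.0, collapse_ratio: float = 0.5) -> DiligenceDecision:
    # Hypothetical policy: kill if the audited score collapsed,
    # park if it merely fell below the cut line, otherwise keep.
    if post < pre * collapse_ratio:
        verdict = "kill"
    elif post < keep_floor:
        verdict = "park"
    else:
        verdict = "keep"
    return DiligenceDecision(idea, pre, post, verdict, reason)
```

The point of the record is the `reason` field: "not doing, and here's why" survives in writing.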

When to Use It

  • Your experiment win rate is below 30% and ideas keep surprising you in execution.
  • A quarterly planning cycle is starting and the backlog has 20+ candidate ideas.
  • Engineering is frustrated because "high-confidence" bets keep shipping to no impact.
  • A new growth PM wants to establish a rigor baseline the team will adopt.
  • Leadership is asking why prioritization output has stopped feeling trustworthy.

Common Pitfalls

  • Treating Confidence as a vibe. Confidence is not "I think this will work." Confidence is the percentage of similar past changes that hit their target.
  • Skipping the kill list. Ideas that fail diligence need an explicit "not doing and here's why" note, or they return next quarter with the same vibes.
  • Diligence without historical comps. Impact estimates need at least one past change you can point to. Without a comp, you're guessing with confidence.
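The "confidence as historical hit rate" definition from the first pitfall can be made concrete. The function below is a sketch under that definition; refusing to score with zero comps enforces the third pitfall's rule (no comp, no estimate).

```python
def confidence_from_comps(hit_target: list[bool]) -> float:
    """Confidence = fraction of similar past changes that hit their target."""
    if not hit_target:
        # No historical comparable: refuse to guess with confidence.
        raise ValueError("No comps found; find at least one past change before scoring.")
    return sum(hit_target) / len(hit_target)
```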

Sources

  1. RICE: Simple Prioritization for Product Managers (Intercom)
  2. Prioritization Matrices (Nielsen Norman Group)
  3. Agile Estimation (Atlassian)
  4. Growth Loops (Reforge)

Ready to try the prompt?

Open the live prompt detail page for the full workflow.
