
RICE Scoring Framework

Apply the RICE prioritization framework (Reach, Impact, Confidence, Effort) to score and rank product ideas objectively. Includes calibration guidance to ensure consistent scoring across your team.

Discovery
2 uses · Published 4/2/2026 · Updated 4/2/2026

RICE Changed How Intercom Ships — Here's Why Your Team Still Gets It Wrong

Sean McBride, a PM at Intercom, had 47 feature requests sitting in a spreadsheet. His team spent three hours arguing about which to build next. The loudest voice won. Two quarters later, the feature they shipped had a 4% adoption rate. That's when Intercom built the RICE framework — not as an academic exercise, but as a survival mechanism.

RICE stands for Reach, Impact, Confidence, and Effort. Simple enough to fit on a napkin. But most teams butcher it within the first week of adoption.

The Scoring Problem Nobody Talks About

The biggest failure mode with RICE isn't the formula. It's calibration. When every PM on your team scores Impact differently — one person's "3" is another person's "1" — the whole system collapses into sophisticated-looking garbage. A 2023 Pendo State of Product Leadership report found that 61% of product teams using scoring frameworks abandon them within six months, primarily because of inconsistent scoring across team members.

Here's what goes wrong. Teams treat RICE like a calculator when it's actually a conversation tool. The point isn't to get a mathematically precise score. The point is to force your team to articulate *why* they believe a feature will reach 10,000 users versus 1,000. That conversation is where the real prioritization happens.

The second mistake: treating Confidence as optional. Teams will boldly score Reach and Impact based on gut feel, then slap a 100% Confidence score on it. But Confidence is the honesty check. If you haven't validated demand with customer interviews or data, your Confidence should be 50% or lower. That halves your RICE score — which is exactly the point.
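For reference, the standard RICE formula multiplies Reach, Impact, and Confidence, then divides by Effort. A minimal Python sketch (the specific numbers are illustrative, not from the article) shows why a gut-feel Confidence of 50% halves the score:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: people affected per time period
    impact: per-person impact on a fixed scale (e.g. 0.25 to 3)
    confidence: 0.0-1.0, backed by evidence
    effort: person-months of work
    """
    return reach * impact * confidence / effort

# Same idea, scored with validated vs. unvalidated confidence:
validated = rice_score(reach=5000, impact=2, confidence=1.0, effort=4)  # 2500.0
gut_feel = rice_score(reach=5000, impact=2, confidence=0.5, effort=4)   # 1250.0, exactly half
```

Nothing else about the idea changed; the honesty check alone moved it down the ranking.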

How This Prompt Helps

This prompt doesn't just calculate RICE scores. It forces calibration. You input your ideas, and it walks through each scoring dimension with specific questions: What data supports your Reach estimate? Is that monthly or quarterly? What's your baseline for a "high" versus "medium" Impact? This structure means two PMs using the prompt will arrive at comparable scores.

It also flags common traps — like when you've scored high Impact but have zero customer evidence, or when your Effort estimate doesn't account for dependencies on other teams.

When to Reach for This

  • You're staring at a backlog of 20+ ideas and need to cut it to 5 for the next quarter
  • Your sprint planning meetings have devolved into opinion battles between engineering leads and PMs
  • You've just completed a round of customer interviews and want to score the opportunities you uncovered
  • A stakeholder is pushing hard for their pet feature and you need an objective framework to push back
  • You're onboarding a new PM and want to establish a shared scoring language from day one

What Good Looks Like

A strong RICE output produces a ranked table where each score has a written rationale, not just a number. You should be able to hand the output to someone who missed the meeting and they'd understand why Feature A ranked above Feature B. The Confidence scores should vary meaningfully — if everything is scored at 80%, you're not being honest enough.
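As a rough sketch of that output shape (feature names, scores, and rationales here are hypothetical), each row carries its reasoning alongside its number, and a quick check flags uniform Confidence:

```python
ideas = [
    {"name": "Feature A", "reach": 8000, "impact": 2, "confidence": 0.8, "effort": 3,
     "rationale": "Reach from last quarter's funnel data; demand confirmed in 12 interviews."},
    {"name": "Feature B", "reach": 2000, "impact": 3, "confidence": 0.5, "effort": 2,
     "rationale": "High impact if it lands, but no customer evidence yet, so capped at 50%."},
]

for idea in ideas:
    idea["score"] = idea["reach"] * idea["impact"] * idea["confidence"] / idea["effort"]

# Honesty check: identical Confidence across the board means nobody is calibrating.
if len({i["confidence"] for i in ideas}) == 1:
    print("Warning: uniform Confidence scores. Revisit your evidence.")

for idea in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(f"{idea['name']}: {idea['score']:.0f} | {idea['rationale']}")
```

Someone who missed the meeting can read the rationale column and see why Feature A outranks Feature B.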

Sources

  1. How We Prioritize at Intercom — Intercom Blog
  2. State of Product Leadership 2023 — Pendo
  3. The Build Trap — O'Reilly
