ICE Prioritization Helper
The ICE Prioritization Helper prompt streamlines decision-making by calculating ICE (Impact, Confidence, Ease) scores automatically. Inspired by Itamar Gilad's evidence-guided methodology, it provides a structured framework for evaluating ideas: you input impact, ease of implementation, and confidence levels (derived from the types of evidence you hold), and you receive a calculated priority score along with tailored feedback, concrete suggestions for improvement, and strategic next steps. Whether you are prioritizing features, projects, or strategies, the prompt helps teams and individuals surface high-priority opportunities and uncover blind spots.
The Confidence Problem in Prioritization Frameworks
Every PM has a prioritization framework. RICE, ICE, MoSCoW, weighted scoring -- the options are endless, and the debates about which one is best go on even longer. But all of them share a fundamental weakness: the Confidence score is almost always made up.
ICE -- Impact, Confidence, Ease -- is popular because it is simple. Three scores, multiplied together, producing a rank order. According to a 2023 Productboard survey, over 60% of product teams use some form of scoring framework for prioritization. But simplicity becomes a liability when teams treat fabricated confidence scores as real data.
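The multiplication step is simple enough to sketch in a few lines. The idea names and scores below are invented for illustration; the point is only the mechanics of ranking by `impact * confidence * ease`:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int      # 1-10: expected effect on the goal metric
    confidence: int  # 1-10: how much evidence backs the estimates
    ease: int        # 1-10: inverse of implementation effort

    @property
    def ice(self) -> int:
        # ICE multiplies the three scores into a single rank key
        return self.impact * self.confidence * self.ease

backlog = [
    Idea("Onboarding checklist", impact=8, confidence=4, ease=6),
    Idea("Dark mode", impact=3, confidence=7, ease=5),
    Idea("Referral program", impact=9, confidence=2, ease=3),
]

# Highest ICE score first
for idea in sorted(backlog, key=lambda i: i.ice, reverse=True):
    print(f"{idea.name}: {idea.ice}")
```

Note that multiplication makes the ranking extremely sensitive to the Confidence input: halving a made-up confidence score halves the final score, which is exactly why a fabricated number is so dangerous.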
The Problem
The Impact score is speculative. The Ease score is usually optimistic. But the Confidence score is the most dangerous because it creates an illusion of rigor. A PM assigns "Confidence: 7/10" to an idea and suddenly the entire prioritization model treats that number as if it were derived from evidence.
In reality, confidence scores reflect how the PM feels about an idea, not how much evidence supports it. Daniel Kahneman's research on cognitive bias, much of it conducted with Amos Tversky, shows that people are systematically overconfident in their predictions -- expert confidence correlates with accuracy at roughly 0.2 on a scale where 1.0 is perfect correlation.
The result is that prioritization frameworks produce precise-looking outputs from imprecise inputs. Teams execute on ranked lists that are essentially random, then blame execution when the outcomes disappoint.
How This Prompt Works
The ICE Prioritization Helper prompt does not just calculate scores -- it challenges them. For each idea you input, it asks what evidence supports your Impact estimate, what assumptions underlie your Ease score, and most critically, it breaks Confidence into sub-components.
Instead of a single confidence number, the prompt evaluates confidence across multiple dimensions: evidence quality (do you have data or just intuition?), comparable precedent (has something similar worked before?), and assumption risk (how many untested assumptions does this require?).
This decomposition surfaces the real uncertainty in your backlog and often reorders priorities dramatically.
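One way to picture the decomposition is to derive Confidence from the sub-scores instead of guessing it directly. The equal weighting and the inversion of assumption risk below are illustrative assumptions, not the prompt's actual formula:

```python
def decomposed_confidence(evidence_quality: int,
                          precedent: int,
                          assumption_risk: int) -> float:
    """Combine confidence sub-dimensions (each scored 1-10) into one
    confidence value. Assumption risk counts against confidence, so it
    is inverted before averaging. Equal weighting is an illustrative
    choice, not the prompt's exact method."""
    return (evidence_quality + precedent + (11 - assumption_risk)) / 3

# A gut-feel "Confidence: 7" versus what the sub-scores actually support:
c = decomposed_confidence(evidence_quality=3,  # intuition, no data
                          precedent=4,         # weak comparables
                          assumption_risk=8)   # many untested assumptions
print(round(c, 1))  # prints 3.3 -- far below the gut-feel 7
```

Feeding the derived value back into the ICE multiplication is what reorders the backlog: an idea carried by a gut-feel 7 can drop by more than half once its evidence is scored honestly.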
When to Use It
- During quarterly planning when the backlog needs to be ruthlessly prioritized
- When stakeholders push pet projects and you need a structured defense
- After a discovery sprint when multiple opportunities need ranking
- When the team disagrees on what to build next and needs a shared evaluation framework
Common Pitfalls
Anchoring on the first score. The first idea you score becomes the reference point for all others. Score ideas independently before comparing, or randomize the order.
Ignoring the confidence decomposition. If the prompt tells you that a high-impact idea has low evidence quality, that is a signal to run a quick experiment -- not to build the feature.
Using ICE for decisions it cannot make. Prioritization frameworks help you rank options within a category. They cannot tell you whether the entire category is worth pursuing. Strategic questions require strategic thinking, not scoring models.
A Pendo study found that 80% of features in the average software product are rarely or never used. Better prioritization is not just about picking the right order -- it is about having the discipline to kill ideas that score poorly.
Sources
- Feature Adoption Report — Pendo
- Thinking, Fast and Slow — Daniel Kahneman / Penguin Random House
- Product Management Trends — Productboard