
Run a north-star metric tournament with growth and finance

Your team has been calling weekly active users the north star for two years, and the metric stopped predicting growth six months ago. This prompt runs a tournament-style selection: six to eight candidate metrics, scored jointly by product, growth, and finance against four tests (predicts revenue, captures customer value, the team can move it, it does not Goodhart easily), so the chosen metric earns the role.

Product Strategy
2 uses·Published 5/8/2026·Updated 5/8/2026

A North Star Earns Its Role; It Should Not Be Inherited

A north-star metric that survives unchallenged for years is usually a metric that stopped predicting the business. Companies grow, customer value shifts, and the metric that captured the right thing at Series A often misses the right thing at Series C. Re-running the tournament every 12-18 months keeps the metric honest. The tournament format also brings finance and growth into the same room, so the chosen metric reflects more than product's view of value.

Amplitude's writing on north-star metrics frames the role: a single number that captures the value the product delivers, that predicts revenue, and that the team can move. Reforge's north-star playbook collects examples of north stars that have aged well and ones that have not.

Four tests, not six

Most north-star debates collapse under their own weight when the team uses too many criteria. Four tests carry the decision:

1. Predicts revenue 12 months ahead. A north star that does not lead revenue is a vanity metric in disguise.

2. Captures customer value. A higher number must mean customers got more value, not just that they used the product more.

3. Team can move it. The metric must be influenceable in 1-2 quarters through product, growth, or marketing actions.

4. Resists Goodhart. The metric must not be gameable without delivering real value. If a team can ship a quick hack that triples the metric without any user benefit, the metric is broken.

HBR's foundational essay on strategy frames the tradeoff: a strategy is a choice about what not to optimize. The north star is the operationalization of that choice. Picking it without finance and growth in the room produces a metric that the rest of the company will not believe.

How the "Run a north-star metric tournament" prompt works

Step 1 gathers candidates from product, growth, and finance. Three perspectives produce three different views of value: usage value, loop value, revenue value. The right north star usually sits at the intersection.

Step 2 scores 0-3 on the four tests. A simple rubric beats elaborate weighting. Total scores fall out cleanly; ties get tiebreakers.
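As a sketch, the Step 2 rubric can be tallied with nothing more than a dictionary per candidate. The metric names and scores below are invented for illustration, not recommendations:

```python
# Step 2 sketch: score each candidate 0-3 on the four tests, sum to a total,
# and flag ties at the top for Step 3's tiebreakers. All values hypothetical.
TESTS = ["predicts_revenue", "captures_value", "team_can_move", "resists_goodhart"]

candidates = {
    "weekly_active_users":   {"predicts_revenue": 1, "captures_value": 1,
                              "team_can_move": 3, "resists_goodhart": 1},
    "weekly_value_moments":  {"predicts_revenue": 3, "captures_value": 3,
                              "team_can_move": 2, "resists_goodhart": 2},
    "net_revenue_retention": {"predicts_revenue": 3, "captures_value": 2,
                              "team_can_move": 1, "resists_goodhart": 3},
}

totals = {name: sum(scores[t] for t in TESTS) for name, scores in candidates.items()}
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, total in ranked:
    print(f"{name}: {total}/12")

# Any tie at the top goes to Step 3's tiebreakers rather than re-weighting.
top_score = ranked[0][1]
tied = [name for name, total in ranked if total == top_score]
if len(tied) > 1:
    print("tie:", ", ".join(tied), "-> apply tiebreakers")
```

Keeping the rubric this simple is the point: a 0-3 scale that three functions score together surfaces disagreement faster than a weighted model that hides it.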

Step 3 applies tiebreakers: simplicity (a single number anyone can name), frequency match (the metric updates at a pace that matches decision cadence), and diagnosability (when it moves, the team can tell why). SVPG's writing on product strategy emphasizes that a strategy operating well requires a metric the team can debug, not just track.
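If Step 2 ends in a tie, the three tiebreakers resolve it in priority order. A minimal sketch, with hypothetical candidates and 0/1 judgments standing in for the team's discussion:

```python
# Step 3 sketch: break a tie lexicographically -- simplicity first, then
# frequency match, then diagnosability. Candidates and scores are invented.
TIEBREAKERS = ["simplicity", "frequency_match", "diagnosability"]

tied_candidates = {
    "weekly_value_moments":  {"simplicity": 1, "frequency_match": 1, "diagnosability": 1},
    "net_revenue_retention": {"simplicity": 1, "frequency_match": 0, "diagnosability": 0},
}

def tiebreak_key(scores):
    # Tuple comparison gives the priority ordering for free.
    return tuple(scores[t] for t in TIEBREAKERS)

winner = max(tied_candidates, key=lambda name: tiebreak_key(tied_candidates[name]))
print(winner)
```

The lexicographic ordering encodes the priority the post describes: a metric that loses on simplicity never gets rescued by diagnosability.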

Step 4 stress-tests with historical data. The top two candidates are run against the last 4-8 quarters. A candidate with a historical disconnect from revenue is rejected regardless of conceptual elegance. This step catches the metric that looks right in theory and fails in practice.

Step 5 picks the metric and defines the input metrics. The north star is the single number leadership tracks; the input metrics are what teams actually move. Three to five input metrics is the right size; fewer leaves teams without levers, more dilutes attention.

Step 6 documents the change. A memo to the org explains which metric won, why it won, what the team is dropping, and what changes in dashboards. The 90-day check-in is the safety net; the metric earned its role through a tournament, and it can lose its role through evidence.

Why composite north stars usually lose

Companies sometimes propose composite metrics ("active days plus value transactions plus collaboration events"). Composites fail three of the four tests in practice:

  • They are hard to name (nobody can recite a four-component metric).
  • They are hard to move (the team does not know which component to push).
  • They are hard to diagnose (when the metric drops, no one can tell why).

A clean single number wins. If the candidate is genuinely composite, define one of its components as the north star and the others as inputs.

When to use it

  • The current north star has not predicted revenue for the last two quarters and the team is using it out of habit.
  • A new business model is launching (e.g. shifting from seat-based to usage-based) and the metric needs to follow.
  • A new CFO or CEO is asking for a north star they can rely on for capital allocation decisions.
  • Product, growth, and finance are using different "primary" metrics and the misalignment is producing conflicting plans.
  • A board is asking for a single number to track in operating reviews and the team has not committed to one.

Common pitfalls

  • Inheriting the metric. A north star that survives years without challenge is rarely still the right metric.
  • Composite metrics. They fail simplicity, movability, and diagnosability. Pick one component.
  • Choosing without finance. A metric that does not predict revenue will not survive the next budget cycle.

Sources

  1. North-star metric primer (Amplitude)
  2. Amplitude north-star resources (Amplitude)
  3. North-star metric playbook (Reforge)
  4. Product strategy overview (Silicon Valley Product Group)
  5. What Is Strategy? (Harvard Business Review)

