Run a north-star metric tournament with growth and finance
Product Strategy
Updated 5/8/2026
Description
Your team has been calling weekly active users the north star for two years, and the metric stopped predicting growth six months ago. This prompt runs a tournament-style selection: product, growth, and finance jointly score 6-8 candidate metrics against four tests (predicts revenue, captures customer value, the team can move it, it resists Goodharting) so that the chosen metric earns the role.
Example Usage
You are a head of product running a north-star tournament for {{company_or_team}}. Current candidate metric: {{current_north_star}}. Stage: {{stage}}.
## Step 1. Generate candidates with cross-functional input
Get product, growth, and finance to each propose 2-3 candidates:
- Product proposes metrics that capture user value
- Growth proposes metrics that predict acquisition / retention loops
- Finance proposes metrics that connect to revenue or unit economics
You should land on 6-8 candidates. Reject obvious vanity metrics (e.g. signups, page views).
## Step 2. Score each candidate against four tests
For each candidate, give a 0-3 score on:
1. Predicts revenue 12 months ahead (is this metric a leading indicator of revenue?)
2. Captures customer value (does a higher number mean customers got more value?)
3. Team can move it (is it influenceable through product, growth, marketing actions in 1-2 quarters?)
4. Resists Goodhart (how hard is it for the team to game the metric without delivering real value?)
A 3 means strong, a 2 means mixed, a 1 means weak, a 0 means broken.
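The tally is simple arithmetic, but it helps to see it laid out. A minimal sketch, with hypothetical candidate names and illustrative scores (none of these figures come from the prompt itself):

```python
# Four tests, each scored 0-3; a candidate's total is out of 12.
TESTS = ["predicts_revenue", "captures_value", "team_can_move", "resists_goodhart"]

# Illustrative candidates and scores, in TESTS order.
candidates = {
    "weekly_active_teams": [2, 3, 3, 2],
    "net_revenue_retention": [3, 2, 1, 3],
    "activated_accounts": [2, 2, 3, 1],
}

def rank(candidates: dict[str, list[int]]) -> list[tuple[str, int]]:
    """Sort candidates by total score across the four tests, highest first."""
    totals = {name: sum(scores) for name, scores in candidates.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, total in rank(candidates):
    print(f"{name}: {total}/12")
```

Ties at the top of this ranking are what the tiebreakers in Step 3 exist to resolve.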
## Step 3. Apply tiebreakers
For candidates tied on total score, apply:
- Simplicity (a single number that anyone can name vs a composite)
- Frequency match (does the natural cadence of the metric match the cadence of decisions?)
- Diagnosability (when it moves, can you tell why?)
## Step 4. Stress-test the top 2 with historical data
Pull 4-8 quarters of history. For each top candidate:
- Did it predict the revenue trajectory?
- Did it move when product changes shipped?
- Did any quarter where the metric improved have flat or declining revenue?
A candidate that has historical disconnects with revenue is a bad north star regardless of conceptual elegance.
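The third check above — quarters where the metric rose while revenue did not — is mechanical enough to script. A minimal sketch, with made-up quarterly figures (the function name and data are illustrative, not part of the prompt):

```python
def disconnect_quarters(metric: list[float], revenue: list[float]) -> list[int]:
    """Return indices of quarters where the metric improved
    but revenue was flat or declining."""
    flags = []
    for q in range(1, len(metric)):
        metric_up = metric[q] > metric[q - 1]
        revenue_flat_or_down = revenue[q] <= revenue[q - 1]
        if metric_up and revenue_flat_or_down:
            flags.append(q)
    return flags

metric_history = [100, 110, 125, 130, 128, 140]   # candidate metric by quarter
revenue_history = [1.0, 1.1, 1.1, 1.3, 1.3, 1.5]  # revenue ($M) by quarter

# Any flagged quarter is evidence the candidate is a weak leading indicator.
print(disconnect_quarters(metric_history, revenue_history))
```

Even one or two flagged quarters in 4-8 quarters of history deserve a written explanation before the candidate advances.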
## Step 5. Pick the metric and define the input metrics
Once chosen, define the 3-5 input metrics that the team can directly move (the levers under the north star):
- For a usage-based north star: activation rate, retention rate, frequency
- For a revenue-based north star: ARPU, expansion rate, churn rate
- For a value-based north star: time saved per user, jobs completed per session
The input metrics are what teams actually report on weekly. The north star is the single number leadership tracks.
## Step 6. Document the change
- Memo to the org: which metric, why it won the tournament, what we are dropping, what changes in dashboards
- The first decision the team will make using the new metric
- The 90-day check-in to confirm the metric is still doing its job
## Output
1. Candidate list (6-8 metrics) with the proposer per metric
2. Scoring grid (4 tests, scored 0-3)
3. Tiebreaker analysis for the top 2
4. Historical stress test results
5. Chosen north star plus 3-5 input metrics
6. Org memo announcing the change
7. The runner-up, kept on the watchlist in case the chosen metric fails the 90-day check