Discovery-Focused Test & Experiment Plan
Updated 3/26/2026
Description
This prompt helps product managers create a structured, actionable test/experiment plan to reduce product uncertainty. It guides teams to identify the riskiest assumption (LoFA, Leap of Faith Assumption), validate it through lean behavioral tests, and determine when to move into formal hypothesis experiments. Ideal for discovery-focused teams working on high-stakes product decisions.
Example Usage
## 🎯 Objective

You are a lead PM at a world-class product organization. Based on the information below, write a **first version of a Test/Experiment Proposal** aimed at reducing uncertainty. This document serves the following purposes:

- Clearly define the **most dangerous assumption (LoFA)** blocking product success,
- Design **high-priority tests** to validate it,
- Reduce **Value, Usability, Feasibility, and Business Viability** risks through behavioral evidence.

---

## 📥 Inputs (Required Information)

- **Product/Feature Name:** [Enter the name of the product or feature]
- **Business Goals & Core KPIs:** [Describe key business objectives and metrics]
- **Target User Segment:** [Specify user group or persona]
- **Known Pain Points or Opportunities:** [Describe the observed user problem or market opportunity]
- **High-Risk Areas (select one or more):** [Select: Value / Usability / Feasibility / Business Viability]
- **Available Resources (Time, Budget, Team):** [List time frame, budget constraints, and team capacity]

---

## 🧪 Execution Instructions

Now, based on the above, **write the Test & Experiment Planning Document** using the structure below, formatted with `Markdown Table + <br>`:

- 🔁 Iteration Log
- 🟩 Risk Coverage Matrix
- ✍ SECTION A — Assumption Risk Test Proposal
- ✅ SECTION B — Hypothesis Experiment Plan (only if all risks are validated)

**Each section must be actionable and specific, with a clearly stated Key Takeaway.**

If all four risks are already validated with medium or strong behavioral evidence in the Risk Coverage Matrix, skip SECTION A and complete only SECTION B.

---

## 🔁 Observations / Entry Criteria

### 🔁 Iteration Log (Test History)

Log all previously conducted Assumption Risk Tests in the table below. Each test should cover at least one risk area (Value, Usability, Feasibility, or Business Viability) and serve as a key piece of evidence for entering SECTION B.
| ID | Date | Test Name | Target Risk | Evidence Strength | Summary Result | Key Insight | Validated Risks |
|:--|:-----|:-----------|:-------------|:------------------|:----------------|:--------------|:------------------|
| T-001 | YYYY-MM-DD | [e.g., Wizard-of-Oz UI] | Value, Usability | Medium | [e.g., 7/10 accepted] | [e.g., timing critical] | ✅ Value, ✅ Usability |
| T-002 | YYYY-MM-DD | [Test Name] | [Risk Area] | [Weak / Medium / Strong] | [Summary] | [Behavioral Insight] | [✅ Risk(s)] |
| T-003 | YYYY-MM-DD | | | | | | |

Evidence Strength Criteria:

- **Weak** – Internal estimation, expert opinion
- **Medium** – Behavioral data from 10+ users
- **Strong** – KPI impact, real usage data

---

### 🟩 Risk Coverage Matrix

Summarize the validation status of each risk area based on accumulated tests. Enter SECTION B only when all areas are validated with medium or strong behavioral evidence.

| Risk Type | Status | Latest Evidence Summary | Test ID |
|:--|:--------|:-----------------------------|:--------|
| Value | ⬜ Not validated / ✅ Validated | [e.g., 70% feature acceptance] | [T-001] |
| Usability | ⬜ / ✅ | [e.g., intuitive use without confusion] | [T-001] |
| Feasibility | ⬜ / ✅ | [e.g., 68% algorithm accuracy] | [T-002] |
| Business Viability | ⬜ / ✅ | [e.g., 15% increase in retention] | [T-003] |

🎗 **SECTION B Entry Criteria:** → Proceed to the Hypothesis Experiment only when all four risks are marked ✅ Validated.

✦ Key Takeaway: Validation must come from **user behavior**, not opinions, and should accumulate across multiple tests.

---

## ✍ SECTION A — New Test Plan Proposal

Based on the **inputs**, **iteration log**, and **current validation status**, define the Leap of Faith Assumption and propose the **next high-priority test**. Focus on quickly reducing the most critical unvalidated risk.
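The SECTION B entry gate defined in the Risk Coverage Matrix above is mechanical enough to express in code. The sketch below is illustrative only (the function name, data shape, and sample values are assumptions, not part of the template): it treats the matrix as a mapping from risk area to strongest evidence level and opens the gate only when every area has medium or strong behavioral evidence.

```python
# Illustrative sketch of the SECTION B entry gate: all four risk areas
# must carry "Medium" or "Strong" behavioral evidence before running a
# formal hypothesis experiment. Names and data are hypothetical.

SUFFICIENT = {"Medium", "Strong"}
RISK_AREAS = ("Value", "Usability", "Feasibility", "Business Viability")

def section_b_gate(evidence: dict) -> bool:
    """evidence maps each risk area to its strongest evidence level
    ('Weak', 'Medium', 'Strong'), or is missing the area if untested."""
    return all(evidence.get(area) in SUFFICIENT for area in RISK_AREAS)

matrix = {
    "Value": "Medium",
    "Usability": "Medium",
    "Feasibility": "Strong",
    "Business Viability": "Weak",  # still unvalidated
}
print(section_b_gate(matrix))  # the single Weak entry keeps the gate closed
```

One Weak (or missing) area is enough to route the team back to SECTION A, which matches the template's rule that risks accumulate across tests rather than being argued away.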
### Leap of Faith Assumption (LoFA)

| Item | Description |
|------|-------------|
| Most Dangerous Assumption | A critical statement that must be true for product success, but currently lacks strong evidence |
| Evidence Strength (X-axis) | How much supporting data, interview, or behavioral observation exists? (Weak / Medium / Strong) |
| Importance (Y-axis) | If false, how fatal is it to product success? Is it replaceable? |
| Summary Decision | Should this be tested as a LoFA? (✅ LoFA / ❌ General Assumption) |

✦ Key Takeaway: Assumptions with **low evidence and high importance** are LoFAs and must be prioritized.

### Test Plan

| Item | Guideline |
|:--|:--|
| **A1. Target Risk** | Choose an unvalidated or weakly supported risk area (e.g., Value) |
| **A2. Key Assumption** | State the most important and risky related assumption in one sentence |
| **A3. Defined Uncertainty** | Frame it as a user behavior that can be observed |
| **A4. Suggested Test Method** | Choose the most **lean and quick** test (Opportunity Scoring, Customer Interview, Smoke Test, Concierge, Wizard-of-Oz, Functional Prototype, Remote Usability Test, One-Question Survey, etc.) |
| **A5. Success Signal** | Define a behavior-based quantitative threshold (e.g., "≥4 out of 10 users accept") |
| **A6. Execution Suggestion** | Recommend the test period (1–3 days), assignee, and tools |
| **A7. Expected Outcome or Pivot Criteria** | What insights will success yield? If it fails, what should be redesigned? |

✦ Key Takeaway: Experiments build **cumulative knowledge**, not linear steps. Pick tests that fill unvalidated gaps.

---

## ✅ SECTION B — Hypothesis Experiment Plan (🔐 GATE Required)

| Item | Instruction |
|:---|:-------------|
| **B1. Reformulated Hypothesis** | Write in the format **"If we provide [change] to [target], then we will achieve [outcome]"**<br>e.g., "Providing AI recommendations to research-focused users will increase the organization rate by 30%" |
| **B2. Experiment Design** | - Specify the experiment type (A/B, A/B/n, multivariate)<br>- Clarify treatment vs. control<br>- Estimate sample size and exposure duration |
| **B3. Metrics & Statistics** | - **Primary Metric**: key performance metric<br>- **Guardrail Metrics**: watch for side effects<br>**Good Metric Criteria**: ✓ Behavior-based ✓ Leading ✓ Tied to KPI ✓ Low noise |
| **B4. Decision Rule** | State the decision logic:<br>- Binary: launch if the metric is met<br>- Iterative: improve and retest<br>- Learning: use results for the next hypothesis |
| **B5. Operational Plan** | - Rollout plan (e.g., 10% → 50% → 100%)<br>- Monitoring dashboards & alerts<br>- Assign an owner and reviewer |
| **B6. Post-Experiment Action Plan** | - Schedule a result interpretation session<br>- Use insights to adjust the roadmap<br>- Define the stakeholder communication strategy |

✦ Key Takeaway: Running formal experiments **without validating core risks first** often wastes resources.

---

## ✅ Writing Style Guide

- Use **clear, concise, and action-oriented language**
- Provide **quantified thresholds**
- Assign **owners and timeboxes**
- **Log results for every iteration**
- Keep the **Risk Matrix up to date**

---

## 📄 Output Format

- Use Markdown tables
- Use `<br>` for line breaks in longer text
- Include "✍ Last Updated" at the top of the document
- Conclude each section with a "✦ Key Takeaway"
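The "estimate sample size" step in the experiment-design guidance (B2) can be made concrete with the standard two-proportion power calculation. The sketch below is a minimal illustration, not part of the template: the function name and the example rates are assumptions, and it uses the common normal-approximation formula for a two-sided test, so treat the result as a planning estimate rather than an exact requirement.

```python
# Hypothetical sketch: approximate users needed per arm for an A/B test
# on a conversion-style metric, using the normal-approximation formula
# for a two-sided two-proportion z-test. Python stdlib only.
from math import ceil, sqrt
from statistics import NormalDist

def ab_sample_size(p_base: float, p_target: float,
                   alpha: float = 0.05, power: float = 0.80) -> int:
    """Users per arm to detect a shift from p_base to p_target."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)          # power term
    p_bar = (p_base + p_target) / 2            # pooled rate under H0
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_base * (1 - p_base)
                              + p_target * (1 - p_target))) ** 2
    return ceil(numerator / (p_base - p_target) ** 2)

# e.g., detecting a lift in organization rate from 20% to 26%
# (illustrative numbers, not from the template)
print(ab_sample_size(0.20, 0.26))
```

A useful property for B2 planning: the required sample size grows roughly with the inverse square of the expected lift, so halving the minimum detectable effect quadruples the users needed per arm, which directly constrains the exposure duration you commit to in the plan.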