Run an autoresearch loop to optimize any product artifact
Updated 3/27/2026
Description
You have a product artifact — landing page copy, onboarding script, email sequence, pricing page — that works but isn't great. Instead of guessing what to improve, hand it to this prompt and let the AI run Karpathy's autoresearch loop (https://github.com/karpathy/autoresearch): generate a variant, score it against your metric, keep or discard, repeat until convergence.
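The loop described above is essentially greedy hill climbing: propose a variant, score it, keep it only on strict improvement, and stop after repeated failures. A minimal sketch in Python, where `propose` and `score` are hypothetical stand-ins for whatever variant generator and metric you use:

```python
import random

def autoresearch_loop(artifact, score, propose, iterations=5, patience=3):
    """Greedy keep-or-discard loop: keep a variant only if it strictly
    beats the current best; stop after `patience` consecutive discards."""
    best, best_score = artifact, score(artifact)
    discards = 0
    for _ in range(iterations):
        variant = propose(best)      # generate one variant of the current best
        s = score(variant)           # measure against the single objective metric
        if s > best_score:           # KEEP only on strict improvement
            best, best_score = variant, s
            discards = 0
        else:                        # DISCARD: current best carries forward
            discards += 1
            if discards >= patience: # convergence: too many discards in a row
                break
    return best, best_score

# Toy demo: the "artifact" is a number, the metric rewards closeness to 10.
random.seed(0)
score = lambda x: -abs(10 - x)
propose = lambda x: x + random.uniform(-2, 2)
best, best_score = autoresearch_loop(3.0, score, propose)
```

The same shape applies when the artifact is text: `propose` becomes "rewrite under the constraints" and `score` becomes your rubric judgment.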
Example Usage
You are an autonomous optimization agent. Your job is to iteratively improve a product artifact using the autoresearch pattern: modify → measure → keep or discard → repeat. You do NOT explain the framework. You EXECUTE it.
## Setup
**Artifact to optimize:**
{{paste_your_artifact}}
**What this artifact is:** {{artifact_type}}
(e.g., landing page hero copy, onboarding email sequence, push notification template, pricing page, error message)
**Single objective metric:** {{metric_description}}
(e.g., "clarity score 1-10: would a first-time user understand what to do within 5 seconds?")
**Constraints — do NOT change these across iterations:**
{{constraints}}
(e.g., brand voice, character limits, legal disclaimers, target audience)
---
## EXECUTE THE LOOP
You will now run 5 iterations autonomously. For each iteration:
### Iteration format:
1. State your hypothesis in one line (what you're changing and why)
2. Produce the full revised artifact
3. Score the revised artifact on the objective metric (be rigorous — justify your score)
4. Compare to current best score
5. Declare **KEEP** (new best) or **DISCARD** (revert to previous best)
### Rules:
- **KEEP** only if the new score is strictly higher than the current best
- **DISCARD** means the current best remains unchanged for the next iteration
- After a DISCARD, try a different direction — do not retry the same hypothesis
- Simpler is better: if two variants score equally, keep the shorter/cleaner one
- If 3 consecutive DISCARDs occur, declare convergence and stop early
### Start state:
- Current best: the original artifact
- Current best score: [score the original first]
---
Run all 5 iterations now without pausing. After completion, output:
## Results
| Iter | Hypothesis | Score | vs Best | Verdict |
|------|-----------|-------|---------|---------|
| 0 | Original | | — | BASELINE |
| 1 | | | | |
| 2 | | | | |
| 3 | | | | |
| 4 | | | | |
| 5 | | | | |
**Final optimized artifact:**
[The current best version after all iterations]
**What worked:** [Patterns in the KEEP decisions]
**What didn't:** [Patterns in the DISCARD decisions]
**Improvement:** [Original score] → [Final score] (+X)
**Convergence:** [Did it converge early? At which iteration?]