AI User Research Synthesizer
Transform raw user research data (interviews, surveys, support tickets) into structured insights using AI. Automatically identifies themes, sentiment patterns, and actionable recommendations.
Your User Research Is Sitting in a Google Doc Graveyard
I once watched a PM present a quarterly roadmap to their VP. The VP asked a simple question: "What did users say about this in the last round of interviews?" The PM froze. They'd done 22 interviews that quarter. The transcripts were in a shared drive somewhere. The insights were in their head, sort of, mixed with memory bias and wishful thinking.
This happens constantly. Teams invest thousands in research — recruiting participants, running sessions, recording everything — then dump the transcripts into a folder and extract maybe 10% of the signal.
The Synthesis Bottleneck Is Real
Here's the math. A 45-minute user interview produces roughly 7,000 words of transcript. A typical research sprint has 8-15 interviews. That's 56,000 to 105,000 words of raw data. A single PM reading and coding all of that takes 20-30 hours — time that rarely exists between sprint commitments and stakeholder fires.
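The arithmetic above can be sketched as a quick back-of-envelope estimator. The ~7,000-words-per-interview figure comes from this article; the coding rate is an assumed manual throughput chosen to land in the 20-30 hour range:

```python
def synthesis_hours(interviews: int,
                    words_per_interview: int = 7_000,
                    coding_rate_wph: int = 3_500) -> float:
    """Estimate hours to manually read and code a research sprint.

    words_per_interview: ~7,000 words per 45-minute session (per the
    article); coding_rate_wph: assumed words coded per hour by hand.
    """
    return interviews * words_per_interview / coding_rate_wph

# An 8-15 interview sprint lands in roughly the 16-30 hour range.
print(synthesis_hours(8))   # 16.0
print(synthesis_hours(15))  # 30.0
```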
Dovetail's 2024 State of Research found that 67% of product teams have more research data than they can analyze. The insights are there. Nobody has time to find them.
This matters because partial synthesis creates dangerous blind spots. You remember the emotional quote from Interview #3 and the feature request from Interview #11, but you miss the pattern across Interviews #5, #8, and #14 that points to a deeper unmet need. Confirmation bias fills in the rest.
Teresa Torres talks about this in her Continuous Discovery work — the goal isn't more research, it's better synthesis. Your competitive advantage isn't access to user data. Everyone has that now. It's the speed and accuracy of your sense-making.
How This Prompt Helps
This prompt takes your raw research data — interview transcripts, survey responses, support tickets, or all three — and produces a structured analysis with theme extraction, sentiment mapping, and prioritized recommendations. It identifies patterns that are easy to miss in manual coding: contradictions between what users say and do, emergent themes that don't fit your existing categories, and severity signals buried in polite language.
When to Reach for This
- You just finished a research sprint and have 10+ transcripts to synthesize before the next planning cycle
- Your support ticket volume makes manual analysis impossible, but you know there are product signals in there
- You're combining data from multiple sources (NPS comments + interview transcripts + app reviews) and need a unified view
- You want to validate whether your manual synthesis missed anything before presenting to leadership
- You need to turn messy qualitative data into something an engineering team can actually act on
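For the multi-source case above, a small normalization step keeps the data you feed the prompt uniform. A minimal sketch, assuming nothing about the prompt itself; the record fields and helper name are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ResearchRecord:
    source: str       # e.g. "interview", "nps", "support_ticket", "app_review"
    participant: str  # anonymized participant or respondent ID
    text: str         # the raw quote, comment, or transcript chunk

def normalize_nps(row: dict) -> ResearchRecord:
    """Map one row of a hypothetical NPS export into the shared shape."""
    return ResearchRecord(source="nps",
                          participant=row["respondent_id"],
                          text=row["comment"])

records = [normalize_nps({"respondent_id": "r-042",
                          "comment": "Setup took way too long."})]
print(records[0].source)  # nps
```

Interview transcripts and app reviews would get their own small mappers, so every source ends up in the same list of records.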
What Good Looks Like
The output should surface 5-8 themes with supporting evidence from across your dataset, a sentiment breakdown that goes beyond positive/negative into specific emotions (frustration, confusion, delight), and a recommendations section that ties insights to product decisions. The best outputs flag data quality issues too — "3 of your 12 interviews were with power users, which may skew the theme weighting toward advanced features."
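As a concrete target, the synthesized output described above might take a shape like the following. This is an illustrative example only, not a schema the prompt guarantees:

```python
import json

# Hypothetical synthesis output: themes with evidence, sentiment beyond
# positive/negative, recommendations tied to insights, and quality flags.
example_output = {
    "themes": [
        {
            "name": "Onboarding friction",
            "evidence": ["Interview #5", "Interview #8", "Interview #14"],
            "summary": "Users stall at workspace setup before first value.",
        },
        # ...aim for 5-8 themes total
    ],
    "sentiment": {"frustration": 0.4, "confusion": 0.3, "delight": 0.1},
    "recommendations": [
        {"insight": "Onboarding friction",
         "action": "Add a guided setup flow",
         "priority": "high"},
    ],
    "data_quality_flags": [
        "3 of 12 interviews were with power users; theme weighting may "
        "skew toward advanced features.",
    ],
}
print(json.dumps(example_output, indent=2))
```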
Sources
- State of Research in Product 2024 — Dovetail
- Continuous Discovery Habits — Teresa Torres / Product Talk
- When Research Goes to Waste — Harvard Business Review