AI TOOL

Researchers spend more time writing up findings than running the study.

I built ResearchSnap to fix that. It takes raw interview transcripts and returns structured themes, insights, journey maps, and a stakeholder brief in under 20 minutes - with every AI output reviewable before anything ships.

Try ResearchSnap live
Role: Solo Designer + Engineer
Type: AI-Assisted UX Tool
Stack: React + Claude API
Status: Live on Vercel
01 What it does differently

Same study, fraction of the time

These numbers are based on a 4-participant usability study synthesized two ways - once manually, once with ResearchSnap. Your mileage will vary by study complexity.

Before: 4-8h - typical manual synthesis time
After: ~20m - AI synthesis + full human review
Output types: 5 - vs. one unstructured doc

The real shift

Tagging is no longer your job
The AI clusters quotes into themes with sentiment labels. I reviewed and trimmed - but I didn't start from scratch.
Weak findings get flagged
Every output has a confidence score. A theme mentioned once gets a Low badge - not the same treatment as one that came up in every session.
Contradictions surface automatically
When two participants said opposite things about the same feature, the AI caught it and wrote out the design tension.
The brief writes itself
After review, one click generates a 280-word stakeholder brief from only the findings I approved.
Arena 1 - AI as efficiency tool
There's a clear before and after here - hours of manual synthesis reduced to a review session.
Arena 2 - AI built into the product
Every AI output starts as Pending. I made human review the default - not an option.
Built end-to-end
I designed and coded this solo. It's a live product, not a prototype.
02 Why this needed to exist

The synthesis session nobody talks about

"After every study I'd spend a full day turning notes into something I could show a stakeholder. By that point, half the details had already blurred."

The raw notes from a study are basically useless to anyone who wasn't in the room. Getting from transcripts to a shareable synthesis is 80% of the work - and it's all manual.

Time
It takes longer than the research
A 4-person study with 60-minute sessions generates 4+ hours of notes. Synthesis can easily double that.
Consistency
Two researchers, two different outputs
Put the same transcripts in front of two people and you'll get different themes, different framings.
Evidence
Strong and weak findings look the same
A theme one person mentioned gets written up the same way as one that came up in every session.
Fig 1 - Full output view, themes tab
03 How the AI fits in

One call. Five outputs.

The synthesis runs in a single Claude API call with a strict JSON schema. No chaining, no post-processing.

1
Transcript goes in
Paste text directly, or upload TXT, PDF, PNG, or JPG. PDFs and images run a separate extraction call first.
2
Strict schema enforced
The system prompt specifies the exact JSON shape. The AI returns themes, insights, journey stages, contradictions, and confidence scores in one pass - or nothing parses.
"confidence": "high",
"confidenceReason": "Mentioned by 3+",
"contradictions": [{ sideA, sideB, designImplication }]
3
Everything arrives as Pending
I inject _review:"pending" on every item before it hits the UI. The AI doesn't get to approve its own output.
4
Researcher reviews
Accept, edit, or reject each item. A progress bar shows completion. Bulk approve is available but opt-in.
5
Brief uses approved items only
A second API call takes only accepted and edited items. Anything rejected never gets in - I checked this in the code, not just the prompt.
What comes back from the API
Themes: title · frequency · sentiment · quotes
Insights: title · body · priority · confidence
Contradictions: sideA · sideB · designImplication
Journey stages: stage · emotion · pain · opportunities
Summary: topNeed · friction · opportunity
Note: The confidence score is set by the AI based on how many participants corroborated a finding. Hover any badge to see the AI's reasoning.
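The response shape above can be expressed as TypeScript types, with a corroboration heuristic like the one the note describes. The field names follow the list above; the thresholds in scoreConfidence are an assumption for illustration, not the actual prompt logic.

```typescript
type Confidence = "high" | "medium" | "low";

// Field names mirror the documented response shape.
interface Theme {
  title: string;
  frequency: number;
  sentiment: "positive" | "neutral" | "negative";
  quotes: string[];
}

interface Contradiction {
  sideA: string;
  sideB: string;
  designImplication: string;
}

// A plausible corroboration rule: more participants mentioning a
// finding -> higher confidence. Cutoffs here are illustrative only.
function scoreConfidence(participantsMentioning: number): Confidence {
  if (participantsMentioning >= 3) return "high";
  if (participantsMentioning === 2) return "medium";
  return "low";
}
```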
Fig 2 - Input screen
Fig 3 - Loading state
04 Where trust lives

The AI proposes. You decide.

Every output has four possible states. Nothing moves forward without a human decision.

Pending
Where every item starts. The AI does not approve its own output.
Accepted
You confirmed it. Flows into the stakeholder brief.
Edited
Click any text to edit inline. State updates automatically.
Rejected
Excluded from outputs. Stays visible so you can trace what the AI got wrong.
Review Progress: 75% reviewed
3 accepted · 2 edited · 1 rejected · 2 pending
Everything is reversible. Undo on individual items, or reset all to Pending.
Fig 4 - Accepted state
Fig 5 - Rejected state with Undo
05 The parts I'm most proud of

Showing what the AI doesn't know

Confidence scoring

The AI rates each output High, Medium, or Low based on how many participants mentioned it. Hover any badge to read the reasoning.

Confidence per theme
Navigation + Find - High
Notification overload - High
Onboarding gap - Med
Weekly planning - Low
Tooltip on hover: "High confidence - P1, P2, and P3 all brought this up without being asked. P4 mentioned it when probed."

Contradiction detector

When participants said opposite things, the AI flags it as a design tension with a specific implication. These get their own tab.

Tension: Solo vs. collaborative experience
P4, Sam: "Solo work: fast, clean, great."
P3, Priya: "Team rolled out twice. Back to email both times."
Design implication: The product optimises for individual use. Collaborative onboarding may need a separate path.
Add screenshot here
Journey map tab - SVG emotion curve with stage nodes, zone bands, detail cards below
Fig 6 - Journey map, emotion curve view
Add screenshot here
Insights list cards with priority dot, confidence badge, review actions
Fig 7 - Insights tab
Add screenshot here
Contradiction card expanded - opposing quotes side-by-side, design implication below
Fig 8 - Contradictions tab, expanded card
Add screenshot here
Brief modal open - Research Brief, Key Findings, Recommendations, Open Questions
Fig 9 - Stakeholder Brief modal
Add screenshot here
Deck visible, thumbnail strip at bottom, nav controls
Fig 10 - Slides tab
06 When things break

Every failure has a path forward

I mapped five failure modes before building and designed each one so the user never ends up stranded.

API failure
Back to input, nothing lost
If the Claude call fails or times out, the view returns to the input screen with the transcript still there.
Analysis failed: 529 - Overloaded
File extraction failure
Upload resets, paste still works
PDF or image extraction fails mid-request. Upload label resets. The paste fallback is always there.
Extraction failed - paste content manually
Invalid JSON
Error surfaced, not swallowed
The strict schema prompt makes malformed responses rare. If it happens, the try/catch shows the error explicitly.
No contradictions found
A positive message, not a blank tab
No contradictions found - participants largely agreed on this one.
DOCX
Unsupported format, honest about it
DOCX files can't be read in-browser. Rather than a generic error, the text area pre-fills: "DOCX not supported - paste content instead."
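The first three failure paths share one shape: on any error, return to the input screen with the transcript intact and the error surfaced. A hedged sketch of that pattern - the View type and handleResponse are illustrative names, and `raw === null` stands in for a failed or timed-out API call.

```typescript
type View =
  | { screen: "input"; transcript: string; error?: string }
  | { screen: "results"; data: unknown };

function handleResponse(transcript: string, raw: string | null): View {
  if (raw === null) {
    // API failure: back to input, nothing lost.
    return {
      screen: "input",
      transcript,
      error: "Analysis failed: 529 - Overloaded",
    };
  }
  try {
    // Happy path: the strict schema parsed cleanly.
    return { screen: "results", data: JSON.parse(raw) };
  } catch (err) {
    // Invalid JSON: surfaced explicitly, not swallowed.
    return {
      screen: "input",
      transcript,
      error: err instanceof Error ? err.message : String(err),
    };
  }
}
```

Because the transcript travels with every error branch, no failure mode can cost the researcher their pasted text.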
Add screenshot here
Input screen with red error banner visible below the upload card
Fig 11 - API failure state, transcript preserved
Add screenshot here
Contradictions tab showing the positive empty state message
Fig 12 - Empty contradictions state
Full view

Everything in one screen

Summary strip at the top, review progress below, then tabs across all output types.

Add screenshot here
Summary strip (Top Need, Friction, Opportunity) + progress bar + all tabs visible
Fig 13 - Full results view