for UX + product researchers · 3 min read

user interviews with NDAs that don't ride a network.

user research interviews leak. the participant signed an NDA. the company hasn't shipped the product yet. the transcription service has a breach in the news every other month. on-device transcription removes the leak surface entirely.

why this matters for UX research

user research interviews are confidential by default. the participant typically signs an NDA at recruiting. the conversation usually involves pre-launch product feedback, competitor mentions, internal feature names, and the participant's own work — which often falls under their employer's confidentiality policies. the audio file from a single user interview can contain three or four overlapping confidentiality regimes.

most UX research workflows handle this by: signing a BAA-style confidentiality addendum with a transcription vendor, hoping the vendor's security posture matches their claims, and dealing with breach notifications when they happen. it's working until it isn't. and "until it isn't" includes the increasingly common case where a participant Googles the product name post-interview, finds it indexed in a vendor's public search results, and emails the researcher.

what changes with on-device transcription

when the speech-recognition model runs in your browser, the interview audio doesn't leave your laptop. there's no vendor. there's nothing to breach, nothing to subpoena, nothing in a search index, nothing in a vendor's training data. the audio stays where the participant agreed it would stay.

this isn't a paranoid measure. it's the structurally simple answer to a confidentiality regime that's already complex, and to a vendor-breach pattern that has been getting worse year over year.

workflow

  1. record the interview as you already do. zoom local recording, dovetail recording (then export), a phone-line recorder, an in-room recorder. the file format doesn't matter — anything ffmpeg reads.
  2. open audiohighlight in private mode. drop the file in. transcription runs locally — chrome, edge, or arc on a current laptop handles a 60-minute interview in roughly real time.
  3. fix labels in bulk and verify quotes. "speaker 1" becomes "interviewer", "speaker 2" becomes the participant ID. propagates through every turn. proper nouns (product names, competitor mentions, internal jargon) get fixed once and remembered across future interviews in the same study.
  4. tag insights and clip moments. highlight passages in the editor that map to research questions. each highlighted passage exports with a timestamp link back to the audio for verification before sharing.
  5. export for your synthesis tool. dovetail .docx import, condens .json, .docx for paste-into-airtable workflows, .csv for spreadsheet-based affinity mapping. the audio stays on your laptop.

where this fits

handoff to synthesis tools

dovetail, condens, marvin, eleventh-hour, custom-rolled airtable bases — UX research synthesis tools all expect transcripts in a roughly common shape: speaker-segmented, timestamped, with proper nouns spelled correctly. our exports fit each one. for tools we haven't profiled, the .json export is everything (word-level timestamps, speaker turns, highlights, custom vocabulary), and you can transform from there.
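as a sketch of what "transform from there" can look like: the snippet below flattens a .json export into .csv rows for a spreadsheet. the field names (`segments`, `speaker`, `start`, `end`, `text`) are illustrative assumptions, not the actual export schema — adapt them to whatever the real .json contains.

```python
import csv
import io
import json

# hypothetical export shape -- field names are illustrative, not the real schema
export = json.loads("""
{
  "segments": [
    {"speaker": "interviewer", "start": 12.4, "end": 15.1,
     "text": "how do you usually start your day in the app?"},
    {"speaker": "P07", "start": 15.6, "end": 22.9,
     "text": "honestly, I open the dashboard and search."}
  ]
}
""")

# flatten speaker turns into rows for spreadsheet-based affinity mapping
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["speaker", "start_s", "end_s", "text"])
for seg in export["segments"]:
    writer.writerow([seg["speaker"], seg["start"], seg["end"], seg["text"]])

print(buf.getvalue())
```

write `buf.getvalue()` to a file instead of printing and you have a .csv any spreadsheet opens, with timestamps intact for tracing quotes back to the audio.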

for researchers using NVivo or ATLAS.ti for academic-style coding, the format-specific export profiles also apply — see NVivo or ATLAS.ti.

pricing for research

$0.25 per minute. a 60-minute user interview is $15. a 5-day generative-research week with 12 interviews of 45 minutes each is $135. private mode and cloud mode are the same price. no subscription, no minimum. for research teams with steady monthly volume (50+ hours), batch pricing arrives after launch.
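a back-of-the-envelope helper for the arithmetic above, at the listed $0.25-per-minute rate (the function name is ours, not part of any product API):

```python
RATE_PER_MIN = 0.25  # dollars per minute, from the pricing above


def interview_cost(minutes: int, count: int = 1) -> float:
    """cost in dollars for `count` interviews of `minutes` each."""
    return round(RATE_PER_MIN * minutes * count, 2)


print(interview_cost(60))      # one 60-minute interview -> 15.0
print(interview_cost(45, 12))  # 12 x 45-minute interviews -> 135.0
```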

related

lifetime deal while we're in beta.

join the waitlist to get a lifetime deal — your first month free, plus 50% off forever. private invite when we ship; no drip campaign.