How to write user research questions that open people up
How to write user research questions that pull real answers from real people. A craft guide for async voice studies: anchors, framing, and weak-to-strong rewrites.
Most research-question advice teaches you to avoid leading questions. That's table stakes. The harder craft is writing a prompt that sounds like a person asked it, because when the medium is voice, a bad question doesn't just return a bad answer. It returns silence.
This piece is specifically about how to write user research questions for voice: prompts read on a phone screen by someone alone, with no live follow-up, who decides in four seconds whether to tap the record button or close the tab. If you've been getting thin responses from survey forms, the usual first instinct is to add a question. The better instinct is to rewrite the one you already have.
Why writing questions for voice is a different craft
A question you read silently and a question you read aloud are two different objects. The silent one can get away with commas, qualifiers, and nested clauses. The aloud one has to land in a single breath. Participants don't parse a voice prompt the way they parse a survey row: they scan it, decide whether they have a story to tell, and either start talking or bail.
This is also why most question banks you find online fail when you paste them into a voice tool. They were written for a context where the participant can read, re-read, and edit their answer while typing. Voice has no re-read. Voice has one take, thirty seconds of courage, and a transcript that captures every hesitation the survey form would have erased.
Three constraints follow from that:
- The prompt has to invite a story, not a summary. Summaries are what people default to in writing. Stories are what they default to in speaking, if the prompt gives them permission.
- The prompt has to land without a moderator. There's no one in the room to reframe the question if it lands wrong.
- The prompt has to be answerable by someone with a sleeping baby in the next room. Voice happens in real life. Make it friendly to interruption.
How to write user research questions that work aloud
Six rules, in the order we see them broken.
01 · Read the question out loud before you ship it
This is the cheapest, most-skipped step in user research. Read the prompt aloud, at the speed a participant would, and notice where your own mouth trips. That's where theirs will too. If you can't say the sentence without rephrasing it halfway through, the participant will either rephrase it for you (and answer a different question) or give up on the prompt entirely.
A shorter version of this rule: if it reads like an internal Slack message, it's probably good. If it reads like a legal disclosure, it's not.
02 · Anchor to a specific moment
Abstract questions yield abstract answers. "How do you use the dashboard?" gets a summary: three generic sentences, present tense, useful to nobody. "Walk me through the last time you opened the dashboard. What were you looking for, and what did you end up doing?" gets a story, with a date, a goal, a friction point, and the thing they fell back on when it didn't work.
This is one of the most consistent findings in interview research. Maria Rosala writes for Nielsen Norman Group that open-ended questions deliver deeper qualitative signal than closed ones, and within open-ended questions, memory for specific incidents is reliably sharper than memory for general patterns.
The pattern to copy, almost verbatim: "Tell me about the last time you [did the thing]. What happened?"
03 · Strip yes/no framings
Yes/no framings don't just limit the answer. They tell the participant you'd be happier with one side of it. "Did you find onboarding frustrating?" primes them toward yes. "Did the feature work for you?" primes them toward no. Either way, you've leaked your hypothesis, and what you get back is half-agreement, half-rehearsal.
The fix isn't complicated. Replace every yes/no opener with a what or a how:
- Not "Did onboarding feel clear?" Instead: "What stood out to you during onboarding, in either direction?"
- Not "Is the pricing fair?" Instead: "How did you think through the pricing? Where did you hesitate?"
- Not "Would you recommend this to a colleague?" Instead: "Who's the person in your life who'd benefit from this, and what would you say to them?"
NN/g's list of interview question mistakes puts it plainly: a leading question produces data that confirms the researcher's assumptions rather than revealing the participant's. The bias is baked in before the recording starts.
04 · One question per question
The most common mistake in a first draft is the stacked question. "What was the hardest part of onboarding, and how did you overcome it, and what would you change about it?" Three questions. Participants answer one, usually the easiest of the three, and the other two die quietly on the cutting-room floor.
If you find yourself using and in a prompt, that's usually a signal to split. Voice studies can hold more questions than people assume, as long as each one is clean. Talkful's builder lets you set a max recording between 15 seconds and 5 minutes per question (default 120s), but the realistic ceiling on participant patience is six to eight well-formed prompts per study. Two stacked questions count as four.
05 · Put context in the intro, not the prompt
Most first drafts bleed context into the question itself: "We've been thinking about changing how the dashboard handles notifications. Right now users get pinged for every new comment, and we've heard some people find it noisy. In your experience, how...". By the time the participant reaches the word how, they've lost track of what's actually being asked.
Context belongs higher up. Talkful gives you two places to put it: the study's optional intro message (shown on the consent screen, with the creator's name and photo) and the per-question description (a single-line subtitle under the prompt, for tone or scope). Use the intro to tell the participant why the study exists and how their answer will be used. Keep the prompt itself lean.
06 · Pick the question type that matches the shape of the answer
Not every question wants a voice recording. If you're asking someone to pick between five prices, give them a multiple-choice question. If you're asking them to rate something on a scale, give them a rating. Talkful ships three question types for this reason: voice for stories, multiple-choice for picks, and rating for scores, on a scale that tops out at 3, 5, 7, or 10.
The failure mode is putting a survey question into a voice field and watching participants stumble. Voice is for the things a radio button can't compress. Rule of thumb: if the answer has more than one verb in it, it's a voice question. If it has a number, a rank, or a label, it's not.
Images attach when the prompt needs a visual anchor (up to six per question in Talkful). Show the onboarding screen you're asking about. Show the three options you want reactions to. Don't describe the screen in words and hope the participant remembers it.
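To make the mapping concrete, here's a minimal sketch of the three shapes as data, in TypeScript. The type and field names are hypothetical (this isn't Talkful's actual schema), but the constraints encode exactly the limits above: recordings from 15 seconds to 5 minutes, rating scales that top out at 3, 5, 7, or 10, and up to six images per question.

```ts
// Hypothetical sketch: type and field names are illustrative, not Talkful's
// actual schema. The constraints mirror the limits described above.
type VoiceQuestion = {
  kind: "voice";
  prompt: string;               // the sentence read aloud: keep it one breath long
  description?: string;         // single-line subtitle under the prompt, for tone or scope
  maxRecordingSeconds: number;  // 15 to 300; 120 is the default
  imageUrls?: string[];         // up to six visual anchors
};

type MultipleChoiceQuestion = {
  kind: "multiple-choice";      // picks: things a radio button can compress
  prompt: string;
  options: string[];
};

type RatingQuestion = {
  kind: "rating";               // scores
  prompt: string;
  scaleMax: 3 | 5 | 7 | 10;     // the four scale tops mentioned above
};

type Question = VoiceQuestion | MultipleChoiceQuestion | RatingQuestion;

// Sanity checks that encode the stated limits.
function validate(q: Question): string[] {
  const problems: string[] = [];
  if (q.kind === "voice") {
    if (q.maxRecordingSeconds < 15 || q.maxRecordingSeconds > 300) {
      problems.push("max recording must be between 15 seconds and 5 minutes");
    }
    if ((q.imageUrls?.length ?? 0) > 6) {
      problems.push("at most six images per question");
    }
  }
  return problems;
}
```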
Three weak-to-strong rewrites
The rules above are easier to feel than to list. Three real prompts, each fixing a different failure mode.
Weak: "Did you find it useful?"
Better: "What, if anything, did the tool help you with this week?"
Why: Strips the yes/no. Replaces useful (abstract) with this week (temporal anchor). Allows nothing as an honest answer.

Weak: "How do you typically manage your team?"
Better: "Tell me about a moment this month when managing your team got hard. What happened?"
Why: General becomes specific. Typically smuggles in a summary; this month forces a story.

Weak: "On a scale of 1 to 5, how satisfied are you with onboarding? Please elaborate."
Better (two questions). Rating: "Rate your onboarding from 1 to 5." Voice: "Whatever you picked, tell me about the specific moment in onboarding that made you pick that number."
Why: One score per question. One story per question. Linking the voice prompt to the rating they just gave gets you their actual reasoning, not a polished justification.
How to tell if a question is working
You can tell whether a prompt is written well inside the first three responses. Here's what to look for.
"Sorry, can you... I guess I don't really know what you mean by 'engaged'. Um. It's fine?"
When a participant pauses long and asks for clarification, or gives a short generic answer that doesn't name anything specific, the prompt is doing the work wrong. A well-written question produces a transcript with at least one concrete noun (a day, a tool, a person, a moment) in the first two sentences. If the first ten words of every response are So, like, I think..., your prompt is inviting performance, not memory.
Talkful's per-response AI analysis makes this faster to diagnose. Each voice answer is transcribed by Deepgram and then analyzed by Claude for sentiment, themes, pain points, and quotable passages with word-level timestamps. If the quote extractor keeps returning an empty set, response after response, the prompt isn't pulling specifics. The analysis needs something to hold onto.
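For the curious, here's roughly what a transcribe-then-analyze pass looks like when wired with the public Deepgram and Anthropic SDKs. To be clear, this is an illustrative sketch, not Talkful's implementation; the prompt wording and model choice are placeholders.

```ts
// Illustrative sketch, not Talkful's implementation: one way to wire a
// transcribe-then-analyze pass using the public Deepgram and Anthropic SDKs.
import { createClient } from "@deepgram/sdk";
import Anthropic from "@anthropic-ai/sdk";

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);
const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function analyzeVoiceResponse(audioUrl: string) {
  // Transcribe; Deepgram's prerecorded API returns word-level timestamps.
  const { result, error } = await deepgram.listen.prerecorded.transcribeUrl(
    { url: audioUrl },
    { model: "nova-2", smart_format: true }
  );
  if (error) throw error;
  const alt = result!.results.channels[0].alternatives[0];

  // Ask Claude for the analysis. Prompt and model here are placeholders.
  const msg = await anthropic.messages.create({
    model: "claude-sonnet-4-5",
    max_tokens: 1024,
    messages: [{
      role: "user",
      content:
        "Analyze this voice-study response. Return JSON with keys " +
        '"sentiment", "themes", "painPoints", and "quotes".\n\n' +
        alt.transcript,
    }],
  });
  const first = msg.content[0];

  return {
    transcript: alt.transcript,
    words: alt.words, // [{ word, start, end, ... }] for quote timestamps
    analysis: first.type === "text" ? first.text : null,
  };
}
```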
Two diagnostic patterns worth watching for, both visible in the per-response view:
- Flat sentiment across every response. If every participant comes out neutral, the prompt isn't touching anything they care about. Rewrite to anchor to a specific moment.
- Short transcripts with no named people, tools, or dates. A voice response under thirty words, with no concrete references, is almost always a prompt problem, not a participant problem.
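Both checks are mechanical enough to script. Here's a rough screening pass over the first few responses, using the thirty-word threshold from above and a deliberately crude concreteness heuristic; the type and function names are hypothetical.

```ts
// Rough screen for the two red flags above. The thirty-word threshold comes
// from the text; the concreteness regex is a deliberately crude heuristic.
type EarlyResponse = {
  transcript: string;
  sentiment: "positive" | "neutral" | "negative";
};

function promptRedFlags(responses: EarlyResponse[]): string[] {
  const flags: string[] = [];
  if (responses.length === 0) return flags;

  // Flat sentiment across every response: the prompt isn't touching anything.
  if (responses.every((r) => r.sentiment === "neutral")) {
    flags.push("flat sentiment: rewrite to anchor to a specific moment");
  }

  // Short transcripts with no concrete references (days, dates, numbers).
  const concrete =
    /\b(monday|tuesday|wednesday|thursday|friday|saturday|sunday|yesterday|last (week|month|night)|\d+)\b/i;
  const thin = responses.filter(
    (r) =>
      r.transcript.trim().split(/\s+/).length < 30 &&
      !concrete.test(r.transcript)
  );
  if (thin.length > responses.length / 2) {
    flags.push("short, generic transcripts: a prompt problem, not a participant problem");
  }

  return flags;
}

// promptRedFlags(firstThree) returning [] means let the study run;
// anything else means close, rewrite, relaunch.
```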
The feedback loop is fast. You don't need to wait for study completion. If the first three responses land flat, close the study, rewrite, relaunch.
FAQ
What makes a good user research question?
A good user research question invites a story, anchors to a specific moment, and doesn't leak the researcher's hypothesis. Practically: it starts with what or how or walk me through, it references a recent time window (this week, the last time), and it leaves room for the answer to be nothing or I gave up. If the question implies a right answer, rewrite it.
Should every research question be open-ended?
No. Open-ended prompts are the right shape when you want a story or a reason. For rankings, counts, and categorical picks, a multiple-choice or rating question is faster for the participant and cleaner for you to analyze. The mistake is defaulting to voice for everything. The craft is matching the question type to the shape of the answer. Talkful's builder offers voice, multiple-choice, and rating for exactly this reason.
How many questions should a voice study have?
Four to six well-formed prompts is usually enough. Eight is the practical ceiling before fatigue sets in. Our end-to-end process guide covers why shorter studies outperform longer ones: by the seventh question, participant attention drops and the data gets thinner. If you feel like you need twelve questions, two studies will usually serve you better than one.
How long should participants have to answer each question?
90 to 120 seconds per question is the sweet spot. Talkful defaults to 120 seconds and lets you configure anywhere from 15 seconds (for short factual prompts) to 5 minutes (for flagship story questions). Below 60 seconds, participants feel rushed. Above 180, completion rates drop because the timer starts to feel like homework.
How do you avoid leading questions?
The cleanest test: can the participant disagree with the premise of your question without feeling awkward? "Why did you find this frustrating?" presumes they did. "What was the experience like, in either direction?" doesn't. When in doubt, read the prompt aloud to a colleague who doesn't know the project, and ask what they think you expect the answer to be. If they can guess, rewrite.
Does question wording matter more for voice than for text?
It matters more visibly. Text surveys let participants edit their answers; typos get cleaned up, hesitations get deleted, tone gets flattened. A voice prompt doesn't get that polish. A badly written question produces a transcript full of um and I guess and short generic replies, and you can hear the confusion instead of reading the finished draft. The upside is faster feedback: you can tell whether a prompt is working after three responses, not thirty.
The craft compresses into one sentence: write the prompt you'd want to answer, out loud, while walking the dog. If that feels like a joke, try it. The questions that survive that test are almost always the ones that produce usable research. If you want to put a few of them to work, Talkful has a free plan that's enough for a first study, and the process guide covers what to do once the responses start landing.