How to write a user research plan
How to write a user research plan that survives contact with the team: one decision, one research question, the right method, and a synthesis cadence.
Most user research plans are written for the wrong audience. They are written for the person who asked whether the team has a plan, not for the person who is going to run the study. The document is twelve pages, has a Gantt chart, names six stakeholders, lists every method the author has ever heard of, and tells you almost nothing about what the team is actually trying to learn. Two weeks later the study is mid-flight, the plan is closed in a tab nobody opens, and the questions in the field bear only a faint resemblance to the ones in the doc.
This is a working guide on how to write a user research plan that the people running the study will actually use. It is opinionated about one thing: a research plan is not a report. It is a one-page artifact whose job is to keep the study honest while it is happening. If it is longer than a page, it has stopped being a plan and started being a deck.
What a user research plan is
A user research plan is a short, written document that names the decision the study will inform, the research question the study will answer, the method and sample, the prompts the participants will see, and the synthesis cadence. It exists to keep the study aligned to a real product decision while the responses come in. The best ones fit on one page, get written before the recruitment starts, and get rewritten the first time a question fails in the field.
A research plan is not a project plan, a kickoff deck, or a stakeholder briefing. Those documents have their own jobs. The research plan is the artifact the person running the study reads on a Wednesday morning to remember why they are running the study.
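The one-page shape is concrete enough to write down as a structure. A minimal sketch of the fields named above; the class and field names are illustrative assumptions for this example, not a Talkful API or a required schema.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchPlan:
    # The parts a one-page plan names, per the definition above.
    decision: str           # what the team will do differently after the study
    research_question: str  # one falsifiable sentence
    method: str             # e.g. "thematic interviews"
    sample: str             # completed-count target, segments, channel
    prompts: list[str] = field(default_factory=list)  # 4-8, participant-facing
    synthesis_cadence: str = ""  # e.g. "Tue/Thu 10am, PM + designer"
    stopping_rule: str = ""      # saturation, target count, or calendar

# An example instance; the content is made up.
plan = ResearchPlan(
    decision="Ship the invite redesign, or keep the current flow",
    research_question="Why do users who finish onboarding not invite "
                      "a teammate in their first session?",
    method="thematic interviews",
    sample="12 completed per segment, from the customer list",
)
```

If any field is hard to fill in one line, that is usually the plan telling you the study is not ready to run yet.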
Why most user research plans don't survive contact with the team
The diagnosis is almost always the same. The plan was written to win permission, not to run the study.
Permission-seeking research plans are long, ceremonial, and full of detail nobody will read once the study starts. They include a literature review, a stakeholder matrix, a risk register, and a Gantt chart that assumes the study runs on the calendar the team committed to in the planning meeting. They do not include a one-sentence research question, because the author was not asked for one. The author was asked for "a plan", and "a plan" in most product organizations means "a document that looks rigorous enough to defend the budget".
This is a real problem and it is worth saying out loud: the budgeting culture in most product teams penalizes short plans. A one-page plan looks under-thought. A twelve-page plan looks rigorous. The author optimizes for the part of the process they are graded on, which is the approval step, not the field work.
The fix is structural. Write the plan for the person running the study, and write a separate, three-paragraph briefing for the stakeholders who need to approve it. The plan and the briefing are different artifacts. Conflating them produces a long document that fails at both jobs. The rest of this guide is about the plan, not the briefing.
How to write a user research plan, step by step
Seven steps. They take roughly 90 minutes the first time and 20 the fifth time. The order matters: skipping step 01 (the decision) is the failure mode that produces every other downstream problem. If you do nothing else from this guide, do step 01.
01 · Name the decision the plan will inform
Before anything else, write one sentence: what will the team do differently after this study, depending on what we hear?
If the honest answer is "nothing, we are doing this for context", the study is generative and that is fine. Write that down. Generative studies need a different plan shape from validation studies, and the difference shows up immediately in the next step. If the answer is "we will ship feature A if users want it and feature B if they don't", the study is evaluative, and you need to define the threshold for "want it" before the recruitment starts, not after the data comes back.
The reason this step matters: research without a named downstream decision is research that gets reinterpreted in the meeting where the decision is actually made. Whoever has the loudest voice in that meeting picks the quote that supports their view, and the study becomes ornamentation rather than input. Erika Hall's Just Enough Research makes the same point in different words: a research project that cannot name its decision is a research project that cannot finish.
02 · Write the research question, one sentence
A research question is not a topic. "Onboarding" is a topic. "Why do users who finish onboarding not invite a teammate in their first session?" is a research question. The first is a category; the second is a falsifiable claim about a specific behavior in a specific moment, and it tells the person running the study what to listen for.
Three rules for the sentence:
- It is specific to a moment. "Why do users not engage?" is too broad. "What stops a user in the first 24 hours after sign-up from sharing the product with a colleague?" is the right size.
- It is falsifiable. You should be able to imagine an answer that disconfirms your prior. If every answer the participants could give would still leave you with the same conclusion, the question is doing no work.
- It maps to the decision in step 01. If the question can be answered in a way that does not change the decision, either the question is wrong or the decision is.
For the longer treatment of how to phrase prompts the participants will actually see, see how to write user research questions. The research question in this step is not the same as the prompts in step 05: the research question lives in the plan, the prompts live in the field.
03 · Pick the study type and method
The research question constrains the method. A "why" question wants thematic interviews. A "what happens in the moment" question wants a diary study. A "rank these" question wants a survey. A "do users notice this control" question wants a usability test. You pick the method that fits the question, not the method you used last time.
Five common shapes:
- Thematic interviews for "why" questions and motivation. 6 to 12 participants per segment is the saturation default, per Greg Guest, Arwen Bunce, and Laura Johnson's 2006 study in Field Methods.
- Switch interviews for "what made them change tools" questions. 8 to 12 recent switchers. The longer playbook is in how to run jobs-to-be-done interviews.
- Diary studies for "what happens in the moment" questions. 15 to 30 participants, three to seven entries each. See how to run a diary study with voice notes.
- Usability tests for "can users complete this task" questions. 5 per persona per task, per the Nielsen / Landauer model.
- Continuous feedback for "is anything breaking out there right now" questions. Not a study at all: a standing link inside the product, on the marketing site, in the churn flow. The framing is in how to build a customer feedback loop.
If the question fits two methods, pick the cheaper one and write the second into the plan as a possible follow-up if the first is inconclusive.
04 · Decide sample size and recruitment
Sample size is a function of the method, not of the company's size. Six to twelve per segment for thematic interviews. Five per persona per task for usability. Fifteen to thirty for diary studies. The full breakdown, with the saturation literature it rests on, is in how many user interviews do you need.
Two practical notes for the plan. First, the number you write is the completed number, not the invited number. Recruit roughly 1.5x your target to absorb the no-shows. If you need 12 completed thematic interviews, send the link to 18 to 20. Second, the recruitment channel belongs in the plan, not in a separate doc. Write down whether the participants come from a customer list, a panel, an outbound message, an in-product surface, or some combination, and which ones get the link first. The longer guide to sourcing the right participants is in how to recruit user research participants.
If the segments are heterogeneous (B2B buyers and end users, new and power users), do not pool them. Allocate 6 to 12 per segment and treat them as parallel studies that share an analysis pass.
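The overage arithmetic is simple enough to sanity-check in a few lines. A sketch assuming the 1.5x rule of thumb described above; the function name and the segment labels are made up for the example.

```python
import math

def invites_for(target_completed: int, overage: float = 1.5) -> int:
    """Invitations to send to land the completed count,
    absorbing no-shows at the ~1.5x rule of thumb."""
    return math.ceil(target_completed * overage)

# Heterogeneous segments are parallel studies: allocate per segment,
# do not pool. Segment names and targets here are illustrative.
segments = {"b2b_buyers": 12, "end_users": 8}
invites = {seg: invites_for(n) for seg, n in segments.items()}
# 12 completed -> 18 invites; 8 completed -> 12 invites
```

Writing the invited number next to the completed number in the plan is what stops the "we only got seven responses" surprise in week two.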
05 · Draft four to eight questions, tuned to the method
This is the only part of the plan that the participant will ever see, and it is the part that goes wrong most often. The instinct is to write twelve questions. The good plans have four to eight. By the eighth prompt the participant is tired and the data is thin.
Three rules:
- Open, not closed. "Tell me about the last time you tried to invite a teammate" beats "Did you try to invite a teammate?" Closed questions belong in a survey, not in an interview script.
- Concrete, not abstract. Memory for general patterns is unreliable. Memory for specific incidents is good. "Walk me through your first ten minutes" beats "How do you usually use the product?"
- No leading constructions. Avoid "because" and "even though" in the question stem. The participant will agree with the construction whether or not it matches their experience.
Tune the questions to the method. A thematic interview prompt is open and long. A diary study prompt is short and event-triggered. A usability test prompt is task-shaped ("try to invite a teammate; describe what you are doing as you do it"). If the prompts read the same across methods, you are not actually using the method, you are running the same study with different labels.
If the study uses an AI interviewer for the follow-ups, the plan also names the probing depth per question. Shallow probing (at most one clarification) preserves response rate on short, in-product surfaces. Medium probing extends the answer when the participant's response is vague and is the right default for most product-discovery work. Expert probing keeps asking until the system has the context a senior researcher would dig out in a moderated session. The choice belongs in the plan, not in a setting screen the day of launch. The breakdown lives in AI follow-up questions for user research.
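One way to keep the probing choice in the plan rather than in a settings screen is to write it down per question as an explicit mapping. The three depth names follow the levels above; the numeric follow-up caps are illustrative assumptions, not Talkful settings.

```python
from enum import Enum

class ProbingDepth(Enum):
    # Values are assumed max follow-up counts, for illustration only.
    SHALLOW = 1  # at most one clarification; protects in-product response rate
    MEDIUM = 3   # extends vague answers; default for product discovery
    EXPERT = 7   # keeps probing until the context is there

# Per-question depth belongs in the plan document itself.
probing_plan = {
    "Tell me about the last time you tried to invite a teammate.": ProbingDepth.MEDIUM,
    "Walk me through your first ten minutes.": ProbingDepth.EXPERT,
}
```

The point of the mapping is not the numbers; it is that the depth decision is made and reviewable before launch day.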
06 · Set the synthesis cadence, not the report cadence
This is the step that most plans skip. They name a "report date" three weeks out and assume the work in between is solo analysis by a researcher in a notebook. That assumption was true when interviews were expensive and the analysis pass had to wait until they all came in. It is no longer true when responses come back as transcripts plus per-response themes within minutes of submission.
Write the cadence in two parts. First, the synthesis cadence: how often you look at what is coming in, with which people in the room, with what artifact in front of you. "Every Tuesday and Thursday at 10am, the PM and the designer read the new responses together and update the theme list" is a synthesis cadence. "We will analyze the data at the end" is not.
Second, the stakeholder check-in cadence: when you brief the people who need to know what is emerging, before the formal report. The mistake is to surprise the stakeholders with a finished deck at the end. The fix is to share the emerging themes weekly, in three bullet points, so the decision in step 01 is being warmed up while the study is still running. The deeper craft of synthesis is in how to synthesize user research.
"The plan called for one synthesis pass at the end. By week two the themes were already telling a different story than we expected. If we'd waited for the final readout, we'd have shipped a roadmap based on the prior, not the data."
07 · Define what done looks like
The last step in the plan is the stopping rule. Write down, in one sentence, the condition under which the study is finished. There are three common shapes.
- Saturation: "We stop when the last three interviews introduce no new codes." This is the default for thematic interviews, and it is the form Guest, Bunce, and Johnson's saturation literature supports.
- Target count: "We stop at 30 completed responses." This is the right form for evaluative studies where the decision threshold is numeric ("we ship if 60% of respondents prefer the new flow").
- Calendar: "We stop at the end of the sprint regardless of count." This is the right form for continuous feedback loops where the link is standing and the synthesis is rolling, not for scoped studies.
Mixing the shapes is the most common mistake. A study that names both a target count and a saturation rule will stop at whichever happens first, which means it sometimes stops before saturation. Pick one.
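The saturation shape can be made mechanical. A sketch of the stopping check, assuming each interview's codes are recorded as a set of labels; the three-interview window follows the rule stated above.

```python
def reached_saturation(code_sets: list[set[str]], window: int = 3) -> bool:
    """True when the last `window` interviews introduce no code
    that earlier interviews had not already surfaced."""
    if len(code_sets) <= window:
        return False  # too few interviews to apply the rule yet
    seen_before = set().union(*code_sets[:-window])
    recent = set().union(*code_sets[-window:])
    return recent <= seen_before

# Example: by interview five, the last three add nothing new.
interviews = [
    {"pricing", "onboarding"},
    {"invites"},
    {"pricing"},
    {"onboarding"},
    {"invites", "pricing"},
]
```

Run the check after each synthesis pass. The target-count and calendar shapes need no code: they are a comparison against a number or a date.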
When the plan stops being a document and becomes an instrument
Most of this guide treats the plan as a one-time artifact for a scoped study. That model still applies for evaluative work and for set-piece research that informs a specific roadmap decision. It does not apply to the standing instruments product teams now run alongside the scoped studies.
The same Talkful link that captures answers for a scoped study can live inside the product, on the marketing site, in the churn flow, or in onboarding emails, and the responses come back through the same synthesis pipeline. When the surface is standing rather than scoped, the plan changes shape: the decision is recurring rather than singular, the question is broader ("what is breaking in the product this week"), the cadence is weekly rather than at-end, and the stopping rule is "never, until the product line shuts down". The longer treatment is in async user research methodology, and the operational version is in how to build a customer feedback loop.
The same instrument works for internal feedback. Before a feature ships, the same link goes into the engineering, design, support, and finance channels and returns a synthesized cross-functional view of objections in less time than it would take to schedule the meeting. The plan in that case is a paragraph, not a page. The point is that "user research plan" is a category that includes both the scoped study plan and the standing instrument plan, and conflating them is the second most common reason plans fail in the field.
FAQ
How long should a user research plan be?
One page for a scoped study. Three paragraphs for a standing instrument or an internal pre-launch check. If the plan is longer than a page, it has probably absorbed the stakeholder briefing and the kickoff deck. Pull those out into separate documents. The plan exists to keep the study honest in the field, not to win the approval meeting, and it stops doing its job once nobody on the study reads it during the work.
What's the difference between a research plan and a study guide?
The plan names the decision, the research question, the method, the sample, and the synthesis cadence. The study guide is a subset: the participant-facing prompts, the consent screen, the routing logic. A small study can collapse them into one document. A larger study should keep them separate so the prompts can be revised in the field without rewriting the plan.
Do you need a research plan for every study?
For any study that informs a real product decision, yes. The plan can be three sentences for a quick in-product feedback prompt and a full page for a multi-week thematic series. The form scales with the scope; the artifact itself does not become optional. The case where a plan is genuinely not needed is the case where the team has not actually decided to run a study and is treating "we should talk to users" as a substitute for a decision.
How often should you update a user research plan?
Once before recruitment. Once after the first three to five responses come back, because the prompts will need tuning. Once at the synthesis cadence if the themes drift far from the expected ones. Standing instruments get reviewed quarterly rather than at the end of a scoped study. The version history lives in the same doc as the plan, not in a separate change log.
Who owns the user research plan?
The person running the study. Not the head of research, not the PM who requested the study, not the stakeholders who will read the report. If three people own the plan, none of them do. The owner has the right to revise the prompts after the first responses come in, to pause the study if the question stops being answerable, and to call the stopping rule when it triggers. The stakeholders have the right to be briefed at the cadence in step 06. Those are two different rights and they belong to two different people.
A research plan is a small document with a specific job. It keeps the study honest while it is happening, names the decision the study will inform, and gives the person running the study a one-page anchor for the parts that go wrong in the field. Write it for them, not for the approval meeting. The studies that ship decisions are the ones whose plans fit on one page and get rewritten the first time a question fails. The Talkful free plan is the cheapest place to try this with a real study attached to a real decision.