AI chatbots (like ChatGPT, Google’s Gemini, and Microsoft Copilot) can support UX researchers of all experience levels in planning their research.

What Is a Research Plan?

Good research always starts with a plan.

A research plan is a document that outlines the research objectives and how the research will be executed.

Research plans should include:

  • The research goals or questions that the study aims to achieve or answer
  • The method to be used and a description of how it will be carried out
  • The tasks or questions that will be given to study participants
  • The profile of the target participants
  • The screener questionnaire used to recruit participants

Creating a research plan can be time-consuming. Even with a good template, a researcher must generate research questions, select the appropriate method(s), decide how to run sessions, and often create study collateral (like screeners and tasks) from scratch. The good news is that AI can help with many, if not all, of these tasks!

Using AI Chatbots to Write a Research Plan

It can be tempting to just ask an AI tool to give you a research plan for a project. Don’t do that.

Bad Prompt:

Generate a research plan for a usability test of a food-delivery app.

This kind of request results in a generic, template-like response, partly because AI lacks the context to propose a complete research plan and partly because of how the model has been trained.

To construct a useful research plan, deconstruct the plan into its parts and have the AI chatbot tackle each part individually. You can then assemble the responses into a final research plan.

Conversational AI can’t give you the assembled final output straight away from a simple prompt. Break the research plan into its components and tackle each one before assembling it back together.

Don’t expect the AI to ask you the right questions in order to reach a comprehensive outcome. View the AI tool as a UX assistant, not as a UX mentor: you need to feed it all the steps and details you want it to consider.

Follow the steps below for the best results.

Step 1: Provide Context

Imagine asking an experienced colleague for recommendations on what research study to run without giving them any context on what project you’re working on and what you’re trying to achieve. They would probably say, “It depends on what you’re trying to learn.” Conversing with AI is no exception.

Start your conversation with enough context, especially if the AI tool does not already have this information stored about you or your current project. (ChatGPT, for example, lets users save custom instructions that are applied to each new conversation.)

The contextual information you share should include who you work for, the scope of your project, and what you’re looking to achieve. Remember that some AI tools may use your conversations to train their models, so share only information you would be comfortable making public.

Prompt Template:

I work for [type of organization]. My company provides [what kind of product, service, or experience]. I need to run a user-research study to learn [something about users or a problem].
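
For instance, a filled-in version of this template (a hypothetical example, reusing the food-delivery app from the bad prompt above) might read:

I work for a midsized food-delivery startup. My company provides a mobile app that lets customers order meals from local restaurants. I need to run a user-research study to learn why many customers abandon their orders at checkout.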

Step 2: Generate Research Questions

After providing the AI tool with context, ask it to suggest specific research questions for the study. Here’s an example from a recent study on research repositories that I conducted for NN/g.

Prompt Example:

I will run a research study and report on how UX practitioners across different organizations set up and use research repositories. Please generate 10 research questions that would be suitable for this sort of study.

It’s a good idea to ask generative AI tools for more options or ideas than you think you’ll need, because some will not be a good fit.

Microsoft Copilot responded to the prompt above with the 10 research questions below.

10 research questions generated by Microsoft Copilot

  1. Who are the primary users of the research repository within different organizations?
  2. What motivates users to engage with the research repository?
  3. How do different teams (design, product, sales, marketing) access and benefit from the research repository?
  4. What challenges do practitioners face when contributing to or retrieving information from the repository?
  5. What types of research artifacts (reports, raw data, templates) are most commonly stored in the repository?
  6. How do organizations structure their research repositories?
  7. What tools and templates are commonly used to facilitate research repository management?
  8. How do practitioners ensure the quality and accuracy of information in the repository?
  9. What impact does the research repository have on decision-making and project outcomes?
  10. What strategies can organizations implement to encourage widespread adoption and active participation in the research repository?

To broaden your pool of research questions, ask for further research questions or provide the same prompt in a new session. Repeat this process as many times as you need to generate a list that covers everything you’d like to learn.

Some of the research questions Copilot generated for me were appropriate, and others were not. As a result, I had AI generate more research questions to choose from.

Repeating the same prompt across multiple sessions or asking the AI chatbot for additional research questions can generate a large pool of options.

Unfortunately, most AI chatbots don’t offer an easy way to dismiss some suggestions, keep others, and work only from the ones you’ve kept (a behavior called cherry-picking).

Pulling generated research questions into an offline document (like a FigJam or Google Doc) allows you to easily group items, remove duplicates, or reword suggested research questions.

If you have many research questions generated by AI, consider moving them to an external document so you can sort or rewrite them, remove duplicates, and select which ones you’d like to proceed with.

Begin a new chat session with your selected and refined set of research questions, so that the unwanted research questions are removed from the chat history.
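
A prompt to open that new session might look like the following sketch (the bracketed placeholders are yours to fill in; adjust the wording to your study):

I am planning a research study on [topic]. Here are the research questions I have settled on: [paste your refined research questions]. Please keep only these questions in mind for the rest of this conversation; next, I will ask you about suitable research methods.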

Step 3: Request Methods

After sharing the context and your chosen research questions, ask the AI tool to identify suitable research methods.

Example Prompt:

What study would you suggest to answer these research questions? Please be specific; cite which research questions would be answered by which research method if you suggest multiple methods.

Generative-AI advice is not always good advice. These tools will often propose several methods and recommend that you triangulate data from multiple sources; this approach is not always needed. Nor will every suggested method be practical or the best fit for your study. Additionally, AI may suggest interviews and focus groups even for research questions better suited to a behavioral research method.

Ask AI chatbots to tell you which research methods would be suited to which research question and why. We also recommend doing some further reading on your own about any methods that are unfamiliar to you.

In response to the prompt above (and given my chosen research questions), ChatGPT recommended a survey, interviews with select UX practitioners, and case studies. These were all my chosen methods, so AI had done well here!

Step 4: Request Inclusion Criteria

AI can create inclusion criteria — a necessary component of your research plan. Do this step only after generating research questions and methods since these will inform who should participate in the research study.

Inclusion criteria (or recruitment criteria) are specific characteristics of the target population that need to be represented in your sample.

Start with inclusion criteria before asking the AI to help you write a screening questionnaire; AI can only craft an appropriate screener after it “knows” who you’re looking to recruit.

Example Prompt:

So that I recruit the right people for my interviews, help me create some inclusion criteria. What characteristics or behaviors should I recruit for?
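
For the repository study described earlier, the resulting inclusion criteria might look something like this (a hypothetical illustration, not the chatbot’s actual output):

  • Works as a UX researcher, designer, or other practitioner who regularly conducts or uses research
  • Has contributed to or retrieved information from a research repository within the last 12 months
  • Works at an organization with more than one person doing research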

Step 5: Request Help with Screeners, Interview Questions, and Tasks

Finally, ask the AI to put together:

  • A screener
  • Interview questions or an interview guide (if conducting interviews)
  • Tasks for a usability test
  • Diary-study prompts (if relevant)
  • Recruitment confirmation emails or other communication messages

Unfortunately, there are a lot of bad examples of the above on the web. Conversational AI has been trained on all this data. Therefore, don’t be surprised if it produces poor study collateral on its first attempt! This is a major risk area for new researchers.

One way to mitigate this danger is to give the AI tool advice when you ask it to craft any of these outputs. Think of AI as a new research assistant who can learn extremely quickly.

Common mistakes that AI tools make include:

  • Using words that appear in the interface in task instructions (priming)
  • Creating task instructions that ask users to imagine they are someone that they are not
  • Not including a goal or a call to action in the task instruction
  • Not including distractor options in screening questionnaires
  • Using overenthusiastic marketing language in recruitment materials

It’s not surprising that AI makes these mistakes since UX practitioners also make them!

To improve outputs, feed the AI essential tips, such as:

  • When crafting tasks: Do not use the exact words or link labels that appear in the interface in the task instruction. Find a natural-language equivalent to explain what the participant should do. (You can ask AI to “read” a website or an image.)
  • When crafting recruitment materials: Use neutral and concise language in the recruitment email. Avoid using overly enthusiastic marketing language.
  • When crafting a screener: Include multiple-choice questions and answer options that might disguise what the study is about and who I am looking to recruit.

Additionally, when possible, feed the AI good examples of screener questionnaires, tasks, or interview questions, so it can follow their format or style.
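
Put together, a task-writing request that bakes in this advice might look like the following sketch (the product and the example task are placeholders to replace with your own):

Please write 5 usability-test tasks for the checkout flow of our food-delivery app. Do not reuse the exact words or link labels that appear in the interface; describe what the participant should do in natural language instead. Give each task a clear goal or call to action, and do not ask participants to imagine they are someone they are not. Here is an example task from a previous study whose style I’d like you to follow: [paste example task].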

Even with this advice, AI can still make mistakes. If you’re doubting its answers, check primary sources or speak with an experienced researcher for old-fashioned human guidance.

If you have ChatGPT’s Plus Plan, you can use our GPT for planning your research.

Conclusion

With the proper context, examples, and advice, AI tools, like ChatGPT and Microsoft Copilot, can craft helpful research questions, tasks, interview questions, and other study collateral far more quickly than you could if you started from scratch.

Research leads and ResearchOps personnel can support junior researchers and PWDRs (People Who Do Research) by providing examples and advice that can be fed to AI agents. Experienced researchers can benefit from using AI to speed up their research-planning process and obtain further inspiration.