9 principles for using large language models in user research
AI tools are changing how researchers work. Transcription, synthesis, note-taking, analysis: there is a growing list of tasks where Large Language Models (LLMs) can genuinely speed up your work, reduce manual effort, and free up time for the parts of research that really need your expertise.
These tools and approaches offer significant potential, but also present real risks. Speed is not the same as rigour, and in research that distinction matters.
The risk is how easy it is to use AI tools poorly without realising it. To present findings that sound credible but miss the point. To flatten participant voices into tidy but misleading categories. To produce quick outputs that yield little genuine insight.
Used in the right way, LLMs offer significant advantages. The goal is making sure that the speed and efficiency of working with LLMs does not come at the cost of the quality of your research or the safety of your participants.
This article outlines 9 principles designed as a quick-start guide for research practitioners. They are not a list of things to avoid. They are a practical, flexible framework to help you get started using AI tools in a way that is effective, safe, and compliant.
There is no single right way to use AI in research yet. This is a starting point, not a rulebook.
1. Humans lead. AI assists
An LLM can speed up your work, but it cannot replace your judgement, empathy, or professional responsibility. Think of it as a fast, capable assistant that still needs you to direct, check, and own the work.
In practice:
Always have a human researcher lead the work and make final decisions.
Read raw data and transcripts yourself. Don’t let AI be your only lens on participants’ experiences.
Use an LLM to draft, summarise, or organise. Use your own expertise to decide what matters.
Supplement analysis of transcript data with behavioural observations. What users do is often more insightful than what they say.
Review and edit all AI outputs before sharing or acting on them.
2. Protect people’s data
Not all AI tools are secure, and free tools are often funded by using and sharing your data. Assume that anything you put into an AI tool could be stored, reused, or exposed unless you change the settings accordingly. You are responsible for protecting participants’ privacy and complying with data protection rules (e.g. GDPR).
In practice:
Never input personal, sensitive, confidential, or politically sensitive data into public AI tools.
Remove or anonymise all identifiers before using AI for analysis, including indirect ones like job role, location, or organisation (a minimal redaction sketch follows this list).
Check and use the privacy controls in the tools you use. Many retain your inputs or use them for model training unless you change the default settings.
If working for a client, only use AI tools that have been explicitly vetted and approved by their security or data protection team.
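If you want a starting point for the anonymisation step, the minimal sketch below (Python) redacts a few obvious identifiers before a transcript goes anywhere near an LLM. The patterns, names, and placeholder labels are illustrative assumptions, not a complete PII solution; indirect identifiers still need a human pass before anything leaves your machine.

```python
import re

# Minimal redaction sketch: replace obvious identifiers with placeholders
# before sharing a transcript with an LLM. The regexes and the known-terms
# map are illustrative assumptions, not a complete PII solution.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b\+?\d[\d\s-]{7,}\d\b")

# Names and organisations you already know from your participant records.
KNOWN_TERMS = {
    "Jane Smith": "[PARTICIPANT]",
    "Acme Ltd": "[ORGANISATION]",
}

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    for term, placeholder in KNOWN_TERMS.items():
        text = text.replace(term, placeholder)
    return text

if __name__ == "__main__":
    sample = "Jane Smith (jane@acme.co.uk) said the portal was confusing."
    print(redact(sample))
    # -> "[PARTICIPANT] ([EMAIL]) said the portal was confusing."
```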
3. Real users are irreplaceable
AI-generated ‘synthetic users’ are based on probability, not lived experience. They cannot replicate the emotional depth, context, or surprises that come from speaking with real people.
In practice:
Prioritise direct engagement with real participants over AI-generated personas or simulations.
Use AI to help brainstorm questions or scenarios, never to generate ‘user feedback’.
Treat any synthetic or AI-generated data as a hypothesis that must be validated by real research.
Remember: an algorithm can predict likely answers, but it cannot feel, experience barriers, or be genuinely surprised.
4. Be transparent and get consent
People have a right to know how their data is used, and colleagues need clarity on how AI contributed to the work. Transparency builds trust and protects both participants and researchers.
In practice:
Obtain explicit consent if AI will be used to analyse participant data.
Never use hidden AI note-takers or transcription tools without clear participant permission.
State in consent materials, in plain language, when ‘AI-assisted methods’ will be used.
Document AI use in your methodology notes or reports so it is visible to your team and stakeholders.
5. Give AI clear instructions, or expect bad results
AI output quality depends entirely on the quality of your instructions. Vague prompts produce vague, unreliable outputs. Specificity is everything.
In practice:
Provide clear context: who the users are, what the research is for, and what you need.
Break tasks into structured steps rather than asking one broad question.
Share relevant materials (e.g. discussion guides, research objectives) to ground the AI’s responses.
Specify constraints explicitly. For example, ‘use verbatim quotes only’ or ‘do not add information that is not in the transcript’. One way to put these pieces together is sketched after this list.
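To make that concrete, here is a minimal sketch of one way to assemble a structured, constrained prompt before sending it to an approved tool. The wording, field names, and file name are illustrative assumptions, not a recommended template; the actual model call is left to whichever tool your team has vetted.

```python
# Minimal sketch of assembling a structured, constrained analysis prompt.
# Everything below (wording, fields, file name) is illustrative.

def build_analysis_prompt(objective: str, participants: str, transcript: str) -> str:
    return "\n".join([
        "You are assisting a user researcher with qualitative analysis.",
        f"Research objective: {objective}",
        f"Participants: {participants}",
        "Task, in order:",
        "1. List the main themes you can see in the transcript.",
        "2. For each theme, give one supporting quote, verbatim only.",
        "Constraints:",
        "- Do not add information that is not in the transcript.",
        "- If the transcript does not support a theme, say so rather than guessing.",
        "Transcript:",
        transcript,
    ])

# Assumes a redacted transcript file exists locally.
prompt = build_analysis_prompt(
    objective="Understand barriers to completing the online renewal form",
    participants="Six small-business owners, interviewed remotely",
    transcript=open("transcript_01_redacted.txt").read(),
)
```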
6. Manage your context window carefully
Every LLM has a memory limit called a context window. The more you add (questions, documents, follow-ups), the less the LLM can ‘hold in mind’. As the window fills up, earlier information fades, responses become less reliable, and the AI may contradict itself or lose the thread of complex tasks.
In practice:
Start a fresh conversation for each distinct task rather than chaining many different requests together.
Keep sessions focused: provide only the documents and context that are directly relevant to the task at hand.
If a session has been running a long time or across many follow-up questions, treat outputs with extra scepticism. Quality degrades as the context fills.
For large analysis tasks (e.g. multiple long transcripts), break them into smaller, separate sessions rather than pasting everything into a single chat at once (a minimal chunking sketch follows this list).
Be mindful of the quality of responses. If they start to feel inconsistent or the AI seems to ‘forget’ earlier instructions, this is a sign the context window is under pressure. Start a new chat.
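For the chunking suggestion above, the sketch below splits a long transcript into smaller pieces that can each be analysed in a fresh session. The 1,500-word budget is an assumed rough proxy for a comfortable slice of a context window, not a measured token count; real limits vary by model and by everything else already in the chat.

```python
# Minimal sketch: break a long transcript into chunks so each can be analysed
# in its own fresh session. The word budget is an assumed rough proxy for a
# safe slice of a context window, not a measured token count.

def chunk_transcript(text: str, max_words: int = 1500) -> list[str]:
    chunks, current = [], []
    for paragraph in text.split("\n\n"):  # keep speaker turns together
        current.append(paragraph)
        if sum(len(p.split()) for p in current) >= max_words:
            chunks.append("\n\n".join(current))
            current = []
    if current:
        chunks.append("\n\n".join(current))
    return chunks

# Assumes a redacted transcript file exists locally.
transcript = open("transcript_01_redacted.txt").read()
for i, chunk in enumerate(chunk_transcript(transcript), start=1):
    # Each chunk goes into its own conversation with the same instructions;
    # the per-chunk outputs are then compared and merged by a human.
    print(f"--- session {i}: {len(chunk.split())} words ---")
```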
7. Expect and correct for bias
AI reflects the biases of its training data. Without active correction, it can flatten nuance, amplify majority perspectives, and overlook marginalised voices.
In practice:
Use neutral, open-ended prompts. Ask for ‘themes’ rather than ‘problems’.
Actively look for missing voices: edge cases, minority viewpoints, and outlier responses.
Be especially cautious when researching groups that are under-represented in mainstream online data.
Validate AI-identified themes against verbatim quotes to ensure no perspectives have been flattened or lost.
8. Never trust AI output without checking it
AI can sound confident even when it is wrong. Hallucinations (fabricated quotes, invented themes, plausible-sounding nonsense) are a real risk. Always treat AI output as a first draft, not a final answer.
In practice:
Check AI-generated summaries, themes, and quotes against your source data.
Trace important quotes back to transcripts to confirm they have not been fabricated (a simple first-pass check is sketched after this list).
Do manual spot-checks by reading or listening to raw data in full.
If something feels off, assume it is incorrect until you have confirmed it against real evidence.
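As a first pass on tracing quotes, the sketch below checks whether quotes from an AI-generated summary appear verbatim in your transcripts. It is deliberately strict: a MISSING result does not prove fabrication (punctuation differences or paraphrase can break an exact match), it simply flags a quote you need to trace back by hand. The file names and example quote are illustrative.

```python
import re

# Minimal first-pass check that quotes in an AI-generated summary actually
# appear in the source transcripts. Only whitespace and curly quote marks are
# normalised before an exact substring match, so MISSING means "check by hand",
# not "definitely fabricated".

def normalise(text: str) -> str:
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    return re.sub(r"\s+", " ", text).strip().lower()

def check_quotes(quotes: list[str], transcripts: list[str]) -> None:
    corpus = [normalise(t) for t in transcripts]
    for quote in quotes:
        found = any(normalise(quote) in t for t in corpus)
        print(f"{'FOUND  ' if found else 'MISSING'} {quote[:60]}")

# Assumes local transcript files exist.
transcripts = [open(p).read() for p in ["transcript_01.txt", "transcript_02.txt"]]
check_quotes(["I gave up on the form halfway through"], transcripts)
```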
9. Treat AI use as experimental, and share what you learn
There is no single “right” way to use AI in user research yet; the practice is still evolving. Learning comes from trying things, seeing where they succeed and fail, and adjusting.
In practice:
Treat AI use as a hypothesis, not a proven method.
Start small and low-risk before scaling to critical work.
Expect mistakes and partial failures.
Share what worked and what didn’t with your research community.
These principles will evolve. AI tools are moving fast, and our understanding of how to use them well in research is still developing. What works today may look different in six months.
What will not change are the fundamentals: protect your participants, own your analysis, and keep human judgement at the centre of the work. If you try something, share what you learn. The best guidance we have right now comes from practitioners experimenting, reflecting, and being honest about what worked and what did not. That includes you.
A note on using AI in making this blog
The authors used AI tools, including Claude.ai and ChatGPT, to help write this post.
Every blog post starts with a theory or an idea we want to explore. This often begins as a long-form written summary including all of the main points and arguments. From there, we use LLMs to help with secondary research, structuring thinking, and editing drafts.
All the ideas, arguments, and conclusions here are the authors’ own.
The work is human-led, AI-assisted.
About the authors
Nick Buckland
With over 18 years in user and market research, Nick helps organisations understand people through mixed methods research and human-centred design practice. Now focused on the intersection of AI and qualitative inquiry, his work is about developing practical protocols and workflows that help research teams use AI safely and effectively, without losing the human judgement that makes qualitative work meaningful.
Erica Kucharczyk
Erica is a UX research and design leader who helps organisations build better products and services through evidence-driven design. Her work increasingly focuses on the practical and ethical application of AI, drawing on expertise in research ethics and data protection to help research practitioners adopt emerging technologies responsibly. Erica holds a PhD in Psychology and brings a background in academic research, ethics and experimental design, with experience leading research and design initiatives across public and private sector organisations.