How do customer interviews improve user experience?
This guide explains how customer interviews reveal real behaviours, reduce assumptions, and help teams improve user experience with confidence.
Who this page helps and why interviews matter
Customer interviews are often discussed but rarely executed well. This page is for people responsible for improving websites or online services who do not have a research background. They are usually communications officers, digital leads, service managers, or project officers who know something is not working but are unsure where to start.
They are dealing with practical pressures. Complaints are increasing. Support teams are answering the same questions repeatedly. Forms are being abandoned. Stakeholders have strong opinions, but no shared evidence. Redesigning everything feels risky, expensive, and hard to justify.
This is where customer interviews earn their place.
Interviews are not about asking people what colour they like or which feature they want next. When done properly, they help teams understand how people actually experience a service, what they are trying to do, and where things quietly fall apart. They replace assumptions with firsthand insight and provide teams with language to explain problems to decision-makers.
For public and community organisations, interviews are especially valuable because services often support people under stress. Someone applying for housing, submitting a permit, making a donation, or trying to understand eligibility rules is not browsing for interest. They are trying to complete a task and move on with their day. Small points of confusion can have outsized consequences.
This guide focuses on interviews as a practical investigation tool. It shows how to prepare, what to ask, what to avoid, and how to turn conversations into clear next steps. It also explains how interviews fit into early discovery work, before time and money are spent on solutions that may not address the real issue.
If you are also exploring broader early-stage work, this connects closely with “What is product discovery and why does it matter before improving user experience?” and can be used alongside it.
Why most interviews fail to reveal anything useful
Many teams say they have spoken to users, yet still feel unsure about what to fix. This usually comes down to how interviews are planned and run.
A common issue is that interviews are treated as a checklist. A few questions are written quickly, sessions are squeezed into busy calendars, and notes are taken without a clear plan for how they will be used. At the end, teams are left with quotes that sound interesting but do not clearly point to action.
Another frequent problem is leading questions. People are asked questions such as “Did you find this easy?” or “Would this feature be useful?” These questions prompt participants to offer polite answers and opinions rather than real experiences. Most people will say something is fine, even when it was confusing or frustrating.
Teams also tend to focus too much on the interface and not enough on the person’s context. They ask about screens instead of situations. As a result, they miss important factors such as time pressure, emotional state, accessibility needs, or workarounds people use when the service does not support them.
In public-facing services, this leads to predictable outcomes:
People say different things in interviews than what their behaviour shows
Findings feel vague and open to interpretation
Stakeholders argue over which quotes matter most
Decisions revert to internal opinion or senior preference
There is also a fear of “doing research wrong.” Without a clear structure, interviews can feel intimidating. Teams worry about asking the wrong thing, influencing answers, or not being rigorous enough. This often leads to avoiding interviews altogether or relying solely on analytics and complaints, which rarely tell the full story.
The before-and-after examples in this article highlight this gap. In the “before” state, questions are generic and surface-level, producing friendly but unhelpful feedback. In the “after” state, questions are grounded in real tasks and moments, revealing specific barriers that can be acted on.
The core problem is not a lack of effort. It is a lack of framing.
How to run interviews that lead to clear decisions
Effective customer interviews follow a simple but disciplined approach. They focus on past behaviour, real situations, and concrete examples. Below is a practical way to structure interviews so they support confident decisions rather than adding noise.
Start with a clear investigation goal
Before recruiting anyone, be clear about what you are trying to learn. This is not a research objective written for a report. It is a plain question the team needs answered.
Examples include:
Why do people start this form but not finish it?
What makes it hard to know where to begin on this page?
How do people decide whether they are eligible before applying?
A single, focused goal keeps interviews on track and makes it easier to spot patterns later. Without it, conversations drift, and findings become unfocused.
Recruit people who reflect real situations
The value of interviews depends heavily on who you speak to. Aim to recruit people who have recently attempted the task you are exploring. Memory fades quickly, and hypothetical answers are unreliable.
In many organisations, recruitment can feel like a blocker. Start small. Even five to eight interviews can surface clear themes when participants are chosen carefully. Support teams, call logs, and recent submissions are often good starting points for finding participants.
Avoid only speaking to “power users” or people already comfortable with digital services. Those who struggle often reveal the most useful insights.
Ask about what happened, not what people think
Strong interviews focus on events, not opinions. Instead of asking what someone likes or prefers, ask them to walk through what they did last time.
For example:
“Can you tell me about the last time you tried to complete this?”
“What was the first thing you looked for?”
“What made you stop or hesitate at that point?”
These questions encourage storytelling. They help people recall details they would not include in a general opinion. Silence is also useful. Pausing often prompts participants to add clarifying information.
Follow confusion, not your script
A discussion guide is important, but it should not be followed rigidly. When someone hesitates, backtracks, or uses vague language, that is a signal to dig deeper.
Simple follow-ups like “What did that mean for you?” or “What did you expect to happen next?” often uncover mismatches between what the service offers and what people assume.
This is where interviews differ from surveys. The value comes from exploring the edges, not from asking every question on the list.
Capture patterns, not just quotes
After interviews, avoid jumping straight to solutions. First, identify recurring issues across sessions. These might include:
The same point where people feel unsure
The same misunderstanding about eligibility or next steps
The same workaround to avoid reading content
Group findings by behaviour or barrier rather than by individual. This makes it easier to explain issues to stakeholders and to prioritise changes.
Before-and-after examples in the article show how reframing findings in this way turns scattered notes into clear problem statements.
Connect insights to decisions
Interviews are most effective when they directly inform the next steps. Each key finding should point to a decision or action.
For example:
Clarify entry points and labels
Reorder steps to match people’s expectations
Add reassurance or confirmation at moments of doubt
This is where interviews support practical improvement rather than becoming a standalone activity. They reduce the risk of implementing changes that appear sound internally but fail to address real barriers.
Test understanding, not just solutions
Once changes are proposed, short follow-up testing sessions can verify whether the original problems have been resolved. This closes the loop and builds confidence that time and effort were well spent.
In many teams, this step is skipped. Bringing even a small amount of testing into the process helps prevent the same issues from reappearing in different forms later.
What to do next
If interviews reveal issues that feel larger than a single screen or form, the next step is often to step back and look at the broader journey. Pairing interviews with early discovery work helps teams decide what to fix first and what can wait.
If you want help planning or running interviews, or turning findings into clear actions, this is where structured research support can save time and reduce risk. The goal is not more insight, but better decisions and fewer avoidable mistakes.
Customer interviews, done well, give teams a shared understanding of what is really happening. That clarity is what makes improvement possible.