User research surveys: How to design good questions and get meaningful results
When running user research surveys, the type of questions you ask - and how you ask them - will determine the type of answers you receive.
Imagine we asked people whether they would like some ice cream, and after 80% of respondents said “yes,” we bought an ice cream truck. After we had done our research, would we expect the hotdog place across the street to be getting many more sales? Why wouldn’t people buy the ice cream they said they wanted?
This is because our question “tricked” people into saying “yes”: We asked it in a leading way, didn’t give them any other options, and forgot to mention that the ice cream wasn’t free. Our question, therefore, wasn’t designed to uncover people’s real needs, so the answers didn’t reflect their true preferences.
In this article, we’ll run through the key survey design practices that will help you gain a more accurate understanding of your users.
Questions
1. Ask one specific question at a time
It’s tempting to want to include as much information as possible in a question, but doing so may compromise your insights. You can confuse participants and receive responses that will be difficult to interpret.
❌ How likely are you to recommend this product because you are happy with it?
✔️ How would you rate your experience with the product?
Here you’ll receive a score, say from 1 to 5, but what does it describe? Does it indicate how satisfied your customer is with the product, or how likely they are to recommend it?
These aren’t the same thing. Someone may be very satisfied with the product but never recommend it, because they don’t know anyone else who might need it or because they don’t like recommending things in general. Equally, someone may recommend the product while not being satisfied at all — for instance, if it’s the only option available to them, or because they believe it fits someone else’s needs better than their own. So why not ask precisely what we want to know?
2. Don’t ask people for predictions
People are notoriously bad at predicting things. Asking someone to predict the occurrence of a future circumstance will, at the very best, provide you with nothing more than their best guess. This is why we don’t recommend asking:
❌ How likely are you to recommend this product?
Even when it is phrased as a single question, the reply doesn’t actually predict how likely someone is to recommend your product.
If your intention is to learn about the customer’s satisfaction, ask about their satisfaction. If your intention is to predict your product’s virality, you’re actually better off looking at quantitative data from your referral scheme and working out how many people a customer refers on average. If you have no data available and you’re stuck with asking people, ask about their current or recent behavior.
You might ask for instance:
✔️ Have you ever recommended this product?
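If you do have referral data, the quantitative approach above amounts to a simple calculation. Here’s a minimal sketch (the `referrals` data is hypothetical, as is its shape — adapt it to whatever your referral scheme actually records):

```python
# Hypothetical referral-scheme data: customer ID -> number of people referred.
referrals = {"c1": 0, "c2": 3, "c3": 1, "c4": 0, "c5": 2}

# Average referrals per customer says more about virality than
# asking "How likely are you to recommend...?" ever could.
avg_referrals = sum(referrals.values()) / len(referrals)
print(f"Average referrals per customer: {avg_referrals:.2f}")  # 1.20

# Share of customers who referred at least one person -- the
# behavioral counterpart of "Have you ever recommended this product?"
share_referred = sum(1 for n in referrals.values() if n > 0) / len(referrals)
print(f"Share who referred at least once: {share_referred:.0%}")  # 60%
```

Behavioral data like this measures what people actually did, not what they guess they might do.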
3. Avoid leading questions
Accidentally introducing a leading question into your survey is easier than you think. Let’s look at how to measure satisfaction, for instance. What is wrong with the question:
❌ How satisfied are you with this product?
The answer is priming: by being exposed to only one side of the spectrum in the question, the participant may be nudged toward that side.
A more balanced option is:
✔️ How satisfied or dissatisfied are you with this product?
4. Common-sense check and eliminate the need for clarification
Nothing compromises your insights like unclear questions. If there’s even the slightest chance your participant may need clarification, it’s not clear enough. Phrase your questions in a self-explanatory way, then ask others to test them. Avoid double-negatives, jargon, or ambiguous expressions.
Answer options
Let’s talk about answer options, because …well, they’re just as important as the questions.
1. Use a meaningful number of reply options
It’s common to see a Net Promoter Score survey on sites these days, and many will copy the design by default, giving their participants a scale from 1 to 10 to choose from. However, this scale is not always meaningful. Some questions are best answered with a simple yes/no.
For most other questions, a scale from 1 to 5 will suffice. The steps in that scale have clear meanings: 3 = neutral, 2 and 4 = slight tendency towards one end, 1 and 5 = strong tendency towards one end. On a 10-point scale, what’s the difference between a 6 and a 7, or a 7 and an 8?
When choosing the range of your scale, ask yourself:
- Will your participant know the precise meaning of the differences between each step?
- Will you and the participant interpret those differences the same way, so that you can read their reply accurately?
- What additional information will this level of detail provide you with?
If you can answer these questions with certainty, you have found your correct scale. If unsure, you can probably reduce the number of options.
2. Use a balanced scale
Make sure you have an equal number of options on either side of the middle option in your scale - and an equal number of positive and negative steps.
❌
Difficult | Easy | Very easy
✔️
Difficult | Neither easy nor difficult | Easy
In this example, you could add ‘very easy’ and ‘very difficult’ on either end for a 1-5 scale, as long as there is an equal number of options on each side.
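One way to make this rule hard to break is to build scales from their parts rather than checking them afterwards. A minimal sketch (the `make_scale` helper and its labels are illustrative, not from any survey library):

```python
def make_scale(negative, neutral, positive):
    """Build a balanced scale from negative options, one neutral
    midpoint, and positive options. Balance is enforced by
    construction: both sides must have the same number of steps."""
    if len(negative) != len(positive):
        raise ValueError("Unbalanced scale: need equal options on each side")
    return negative + [neutral] + positive

# A balanced 1-5 scale:
scale = make_scale(
    negative=["Very difficult", "Difficult"],
    neutral="Neither easy nor difficult",
    positive=["Easy", "Very easy"],
)
print(scale)

# This would raise ValueError: two positive options, only one negative.
# make_scale(["Difficult"], "Neither easy nor difficult", ["Easy", "Very easy"])
```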
3. Stick to consistent naming
Within one survey you should name the same scale the same way. Anything else would at the very least look sloppy, and at worst could confuse your participants and lead to unusable data.
❌
Very hard | Very easy | Not sure |
Very difficult | Extremely easy | Don’t know |
Awful | Great | I’m not sure |
While “hard” and “difficult”, “very” and “extremely”, and “not sure” and “don’t know” can mean the same thing, using the terms interchangeably is not helpful. It adds to the participants’ mental load and draws their attention away from the actual content of the question. Decide on one way to phrase each scale and stay with it throughout.
✔️
Very difficult | Very easy | Not sure |
Very confusing | Very clear | Not sure |
Very negative | Very positive | Not sure |
Very slow | Very fast | Not sure |
4. Give one option at a time
Much like how you should only ask one question at a time, each answer option should focus on one clear answer. In multiple-choice questions, the participant can of course choose multiple options — but each option should represent only one concept.
❌ I don’t need this product, because it’s not engaging enough
✔️ I don’t need this product
✔️ The product is not engaging enough
Having no need for a product is a separate reason from finding it unengaging. Someone may experience both, in which case you should allow multiple selections. But it’s possible to experience one without the other, so these should be presented separately.
5. Stay on one level
What’s wrong with the following options for the question “Why are you unsubscribing?”
❌
- I’ve encountered problems
- I’ve encountered problems with registration
- I’ve encountered problems with shipping
- I prefer a different payment method
- I prefer to pay by credit card
- I prefer to pay by bank transfer
These options aren’t on the same level, as some of them are actually subcategories of others. ‘Problems with registration’ is a subcategory of problems in general, just as paying by credit card is a subcategory of payment methods. If you need detailed information, consider a conditional follow-up question:
✔️
- I’ve encountered problems
- I prefer a different payment method
For those who chose ‘I’ve encountered problems’, you can follow up with:
What have you experienced problems with?
✔️
- Registration
- Shipping ...
For those who chose “I prefer a different payment method”:
“Which payment method(s) do you prefer?”
✔️
- Credit card
- Bank transfer ...
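Conditional follow-ups like these are simple branching logic. A minimal sketch of how a survey tool might model it (the `followups` mapping and `next_question` helper are hypothetical, not from any particular survey platform):

```python
# Each top-level reason maps to an optional follow-up question
# with its own single-level list of options.
followups = {
    "I've encountered problems": (
        "What have you experienced problems with?",
        ["Registration", "Shipping", "Other"],
    ),
    "I prefer a different payment method": (
        "Which payment method(s) do you prefer?",
        ["Credit card", "Bank transfer", "Other"],
    ),
}

def next_question(chosen_reason):
    """Return the (question, options) follow-up for a reason, or None."""
    return followups.get(chosen_reason)

question, options = next_question("I've encountered problems")
print(question)  # What have you experienced problems with?
print(options)   # ['Registration', 'Shipping', 'Other']
```

Keeping the hierarchy in the structure, rather than flattening it into one list, is what keeps every list of options on one level.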
6. Come up with an exhaustive list
When you are trying to quantify how different options are spread across the population, the more exhaustive your list of options is, the more accurate your numbers will be.
If you have no idea what these options could be, you’re probably jumping into quantification too early. You should do some qualitative research to get to know your topic. You could do interviews or open-ended surveys to uncover possible options first.
7. Give a way out
Even with the most exhaustive list you could come up with, there may be edge cases you haven’t accounted for. So, it’s best to give people a way out if none of the options apply. Depending on the context, you could go with “other”, “not applicable”, “not sure”, “don’t know”, or “don’t remember”.
This prevents participants from having to pick the closest fit and keeps your data closer to the truth. The “other” option, when accompanied by a free-text field, gives you the added benefit of discovering new options to add.
Final words: Get feedback, pilot, iterate
Even after years of designing surveys, biased language or inconsistencies can still sneak into our work. We’re all people, and we will make mistakes sometimes. The best way to create a good survey is through iteration.
Get feedback from colleagues on your initial draft. Often, someone from support or customer success will suggest a couple of options you hadn’t considered.
Pilot on a smaller sample first. Especially for important work: add a couple of free-text fields for feedback and present the survey to a couple of people from your target group. Real participants could point you to unclear questions or missing options that never occurred to the team.
Finally, improve your survey over time. Many customer experience surveys are run continuously or at regular intervals. As new functionality is added to the product, old problems are solved, and user knowledge expands, your surveys should change and develop with them.
Want to learn more about best practices for gathering user insights? Check out our guide on how to plan and facilitate user research, where you’ll find out how to set goals, choose the right methods, and more.