The ultimate guide to usability testing
If you’re building something that needs to have a slick and intuitive user journey, unless your UX powers are near-superhuman, you won’t get there through design alone. To get something that’s effortlessly enjoyable and meets a user’s needs, you need to show it to some people and see how they do. This is where usability testing comes in.
What is usability testing?
Usability testing is about making sure the user, their needs, and your product’s ability to meet them are at the center of your design. It's about understanding how easy and enjoyable your product is to use or, if it’s not, how to iterate and fix any sticking points. By understanding potential users’ thoughts, feelings, and actions, you can make the improvements that matter and get to where you want to be.
Usability testing is about observing users as they interact with your product to complete specific tasks. It's about understanding how they think, feel, and behave while using it.
What isn’t usability testing?
Usability testing is not:
- An indicator of product-market fit.
- An indicator of task priority.
- A replacement for QA testing before a product launch.
If you ask someone from your own team to work through a set of tasks, that isn't usability testing either. However well designed the tasks, they can't remove the bias that comes from prior knowledge of the product. This means any test carried out by a developer or designer attached to the project, no matter how good the data or insights, isn't usability testing.
To design and build things that are relevant and reflective of your end users' needs and priorities, you'll need to speak with and observe individuals who are as representative of the end user as possible. This will reduce the likelihood of assumptions and biases.
Different types of usability testing
There are a wide range of methods to consider if you want to conduct usability testing, though they fall into several key categories:
Quantitative vs. qualitative
Quantitative usability testing
Quantitative usability testing gathers numerical data, offering an assessment based on performance. How fast did a user complete a set of tasks? How many attempts did it take them to reach the outcome they wanted? How many wrong turns or navigational errors did they make? What score out of ten did a user give the product? What these measures have in common is that they're indirect markers of usability rather than direct ones.
When using a quantitative method, your output is likely to be a large volume of numbers. This can be extremely useful if you have something to compare them to. If you have test data on a previous version of your product, or even on a competitor’s product, you can directly compare the numbers and draw inferences.
If you don’t already have a benchmark, these numbers can feel fairly abstract on their own. In fact, if you’re running a study without a benchmark, your quantitative results will likely become the benchmark for future studies.
Some standard quantitative usability methods include eye-tracking software, time- or success-based task metrics, and user surveys.
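To make these metrics concrete, here's a minimal sketch in Python of how you might summarize results from a task-based quantitative study. The session data is entirely hypothetical, and real tools will report these figures for you; this just shows what the raw numbers represent.

```python
# Hypothetical per-participant results for one task: whether the
# participant completed it, how long they took (seconds), and how
# many navigational errors they made along the way.
sessions = [
    {"completed": True,  "seconds": 48,  "errors": 1},
    {"completed": True,  "seconds": 95,  "errors": 4},
    {"completed": False, "seconds": 120, "errors": 6},
    {"completed": True,  "seconds": 52,  "errors": 0},
]

# Three common task metrics: success rate, average time on task,
# and average error count.
success_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_time = sum(s["seconds"] for s in sessions) / len(sessions)
avg_errors = sum(s["errors"] for s in sessions) / len(sessions)

print(f"Success rate: {success_rate:.0%}")
print(f"Avg time on task: {avg_time:.1f}s")
print(f"Avg errors per session: {avg_errors:.2f}")
```

On their own, these figures are exactly the kind of abstract numbers described above; compared against a previous version of the product (or a competitor), they start to tell a story.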
Qualitative usability testing
Where quantitative testing relies on indirect markers, qualitative testing is far more direct. Observing users work through tasks, seeing where they breeze through and where they struggle, is the start. From there, moderators can ask questions, either live as the tasks are being carried out or as follow-ups to recorded tests.
If a particular design feature, or set of features, seems to be causing users problems, a specific study can be created to target just that subset of interactions.
In this way, qualitative studies are ideal for identifying issues or flaws in a design and better understanding why something isn’t working, whether that’s an overall view of the product or—more usually—processes like submitting a form or making an order.
Moderated vs. unmoderated
Moderated usability testing
In moderated testing, a facilitator guides users through tasks and asks questions in real time. They observe as they go, and the ability to pose useful follow-up questions in the moment is a big advantage of the method.
However, because a moderator needs to be either physically present or at least connected via a high-quality video link, this approach can limit the number and, therefore, the diversity of test participants. It can also have budget implications.
Unmoderated usability testing
If a test is unmoderated, users complete tasks independently, with no facilitator on hand to answer questions along the way. This approach can be more cost-effective and allows for a larger sample size. It's often used in the final stages of design, when a prototype or working sample is near completion and already works well.
Remote vs. in-person usability testing
Remote usability testing
Users participate in the test from their own location, often using video conferencing or online tools. This is convenient for both users and researchers, meaning it’s usually easier to source participants, whether you are running a moderated or unmoderated setup.
For moderated tests, a video link to the researcher allows for additional questions and closer observations. One potential downside is reliance on a solid high-speed internet connection, which might fail.
In unmoderated tests, users can complete their tasks wherever and whenever suits both them and the researchers. One downside is a lack of control over the testing environment, and any follow-up questions have to be asked after the fact, separately from the test itself.
In-person usability testing
This is the classic approach to usability testing. Users complete tasks in a controlled environment, with a researcher present to observe and question.
This has quite a few advantages and often yields the richest data set if the tests and scripts are properly designed. However, it is more expensive and time-consuming and relies on the lab environment bearing a close enough resemblance to real-world conditions.
Key benefits of usability testing
Usability testing offers a wide range of benefits that can directly lead to a much better product:
- Improved user experience: Identify and fix pain points before launch.
- Increased user satisfaction: Create products that delight users.
- Reduced development costs: Catch issues early and avoid costly redesigns.
- Competitive advantage: Gain a deeper understanding of your users than competitors.
- Data-driven decision-making: Make informed choices based on real user feedback.
When to conduct usability testing
Usability testing can be valuable at various stages of product development. At each stage, particular areas of study will yield the most valuable data.
Early stage
Here you can test core concepts and gather feedback on prototypes, which at this stage can often be low-fidelity rather than high-fidelity. Focusing on qualitative methods, whether remote or in person, will bring the best results.
Design stage
Use targeted tests to evaluate the usability of wireframes and mock-ups, again focusing on qualitative methods to gain as much user insight as you can to guide the next iteration.
Development stage
Testing early versions of the product allows you to identify issues as they arise. Once the product is sufficiently advanced, it’s time to use comparative data sets and quantitative methods to iron out particular pain points.
Launch stage
You may have worked incredibly hard pre-launch, but there is always something to improve. Launch-stage usability testing lets you gather feedback on the final product and plan for improvements.
Step-by-step guide to running a usability test
A well-designed usability test can give you significant insight into your product and allow genuine user-guided design improvements. A poorly set up one can be worse than not testing at all, creating false expectations or unusable data at the expense of significant time and effort. Here’s how to make sure you get the best out of your usability testing:
- Define your goals: Clearly outline what you want to learn from the test, the outputs needed, and how they will be expressed. This will allow for much easier analysis once the data is in.
- Recruit participants: Identify your target audience and recruit participants who represent them. This is where audience creation and selection tools like Prolific can really boost your ability to collect relevant data.
- Create tasks: Develop tasks that reflect how users will interact with your product. It’s important to place the user clearly at the center of what you are trying to do so that you are solving their needs.
- Prepare your test script: Outline the questions and instructions you'll use, working with your researchers to ensure they are thoroughly versed in the needed outcomes and how to obtain them.
- Set up your testing environment: Choose a location for your test if you are working in person. For remote testing, set up your online platform and feedback methods to ensure consistency.
- Conduct the test: Whether moderated or unmoderated, you will observe users, directly or indirectly, as they complete the tasks you have set.
- Analyze your findings: Review the data and identify your key insights.
- Report your results: Share your findings with the team and recommend improvements. A deeper dive with teams responsible for specific functions can also yield insights that can be applied to future projects.
Remember, usability testing is an ongoing process. By regularly testing your product, you can continuously improve the user experience and drive customer satisfaction.
Prolific can help you build products using the best possible research data. Conduct usability testing studies using targeted samples from our vetted pool of 200k+ active, diverse, and engaged participants.